<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet href="/stylesheet.xsl" type="text/xsl"?>
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:sy="http://purl.org/rss/1.0/modules/syndication/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:podcast="https://podcastindex.org/namespace/1.0">
  <channel>
    <atom:link rel="self" type="application/rss+xml" href="https://feeds.transistor.fm/artificial-developer-intelligence" title="MP3 Audio"/>
    <atom:link rel="hub" href="https://pubsubhubbub.appspot.com/"/>
    <podcast:podping usesPodping="true"/>
    <title>Artificial Developer Intelligence</title>
    <generator>Transistor (https://transistor.fm)</generator>
    <itunes:new-feed-url>https://feeds.transistor.fm/artificial-developer-intelligence</itunes:new-feed-url>
    <description>Three engineer friends argue about AI so you don't have to.

Shimin Zhang, Dan Lasky, and Rahul Yadav are working developers who've been watching AI transform their profession in real time, and they got opinions on the robot takeover. Every week the three get together to riff on the latest AI news, geek out over research papers, roast each other's tool choices, and occasionally have an existential crisis about whether the craft is dying or just getting weird.

What you're signing up for:
- AI news without the LinkedIn cringe: model drops, acquisitions, open-source drama, and the other stuff that actually matters if you write code for a living.
- Technique corner: real tips from the trenches on spec-driven development, multi-agent orchestration, Claude.md tricks, and all the ways they've wasted hours so you don't have to.
- Two Minutes to Midnight: the show's running AI bubble tracker, complete with circular funding diagrams, hyperscaler CAPEX math, and a doomsday clock they keep arguing about moving.
- Deep dives that (occasionally) go deep: hallucination neurons, agentic memory, workflow automation economics, and the LLM architecture papers nobody else is covering because they're hard.
- Dan's Rant: Dan frequently gets mad about things. It's a whole thing.
- The feelings segment: Yes, Shimin reads Tennyson on a tech podcast. Yes, Rahul wrote an AI-generated country song. No, they're not sorry.

Three friends with strong opinions, questionable metaphors, and genuine love for the craft they're also mourning for. If you want to understand AI deeply, use it without embarrassing yourself, and laugh at the absurdity of it all, pull up a chair.</description>
    <copyright>ADIPod</copyright>
    <podcast:guid>05edf41e-0528-5e48-a8e9-827bef20978b</podcast:guid>
    <podcast:locked>yes</podcast:locked>
    <itunes:applepodcastsverify>6119a200-d499-11f0-8f77-1532abfddea6</itunes:applepodcastsverify>
    <language>en</language>
    <pubDate>Mon, 11 May 2026 21:01:25 -0700</pubDate>
    <lastBuildDate>Mon, 11 May 2026 21:02:52 -0700</lastBuildDate>
    <link>https://www.adipod.ai</link>
    <image>
      <url>https://img.transistorcdn.com/yJJt2M4qGGavsZiyukwfPjBDfyFjWfAjMi-kyHw1LMs/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9mZmI0/NTI3NjcwODZhYmNm/MmViZjE4YmUwMTkz/MzgyZi5wbmc.jpg</url>
      <title>Artificial Developer Intelligence</title>
      <link>https://www.adipod.ai</link>
    </image>
    <itunes:category text="Technology"/>
    <itunes:category text="News">
      <itunes:category text="Tech News"/>
    </itunes:category>
    <itunes:type>episodic</itunes:type>
    <itunes:author>Shimin Zhang, Dan Lasky, &amp; Rahul Yadav</itunes:author>
    <itunes:image href="https://img.transistorcdn.com/yJJt2M4qGGavsZiyukwfPjBDfyFjWfAjMi-kyHw1LMs/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9mZmI0/NTI3NjcwODZhYmNm/MmViZjE4YmUwMTkz/MzgyZi5wbmc.jpg"/>
    <itunes:summary>Three engineer friends argue about AI so you don't have to.

Shimin Zhang, Dan Lasky, and Rahul Yadav are working developers who've been watching AI transform their profession in real time, and they got opinions on the robot takeover. Every week the three get together to riff on the latest AI news, geek out over research papers, roast each other's tool choices, and occasionally have an existential crisis about whether the craft is dying or just getting weird.

What you're signing up for:
- AI news without the LinkedIn cringe: model drops, acquisitions, open-source drama, and the other stuff that actually matters if you write code for a living.
- Technique corner: real tips from the trenches on spec-driven development, multi-agent orchestration, Claude.md tricks, and all the ways they've wasted hours so you don't have to.
- Two Minutes to Midnight: the show's running AI bubble tracker, complete with circular funding diagrams, hyperscaler CAPEX math, and a doomsday clock they keep arguing about moving.
- Deep dives that (occasionally) go deep: hallucination neurons, agentic memory, workflow automation economics, and the LLM architecture papers nobody else is covering because they're hard.
- Dan's Rant: Dan frequently gets mad about things. It's a whole thing.
- The feelings segment: Yes, Shimin reads Tennyson on a tech podcast. Yes, Rahul wrote an AI-generated country song. No, they're not sorry.

Three friends with strong opinions, questionable metaphors, and genuine love for the craft they're also mourning for. If you want to understand AI deeply, use it without embarrassing yourself, and laugh at the absurdity of it all, pull up a chair.</itunes:summary>
    <itunes:subtitle>Three engineer friends argue about AI so you don't have to.</itunes:subtitle>
    <itunes:keywords>AI, software engineering, coding agents, LLM, Claude, GPT, Gemini, vibe coding, AI bubble, developer tools, open source, prompt engineering, Anthropic, OpenAI, agentic coding, tech podcast, AI news, deep learning, AI safety, future of programming, MCP, developer productivity, cognitive debt, spec-driven development, AI ethics</itunes:keywords>
    <itunes:owner>
      <itunes:name>Shimin Zhang, Dan Lasky, &amp; Rahul Yadav</itunes:name>
      <itunes:email>humans@adipod.ai</itunes:email>
    </itunes:owner>
    <itunes:complete>No</itunes:complete>
    <itunes:explicit>No</itunes:explicit>
    <item>
      <title>OpenAI's Goblin Problem, 10 Lessons When Code Is Cheap, AI Addiction Loop</title>
      <itunes:episode>24</itunes:episode>
      <podcast:episode>24</podcast:episode>
      <itunes:title>OpenAI's Goblin Problem, 10 Lessons When Code Is Cheap, AI Addiction Loop</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">3c82a93d-5fe0-487d-ae38-4507c07261fa</guid>
      <link>https://www.adipod.ai/24</link>
      <description>
        <![CDATA[<p>Why does the leaked Codex CLI system prompt explicitly tell GPT-5.5 to never mention goblins, gremlins, raccoons, trolls, ogres, or pigeons? Why is OpenAI now gating its cyber model the same way it mocked Anthropic for gating Mythos last month? And what does it mean that Dan tried to write a personal project without Claude — and physically couldn't?</p><p><br></p><p>Co-hosts Shimin Zhang, Dan Lasky, and Rahul Yadav cover these and more on ADI Pod #24. This week: GPT-5.5 Cyber's gated release, OpenAI's "Where the Goblins Came From" RLHF post-mortem, Adi Osmani's five patterns for long-running agents, Jesse Vincent's adversarial review prompt, Drew Brunig's 10 lessons for agentic coding, Ivan Turkovic's history of failed attempts to eliminate programmers, Nilay Patel's "software brain" thesis, the Nature paper showing warm AI models lose 10–30 percentage points of accuracy, and a $1.1B raise for an AI lab that wants to train without human data.</p><p><br></p><p><strong>In this episode</strong></p><p><br></p><p>▸ <strong>GPT-5.5 Cyber gating</strong> — Sam Altman called Mythos's gated release "fear-based marketing" two months ago. Now OpenAI is doing the exact same thing with the GPT-5.5 cyber variant. Multi-tier model access (enterprise, government, research preview, cyber) is becoming the default — and Shimin worries the White House is about to add another gate.</p><p><br></p><p>▸ <strong>The Goblin Problem</strong> — OpenAI's Codex CLI prompt was open-sourced and turned out to include "never talk about goblins, gremlins, raccoons, trolls, ogres, pigeons." OpenAI's "Where the Goblins Came From" post-mortem reveals a textbook RLHF failure: a "nerdy persona" reward signal trained the model to mention goblins in 66.7% of nerdy responses, and the tic propagated through supervised fine-tuning to non-nerdy responses too.</p><p><br></p><p>▸ <strong>Long-running agents (Adi Osmani / Elevate)</strong> — Five patterns for agents that run for hours or days: checkpoints over zero-or-100 outputs, governing memory like microservices, ambient processing without forced human-in-the-loop, fleet orchestration, and budget circuit-breakers. Bonus: the running gag where Rahul realizes the post is essentially an ad for Google Enterprise Agent Platform.</p><p><br></p><p>▸ <strong>Adversarial review prompts (Jesse Vincent / superpowers)</strong> — A four-step technique for getting better code review out of agents: invoke "fresh eyes," dispatch competing subagents, promise a reward (a cookie), and threaten disappointment if they don't find N issues.</p><p><br></p><p>▸ <strong>10 Lessons for Agentic Coding (Drew Brunig)</strong> — Implement to learn, rebuild often, invest in end-to-end tests, document intent, keep specs in sync, find the hard stuff, automate the easy stuff, develop taste, agents amplify experience, and the kicker: agent code is "free as in puppies" — the puppy is free, but you have to feed it and walk it.</p><p><br></p><p>▸ <strong>The Eternal Promise (Ivan Turkovic)</strong> — A history of attempts to eliminate programmers from COBOL through 4GLs, CASE tools, the Japanese 5th Generation project, no-code/low-code, and now LLMs. Each abstraction layer expanded software jobs rather than replacing them. Shimin's reframe: "Software is calcified business process. Someone has to do the calcifying."</p><p><br></p><p>▸ <strong>People Do Not Yearn for Automation (Nilay Patel / The Verge)</strong> — Why Gen Z hopefulness about AI dropped to 18% (anger up to 31%), why America is uniquely AI-pessimistic, and what Nilay calls "software brain" — the Silicon Valley assumption that human life can be reduced to data and algorithms. Plus Anuradha Pandey's reframe: stop calling them social media, call them ad platforms.</p><p><br></p><p>▸ <strong>Warm models lose accuracy</strong> — A Nature paper finds AI models trained for warmth lose 10–30 percentage points of accuracy. A companion study shows humans trust warm models <em>more</em> even when they're wrong. Frontier labs now have an explicit incentive to train the warmest model, not the most accurate one. Plus: Richard Dawkins talks to "Claudia" for three days and concludes AI must be conscious.</p><p><br></p><p>▸ <strong>Dan's Rant — The AI Addiction Loop</strong> — Dan tries to build a Home Assistant TypeScript automation without Claude. Can't. "It felt like they had fundamentally broken my arm in a way that I can't do this task as quickly as I wanted to. That scares me a lot." Shimin: "We're running into the social media addiction loop in three months instead of a decade."</p><p><br></p><p>▸ <strong>Two Minutes to Midnight</strong> — OpenAI projects ChatGPT Plus dropping from 44M to 9M subscribers in 2026 while scaling the ad-supported tier from 3M to 112M (30×). David Silver raises $1.1B for Ineffable Intelligence — a no-human-data approach inspired by AlphaGo. Scout AI raises $100M for autonomous military vision-language-action models. Bubble Clock held at 4:00 minutes.</p><p><br></p><p><strong>Key takeaways</strong></p><p><br></p><p>— Reward hacking can propagate latent persona quirks through fine-tuning in ways the lab itself only catches when users surface them.</p><p>— Memory drift, not raw context size, is the real ceiling for long-running agents. Govern memory like you govern microservices.</p><p>— Code is free as in puppies, not free as in beer. The cost shifts to maintenance, security, and the new burden of maintaining your own automations.</p><p>— Warm AI is an alignment trap: incentivized for trust over accuracy, weaponizable in authoritarian hands.</p><p>— "You can outsource your thinking, but you can't outsource your understanding." — Karpathy, via Rahul.</p><p>— AI addiction hits in three months. Social media took a decade. We are not ready for the time scale.</p><p><br></p><p><strong>Chapters</strong></p><p></p><ul><li>(00:00) - Cold Open &amp; Welcome</li>
<li>(02:50) - News Threadmill: GPT-5.5 Cyber Gets Mythos-Style Gating</li>
<li>(08:52) - News Threadmill: The Goblin Problem &amp; RLHF Post-Mortem</li>
<li>(13:52) - Tool Shed: Long-Running Agents (Adi Osmani)</li>
<li>(25:52) - Technique Corner: Adversarial Review Prompts (Jesse Vincent)</li>
<li>(30:59) - Technique Corner: 10 Lessons for Agentic Coding (Drew Brunig)</li>
<li>(42:31) - Post-Processing: The Eternal Promise — A History of Attempts to Eliminate Programmers</li>
<li>(01:02:10) - Post-Processing: People Do Not Yearn for Automation</li>
<li>(01:09:08) - Post-Processing: Warm Models &amp; The Sycophancy Trap</li>
<li>(01:13:28) - Dan's Rant: Home Automation &amp; The AI Addiction Loop</li>
<li>(01:20:09) - Two Minutes to Midnight: OpenAI's 30× Ad-Tier, David Silver's $1.1B, Scout AI's Drones</li>
<li>(01:25:55) - Outro</li>
</ul><p><br></p><p><br></p><p><strong>Resources mentioned</strong></p><p><br></p><p><strong>News Threadmill — GPT-5.5 Cyber &amp; The Goblin Problem</strong></p><p>• TechCrunch — After dissing Anthropic for limiting Mythos, OpenAI restricts access to cyber too: https://techcrunch.com/2026/04/30/after-dissing-anthropic-for-limiting-mythos-openai-restricts-access-to-cyber-too/</p><p>• Ars Technica — Amid mythos-hyped cybersecurity prowess, researchers find GPT-5.5 is just as good: https://arstechnica.com/ai/2026/05/amid-mythos-hyped-cybersecurity-prowess-researchers-find-gpt-5-5-is-just-as-good/</p><p>• Ars Technica — OpenAI Codex system prompt includes explicit directive to never talk about goblins: https://arstechnica.com/ai/2026/04/openai-codex-system-prompt-includes-explicit-directive-to-never-talk-about-goblins/</p><p>• OpenAI — Where the Goblins Came From: https://openai.com/index/where-the-goblins-came-from/</p>...]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Why does the leaked Codex CLI system prompt explicitly tell GPT-5.5 to never mention goblins, gremlins, raccoons, trolls, ogres, or pigeons? Why is OpenAI now gating its cyber model the same way it mocked Anthropic for gating Mythos last month? And what does it mean that Dan tried to write a personal project without Claude — and physically couldn't?</p><p><br></p><p>Co-hosts Shimin Zhang, Dan Lasky, and Rahul Yadav cover these and more on ADI Pod #24. This week: GPT-5.5 Cyber's gated release, OpenAI's "Where the Goblins Came From" RLHF post-mortem, Adi Osmani's five patterns for long-running agents, Jesse Vincent's adversarial review prompt, Drew Brunig's 10 lessons for agentic coding, Ivan Turkovic's history of failed attempts to eliminate programmers, Nilay Patel's "software brain" thesis, the Nature paper showing warm AI models lose 10–30 percentage points of accuracy, and a $1.1B raise for an AI lab that wants to train without human data.</p><p><br></p><p><strong>In this episode</strong></p><p><br></p><p>▸ <strong>GPT-5.5 Cyber gating</strong> — Sam Altman called Mythos's gated release "fear-based marketing" two months ago. Now OpenAI is doing the exact same thing with the GPT-5.5 cyber variant. Multi-tier model access (enterprise, government, research preview, cyber) is becoming the default — and Shimin worries the White House is about to add another gate.</p><p><br></p><p>▸ <strong>The Goblin Problem</strong> — OpenAI's Codex CLI prompt was open-sourced and turned out to include "never talk about goblins, gremlins, raccoons, trolls, ogres, pigeons." OpenAI's "Where the Goblins Came From" post-mortem reveals a textbook RLHF failure: a "nerdy persona" reward signal trained the model to mention goblins in 66.7% of nerdy responses, and the tic propagated through supervised fine-tuning to non-nerdy responses too.</p><p><br></p><p>▸ <strong>Long-running agents (Adi Osmani / Elevate)</strong> — Five patterns for agents that run for hours or days: checkpoints over zero-or-100 outputs, governing memory like microservices, ambient processing without forced human-in-the-loop, fleet orchestration, and budget circuit-breakers. Bonus: the running gag where Rahul realizes the post is essentially an ad for Google Enterprise Agent Platform.</p><p><br></p><p>▸ <strong>Adversarial review prompts (Jesse Vincent / superpowers)</strong> — A four-step technique for getting better code review out of agents: invoke "fresh eyes," dispatch competing subagents, promise a reward (a cookie), and threaten disappointment if they don't find N issues.</p><p><br></p><p>▸ <strong>10 Lessons for Agentic Coding (Drew Brunig)</strong> — Implement to learn, rebuild often, invest in end-to-end tests, document intent, keep specs in sync, find the hard stuff, automate the easy stuff, develop taste, agents amplify experience, and the kicker: agent code is "free as in puppies" — the puppy is free, but you have to feed it and walk it.</p><p><br></p><p>▸ <strong>The Eternal Promise (Ivan Turkovic)</strong> — A history of attempts to eliminate programmers from COBOL through 4GLs, CASE tools, the Japanese 5th Generation project, no-code/low-code, and now LLMs. Each abstraction layer expanded software jobs rather than replacing them. Shimin's reframe: "Software is calcified business process. Someone has to do the calcifying."</p><p><br></p><p>▸ <strong>People Do Not Yearn for Automation (Nilay Patel / The Verge)</strong> — Why Gen Z hopefulness about AI dropped to 18% (anger up to 31%), why America is uniquely AI-pessimistic, and what Nilay calls "software brain" — the Silicon Valley assumption that human life can be reduced to data and algorithms. Plus Anuradha Pandey's reframe: stop calling them social media, call them ad platforms.</p><p><br></p><p>▸ <strong>Warm models lose accuracy</strong> — A Nature paper finds AI models trained for warmth lose 10–30 percentage points of accuracy. A companion study shows humans trust warm models <em>more</em> even when they're wrong. Frontier labs now have an explicit incentive to train the warmest model, not the most accurate one. Plus: Richard Dawkins talks to "Claudia" for three days and concludes AI must be conscious.</p><p><br></p><p>▸ <strong>Dan's Rant — The AI Addiction Loop</strong> — Dan tries to build a Home Assistant TypeScript automation without Claude. Can't. "It felt like they had fundamentally broken my arm in a way that I can't do this task as quickly as I wanted to. That scares me a lot." Shimin: "We're running into the social media addiction loop in three months instead of a decade."</p><p><br></p><p>▸ <strong>Two Minutes to Midnight</strong> — OpenAI projects ChatGPT Plus dropping from 44M to 9M subscribers in 2026 while scaling the ad-supported tier from 3M to 112M (30×). David Silver raises $1.1B for Ineffable Intelligence — a no-human-data approach inspired by AlphaGo. Scout AI raises $100M for autonomous military vision-language-action models. Bubble Clock held at 4:00 minutes.</p><p><br></p><p><strong>Key takeaways</strong></p><p><br></p><p>— Reward hacking can propagate latent persona quirks through fine-tuning in ways the lab itself only catches when users surface them.</p><p>— Memory drift, not raw context size, is the real ceiling for long-running agents. Govern memory like you govern microservices.</p><p>— Code is free as in puppies, not free as in beer. The cost shifts to maintenance, security, and the new burden of maintaining your own automations.</p><p>— Warm AI is an alignment trap: incentivized for trust over accuracy, weaponizable in authoritarian hands.</p><p>— "You can outsource your thinking, but you can't outsource your understanding." — Karpathy, via Rahul.</p><p>— AI addiction hits in three months. Social media took a decade. We are not ready for the time scale.</p><p><br></p><p><strong>Chapters</strong></p><p></p><ul><li>(00:00) - Cold Open &amp; Welcome</li>
<li>(02:50) - News Threadmill: GPT-5.5 Cyber Gets Mythos-Style Gating</li>
<li>(08:52) - News Threadmill: The Goblin Problem &amp; RLHF Post-Mortem</li>
<li>(13:52) - Tool Shed: Long-Running Agents (Adi Osmani)</li>
<li>(25:52) - Technique Corner: Adversarial Review Prompts (Jesse Vincent)</li>
<li>(30:59) - Technique Corner: 10 Lessons for Agentic Coding (Drew Brunig)</li>
<li>(42:31) - Post-Processing: The Eternal Promise — A History of Attempts to Eliminate Programmers</li>
<li>(01:02:10) - Post-Processing: People Do Not Yearn for Automation</li>
<li>(01:09:08) - Post-Processing: Warm Models &amp; The Sycophancy Trap</li>
<li>(01:13:28) - Dan's Rant: Home Automation &amp; The AI Addiction Loop</li>
<li>(01:20:09) - Two Minutes to Midnight: OpenAI's 30× Ad-Tier, David Silver's $1.1B, Scout AI's Drones</li>
<li>(01:25:55) - Outro</li>
</ul><p><br></p><p><br></p><p><strong>Resources mentioned</strong></p><p><br></p><p><strong>News Threadmill — GPT-5.5 Cyber &amp; The Goblin Problem</strong></p><p>• TechCrunch — After dissing Anthropic for limiting Mythos, OpenAI restricts access to cyber too: https://techcrunch.com/2026/04/30/after-dissing-anthropic-for-limiting-mythos-openai-restricts-access-to-cyber-too/</p><p>• Ars Technica — Amid mythos-hyped cybersecurity prowess, researchers find GPT-5.5 is just as good: https://arstechnica.com/ai/2026/05/amid-mythos-hyped-cybersecurity-prowess-researchers-find-gpt-5-5-is-just-as-good/</p><p>• Ars Technica — OpenAI Codex system prompt includes explicit directive to never talk about goblins: https://arstechnica.com/ai/2026/04/openai-codex-system-prompt-includes-explicit-directive-to-never-talk-about-goblins/</p><p>• OpenAI — Where the Goblins Came From: https://openai.com/index/where-the-goblins-came-from/</p>...]]>
      </content:encoded>
      <pubDate>Fri, 08 May 2026 05:00:00 -0700</pubDate>
      <author>Shimin Zhang, Dan Lasky, &amp; Rahul Yadav</author>
      <enclosure url="https://op3.dev/e/prfx.byspotify.com/e/media.transistor.fm/5cc608cc/5f9cffc8.mp3" length="41605151" type="audio/mpeg"/>
      <itunes:author>Shimin Zhang, Dan Lasky, &amp; Rahul Yadav</itunes:author>
      <itunes:duration>5183</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Why does the leaked Codex CLI system prompt explicitly tell GPT-5.5 to never mention goblins, gremlins, raccoons, trolls, ogres, or pigeons? Why is OpenAI now gating its cyber model the same way it mocked Anthropic for gating Mythos last month? And what does it mean that Dan tried to write a personal project without Claude — and physically couldn't?</p><p><br></p><p>Co-hosts Shimin Zhang, Dan Lasky, and Rahul Yadav cover these and more on ADI Pod #24. This week: GPT-5.5 Cyber's gated release, OpenAI's "Where the Goblins Came From" RLHF post-mortem, Adi Osmani's five patterns for long-running agents, Jesse Vincent's adversarial review prompt, Drew Brunig's 10 lessons for agentic coding, Ivan Turkovic's history of failed attempts to eliminate programmers, Nilay Patel's "software brain" thesis, the Nature paper showing warm AI models lose 10–30 percentage points of accuracy, and a $1.1B raise for an AI lab that wants to train without human data.</p><p><br></p><p><strong>In this episode</strong></p><p><br></p><p>▸ <strong>GPT-5.5 Cyber gating</strong> — Sam Altman called Mythos's gated release "fear-based marketing" two months ago. Now OpenAI is doing the exact same thing with the GPT-5.5 cyber variant. Multi-tier model access (enterprise, government, research preview, cyber) is becoming the default — and Shimin worries the White House is about to add another gate.</p><p><br></p><p>▸ <strong>The Goblin Problem</strong> — OpenAI's Codex CLI prompt was open-sourced and turned out to include "never talk about goblins, gremlins, raccoons, trolls, ogres, pigeons." OpenAI's "Where the Goblins Came From" post-mortem reveals a textbook RLHF failure: a "nerdy persona" reward signal trained the model to mention goblins in 66.7% of nerdy responses, and the tic propagated through supervised fine-tuning to non-nerdy responses too.</p><p><br></p><p>▸ <strong>Long-running agents (Adi Osmani / Elevate)</strong> — Five patterns for agents that run for hours or days: checkpoints over zero-or-100 outputs, governing memory like microservices, ambient processing without forced human-in-the-loop, fleet orchestration, and budget circuit-breakers. Bonus: the running gag where Rahul realizes the post is essentially an ad for Google Enterprise Agent Platform.</p><p><br></p><p>▸ <strong>Adversarial review prompts (Jesse Vincent / superpowers)</strong> — A four-step technique for getting better code review out of agents: invoke "fresh eyes," dispatch competing subagents, promise a reward (a cookie), and threaten disappointment if they don't find N issues.</p><p><br></p><p>▸ <strong>10 Lessons for Agentic Coding (Drew Brunig)</strong> — Implement to learn, rebuild often, invest in end-to-end tests, document intent, keep specs in sync, find the hard stuff, automate the easy stuff, develop taste, agents amplify experience, and the kicker: agent code is "free as in puppies" — the puppy is free, but you have to feed it and walk it.</p><p><br></p><p>▸ <strong>The Eternal Promise (Ivan Turkovic)</strong> — A history of attempts to eliminate programmers from COBOL through 4GLs, CASE tools, the Japanese 5th Generation project, no-code/low-code, and now LLMs. Each abstraction layer expanded software jobs rather than replacing them. Shimin's reframe: "Software is calcified business process. Someone has to do the calcifying."</p><p><br></p><p>▸ <strong>People Do Not Yearn for Automation (Nilay Patel / The Verge)</strong> — Why Gen Z hopefulness about AI dropped to 18% (anger up to 31%), why America is uniquely AI-pessimistic, and what Nilay calls "software brain" — the Silicon Valley assumption that human life can be reduced to data and algorithms. Plus Anuradha Pandey's reframe: stop calling them social media, call them ad platforms.</p><p><br></p><p>▸ <strong>Warm models lose accuracy</strong> — A Nature paper finds AI models trained for warmth lose 10–30 percentage points of accuracy. A companion study shows humans trust warm models <em>more</em> even when they're wrong. Frontier labs now have an explicit incentive to train the warmest model, not the most accurate one. Plus: Richard Dawkins talks to "Claudia" for three days and concludes AI must be conscious.</p><p><br></p><p>▸ <strong>Dan's Rant — The AI Addiction Loop</strong> — Dan tries to build a Home Assistant TypeScript automation without Claude. Can't. "It felt like they had fundamentally broken my arm in a way that I can't do this task as quickly as I wanted to. That scares me a lot." Shimin: "We're running into the social media addiction loop in three months instead of a decade."</p><p><br></p><p>▸ <strong>Two Minutes to Midnight</strong> — OpenAI projects ChatGPT Plus dropping from 44M to 9M subscribers in 2026 while scaling the ad-supported tier from 3M to 112M (30×). David Silver raises $1.1B for Ineffable Intelligence — a no-human-data approach inspired by AlphaGo. Scout AI raises $100M for autonomous military vision-language-action models. Bubble Clock held at 4:00 minutes.</p><p><br></p><p><strong>Key takeaways</strong></p><p><br></p><p>— Reward hacking can propagate latent persona quirks through fine-tuning in ways the lab itself only catches when users surface them.</p><p>— Memory drift, not raw context size, is the real ceiling for long-running agents. Govern memory like you govern microservices.</p><p>— Code is free as in puppies, not free as in beer. The cost shifts to maintenance, security, and the new burden of maintaining your own automations.</p><p>— Warm AI is an alignment trap: incentivized for trust over accuracy, weaponizable in authoritarian hands.</p><p>— "You can outsource your thinking, but you can't outsource your understanding." — Karpathy, via Rahul.</p><p>— AI addiction hits in three months. Social media took a decade. We are not ready for the time scale.</p><p><br></p><p><strong>Chapters</strong></p><p></p><ul><li>(00:00) - Cold Open &amp; Welcome</li>
<li>(02:50) - News Threadmill: GPT-5.5 Cyber Gets Mythos-Style Gating</li>
<li>(08:52) - News Threadmill: The Goblin Problem &amp; RLHF Post-Mortem</li>
<li>(13:52) - Tool Shed: Long-Running Agents (Adi Osmani)</li>
<li>(25:52) - Technique Corner: Adversarial Review Prompts (Jesse Vincent)</li>
<li>(30:59) - Technique Corner: 10 Lessons for Agentic Coding (Drew Brunig)</li>
<li>(42:31) - Post-Processing: The Eternal Promise — A History of Attempts to Eliminate Programmers</li>
<li>(01:02:10) - Post-Processing: People Do Not Yearn for Automation</li>
<li>(01:09:08) - Post-Processing: Warm Models &amp; The Sycophancy Trap</li>
<li>(01:13:28) - Dan's Rant: Home Automation &amp; The AI Addiction Loop</li>
<li>(01:20:09) - Two Minutes to Midnight: OpenAI's 30× Ad-Tier, David Silver's $1.1B, Scout AI's Drones</li>
<li>(01:25:55) - Outro</li>
</ul><p><br></p><p><br></p><p><strong>Resources mentioned</strong></p><p><br></p><p><strong>News Threadmill — GPT-5.5 Cyber &amp; The Goblin Problem</strong></p><p>• TechCrunch — After dissing Anthropic for limiting Mythos, OpenAI restricts access to cyber too: https://techcrunch.com/2026/04/30/after-dissing-anthropic-for-limiting-mythos-openai-restricts-access-to-cyber-too/</p><p>• Ars Technica — Amid mythos-hyped cybersecurity prowess, researchers find GPT-5.5 is just as good: https://arstechnica.com/ai/2026/05/amid-mythos-hyped-cybersecurity-prowess-researchers-find-gpt-5-5-is-just-as-good/</p><p>• Ars Technica — OpenAI Codex system prompt includes explicit directive to never talk about goblins: https://arstechnica.com/ai/2026/04/openai-codex-system-prompt-includes-explicit-directive-to-never-talk-about-goblins/</p><p>• OpenAI — Where the Goblins Came From: https://openai.com/index/where-the-goblins-came-from/</p>...]]>
      </itunes:summary>
      <itunes:keywords>GPT-5.5 cyber, Mythos, model gating, Codex CLI, goblin problem, reward hacking, RLHF, supervised fine-tuning, long-running agents, Elevate, Google Enterprise Agent Platform, memory governance, ambient processing, checkpoints, fleet orchestration, adversarial review, fresh eyes, subagents, superpowers, agentic coding, code is cheap, free as puppies, spec-driven development, taste, Eternal Promise, no-code, programmer elimination, software brain, Gen Z AI anger, warm models, sycophancy, Richard Dawkins, Claudia, AI addiction, two minutes to midnight, AI bubble, ChatGPT Plus, autonomous drones</itunes:keywords>
      <itunes:explicit>Yes</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/5cc608cc/transcript.txt" type="text/plain"/>
      <podcast:chapters url="https://share.transistor.fm/s/5cc608cc/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Why Models Over-Edit Your Code, Meta Keystroke Surveillance, Interviewing Engineers in the AI Age</title>
      <itunes:episode>23</itunes:episode>
      <podcast:episode>23</podcast:episode>
      <itunes:title>Why Models Over-Edit Your Code, Meta Keystroke Surveillance, Interviewing Engineers in the AI Age</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">b88123c4-5ffb-45c4-81ca-89f6eea2a43e</guid>
      <link>https://www.adipod.ai/episodes/23-why-models-over-edit-your-code-meta-keystroke-surveillance-interviewing-engineers-in-the-ai-age/</link>
      <description>
        <![CDATA[<p>Is GPT-5.5 finally a 4.7-tier model? Did DeepSeek V4 just close the gap with Anthropic? And what does it mean that a senior ML engineer says he can't out-code Claude anymore? Co-hosts Shimin Zhang, Dan Lasky, and Rahul Yadav are joined by special guest Nathan Lubchenco — ML engineer and Substack author of <em>The future was yesterday</em> (https://nathanlubchenco.substack.com/) — on ADI Pod #23 (April 28, 2026).</p><p><br></p><p>This episode covers OpenAI's GPT-5.5 release, DeepSeek V4 (1.6T base / 49B active params with 1M context), Meta's new Model Capability Initiative tracking US employee keystrokes and mouse movements, a Levenshtein-distance study on coding-model over-editing, the 2026 Stanford AI Index report, and a deep-dive interview on how to hire software engineers when the agents are already better at coding than the candidates.</p><p><strong>Key takeaways</strong></p><p><br></p><p>— Models are now consistently better at coding than even senior ML engineers, by the engineers' own admission. Late 2026 may be when they cross the median software engineer.</p><p>— Coding-model over-editing is measurable (Levenshtein distance on boolean-flip tasks) and correctable by instruction — explicit "minimum-edit" prompts close most of the gap.</p><p>— Unusually for a major technological wave, the US is a slow adopter: workplace AI usage is highest in emerging economies, not the developed world.</p><p>— "The task is not the job" — humans remain indispensable on the bundling dimensions: catching what customers don't say, and avoiding interactions that end up on social media.</p><p>— Software engineering interviews should include the candidate's personal harness, with company-provided API keys for equity. LeetCode optimizes for the wrong signal in 2026.</p><p>— DeepSeek V4 closing the gap with Mythos in 3–6 months is what makes the bubble too geopolitically important to fail.</p><p><br></p><p><strong>Chapters</strong></p><p><br></p><ul><li>(00:00) - Cold Open &amp; Welcome</li>
<li>(01:31) - News Threadmill: GPT-5.5, DeepSeek V4, Meta Watches Every Keystroke</li>
<li>(12:28) - Post-Processing: Coding Models Are Doing Too Much</li>
<li>(18:59) - Post-Processing: The Task Is Not the Job (Luis Garicano)</li>
<li>(32:20) - Post-Processing: The 2026 Stanford AI Index Report</li>
<li>(38:11) - Deep Dive: Interviewing Engineers in the AI Age (with Nathan Lubchenco)</li>
<li>(45:05) - Deep Dive: Reforming Software Hiring — Take-Homes, Personal Harness, Equity</li>
<li>(50:15) - Deep Dive: When Models Cross the Median Engineer (Late-2026 Prediction)</li>
<li>(59:29) - Deep Dive: Why Code Review Is the Current Bottleneck</li>
<li>(01:00:21) - Deep Dive: Should PRs Show the Prompt History?</li>
<li>(01:02:27) - Dan's Rant: Anthropic Tested Removing Claude Code from the Pro Plan</li>
<li>(01:05:44) - Rahul's Rampage: The Infinity Machine — Demis Hassabis &amp; Corporate Gravity</li>
<li>(01:14:32) - Two Minutes to Midnight: Bubble Clock Moves Back to 4:00</li>
<li>(01:26:30) - Outro</li>
</ul><p><br></p><p><strong>Resources mentioned</strong></p><p><br></p><p><strong>Models &amp; news</strong></p><p>• OpenAI — Introducing GPT-5.5: https://openai.com/index/introducing-gpt-5-5/</p><p>• Engadget — DeepSeek promises its new AI model has world-class reasoning: https://www.engadget.com/ai/deepseek-promises-its-new-ai-model-has-world-class-reasoning-115733512.html</p><p>• Reuters — Meta to start capturing employee mouse movements, keystrokes for AI training data: https://www.reuters.com/sustainability/boards-policy-regulation/meta-start-capturing-employee-mouse-movements-keystrokes-ai-training-data-2026-04-21/</p><p><br></p><p><strong>Post-processing articles</strong></p><p>• "Coding Models Are Doing Too Much" — Levenshtein-distance over-editing study (nrehiew): https://nrehiew.github.io/blog/minimal_editing/</p><p>• Luis Garicano (Silicon Continent) — Why Desk Jobs Survive ("The task is not the job"): https://www.siliconcontinent.com/p/why-desk-jobs-survive-and-amodei</p><p>• 2026 AI Index Report — Stanford Institute for Human-Centered AI: https://hai.stanford.edu/ai-index/2026-ai-index-report</p><p><br></p><p><strong>Deep dive</strong></p><p>• Nathan Lubchenco — Interviewing Software Engineers in the Age of AI: https://nathanlubchenco.substack.com/p/interviewing-software-engineers-in</p><p>• Nathan Lubchenco — <em>The future was yesterday</em> Substack home: https://nathanlubchenco.substack.com/</p><p><br></p><p><strong>Dan's rant</strong></p><p>• Ars Technica — Anthropic tested removing Claude Code from the Pro plan: https://arstechnica.com/ai/2026/04/anthropic-tested-removing-claude-code-from-the-pro-plan/</p><p><br></p><p><strong>Rahul's rampage</strong></p><p>• Sebastian Mallaby — <em>The Infinity Machine</em> (book on Demis Hassabis and DeepMind)</p><p>• Philipp Dubach — Do Not Disturb My Circles (Archimedes essay): https://philippdubach.com/posts/do-not-disturb-my-circles/</p><p><br></p><p><strong>Bubble watch</strong></p><p>• TechCrunch — Two college kids raise $5.1M pre-seed to build an AI social network in iMessage: https://techcrunch.com/2026/04/24/two-college-kids-raise-a-5-1-million-pre-seed-to-build-an-ai-social-network-in-imessage/</p><p>• Toby Ord — Hourly Costs for AI Agents: https://www.tobyord.com/writing/hourly-costs-for-ai-agents</p><p>• CNBC — OpenAI reportedly missed revenue targets, shares of Oracle and chip stocks falling: https://www.cnbc.com/2026/04/28/openai-reportedly-missed-revenue-targets-shares-of-oracle-and-these-chip-stocks-are-falling.html</p><p><br></p><p><strong>About ADI Pod</strong><br>ADI Pod (Artificial Developer Intelligence) is a weekly podcast about AI and software development for working developers. Co-hosts Shimin Zhang, Dan Lasky, and Rahul Yadav go through hundreds of links and dozens of newsletters every week so you don't have to.</p><p><br></p><p>This week's special guest: <strong>Nathan Lubchenco</strong> — ML engineer and author of <em>The future was yesterday</em> on Substack, where he writes about AI and software engineering.</p><p><br></p><p>• Website: https://www.adipod.ai</p><p>• Email: humans@adipod.ai</p><p><br></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Is GPT-5.5 finally a 4.7-tier model? Did DeepSeek V4 just close the gap with Anthropic? And what does it mean that a senior ML engineer says he can't out-code Claude anymore? Co-hosts Shimin Zhang, Dan Lasky, and Rahul Yadav are joined by special guest Nathan Lubchenco — ML engineer and Substack author of <em>The future was yesterday</em> (https://nathanlubchenco.substack.com/) — on ADI Pod #23 (April 28, 2026).</p><p><br></p><p>This episode covers OpenAI's GPT-5.5 release, DeepSeek V4 (1.6T base / 49B active params with 1M context), Meta's new Model Capability Initiative tracking US employee keystrokes and mouse movements, a Levenshtein-distance study on coding-model over-editing, the 2026 Stanford AI Index report, and a deep-dive interview on how to hire software engineers when the agents are already better at coding than the candidates.</p><p><strong>Key takeaways</strong></p><p><br></p><p>— Models are now consistently better at coding than even senior ML engineers, by the engineers' own admission. Late 2026 may be when they cross the median software engineer.</p><p>— Coding-model over-editing is measurable (Levenshtein distance on boolean-flip tasks) and correctable by instruction — explicit "minimum-edit" prompts close most of the gap.</p><p>— Unusually for a major technological wave, the US is a slow adopter: workplace AI usage is highest in emerging economies, not the developed world.</p><p>— "The task is not the job" — humans remain indispensable on the bundling dimensions: catching what customers don't say, and avoiding interactions that end up on social media.</p><p>— Software engineering interviews should include the candidate's personal harness, with company-provided API keys for equity. LeetCode optimizes for the wrong signal in 2026.</p><p>— DeepSeek V4 closing the gap with Mythos in 3–6 months is what makes the bubble too geopolitically important to fail.</p><p><br></p><p><strong>Chapters</strong></p><p><br></p><ul><li>(00:00) - Cold Open &amp; Welcome</li>
<li>(01:31) - News Threadmill: GPT-5.5, DeepSeek V4, Meta Watches Every Keystroke</li>
<li>(12:28) - Post-Processing: Coding Models Are Doing Too Much</li>
<li>(18:59) - Post-Processing: The Task Is Not the Job (Luis Garicano)</li>
<li>(32:20) - Post-Processing: The 2026 Stanford AI Index Report</li>
<li>(38:11) - Deep Dive: Interviewing Engineers in the AI Age (with Nathan Lubchenco)</li>
<li>(45:05) - Deep Dive: Reforming Software Hiring — Take-Homes, Personal Harness, Equity</li>
<li>(50:15) - Deep Dive: When Models Cross the Median Engineer (Late-2026 Prediction)</li>
<li>(59:29) - Deep Dive: Why Code Review Is the Current Bottleneck</li>
<li>(01:00:21) - Deep Dive: Should PRs Show the Prompt History?</li>
<li>(01:02:27) - Dan's Rant: Anthropic Tested Removing Claude Code from the Pro Plan</li>
<li>(01:05:44) - Rahul's Rampage: The Infinity Machine — Demis Hassabis &amp; Corporate Gravity</li>
<li>(01:14:32) - Two Minutes to Midnight: Bubble Clock Moves Back to 4:00</li>
<li>(01:26:30) - Outro</li>
</ul><p><br></p><p><strong>Resources mentioned</strong></p><p><br></p><p><strong>Models &amp; news</strong></p><p>• OpenAI — Introducing GPT-5.5: https://openai.com/index/introducing-gpt-5-5/</p><p>• Engadget — DeepSeek promises its new AI model has world-class reasoning: https://www.engadget.com/ai/deepseek-promises-its-new-ai-model-has-world-class-reasoning-115733512.html</p><p>• Reuters — Meta to start capturing employee mouse movements, keystrokes for AI training data: https://www.reuters.com/sustainability/boards-policy-regulation/meta-start-capturing-employee-mouse-movements-keystrokes-ai-training-data-2026-04-21/</p><p><br></p><p><strong>Post-processing articles</strong></p><p>• "Coding Models Are Doing Too Much" — Levenshtein-distance over-editing study (nrehiew): https://nrehiew.github.io/blog/minimal_editing/</p><p>• Luis Garicano (Silicon Continent) — Why Desk Jobs Survive ("The task is not the job"): https://www.siliconcontinent.com/p/why-desk-jobs-survive-and-amodei</p><p>• 2026 AI Index Report — Stanford Institute for Human-Centered AI: https://hai.stanford.edu/ai-index/2026-ai-index-report</p><p><br></p><p><strong>Deep dive</strong></p><p>• Nathan Lubchenco — Interviewing Software Engineers in the Age of AI: https://nathanlubchenco.substack.com/p/interviewing-software-engineers-in</p><p>• Nathan Lubchenco — <em>The future was yesterday</em> Substack home: https://nathanlubchenco.substack.com/</p><p><br></p><p><strong>Dan's rant</strong></p><p>• Ars Technica — Anthropic tested removing Claude Code from the Pro plan: https://arstechnica.com/ai/2026/04/anthropic-tested-removing-claude-code-from-the-pro-plan/</p><p><br></p><p><strong>Rahul's rampage</strong></p><p>• Sebastian Mallaby — <em>The Infinity Machine</em> (book on Demis Hassabis and DeepMind)</p><p>• Philipp Dubach — Do Not Disturb My Circles (Archimedes essay): https://philippdubach.com/posts/do-not-disturb-my-circles/</p><p><br></p><p><strong>Bubble watch</strong></p><p>• TechCrunch — Two college kids raise $5.1M pre-seed to build an AI social network in iMessage: https://techcrunch.com/2026/04/24/two-college-kids-raise-a-5-1-million-pre-seed-to-build-an-ai-social-network-in-imessage/</p><p>• Toby Ord — Hourly Costs for AI Agents: https://www.tobyord.com/writing/hourly-costs-for-ai-agents</p><p>• CNBC — OpenAI reportedly missed revenue targets, shares of Oracle and chip stocks falling: https://www.cnbc.com/2026/04/28/openai-reportedly-missed-revenue-targets-shares-of-oracle-and-these-chip-stocks-are-falling.html</p><p><br></p><p><strong>About ADI Pod</strong><br>ADI Pod (Artificial Developer Intelligence) is a weekly podcast about AI and software development for working developers. Co-hosts Shimin Zhang, Dan Lasky, and Rahul Yadav go through hundreds of links and dozens of newsletters every week so you don't have to.</p><p><br></p><p>This week's special guest: <strong>Nathan Lubchenco</strong> — ML engineer and author of <em>The future was yesterday</em> on Substack, where he writes about AI and software engineering.</p><p><br></p><p>• Website: https://www.adipod.ai</p><p>• Email: humans@adipod.ai</p><p><br></p>]]>
      </content:encoded>
      <pubDate>Fri, 01 May 2026 05:00:00 -0700</pubDate>
      <author>Shimin Zhang, Dan Lasky, &amp; Rahul Yadav</author>
      <enclosure url="https://op3.dev/e/prfx.byspotify.com/e/media.transistor.fm/687e6e6f/46951f8f.mp3" length="43442697" type="audio/mpeg"/>
      <itunes:author>Shimin Zhang, Dan Lasky, &amp; Rahul Yadav</itunes:author>
      <itunes:duration>5412</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Is GPT-5.5 finally a 4.7-tier model? Did DeepSeek V4 just close the gap with Anthropic? And what does it mean that a senior ML engineer says he can't out-code Claude anymore? Co-hosts Shimin Zhang, Dan Lasky, and Rahul Yadav are joined by special guest Nathan Lubchenco — ML engineer and Substack author of <em>The future was yesterday</em> (https://nathanlubchenco.substack.com/) — on ADI Pod #23 (April 28, 2026).</p><p><br></p><p>This episode covers OpenAI's GPT-5.5 release, DeepSeek V4 (1.6T base / 49B active params with 1M context), Meta's new Model Capability Initiative tracking US employee keystrokes and mouse movements, a Levenshtein-distance study on coding-model over-editing, the 2026 Stanford AI Index report, and a deep-dive interview on how to hire software engineers when the agents are already better at coding than the candidates.</p><p><strong>Key takeaways</strong></p><p><br></p><p>— Models are now consistently better at coding than even senior ML engineers, by the engineers' own admission. Late 2026 may be when they cross the median software engineer.</p><p>— Coding-model over-editing is measurable (Levenshtein distance on boolean-flip tasks) and correctable by instruction — explicit "minimum-edit" prompts close most of the gap.</p><p>— Unusually for a major technological wave, the US is a slow adopter: workplace AI usage is highest in emerging economies, not the developed world.</p><p>— "The task is not the job" — humans remain indispensable on the bundling dimensions: catching what customers don't say, and avoiding interactions that end up on social media.</p><p>— Software engineering interviews should include the candidate's personal harness, with company-provided API keys for equity. LeetCode optimizes for the wrong signal in 2026.</p><p>— DeepSeek V4 closing the gap with Mythos in 3–6 months is what makes the bubble too geopolitically important to fail.</p><p><br></p><p><strong>Chapters</strong></p><p><br></p><ul><li>(00:00) - Cold Open &amp; Welcome</li>
<li>(01:31) - News Threadmill: GPT-5.5, DeepSeek V4, Meta Watches Every Keystroke</li>
<li>(12:28) - Post-Processing: Coding Models Are Doing Too Much</li>
<li>(18:59) - Post-Processing: The Task Is Not the Job (Luis Garicano)</li>
<li>(32:20) - Post-Processing: The 2026 Stanford AI Index Report</li>
<li>(38:11) - Deep Dive: Interviewing Engineers in the AI Age (with Nathan Lubchenco)</li>
<li>(45:05) - Deep Dive: Reforming Software Hiring — Take-Homes, Personal Harness, Equity</li>
<li>(50:15) - Deep Dive: When Models Cross the Median Engineer (Late-2026 Prediction)</li>
<li>(59:29) - Deep Dive: Why Code Review Is the Current Bottleneck</li>
<li>(01:00:21) - Deep Dive: Should PRs Show the Prompt History?</li>
<li>(01:02:27) - Dan's Rant: Anthropic Tested Removing Claude Code from the Pro Plan</li>
<li>(01:05:44) - Rahul's Rampage: The Infinity Machine — Demis Hassabis &amp; Corporate Gravity</li>
<li>(01:14:32) - Two Minutes to Midnight: Bubble Clock Moves Back to 4:00</li>
<li>(01:26:30) - Outro</li>
</ul><p><br></p><p><strong>Resources mentioned</strong></p><p><br></p><p><strong>Models &amp; news</strong></p><p>• OpenAI — Introducing GPT-5.5: https://openai.com/index/introducing-gpt-5-5/</p><p>• Engadget — DeepSeek promises its new AI model has world-class reasoning: https://www.engadget.com/ai/deepseek-promises-its-new-ai-model-has-world-class-reasoning-115733512.html</p><p>• Reuters — Meta to start capturing employee mouse movements, keystrokes for AI training data: https://www.reuters.com/sustainability/boards-policy-regulation/meta-start-capturing-employee-mouse-movements-keystrokes-ai-training-data-2026-04-21/</p><p><br></p><p><strong>Post-processing articles</strong></p><p>• "Coding Models Are Doing Too Much" — Levenshtein-distance over-editing study (nrehiew): https://nrehiew.github.io/blog/minimal_editing/</p><p>• Luis Garicano (Silicon Continent) — Why Desk Jobs Survive ("The task is not the job"): https://www.siliconcontinent.com/p/why-desk-jobs-survive-and-amodei</p><p>• 2026 AI Index Report — Stanford Institute for Human-Centered AI: https://hai.stanford.edu/ai-index/2026-ai-index-report</p><p><br></p><p><strong>Deep dive</strong></p><p>• Nathan Lubchenco — Interviewing Software Engineers in the Age of AI: https://nathanlubchenco.substack.com/p/interviewing-software-engineers-in</p><p>• Nathan Lubchenco — <em>The future was yesterday</em> Substack home: https://nathanlubchenco.substack.com/</p><p><br></p><p><strong>Dan's rant</strong></p><p>• Ars Technica — Anthropic tested removing Claude Code from the Pro plan: https://arstechnica.com/ai/2026/04/anthropic-tested-removing-claude-code-from-the-pro-plan/</p><p><br></p><p><strong>Rahul's rampage</strong></p><p>• Sebastian Mallaby — <em>The Infinity Machine</em> (book on Demis Hassabis and DeepMind)</p><p>• Philipp Dubach — Do Not Disturb My Circles (Archimedes essay): https://philippdubach.com/posts/do-not-disturb-my-circles/</p><p><br></p><p><strong>Bubble watch</strong></p><p>• TechCrunch — Two college kids raise $5.1M pre-seed to build an AI social network in iMessage: https://techcrunch.com/2026/04/24/two-college-kids-raise-a-5-1-million-pre-seed-to-build-an-ai-social-network-in-imessage/</p><p>• Toby Ord — Hourly Costs for AI Agents: https://www.tobyord.com/writing/hourly-costs-for-ai-agents</p><p>• CNBC — OpenAI reportedly missed revenue targets, shares of Oracle and chip stocks falling: https://www.cnbc.com/2026/04/28/openai-reportedly-missed-revenue-targets-shares-of-oracle-and-these-chip-stocks-are-falling.html</p><p><br></p><p><strong>About ADI Pod</strong><br>ADI Pod (Artificial Developer Intelligence) is a weekly podcast about AI and software development for working developers. Co-hosts Shimin Zhang, Dan Lasky, and Rahul Yadav go through hundreds of links and dozens of newsletters every week so you don't have to.</p><p><br></p><p>This week's special guest: <strong>Nathan Lubchenco</strong> — ML engineer and author of <em>The future was yesterday</em> on Substack, where he writes about AI and software engineering.</p><p><br></p><p>• Website: https://www.adipod.ai</p><p>• Email: humans@adipod.ai</p><p><br></p>]]>
      </itunes:summary>
      <itunes:keywords>GPT-5.5, DeepSeek V4, Meta keystroke surveillance, AI Index 2026, Stanford HAI, AI podcast, LLM, AI coding, software engineering, AI interviews, Claude Code, AI bubble</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/687e6e6f/transcript.txt" type="text/plain"/>
      <podcast:chapters url="https://share.transistor.fm/s/687e6e6f/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Is Claude Opus 4.7 Mythos Distilled, Running Qwen 3.6 Locally, and the AI-On-AI Arena</title>
      <itunes:episode>22</itunes:episode>
      <podcast:episode>22</podcast:episode>
      <itunes:title>Is Claude Opus 4.7 Mythos Distilled, Running Qwen 3.6 Locally, and the AI-On-AI Arena</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">4c22b3e5-11ef-4839-a049-73e4acc8da58</guid>
      <link>https://www.adipod.ai/episodes/22-is-claude-opus-4-7-mythos-distilled-running-qwen-3-6-locally-and-the-ai-on-ai-arena/</link>
      <description>
        <![CDATA[<p>Is Claude Opus 4.7 really burning tokens? Is open source dead after mythos? Co-hosts Shimin Zhang and Dan Lasky — with recurring guest Rahul Yadav — ran the experiments this week on ADI Pod #22 (April 21, 2026).</p><p>This episode covers Anthropic's Claude Opus 4.7 release (the "mythos slice"), Alibaba's open-source Qwen 3.6 35B A3B, cal.com going closed source for security reasons, and a HIPAA-violating vibe-coded patient portal that is, in Dan's words, the bullshit future already here.</p><p><strong>In this episode</strong></p><p>▸ <strong>Claude Opus 4.7 review</strong> — the new mythos-derived tokenizer (3× bloat on plain English), stricter instruction-following, and why Shimin's SVG experiments suggest the token-burn panic is overblown: 35¢ on Opus 4.7 vs $2 on Opus 4.6 for the same task, with ~40× fewer reasoning tokens.<br>▸ <strong>Qwen 3.6 35B A3B</strong> — Alibaba's open-source mixture-of-experts model (3B active params at any time) running locally on Shimin's laptop at 90–95 tokens/sec via llama.cpp + Unsloth. The first model to beat a larger frontier model on Simon Willison's pelican-on-a-bicycle benchmark.<br>▸ <strong>cal.com goes closed source</strong> — why the AI Security Institute's $12,000-per-attempt mythos pentesting data ($120,000 for 10 runs) is changing the open-source calculus, and Drew Breunig's three-phase dev/review/hardening cycle prediction.<br>▸ <strong>Jesse Vincent's "Rules and Gates"</strong> — a coding-agent prompting technique that reformulates optional preferences into directed preconditions, and whether agents can "weasel out" by rewriting the gate itself.<br>▸ <strong>AI vibe coding horror story</strong> — a German doctor who inlined a full patient portal into a single HTML page with database credentials client-side. HIPAA, meet DSGVO.<br>▸ <strong>Kyle Kingsbury's "The Future of Everything is Lies"</strong> — the Jepsen author's 8-step action list on AI's second- and third-order societal effects.<br>▸ <strong>The AI-on-AI Arena</strong> — Shimin's weekend project grading 11 frontier models against each other. The "delusion index" reads almost exactly like Dunning-Kruger in humans: GPT-5.4 scored -1.6 (humble), Gemini 3.1 Pro Preview rated itself well while peers ranked it last.<br>▸ <strong>Two Minutes to Midnight</strong> — Paul Graham's log-scale chart comparing AI capex (~1% of US GDP) to the US railroad peak (~10%). We dialed the AI bubble clock back 45 seconds to 3 min 30 sec.</p><p><strong>Key takeaways</strong></p><p>— Opus 4.7's token-burn reputation may be overblown. Stricter instruction-following can reduce total reasoning tokens by up to 40× vs Opus 4.6 on the same task.<br>— Security-driven closed-sourcing may spread as mythos-class agents make open repos easier to exploit. Hardening could make software capital-intensive again.<br>— Cognitive debt is real: Dan's wake-up call was a production bug a pre-LLM colleague solved in 5 minutes. His first instinct was to double down on the tool.<br>— Shimin's defense against skill atrophy: read 100% of LLM-generated PR lines (except tests).<br>— Weaker models rate themselves higher than stronger ones. Calibration appears to improve with capability.</p><p><strong>Chapters</strong></p><ul><li>(00:00) - Introduction to AI and Software Development</li>
<li>(02:25) - Alibaba's Qwen 3.6 Model Overview</li>
<li>(08:06) - Anthropic's Claude Opus 4.7 Release</li>
<li>(18:08) - Cal.com Goes Closed Source: Implications for Security</li>
<li>(20:40) - The Future of Vibe Coding</li>
<li>(23:41) - Techniques for Effective AI Utilization</li>
<li>(27:13) - Post-Processing and AI in Real-World Applications</li>
<li>(33:07) - The Cultural Impact of AI and Technology</li>
<li>(41:30) - Navigating Code Review Challenges</li>
<li>(42:57) - Exploring AI's Societal Impact</li>
<li>(45:16) - Evaluating AI Models: Performance and Insights</li>
<li>(49:09) - The Future of Data Centers and AI</li>
<li>(50:54) - Investment Trends and Economic Perspectives</li>
<li>(57:58) - Reflections on Historical Investment Cycles</li>
<li>(59:35) - Optimism Amidst Uncertainty</li>
</ul><br><strong>Resources mentioned</strong><br><strong>Claude Opus 4.7 &amp; Qwen 3.6</strong><br>• Introducing Claude Opus 4.7 (Anthropic): https://www.anthropic.com/news/claude-opus-4-7<br>• Claude Opus 4.7 System Card: https://cdn.sanity.io/files/4zrzovbb/website/037f06850df7fbe871e206dad004c3db5fd50340.pdf<br>• Qwen3.6-35B-A3B: Agentic Coding Power, Now Open to All: https://qwen.ai/blog?id=qwen3.6-35b-a3b<br>• Simon Willison — Qwen3.6-35B-A3B on my laptop drew me a better pelican than Claude Opus 4.7: https://simonwillison.net/2026/Apr/16/qwen-beats-opus/<br>• Shimin — Opus 4.7 isn't dumb, it's just lazy: https://shimin.io/journal/opus-4-7-just-lazy/<p><strong>Security &amp; open source</strong><br>• Cal.com is going closed source. Here's why: https://cal.com/blog/cal-com-goes-closed-source-why<br>• Drew Breunig — Cybersecurity Looks Like Proof of Work Now: https://www.dbreunig.com/2026/04/14/cybersecurity-is-proof-of-work-now.html</p><p><strong>Technique &amp; commentary</strong><br>• Jesse Vincent — Rules and Gates: https://blog.fsck.com/2026/04/07/rules-and-gates/<br>• An AI Vibe Coding Horror Story: https://www.tobru.ch/an-ai-vibe-coding-horror-story/<br>• Kyle Kingsbury (Aphyr) — The Future of Everything is Lies, I Guess: https://aphyr.com/posts/411-the-future-of-everything-is-lies-i-guess</p><p><strong>Shimin's project</strong><br>• AI-on-AI Arena: https://shimin.io/ai-on-ai-arena</p><p><strong>Bubble watch</strong><br>• Ars Technica — Satellite and drone images reveal big delays in US data center construction: https://arstechnica.com/ai/2026/04/construction-delays-hit-40-of-us-data-centers-planned-for-2026/<br>• Epoch AI — OpenAI Stargate: where the US sites stand: https://epochai.substack.com/p/openai-stargate-where-the-us-sites<br>• Paul Graham on US investment cycles (log scale): https://x.com/paulg/status/2045120274551423142/photo/1</p><p><strong>About ADI Pod</strong></p><p>ADI Pod (Artificial Developer Intelligence) is a weekly podcast about AI 
and software development for working developers. Co-hosts Shimin Zhang and Dan Lasky go through hundreds of links and dozens of newsletters every week so you don't have to. Recurring guest Rahul Yadav joins when he can.</p><p>• Website: https://www.adipod.ai<br>• Email: humans@adipod.ai</p><p>New episodes every Friday. Follow the show to get them automatically.<br></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Is Claude Opus 4.7 really burning tokens? Is open source dead after mythos? Co-hosts Shimin Zhang and Dan Lasky — with recurring guest Rahul Yadav — ran the experiments this week on ADI Pod #22 (April 21, 2026).</p><p>This episode covers Anthropic's Claude Opus 4.7 release (the "mythos slice"), Alibaba's open-source Qwen 3.6 35B A3B, cal.com going closed source for security reasons, and a HIPAA-violating vibe-coded patient portal that is, in Dan's words, the bullshit future already here.</p><p><strong>In this episode</strong></p><p>▸ <strong>Claude Opus 4.7 review</strong> — the new mythos-derived tokenizer (3× bloat on plain English), stricter instruction-following, and why Shimin's SVG experiments suggest the token-burn panic is overblown: 35¢ on Opus 4.7 vs $2 on Opus 4.6 for the same task, with ~40× fewer reasoning tokens.<br>▸ <strong>Qwen 3.6 35B A3B</strong> — Alibaba's open-source mixture-of-experts model (3B active params at any time) running locally on Shimin's laptop at 90–95 tokens/sec via llama.cpp + Unsloth. The first model to beat a larger frontier model on Simon Willison's pelican-on-a-bicycle benchmark.<br>▸ <strong>cal.com goes closed source</strong> — why the AI Security Institute's $12,000-per-attempt mythos pentesting data ($120,000 for 10 runs) is changing the open-source calculus, and Drew Breunig's three-phase dev/review/hardening cycle prediction.<br>▸ <strong>Jesse Vincent's "Rules and Gates"</strong> — a coding-agent prompting technique that reformulates optional preferences into directed preconditions, and whether agents can "weasel out" by rewriting the gate itself.<br>▸ <strong>AI vibe coding horror story</strong> — a German doctor who inlined a full patient portal into a single HTML page with database credentials client-side. HIPAA, meet DSGVO.<br>▸ <strong>Kyle Kingsbury's "The Future of Everything is Lies"</strong> — the Jepsen author's 8-step action list on AI's second- and third-order societal effects.<br>▸ <strong>The AI-on-AI Arena</strong> — Shimin's weekend project grading 11 frontier models against each other. The "delusion index" reads almost exactly like Dunning-Kruger in humans: GPT-5.4 scored -1.6 (humble), Gemini 3.1 Pro Preview rated itself well while peers ranked it last.<br>▸ <strong>Two Minutes to Midnight</strong> — Paul Graham's log-scale chart comparing AI capex (~1% of US GDP) to the US railroad peak (~10%). We dialed the AI bubble clock back 45 seconds to 3 min 30 sec.</p><p><strong>Key takeaways</strong></p><p>— Opus 4.7's token-burn reputation may be overblown. Stricter instruction-following can reduce total reasoning tokens by up to 40× vs Opus 4.6 on the same task.<br>— Security-driven closed-sourcing may spread as mythos-class agents make open repos easier to exploit. Hardening could make software capital-intensive again.<br>— Cognitive debt is real: Dan's wake-up call was a production bug a pre-LLM colleague solved in 5 minutes. His first instinct was to double down on the tool.<br>— Shimin's defense against skill atrophy: read 100% of LLM-generated PR lines (except tests).<br>— Weaker models rate themselves higher than stronger ones. Calibration appears to improve with capability.</p><p><strong>Chapters</strong></p><ul><li>(00:00) - Introduction to AI and Software Development</li>
<li>(02:25) - Alibaba's Qwen 3.6 Model Overview</li>
<li>(08:06) - Anthropic's Claude Opus 4.7 Release</li>
<li>(18:08) - Cal.com Goes Closed Source: Implications for Security</li>
<li>(20:40) - The Future of Vibe Coding</li>
<li>(23:41) - Techniques for Effective AI Utilization</li>
<li>(27:13) - Post-Processing and AI in Real-World Applications</li>
<li>(33:07) - The Cultural Impact of AI and Technology</li>
<li>(41:30) - Navigating Code Review Challenges</li>
<li>(42:57) - Exploring AI's Societal Impact</li>
<li>(45:16) - Evaluating AI Models: Performance and Insights</li>
<li>(49:09) - The Future of Data Centers and AI</li>
<li>(50:54) - Investment Trends and Economic Perspectives</li>
<li>(57:58) - Reflections on Historical Investment Cycles</li>
<li>(59:35) - Optimism Amidst Uncertainty</li>
</ul><br><strong>Resources mentioned</strong><br><strong>Claude Opus 4.7 &amp; Qwen 3.6</strong><br>• Introducing Claude Opus 4.7 (Anthropic): https://www.anthropic.com/news/claude-opus-4-7<br>• Claude Opus 4.7 System Card: https://cdn.sanity.io/files/4zrzovbb/website/037f06850df7fbe871e206dad004c3db5fd50340.pdf<br>• Qwen3.6-35B-A3B: Agentic Coding Power, Now Open to All: https://qwen.ai/blog?id=qwen3.6-35b-a3b<br>• Simon Willison — Qwen3.6-35B-A3B on my laptop drew me a better pelican than Claude Opus 4.7: https://simonwillison.net/2026/Apr/16/qwen-beats-opus/<br>• Shimin — Opus 4.7 isn't dumb, it's just lazy: https://shimin.io/journal/opus-4-7-just-lazy/<p><strong>Security &amp; open source</strong><br>• Cal.com is going closed source. Here's why: https://cal.com/blog/cal-com-goes-closed-source-why<br>• Drew Breunig — Cybersecurity Looks Like Proof of Work Now: https://www.dbreunig.com/2026/04/14/cybersecurity-is-proof-of-work-now.html</p><p><strong>Technique &amp; commentary</strong><br>• Jesse Vincent — Rules and Gates: https://blog.fsck.com/2026/04/07/rules-and-gates/<br>• An AI Vibe Coding Horror Story: https://www.tobru.ch/an-ai-vibe-coding-horror-story/<br>• Kyle Kingsbury (Aphyr) — The Future of Everything is Lies, I Guess: https://aphyr.com/posts/411-the-future-of-everything-is-lies-i-guess</p><p><strong>Shimin's project</strong><br>• AI-on-AI Arena: https://shimin.io/ai-on-ai-arena</p><p><strong>Bubble watch</strong><br>• Ars Technica — Satellite and drone images reveal big delays in US data center construction: https://arstechnica.com/ai/2026/04/construction-delays-hit-40-of-us-data-centers-planned-for-2026/<br>• Epoch AI — OpenAI Stargate: where the US sites stand: https://epochai.substack.com/p/openai-stargate-where-the-us-sites<br>• Paul Graham on US investment cycles (log scale): https://x.com/paulg/status/2045120274551423142/photo/1</p><p><strong>About ADI Pod</strong></p><p>ADI Pod (Artificial Developer Intelligence) is a weekly podcast about AI 
and software development for working developers. Co-hosts Shimin Zhang and Dan Lasky go through hundreds of links and dozens of newsletters every week so you don't have to. Recurring guest Rahul Yadav joins when he can.</p><p>• Website: https://www.adipod.ai<br>• Email: humans@adipod.ai</p><p>New episodes every Friday. Follow the show to get them automatically.<br></p>]]>
      </content:encoded>
      <pubDate>Fri, 24 Apr 2026 05:00:00 -0700</pubDate>
      <author>Shimin Zhang, Dan Lasky, &amp; Rahul Yadav</author>
      <enclosure url="https://op3.dev/e/prfx.byspotify.com/e/media.transistor.fm/955a2bc9/84dfdb78.mp3" length="29934556" type="audio/mpeg"/>
      <itunes:author>Shimin Zhang, Dan Lasky, &amp; Rahul Yadav</itunes:author>
      <itunes:duration>3724</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Is Claude Opus 4.7 really burning tokens? Is open source dead after Mythos? Co-hosts Shimin Zhang and Dan Lasky — with recurring guest Rahul Yadav — ran the experiments this week on ADI Pod #22 (April 21, 2026).</p><p>This episode covers Anthropic's Claude Opus 4.7 release (the "mythos slice"), Alibaba's open-source Qwen 3.6 35B A3B, cal.com going closed source for security reasons, and a HIPAA-violating vibe-coded patient portal that is, in Dan's words, the bullshit future already here.</p><p><strong>In this episode</strong></p><p>▸ <strong>Claude Opus 4.7 review</strong> — the new Mythos-derived tokenizer (3× bloat on plain English), stricter instruction-following, and why Shimin's SVG experiments suggest the token-burn panic is overblown: 35¢ on Opus 4.7 vs $2 on Opus 4.6 for the same task, with ~40× fewer reasoning tokens.<br>▸ <strong>Qwen 3.6 35B A3B</strong> — Alibaba's open-source mixture-of-experts model (3B active params at any time) running locally on Shimin's laptop at 90–95 tokens/sec via llama.cpp + Unsloth. The first model to break Simon Willison's pelican-on-a-bicycle benchmark against a larger frontier model.<br>▸ <strong>cal.com goes closed source</strong> — why the AI Security Institute's $12,000-per-attempt Mythos pentesting data ($125,000 for 10 runs) is changing the open-source calculus, and Drew Breunig's three-phase dev/review/hardening cycle prediction.<br>▸ <strong>Jesse Vincent's "Rules and Gates"</strong> — a coding-agent prompting technique that reformulates optional preferences into directed preconditions, and whether agents can "weasel out" by rewriting the gate itself.<br>▸ <strong>AI vibe coding horror story</strong> — a German doctor who inlined a full patient portal into a single HTML page with database credentials client-side. HIPAA, meet DSGVO.<br>▸ <strong>Kyle Kingsbury's "The Future of Everything is Lies"</strong> — the Jepsen author's 8-step action list on AI's second- and third-order societal effects.<br>▸ <strong>The AI-on-AI Arena</strong> — Shimin's weekend project grading 11 frontier models against each other. The "delusion index" reads almost exactly like Dunning-Kruger in humans: GPT-5.4 scored -1.6 (humble), Gemini 3.1 Pro Preview rated itself well while peers ranked it last.<br>▸ <strong>Two Minutes to Midnight</strong> — Paul Graham's log-scale chart comparing AI capex (~1% of US GDP) to the US railroad peak (~10%). We dialed the AI bubble clock back 45 seconds to 3 min 30 sec.</p><p><strong>Key takeaways</strong></p><p>— Opus 4.7's token-burn reputation may be overblown. Stricter instruction-following can reduce total reasoning tokens by up to 40× vs Opus 4.6 on the same task.<br>— Security-driven closed-sourcing may spread as Mythos-class agents make open repos easier to exploit. Hardening could make software capital-intensive again.<br>— Cognitive debt is real: Dan's wake-up call was a production bug a pre-LLM colleague solved in 5 minutes. His first instinct was to double down on the tool.<br>— Shimin's defense against skill atrophy: read 100% of LLM-generated PR lines (except tests).<br>— Weaker models rate themselves higher than stronger ones. Calibration appears to improve with capability.</p><p><strong>Chapters</strong></p><ul><li>(00:00) - Introduction to AI and Software Development</li>
<li>(02:25) - Alibaba's Qwen 3.6 Model Overview</li>
<li>(08:06) - Anthropic's Claude Opus 4.7 Release</li>
<li>(18:08) - Cal.com Goes Closed Source: Implications for Security</li>
<li>(20:40) - The Future of Vibe Coding</li>
<li>(23:41) - Techniques for Effective AI Utilization</li>
<li>(27:13) - Post-Processing and AI in Real-World Applications</li>
<li>(33:07) - The Cultural Impact of AI and Technology</li>
<li>(41:30) - Navigating Code Review Challenges</li>
<li>(42:57) - Exploring AI's Societal Impact</li>
<li>(45:16) - Evaluating AI Models: Performance and Insights</li>
<li>(49:09) - The Future of Data Centers and AI</li>
<li>(50:54) - Investment Trends and Economic Perspectives</li>
<li>(57:58) - Reflections on Historical Investment Cycles</li>
<li>(59:35) - Optimism Amidst Uncertainty</li>
</ul><br><strong>Resources mentioned</strong><br><strong>Claude Opus 4.7 &amp; Qwen 3.6</strong><br>• Introducing Claude Opus 4.7 (Anthropic): https://www.anthropic.com/news/claude-opus-4-7<br>• Claude Opus 4.7 System Card: https://cdn.sanity.io/files/4zrzovbb/website/037f06850df7fbe871e206dad004c3db5fd50340.pdf<br>• Qwen3.6-35B-A3B: Agentic Coding Power, Now Open to All: https://qwen.ai/blog?id=qwen3.6-35b-a3b<br>• Simon Willison — Qwen3.6-35B-A3B on my laptop drew me a better pelican than Claude Opus 4.7: https://simonwillison.net/2026/Apr/16/qwen-beats-opus/<br>• Shimin — Opus 4.7 isn't dumb, it's just lazy: https://shimin.io/journal/opus-4-7-just-lazy/<p><strong>Security &amp; open source</strong><br>• Cal.com is going closed source. Here's why: https://cal.com/blog/cal-com-goes-closed-source-why<br>• Drew Breunig — Cybersecurity Looks Like Proof of Work Now: https://www.dbreunig.com/2026/04/14/cybersecurity-is-proof-of-work-now.html</p><p><strong>Technique &amp; commentary</strong><br>• Jesse Vincent — Rules and Gates: https://blog.fsck.com/2026/04/07/rules-and-gates/<br>• An AI Vibe Coding Horror Story: https://www.tobru.ch/an-ai-vibe-coding-horror-story/<br>• Kyle Kingsbury (Aphyr) — The Future of Everything is Lies, I Guess: https://aphyr.com/posts/411-the-future-of-everything-is-lies-i-guess</p><p><strong>Shimin's project</strong><br>• AI-on-AI Arena: https://shimin.io/ai-on-ai-arena</p><p><strong>Bubble watch</strong><br>• Ars Technica — Satellite and drone images reveal big delays in US data center construction: https://arstechnica.com/ai/2026/04/construction-delays-hit-40-of-us-data-centers-planned-for-2026/<br>• Epoch AI — OpenAI Stargate: where the US sites stand: https://epochai.substack.com/p/openai-stargate-where-the-us-sites<br>• Paul Graham on US investment cycles (log scale): https://x.com/paulg/status/2045120274551423142/photo/1</p><p><strong>About ADI Pod</strong></p><p>ADI Pod (Artificial Developer Intelligence) is a weekly podcast about AI 
and software development for working developers. Co-hosts Shimin Zhang and Dan Lasky go through hundreds of links and dozens of newsletters every week so you don't have to. Recurring guest Rahul Yadav joins when he can.</p><p>• Website: https://www.adipod.ai<br>• Email: humans@adipod.ai</p><p>New episodes every Friday. Follow the show to get them automatically.<br></p>]]>
      </itunes:summary>
      <itunes:keywords>Claude Opus 4.7, mythos slice, Qwen 3.6 35B, MoE, llama.cpp, Unsloth, GGUF, cal.com, closed source, security, AI Security Institute, Jesse Vincent, rules and gates, hooks, superpowers, vibe coding, HIPAA, DSGVO, Kyle Kingsbury, Jepsen, future of lies, AI on AI arena, Pelican benchmark, Simon Willison, Stargate, Epoch AI, data center construction, Paul Graham, railroad bubble, Anthropic, Pi Agent, cognitive debt, delusion index, Grok, Gemini 3.1, GPT-5.4, two minutes to midnight</itunes:keywords>
      <itunes:explicit>Yes</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/955a2bc9/transcript.txt" type="text/plain"/>
      <podcast:chapters url="https://share.transistor.fm/s/955a2bc9/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Anthropic Mythos &amp; Project Glasswing, Recursive Improving Agents, and Your Parallel Agent Limit</title>
      <itunes:episode>21</itunes:episode>
      <podcast:episode>21</podcast:episode>
      <itunes:title>Anthropic Mythos &amp; Project Glasswing, Recursive Improving Agents, and Your Parallel Agent Limit</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">c98cffb3-29d8-415f-9db1-5b08906b16da</guid>
      <link>https://www.adipod.ai/episodes/21-anthropic-mythos-project-glasswing-recursive-improving-agents-and-your-parallel-agent-limit/</link>
      <description>
        <![CDATA[<p>Shimin and Dan cover MiniMax's M2.7 model — the first public experimental result in recursive self-improvement (RSI) — and unpack Anthropic's shock announcement of Mythos, a model so capable at finding security vulnerabilities that Anthropic is withholding public release while partnering with Amazon, Apple, Cisco, CrowdStrike, the Fed, and major banks under 'Project Glasswing' to patch infrastructure first. They also debate AI's frontend weakness, discuss Addy Osmani's parallel agent limits piece, and move the AI bubble clock back.</p><p><strong>Takeaways:</strong></p><ul><li>RSI is now experimentally demonstrated (not just theorized); this reframes model improvement as capital competition, not PhD hiring.</li><li>If AI finds vulns at scale, open source gets <em>more</em> secure long-term — but short-term this is a nuclear-test-equivalent event that may rewrite security, money, and trust assumptions.</li><li>'Frontend will be first automated' was wrong; backend may be easier because visual taste and pixel-perfect feedback loops aren't in training data.</li><li>Agent orchestration has a personal ceiling; finding it requires blowing past it. 
Tight scope + time-boxing + new contexts beats monolithic long sessions.</li><li>'Code is cheap' is really about industrialization — the people who industrialize outcompete those who don't; learn the tools or be left behind.</li><li>OpenAI's CRO going public on a competitor's accounting is itself a bearish signal about OpenAI's enterprise position.</li></ul><p><strong>Resources Mentioned<br></strong><a href="https://firethering.com/minimax-m2-7-agentic-model/">MiniMax M2.7: The Agentic Model That Helped Build Itself</a><br><a href="https://techcrunch.com/2026/04/07/anthropic-mythos-ai-model-preview-security/">Anthropic debuts preview of powerful new AI model Mythos in new cybersecurity initiative</a><br><a href="https://red.anthropic.com/2026/mythos-preview/">Assessing Claude Mythos Preview’s cybersecurity capabilities</a><br><a href="https://nerdy.dev/why-ai-sucks-at-front-end">Why AI Sucks At Front End</a><br><a href="https://addyosmani.com/blog/cognitive-parallel-agents/">Your parallel Agent limit</a><br><a href="https://perevillega.com/posts/2026-03-16-code-is-cheap-now/">Code Is Cheap Now, And That Changes Everything </a><br><a href="https://techcrunch.com/2026/04/07/the-ai-gold-rush-is-pulling-private-wealth-into-riskier-earlier-bets/">The AI gold rush is pulling private wealth into riskier, earlier bets </a><br><a href="https://www.implicator.ai/openai-cro-tells-staff-anthropic-inflates-run-rate-by-8-billion/">OpenAI CRO Tells Staff Anthropic Inflates Run Rate by $8 Billion</a></p><p><strong>Chapters<br></strong></p><ul><li>(00:00) - Introduction to AI and Software Development</li>
<li>(02:45) - MiniMax M2.7 Model and Recursive Self-Improvement</li>
<li>(05:04) - Anthropic's Mythos Model and Security Vulnerabilities</li>
<li>(08:15) - AI's Limitations in Front-End Development</li>
<li>(18:13) - Cognitive Debt and Managing Multiple AI Agents</li>
<li>(32:01) - Managing Multiple Agents Effectively</li>
<li>(34:42) - The Evolution of Code Value</li>
<li>(38:29) - The Industrialization of Coding</li>
<li>(41:00) - Navigating Claude Code Challenges</li>
<li>(45:39) - Ranting About Technology Installations</li>
<li>(50:16) - The State of the AI Bubble</li>
</ul><br><strong>Connect with ADIPod</strong><ul><li>Email us at humans@adipod.ai if you have any feedback, requests, or just want to say hello!</li><li>Check out our website <a href="https://www.adipod.ai/">www.adipod.ai</a></li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Shimin and Dan cover MiniMax's M2.7 model — the first public experimental result in recursive self-improvement (RSI) — and unpack Anthropic's shock announcement of Mythos, a model so capable at finding security vulnerabilities that Anthropic is withholding public release while partnering with Amazon, Apple, Cisco, CrowdStrike, the Fed, and major banks under 'Project Glasswing' to patch infrastructure first. They also debate AI's frontend weakness, discuss Addy Osmani's parallel agent limits piece, and move the AI bubble clock back.</p><p><strong>Takeaways:</strong></p><ul><li>RSI is now experimentally demonstrated (not just theorized); this reframes model improvement as capital competition, not PhD hiring.</li><li>If AI finds vulns at scale, open source gets <em>more</em> secure long-term — but short-term this is a nuclear-test-equivalent event that may rewrite security, money, and trust assumptions.</li><li>'Frontend will be first automated' was wrong; backend may be easier because visual taste and pixel-perfect feedback loops aren't in training data.</li><li>Agent orchestration has a personal ceiling; finding it requires blowing past it. 
Tight scope + time-boxing + new contexts beats monolithic long sessions.</li><li>'Code is cheap' is really about industrialization — the people who industrialize outcompete those who don't; learn the tools or be left behind.</li><li>OpenAI's CRO going public on a competitor's accounting is itself a bearish signal about OpenAI's enterprise position.</li></ul><p><strong>Resources Mentioned<br></strong><a href="https://firethering.com/minimax-m2-7-agentic-model/">MiniMax M2.7: The Agentic Model That Helped Build Itself</a><br><a href="https://techcrunch.com/2026/04/07/anthropic-mythos-ai-model-preview-security/">Anthropic debuts preview of powerful new AI model Mythos in new cybersecurity initiative</a><br><a href="https://red.anthropic.com/2026/mythos-preview/">Assessing Claude Mythos Preview’s cybersecurity capabilities</a><br><a href="https://nerdy.dev/why-ai-sucks-at-front-end">Why AI Sucks At Front End</a><br><a href="https://addyosmani.com/blog/cognitive-parallel-agents/">Your parallel Agent limit</a><br><a href="https://perevillega.com/posts/2026-03-16-code-is-cheap-now/">Code Is Cheap Now, And That Changes Everything </a><br><a href="https://techcrunch.com/2026/04/07/the-ai-gold-rush-is-pulling-private-wealth-into-riskier-earlier-bets/">The AI gold rush is pulling private wealth into riskier, earlier bets </a><br><a href="https://www.implicator.ai/openai-cro-tells-staff-anthropic-inflates-run-rate-by-8-billion/">OpenAI CRO Tells Staff Anthropic Inflates Run Rate by $8 Billion</a></p><p><strong>Chapters<br></strong></p><ul><li>(00:00) - Introduction to AI and Software Development</li>
<li>(02:45) - MiniMax M2.7 Model and Recursive Self-Improvement</li>
<li>(05:04) - Anthropic's Mythos Model and Security Vulnerabilities</li>
<li>(08:15) - AI's Limitations in Front-End Development</li>
<li>(18:13) - Cognitive Debt and Managing Multiple AI Agents</li>
<li>(32:01) - Managing Multiple Agents Effectively</li>
<li>(34:42) - The Evolution of Code Value</li>
<li>(38:29) - The Industrialization of Coding</li>
<li>(41:00) - Navigating Claude Code Challenges</li>
<li>(45:39) - Ranting About Technology Installations</li>
<li>(50:16) - The State of the AI Bubble</li>
</ul><br><strong>Connect with ADIPod</strong><ul><li>Email us at humans@adipod.ai if you have any feedback, requests, or just want to say hello!</li><li>Check out our website <a href="https://www.adipod.ai/">www.adipod.ai</a></li></ul>]]>
      </content:encoded>
      <pubDate>Fri, 17 Apr 2026 05:00:00 -0700</pubDate>
      <author>Shimin Zhang &amp; Dan Lasky</author>
      <enclosure url="https://op3.dev/e/prfx.byspotify.com/e/media.transistor.fm/04fd8545/47678bbf.mp3" length="29165424" type="audio/mpeg"/>
      <itunes:author>Shimin Zhang &amp; Dan Lasky</itunes:author>
      <itunes:duration>3628</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Shimin and Dan cover MiniMax's M2.7 model — the first public experimental result in recursive self-improvement (RSI) — and unpack Anthropic's shock announcement of Mythos, a model so capable at finding security vulnerabilities that Anthropic is withholding public release while partnering with Amazon, Apple, Cisco, CrowdStrike, the Fed, and major banks under 'Project Glasswing' to patch infrastructure first. They also debate AI's frontend weakness, discuss Addy Osmani's parallel agent limits piece, and move the AI bubble clock back.</p><p><strong>Takeaways:</strong></p><ul><li>RSI is now experimentally demonstrated (not just theorized); this reframes model improvement as capital competition, not PhD hiring.</li><li>If AI finds vulns at scale, open source gets <em>more</em> secure long-term — but short-term this is a nuclear-test-equivalent event that may rewrite security, money, and trust assumptions.</li><li>'Frontend will be first automated' was wrong; backend may be easier because visual taste and pixel-perfect feedback loops aren't in training data.</li><li>Agent orchestration has a personal ceiling; finding it requires blowing past it. 
Tight scope + time-boxing + new contexts beats monolithic long sessions.</li><li>'Code is cheap' is really about industrialization — the people who industrialize outcompete those who don't; learn the tools or be left behind.</li><li>OpenAI's CRO going public on a competitor's accounting is itself a bearish signal about OpenAI's enterprise position.</li></ul><p><strong>Resources Mentioned<br></strong><a href="https://firethering.com/minimax-m2-7-agentic-model/">MiniMax M2.7: The Agentic Model That Helped Build Itself</a><br><a href="https://techcrunch.com/2026/04/07/anthropic-mythos-ai-model-preview-security/">Anthropic debuts preview of powerful new AI model Mythos in new cybersecurity initiative</a><br><a href="https://red.anthropic.com/2026/mythos-preview/">Assessing Claude Mythos Preview’s cybersecurity capabilities</a><br><a href="https://nerdy.dev/why-ai-sucks-at-front-end">Why AI Sucks At Front End</a><br><a href="https://addyosmani.com/blog/cognitive-parallel-agents/">Your parallel Agent limit</a><br><a href="https://perevillega.com/posts/2026-03-16-code-is-cheap-now/">Code Is Cheap Now, And That Changes Everything </a><br><a href="https://techcrunch.com/2026/04/07/the-ai-gold-rush-is-pulling-private-wealth-into-riskier-earlier-bets/">The AI gold rush is pulling private wealth into riskier, earlier bets </a><br><a href="https://www.implicator.ai/openai-cro-tells-staff-anthropic-inflates-run-rate-by-8-billion/">OpenAI CRO Tells Staff Anthropic Inflates Run Rate by $8 Billion</a></p><p><strong>Chapters<br></strong></p><ul><li>(00:00) - Introduction to AI and Software Development</li>
<li>(02:45) - MiniMax M2.7 Model and Recursive Self-Improvement</li>
<li>(05:04) - Anthropic's Mythos Model and Security Vulnerabilities</li>
<li>(08:15) - AI's Limitations in Front-End Development</li>
<li>(18:13) - Cognitive Debt and Managing Multiple AI Agents</li>
<li>(32:01) - Managing Multiple Agents Effectively</li>
<li>(34:42) - The Evolution of Code Value</li>
<li>(38:29) - The Industrialization of Coding</li>
<li>(41:00) - Navigating Claude Code Challenges</li>
<li>(45:39) - Ranting About Technology Installations</li>
<li>(50:16) - The State of the AI Bubble</li>
</ul><br><strong>Connect with ADIPod</strong><ul><li>Email us at humans@adipod.ai if you have any feedback, requests, or just want to say hello!</li><li>Check out our website <a href="https://www.adipod.ai/">www.adipod.ai</a></li></ul>]]>
      </itunes:summary>
      <itunes:keywords>Minimax, M2.7, RSI, recursive self-improvement, Anthropic, Mythos, Project Glasswing, security vulnerabilities, zero-day, AI frontend, parallel agents, cognitive debt, comprehension debt, code is cheap, industrialization, Pi Agent, Claude Code, OAuth, Astro, CodeDrop, WSL, OpenAI CRO</itunes:keywords>
      <itunes:explicit>Yes</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/04fd8545/transcript.txt" type="text/plain"/>
      <podcast:chapters url="https://share.transistor.fm/s/04fd8545/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Ep 20: Claude Code Source Leak, Emotion Concepts in LLMs, and Surprising Facts AIs Know About Us</title>
      <itunes:episode>20</itunes:episode>
      <podcast:episode>20</podcast:episode>
      <itunes:title>Ep 20: Claude Code Source Leak, Emotion Concepts in LLMs, and Surprising Facts AIs Know About Us</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">32a0cf06-3945-431b-a484-c45125073834</guid>
      <link>https://www.adipod.ai/episodes/20-claude-code-source-leak-emotion-concepts-in-llms-and-surprising-facts-ais-know-about-us/</link>
      <description>
        <![CDATA[<p>This week, Rahul, Shimin, and Dan return after a two-week break to cover the leaked Claude Code CLI source code, new model releases (Qwen 3.6 and Gemma 4), Mario Zechner's essay on slowing down with AI-assisted coding, a fun segment on unexpected things AI knows about each host, and two deep dives: Anthropic's research on emotion concepts in LLMs and a paper on how sycophantic AI decreases pro-social intentions.</p><p><strong>Takeaways:</strong></p><ul><li>Claude Code's dual-track permission system uses both rule-based and ML classifiers for destructive bash commands</li><li>"Cognitive bankruptcy" — when cognitive debt interest payments come due and you can't pay</li><li>AI sycophancy parallels social media echo chambers; no market incentive to fix it</li><li>On-device models like Gemma 4 could save cloud costs by handling routine tasks (e.g., agent heartbeats)</li><li>Copilot's terms of service classify it as "for entertainment purposes only"</li></ul><p><strong>Resources Mentioned<br></strong><a href="https://arstechnica.com/ai/2026/03/entire-claude-code-cli-source-code-leaks-thanks-to-exposed-map-file/">Entire Claude Code CLI source code leaks thanks to exposed map file</a><br><a href="https://victorantos.com/posts/i-read-the-leaked-claude-code-source-heres-what-i-found/">I Read the Leaked Claude Code Source — Here's What I Found</a><br><a href="https://alex000kim.com/posts/2026-03-31-claude-code-source-leak/">The Claude Code Source Leak: fake tools, frustration regexes, undercover mode, and more</a><br><a href="https://ccunpacked.dev/">Claude Code Unpacked</a><br><a href="https://qwen.ai/blog?id=qwen3.6">Qwen3.6-Plus: Towards Real World Agents</a><br><a href="https://deepmind.google/models/gemma/gemma-4/">Gemma 4 Announcement</a><br><a href="https://mariozechner.at/posts/2026-03-25-thoughts-on-slowing-the-fuck-down/">Thoughts on slowing the fuck down</a><br><a href="https://www.anthropic.com/research/emotion-concepts-function">Emotion 
concepts and their function in a large language model</a><br><a href="https://www.science.org/doi/10.1126/science.aec8352">Sycophantic AI decreases prosocial intentions and promotes dependence</a><strong><br></strong><br><strong>Chapters<br></strong></p><ul><li>(00:00) - Introduction and Host Updates</li>
<li>(01:45) - Claude Code Source Code Leak</li>
<li>(12:49) - New Model News and Open Source Developments</li>
<li>(20:51) - Post-Processing and AI Anxiety</li>
<li>(25:35) - Unexpected Insights from AI</li>
<li>(33:12) - Exploring Emotional Concepts in AI</li>
<li>(39:15) - The Dangers of Sycophantic AI</li>
<li>(52:39) - Concluding Thoughts and Future Considerations</li>
</ul><br><strong>Connect with ADIPod</strong><ul><li>Email us at humans@adipod.ai if you have any feedback, requests, or just want to say hello!</li><li>Check out our website <a href="https://www.adipod.ai/">www.adipod.ai</a></li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This week, Rahul, Shimin, and Dan return after a two-week break to cover the leaked Claude Code CLI source code, new model releases (Qwen 3.6 and Gemma 4), Mario Zechner's essay on slowing down with AI-assisted coding, a fun segment on unexpected things AI knows about each host, and two deep dives: Anthropic's research on emotion concepts in LLMs and a paper on how sycophantic AI decreases pro-social intentions.</p><p><strong>Takeaways:</strong></p><ul><li>Claude Code's dual-track permission system uses both rule-based and ML classifiers for destructive bash commands</li><li>"Cognitive bankruptcy" — when cognitive debt interest payments come due and you can't pay</li><li>AI sycophancy parallels social media echo chambers; no market incentive to fix it</li><li>On-device models like Gemma 4 could save cloud costs by handling routine tasks (e.g., agent heartbeats)</li><li>Copilot's terms of service classify it as "for entertainment purposes only"</li></ul><p><strong>Resources Mentioned<br></strong><a href="https://arstechnica.com/ai/2026/03/entire-claude-code-cli-source-code-leaks-thanks-to-exposed-map-file/">Entire Claude Code CLI source code leaks thanks to exposed map file</a><br><a href="https://victorantos.com/posts/i-read-the-leaked-claude-code-source-heres-what-i-found/">I Read the Leaked Claude Code Source — Here's What I Found</a><br><a href="https://alex000kim.com/posts/2026-03-31-claude-code-source-leak/">The Claude Code Source Leak: fake tools, frustration regexes, undercover mode, and more</a><br><a href="https://ccunpacked.dev/">Claude Code Unpacked</a><br><a href="https://qwen.ai/blog?id=qwen3.6">Qwen3.6-Plus: Towards Real World Agents</a><br><a href="https://deepmind.google/models/gemma/gemma-4/">Gemma 4 Announcement</a><br><a href="https://mariozechner.at/posts/2026-03-25-thoughts-on-slowing-the-fuck-down/">Thoughts on slowing the fuck down</a><br><a href="https://www.anthropic.com/research/emotion-concepts-function">Emotion 
concepts and their function in a large language model</a><br><a href="https://www.science.org/doi/10.1126/science.aec8352">Sycophantic AI decreases prosocial intentions and promotes dependence</a><strong><br></strong><br><strong>Chapters<br></strong></p><ul><li>(00:00) - Introduction and Host Updates</li>
<li>(01:45) - Claude Code Source Code Leak</li>
<li>(12:49) - New Model News and Open Source Developments</li>
<li>(20:51) - Post-Processing and AI Anxiety</li>
<li>(25:35) - Unexpected Insights from AI</li>
<li>(33:12) - Exploring Emotional Concepts in AI</li>
<li>(39:15) - The Dangers of Sycophantic AI</li>
<li>(52:39) - Concluding Thoughts and Future Considerations</li>
</ul><br><strong>Connect with ADIPod</strong><ul><li>Email us at humans@adipod.ai if you have any feedback, requests, or just want to say hello!</li><li>Check out our website <a href="https://www.adipod.ai/">www.adipod.ai</a></li></ul>]]>
      </content:encoded>
      <pubDate>Fri, 10 Apr 2026 05:00:00 -0700</pubDate>
      <author>Shimin Zhang, Dan Lasky, &amp; Rahul Yadav</author>
      <enclosure url="https://op3.dev/e/prfx.byspotify.com/e/media.transistor.fm/b2049295/0a63dd12.mp3" length="25909682" type="audio/mpeg"/>
      <itunes:author>Shimin Zhang, Dan Lasky, &amp; Rahul Yadav</itunes:author>
      <itunes:duration>3221</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This week, Rahul, Shimin, and Dan return after a two-week break to cover the leaked Claude Code CLI source code, new model releases (Qwen 3.6 and Gemma 4), Mario Zechner's essay on slowing down with AI-assisted coding, a fun segment on unexpected things AI knows about each host, and two deep dives: Anthropic's research on emotion concepts in LLMs and a paper on how sycophantic AI decreases pro-social intentions.</p><p><strong>Takeaways:</strong></p><ul><li>Claude Code's dual-track permission system uses both rule-based and ML classifiers for destructive bash commands</li><li>"Cognitive bankruptcy" — when cognitive debt interest payments come due and you can't pay</li><li>AI sycophancy parallels social media echo chambers; no market incentive to fix it</li><li>On-device models like Gemma 4 could save cloud costs by handling routine tasks (e.g., agent heartbeats)</li><li>Copilot's terms of service classify it as "for entertainment purposes only"</li></ul><p><strong>Resources Mentioned<br></strong><a href="https://arstechnica.com/ai/2026/03/entire-claude-code-cli-source-code-leaks-thanks-to-exposed-map-file/">Entire Claude Code CLI source code leaks thanks to exposed map file</a><br><a href="https://victorantos.com/posts/i-read-the-leaked-claude-code-source-heres-what-i-found/">I Read the Leaked Claude Code Source — Here's What I Found</a><br><a href="https://alex000kim.com/posts/2026-03-31-claude-code-source-leak/">The Claude Code Source Leak: fake tools, frustration regexes, undercover mode, and more</a><br><a href="https://ccunpacked.dev/">Claude Code Unpacked</a><br><a href="https://qwen.ai/blog?id=qwen3.6">Qwen3.6-Plus: Towards Real World Agents</a><br><a href="https://deepmind.google/models/gemma/gemma-4/">Gemma 4 Announcement</a><br><a href="https://mariozechner.at/posts/2026-03-25-thoughts-on-slowing-the-fuck-down/">Thoughts on slowing the fuck down</a><br><a href="https://www.anthropic.com/research/emotion-concepts-function">Emotion 
concepts and their function in a large language model</a><br><a href="https://www.science.org/doi/10.1126/science.aec8352">Sycophantic AI decreases prosocial intentions and promotes dependence</a><strong><br></strong><br><strong>Chapters<br></strong></p><ul><li>(00:00) - Introduction and Host Updates</li>
<li>(01:45) - Claude Code Source Code Leak</li>
<li>(12:49) - New Model News and Open Source Developments</li>
<li>(20:51) - Post-Processing and AI Anxiety</li>
<li>(25:35) - Unexpected Insights from AI</li>
<li>(33:12) - Exploring Emotional Concepts in AI</li>
<li>(39:15) - The Dangers of Sycophantic AI</li>
<li>(52:39) - Concluding Thoughts and Future Considerations</li>
</ul><br><strong>Connect with ADIPod</strong><ul><li>Email us at humans@adipod.ai if you have any feedback, requests, or just want to say hello!</li><li>Check out our website <a href="https://www.adipod.ai/">www.adipod.ai</a></li></ul>]]>
      </itunes:summary>
      <itunes:keywords>Claude Code source leak, Claude Code, Kairos, dream mode, swarm mode, Qwen 3.6, Gemma 4, on-device AI, Mario Zechner, Pi agent, cognitive debt, cognitive bankruptcy, emotion concepts, LLM emotions, sycophantic AI, r/AmITheAsshole, AI memory, open weight models, Allen Institute, Arcee, anti-distillation</itunes:keywords>
      <itunes:explicit>Yes</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/b2049295/transcript.txt" type="text/plain"/>
      <podcast:chapters url="https://share.transistor.fm/s/b2049295/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Ep 19: Thinking Fast Slow and Artificial, Meta's Trouble with Rogue Agents, and FOMO in the Age of AI</title>
      <itunes:episode>19</itunes:episode>
      <podcast:episode>19</podcast:episode>
      <itunes:title>Ep 19: Thinking Fast Slow and Artificial, Meta's Trouble with Rogue Agents, and FOMO in the Age of AI</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">a91f9db8-7ea3-4808-82a1-49be9af45b2c</guid>
      <link>https://www.adipod.ai/episodes/19-thinking-fast-slow-and-artificial-meta-s-trouble-with-rogue-agents-and-fomo-in-the-age-of-ai/</link>
      <description>
        <![CDATA[<p>This week, Rahul, Shimin, and Dan cover Claude Code's new channels and scheduling features, a Meta security incident caused by AI-generated advice, Anthropic's survey of 81,000 people on AI expectations, Dan's vibe-coded vector memory CLI project, a deep dive on the paper "Thinking, Fast, Slow and Artificial" about cognitive surrender to AI, a rant about AI tokens as employee compensation, and bubble watch updates including NVIDIA's trillion-dollar demand projections and OpenAI shutting down Sora.</p><p><strong>Takeaways:</strong></p><ul><li>Claude Code is rapidly absorbing community-developed workflows — the moat may only be in the general model capabilities, not tooling</li><li>The Meta incident illustrates the emerging pattern of AI-caused production incidents and the need for process guardrails around agent usage</li><li>Cognitive surrender to AI creates a widening gap: those with high need-for-cognition benefit more while those who dislike effortful thinking defer even more</li><li>AI confidence inflation (12 percentage point boost) may stem from treating AI like authoritative reference material (encyclopedias, Wikipedia)</li><li>Historical technology resistance (Socrates on writing, farmers on tractors) suggests the battle against AI adoption may already be lost</li><li>OpenAI shutting Sora just 4 months after a 3-year Disney partnership signals deeper financial or strategic issues</li></ul><p><strong>Resources Mentioned<br></strong><a href="https://code.claude.com/docs/en/channels">Push events into a running session with channels</a><br><a href="https://simonwillison.net/2026/Mar/9/not-so-boring/">Perhaps not Boring Technology after all</a><strong><br></strong><a href="https://techcrunch.com/2026/03/18/meta-is-having-trouble-with-rogue-ai-agents/">Meta is having trouble with rogue AI agents</a><strong><br></strong><a href="https://www.anthropic.com/features/81k-interviews">What 81,000 people want from AI</a><strong><br></strong><a 
href="https://github.com/dlasky/vec-memory-cli">Dan's vec-memory-cli</a><strong><br></strong><a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6097646">Thinking—Fast, Slow, and Artificial</a><strong><br></strong><a href="https://techcrunch.com/2026/03/21/are-ai-tokens-the-new-signing-bonus-or-just-a-cost-of-doing-business/">Are AI tokens the new signing bonus or just a cost of doing business?</a><strong><br></strong><a href="https://techcrunch.com/2026/03/16/jensen-just-put-nvidias-blackwell-and-vera-rubin-sales-projections-into-the-1-trillion-stratosphere/">Jensen Huang just put Nvidia’s Blackwell and Vera Rubin sales projections into the $1 trillion stratosphere</a><strong><br></strong><a href="https://www.0xsid.com/blog/accelerated-ai-fomo">Accelerated FOMO in the Age of AI</a><strong><br></strong><a href="https://www.theguardian.com/technology/2026/mar/24/openai-ai-video-sora">OpenAI shutters AI video generator Sora in abrupt announcement</a><strong><br></strong><br><strong>Chapters<br></strong><strong>Connect with ADIPod</strong></p><ul><li>Email us at humans@adipod.ai you have any feedback, requests, or just want to say hello! </li><li>Checkout our website <a href="https://www.adipod.ai/">www.adipod.ai</a></li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This week, Rahul, Shimin, and Dan cover Claude Code's new channels and scheduling features, a Meta security incident caused by AI-generated advice, Anthropic's survey of 81,000 people on AI expectations, Dan's vibe-coded vector memory CLI project, a deep dive on the paper "Thinking, Fast, Slow and Artificial" about cognitive surrender to AI, a rant about AI tokens as employee compensation, and bubble watch updates including NVIDIA's trillion-dollar demand projections and OpenAI shutting down Sora.</p><p><strong>Takeaways:</strong></p><ul><li>Claude Code is rapidly absorbing community-developed workflows — the moat may only be in the general model capabilities, not tooling</li><li>The Meta incident illustrates the emerging pattern of AI-caused production incidents and the need for process guardrails around agent usage</li><li>Cognitive surrender to AI creates a widening gap: those with high need-for-cognition benefit more while those who dislike effortful thinking defer even more</li><li>AI confidence inflation (12 percentage point boost) may stem from treating AI like authoritative reference material (encyclopedias, Wikipedia)</li><li>Historical technology resistance (Socrates on writing, farmers on tractors) suggests the battle against AI adoption may already be lost</li><li>OpenAI shutting down Sora just 4 months after a 3-year Disney partnership signals deeper financial or strategic issues</li></ul><p><strong>Resources Mentioned<br></strong><a href="https://code.claude.com/docs/en/channels">Push events into a running session with channels</a><br><a href="https://simonwillison.net/2026/Mar/9/not-so-boring/">Perhaps not Boring Technology after all</a><br><a href="https://techcrunch.com/2026/03/18/meta-is-having-trouble-with-rogue-ai-agents/">Meta is having trouble with rogue AI agents</a><br><a href="https://www.anthropic.com/features/81k-interviews">What 81,000 people want from AI</a><br><a href="https://github.com/dlasky/vec-memory-cli">Dan's vec-memory-cli</a><br><a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6097646">Thinking—Fast, Slow, and Artificial</a><br><a href="https://techcrunch.com/2026/03/21/are-ai-tokens-the-new-signing-bonus-or-just-a-cost-of-doing-business/">Are AI tokens the new signing bonus or just a cost of doing business?</a><br><a href="https://techcrunch.com/2026/03/16/jensen-just-put-nvidias-blackwell-and-vera-rubin-sales-projections-into-the-1-trillion-stratosphere/">Jensen Huang just put Nvidia’s Blackwell and Vera Rubin sales projections into the $1 trillion stratosphere</a><br><a href="https://www.0xsid.com/blog/accelerated-ai-fomo">Accelerated FOMO in the Age of AI</a><br><a href="https://www.theguardian.com/technology/2026/mar/24/openai-ai-video-sora">OpenAI shutters AI video generator Sora in abrupt announcement</a><br><br><strong>Connect with ADIPod</strong></p><ul><li>Email us at humans@adipod.ai if you have any feedback, requests, or just want to say hello!</li><li>Check out our website <a href="https://www.adipod.ai/">www.adipod.ai</a></li></ul>]]>
      </content:encoded>
      <pubDate>Fri, 27 Mar 2026 05:00:00 -0700</pubDate>
      <author>Shimin Zhang, Dan Lasky, &amp; Rahul Yadav</author>
      <enclosure url="https://op3.dev/e/prfx.byspotify.com/e/media.transistor.fm/cf74d6a9/8301d2c7.mp3" length="35375299" type="audio/mpeg"/>
      <itunes:author>Shimin Zhang, Dan Lasky, &amp; Rahul Yadav</itunes:author>
      <itunes:duration>4404</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This week, Rahul, Shimin, and Dan cover Claude Code's new channels and scheduling features, a Meta security incident caused by AI-generated advice, Anthropic's survey of 81,000 people on AI expectations, Dan's vibe-coded vector memory CLI project, a deep dive on the paper "Thinking, Fast, Slow and Artificial" about cognitive surrender to AI, a rant about AI tokens as employee compensation, and bubble watch updates including NVIDIA's trillion-dollar demand projections and OpenAI shutting down Sora.</p><p><strong>Takeaways:</strong></p><ul><li>Claude Code is rapidly absorbing community-developed workflows — the moat may only be in the general model capabilities, not tooling</li><li>The Meta incident illustrates the emerging pattern of AI-caused production incidents and the need for process guardrails around agent usage</li><li>Cognitive surrender to AI creates a widening gap: those with high need-for-cognition benefit more while those who dislike effortful thinking defer even more</li><li>AI confidence inflation (12 percentage point boost) may stem from treating AI like authoritative reference material (encyclopedias, Wikipedia)</li><li>Historical technology resistance (Socrates on writing, farmers on tractors) suggests the battle against AI adoption may already be lost</li><li>OpenAI shutting down Sora just 4 months after a 3-year Disney partnership signals deeper financial or strategic issues</li></ul><p><strong>Resources Mentioned<br></strong><a href="https://code.claude.com/docs/en/channels">Push events into a running session with channels</a><br><a href="https://simonwillison.net/2026/Mar/9/not-so-boring/">Perhaps not Boring Technology after all</a><br><a href="https://techcrunch.com/2026/03/18/meta-is-having-trouble-with-rogue-ai-agents/">Meta is having trouble with rogue AI agents</a><br><a href="https://www.anthropic.com/features/81k-interviews">What 81,000 people want from AI</a><br><a href="https://github.com/dlasky/vec-memory-cli">Dan's vec-memory-cli</a><br><a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6097646">Thinking—Fast, Slow, and Artificial</a><br><a href="https://techcrunch.com/2026/03/21/are-ai-tokens-the-new-signing-bonus-or-just-a-cost-of-doing-business/">Are AI tokens the new signing bonus or just a cost of doing business?</a><br><a href="https://techcrunch.com/2026/03/16/jensen-just-put-nvidias-blackwell-and-vera-rubin-sales-projections-into-the-1-trillion-stratosphere/">Jensen Huang just put Nvidia’s Blackwell and Vera Rubin sales projections into the $1 trillion stratosphere</a><br><a href="https://www.0xsid.com/blog/accelerated-ai-fomo">Accelerated FOMO in the Age of AI</a><br><a href="https://www.theguardian.com/technology/2026/mar/24/openai-ai-video-sora">OpenAI shutters AI video generator Sora in abrupt announcement</a><br><br><strong>Connect with ADIPod</strong></p><ul><li>Email us at humans@adipod.ai if you have any feedback, requests, or just want to say hello!</li><li>Check out our website <a href="https://www.adipod.ai/">www.adipod.ai</a></li></ul>]]>
      </itunes:summary>
      <itunes:keywords>Claude Code, channels, Claude Code plugins, scheduling, OpenClaw, Meta, security incident, credential leak, agent trust, Simon Willison, AIX, agentic discovery, GEO, AEO, Anthropic survey, 81000 people, light and shade, cognitive surrender, System 3, Thinking Fast and Slow, Kahneman, vector memory, SQLite, Ollama, embeddings, MCP, CLI, token compensation, signing bonus, Jensen Huang, NVIDIA, Blackwell, Vera Rubin, Sora, OpenAI FOMO, space data center</itunes:keywords>
      <itunes:explicit>Yes</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/cf74d6a9/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>Ep 18: 8 Levels of AI Engineering, Meta AI Delays, and LLM Neuroanatomy</title>
      <itunes:episode>18</itunes:episode>
      <podcast:episode>18</podcast:episode>
      <itunes:title>Ep 18: 8 Levels of AI Engineering, Meta AI Delays, and LLM Neuroanatomy</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">2a5a5615-46f5-44b7-a116-7ae0f7c211de</guid>
      <link>https://www.adipod.ai/18</link>
      <description>
        <![CDATA[<p>This week, Dan, Shimin &amp; Rahul cover Meta's struggles with its delayed "Avocado" AI model and potential Gemini licensing, NVIDIA's enterprise-ready NemoClaw fork of OpenClaw, SWE-bench analysis showing PRs wouldn't pass human review, prompting superstitions and developer identity, the 8 levels of agentic engineering, mainstream media framing of AI coding, legal liability for agent-written code, and a deep dive into LLM neuroanatomy where a researcher topped leaderboards by repeating model layers without changing weights.</p><p><strong>Takeaways:</strong></p><ul><li>Meta may end up licensing Gemini despite massive AI investment — mirroring Apple's path</li><li>SWE-bench failures were mostly code quality, not functionality — suggesting "good enough" may be good enough with a proper agents.md</li><li>A coworker analyzed 4.5 years of PRs to create a personalized coding style document for AI priming</li><li>The claw/agent paradigm may be the fastest software paradigm adoption cycle ever</li><li>Legal frameworks and insurance haven't caught up to agent-written code shipping to production</li><li>Repeating later model layers (the "thinking" layers) can boost performance without fine-tuning — raising questions about whether chain-of-thought reasoning is essentially exercising these layers repeatedly</li><li>Developers compared to ancient Egyptian scribes — language literacy as leverage</li></ul><p><strong>Resources Mentioned<br></strong><a href="https://www.nytimes.com/2026/03/12/technology/meta-avocado-ai-model-delayed.html">Meta Delays Rollout of New A.I. Model After Performance Concerns</a><br><a href="https://www.nvidia.com/en-us/ai/nemoclaw/">NVIDIA NemoClaw</a><br><a href="https://metr.org/notes/2026-03-10-many-swe-bench-passing-prs-would-not-be-merged-into-main/">Research note: Many SWE-bench-Passing PRs Would Not Be Merged into Main</a><br><a href="https://worksonmymachine.ai/p/the-collective-superstitions-of-people">The Collective Superstitions of People Who Talk to Machines</a><br><a href="https://www.bassimeledath.com/blog/levels-of-agentic-engineering">The 8 Levels of Agentic Engineering</a><br><a href="https://www.nytimes.com/2026/03/12/magazine/ai-coding-programming-jobs-claude-chatgpt.html">Coding After Coders: The End of Computer Programming as We Know It</a><br><a href="https://law.stanford.edu/2026/02/08/built-by-agents-tested-by-agents-trusted-by-whom/">Built by Agents, Tested by Agents, Trusted by Whom?</a><br><a href="https://dnhkng.github.io/posts/rys/">LLM Neuroanatomy: How I Topped the LLM Leaderboard Without Changing a Single Weight</a><br><br><strong>Chapters<br></strong></p><ul><li>(00:00) - Introduction to AI in Software Development</li>
<li>(02:42) - Meta's AI Model Delays and Market Position</li>
<li>(09:51) - NVIDIA's New AI Developments</li>
<li>(13:58) - Benchmarking AI Models and Code Quality</li>
<li>(19:00) - Techniques Corner: AI Prompting and Creativity</li>
<li>(22:56) - The Evolution of Coding and Creativity</li>
<li>(28:46) - Levels of Agentic Engineering</li>
<li>(34:58) - Mainstream Perspectives on AI and Software Development</li>
<li>(43:00) - Trusting AI-Generated Code</li>
<li>(44:40) - Metrics for Success in Autonomous Teams</li>
<li>(46:59) - Legal and Ethical Implications of Autonomous Code</li>
<li>(50:21) - Innovations in Language Model Architectures</li>
<li>(01:01:02) - User Experience Challenges in Tech Development</li>
<li>(01:03:47) - Market Predictions and Financial Insights</li>
</ul><br><strong>Connect with ADIPod</strong><ul><li>Email us at humans@adipod.ai if you have any feedback, requests, or just want to say hello!</li><li>Check out our website <a href="https://www.adipod.ai/">www.adipod.ai</a></li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This week, Dan, Shimin &amp; Rahul cover Meta's struggles with its delayed "Avocado" AI model and potential Gemini licensing, NVIDIA's enterprise-ready NemoClaw fork of OpenClaw, SWE-bench analysis showing PRs wouldn't pass human review, prompting superstitions and developer identity, the 8 levels of agentic engineering, mainstream media framing of AI coding, legal liability for agent-written code, and a deep dive into LLM neuroanatomy where a researcher topped leaderboards by repeating model layers without changing weights.</p><p><strong>Takeaways:</strong></p><ul><li>Meta may end up licensing Gemini despite massive AI investment — mirroring Apple's path</li><li>SWE-bench failures were mostly code quality, not functionality — suggesting "good enough" may be good enough with a proper agents.md</li><li>A coworker analyzed 4.5 years of PRs to create a personalized coding style document for AI priming</li><li>The claw/agent paradigm may be the fastest software paradigm adoption cycle ever</li><li>Legal frameworks and insurance haven't caught up to agent-written code shipping to production</li><li>Repeating later model layers (the "thinking" layers) can boost performance without fine-tuning — raising questions about whether chain-of-thought reasoning is essentially exercising these layers repeatedly</li><li>Developers compared to ancient Egyptian scribes — language literacy as leverage</li></ul><p><strong>Resources Mentioned<br></strong><a href="https://www.nytimes.com/2026/03/12/technology/meta-avocado-ai-model-delayed.html">Meta Delays Rollout of New A.I. Model After Performance Concerns</a><br><a href="https://www.nvidia.com/en-us/ai/nemoclaw/">NVIDIA NemoClaw</a><br><a href="https://metr.org/notes/2026-03-10-many-swe-bench-passing-prs-would-not-be-merged-into-main/">Research note: Many SWE-bench-Passing PRs Would Not Be Merged into Main</a><br><a href="https://worksonmymachine.ai/p/the-collective-superstitions-of-people">The Collective Superstitions of People Who Talk to Machines</a><br><a href="https://www.bassimeledath.com/blog/levels-of-agentic-engineering">The 8 Levels of Agentic Engineering</a><br><a href="https://www.nytimes.com/2026/03/12/magazine/ai-coding-programming-jobs-claude-chatgpt.html">Coding After Coders: The End of Computer Programming as We Know It</a><br><a href="https://law.stanford.edu/2026/02/08/built-by-agents-tested-by-agents-trusted-by-whom/">Built by Agents, Tested by Agents, Trusted by Whom?</a><br><a href="https://dnhkng.github.io/posts/rys/">LLM Neuroanatomy: How I Topped the LLM Leaderboard Without Changing a Single Weight</a><br><br><strong>Chapters<br></strong></p><ul><li>(00:00) - Introduction to AI in Software Development</li>
<li>(02:42) - Meta's AI Model Delays and Market Position</li>
<li>(09:51) - NVIDIA's New AI Developments</li>
<li>(13:58) - Benchmarking AI Models and Code Quality</li>
<li>(19:00) - Techniques Corner: AI Prompting and Creativity</li>
<li>(22:56) - The Evolution of Coding and Creativity</li>
<li>(28:46) - Levels of Agentic Engineering</li>
<li>(34:58) - Mainstream Perspectives on AI and Software Development</li>
<li>(43:00) - Trusting AI-Generated Code</li>
<li>(44:40) - Metrics for Success in Autonomous Teams</li>
<li>(46:59) - Legal and Ethical Implications of Autonomous Code</li>
<li>(50:21) - Innovations in Language Model Architectures</li>
<li>(01:01:02) - User Experience Challenges in Tech Development</li>
<li>(01:03:47) - Market Predictions and Financial Insights</li>
</ul><br><strong>Connect with ADIPod</strong><ul><li>Email us at humans@adipod.ai if you have any feedback, requests, or just want to say hello!</li><li>Check out our website <a href="https://www.adipod.ai/">www.adipod.ai</a></li></ul>]]>
      </content:encoded>
      <pubDate>Fri, 20 Mar 2026 05:00:00 -0700</pubDate>
      <author>Shimin Zhang, Dan Lasky, &amp; Rahul Yadav</author>
      <enclosure url="https://op3.dev/e/prfx.byspotify.com/e/media.transistor.fm/9eb3baef/6edcddd7.mp3" length="32888599" type="audio/mpeg"/>
      <itunes:author>Shimin Zhang, Dan Lasky, &amp; Rahul Yadav</itunes:author>
      <itunes:duration>4093</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This week, Dan, Shimin &amp; Rahul cover Meta's struggles with its delayed "Avocado" AI model and potential Gemini licensing, NVIDIA's enterprise-ready NemoClaw fork of OpenClaw, SWE-bench analysis showing PRs wouldn't pass human review, prompting superstitions and developer identity, the 8 levels of agentic engineering, mainstream media framing of AI coding, legal liability for agent-written code, and a deep dive into LLM neuroanatomy where a researcher topped leaderboards by repeating model layers without changing weights.</p><p><strong>Takeaways:</strong></p><ul><li>Meta may end up licensing Gemini despite massive AI investment — mirroring Apple's path</li><li>SWE-bench failures were mostly code quality, not functionality — suggesting "good enough" may be good enough with a proper agents.md</li><li>A coworker analyzed 4.5 years of PRs to create a personalized coding style document for AI priming</li><li>The claw/agent paradigm may be the fastest software paradigm adoption cycle ever</li><li>Legal frameworks and insurance haven't caught up to agent-written code shipping to production</li><li>Repeating later model layers (the "thinking" layers) can boost performance without fine-tuning — raising questions about whether chain-of-thought reasoning is essentially exercising these layers repeatedly</li><li>Developers compared to ancient Egyptian scribes — language literacy as leverage</li></ul><p><strong>Resources Mentioned<br></strong><a href="https://www.nytimes.com/2026/03/12/technology/meta-avocado-ai-model-delayed.html">Meta Delays Rollout of New A.I. Model After Performance Concerns</a><br><a href="https://www.nvidia.com/en-us/ai/nemoclaw/">NVIDIA NemoClaw</a><br><a href="https://metr.org/notes/2026-03-10-many-swe-bench-passing-prs-would-not-be-merged-into-main/">Research note: Many SWE-bench-Passing PRs Would Not Be Merged into Main</a><br><a href="https://worksonmymachine.ai/p/the-collective-superstitions-of-people">The Collective Superstitions of People Who Talk to Machines</a><br><a href="https://www.bassimeledath.com/blog/levels-of-agentic-engineering">The 8 Levels of Agentic Engineering</a><br><a href="https://www.nytimes.com/2026/03/12/magazine/ai-coding-programming-jobs-claude-chatgpt.html">Coding After Coders: The End of Computer Programming as We Know It</a><br><a href="https://law.stanford.edu/2026/02/08/built-by-agents-tested-by-agents-trusted-by-whom/">Built by Agents, Tested by Agents, Trusted by Whom?</a><br><a href="https://dnhkng.github.io/posts/rys/">LLM Neuroanatomy: How I Topped the LLM Leaderboard Without Changing a Single Weight</a><br><br><strong>Chapters<br></strong></p><ul><li>(00:00) - Introduction to AI in Software Development</li>
<li>(02:42) - Meta's AI Model Delays and Market Position</li>
<li>(09:51) - NVIDIA's New AI Developments</li>
<li>(13:58) - Benchmarking AI Models and Code Quality</li>
<li>(19:00) - Techniques Corner: AI Prompting and Creativity</li>
<li>(22:56) - The Evolution of Coding and Creativity</li>
<li>(28:46) - Levels of Agentic Engineering</li>
<li>(34:58) - Mainstream Perspectives on AI and Software Development</li>
<li>(43:00) - Trusting AI-Generated Code</li>
<li>(44:40) - Metrics for Success in Autonomous Teams</li>
<li>(46:59) - Legal and Ethical Implications of Autonomous Code</li>
<li>(50:21) - Innovations in Language Model Architectures</li>
<li>(01:01:02) - User Experience Challenges in Tech Development</li>
<li>(01:03:47) - Market Predictions and Financial Insights</li>
</ul><br><strong>Connect with ADIPod</strong><ul><li>Email us at humans@adipod.ai if you have any feedback, requests, or just want to say hello!</li><li>Check out our website <a href="https://www.adipod.ai/">www.adipod.ai</a></li></ul>]]>
      </itunes:summary>
      <itunes:keywords>Meta Avocado, Gemini licensing, NVIDIA NemoClaw, OpenClaw, SWE-bench, PR review, code quality, agents.md, agentic engineering levels, context engineering, harness engineering, background agents, NYT coding, glue code, Goodhart's Law, agent liability, LLM neuroanatomy, layer repetition, model circuits, mechanistic interpretability, developer identity</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/9eb3baef/transcript.txt" type="text/plain"/>
      <podcast:chapters url="https://share.transistor.fm/s/9eb3baef/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Ep 17: Slop Garbage Collection, Cleanroom Rewrites, and Will Claude Ruin our Teams?</title>
      <itunes:episode>17</itunes:episode>
      <podcast:episode>17</podcast:episode>
      <itunes:title>Ep 17: Slop Garbage Collection, Cleanroom Rewrites, and Will Claude Ruin our Teams?</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">c59e6d5c-92e7-455f-b5bb-b1b8dae7e7b1</guid>
      <link>https://www.adipod.ai/episodes/17-slop-garbage-collection-cleanroom-rewrites-and-will-claude-ruin-our-teams/</link>
      <description>
        <![CDATA[<p>In this episode, Dan and Shimin follow up on the Anthropic Pentagon drama (supply chain risk designation, lawsuit, and big tech backing Anthropic), open-source licensing controversy around AI-generated clean-room rewrites, team dynamics in the age of AI coding tools, OpenAI's harness engineering blog post, two vibe-and-tell segments (Dan building custom Arch Linux images for a TuringPie cluster board, Shimin building FlatterProof — an AI sycophancy training app), and a bubble clock update driven by Oracle job cuts and AWS AI-related downtime.</p><p><strong>Takeaways</strong></p><ul><li>AI as a force multiplier for team culture: good teams move faster, bad teams explode faster</li><li>Prompt debt is now a real concern alongside technical debt — agents.md files rot just like code</li><li>Code garbage collection (periodic AI-driven cleanup) is emerging as a best practice</li><li>Cross-functional pair programming with AI (PM + engineer) represents a bright future for team collaboration</li><li>Senior engineers are now required to sign off on AI-assisted changes at Amazon, but review fatigue is unsustainable</li><li>AI-generated SVG icons are surprisingly good and practical for real projects</li><li>Oracle may be the canary in the coal mine for the AI bubble, not the frontier labs themselves</li></ul><p><strong>Resources Mentioned<br></strong><a href="https://futurism.com/artificial-intelligence/pentagon-ai-claude-bombing-elementary-school">Pentagon Refuses to Say If AI Was Used to Select Elementary School as Bombing Target</a><br><a href="https://www.reuters.com/world/anthropic-sues-block-pentagon-blacklisting-over-ai-use-restrictions-2026-03-09/">Anthropic sues to block Pentagon blacklisting over AI use restrictions</a><br><a href="https://techcrunch.com/2026/03/03/alibabas-qwen-tech-lead-steps-down-after-major-ai-push/">Alibaba’s Qwen tech lead steps down after major AI push</a><br><a href="https://venturebeat.com/technology/did-alibaba-just-kneecap-its-powerful-qwen-ai-team-key-figures-depart-in">Did Alibaba just kneecap its powerful Qwen AI team? Key figures depart in wake of latest open source release</a><br><a href="https://simonwillison.net/2026/Mar/5/chardet/">Can coding agents relicense open source through a “clean room” implementation of code?</a><br><a href="https://antirez.com/news/162">GNU and the AI reimplementations</a><br><a href="https://justinjackson.ca/claude-code-ruin">Will Claude Code ruin our team?</a><br><a href="https://openai.com/index/harness-engineering/">Harness engineering: leveraging Codex in an agent-first world</a><br><a href="https://www.reuters.com/business/oracle-plans-thousands-job-cuts-data-center-costs-rise-bloomberg-news-reports-2026-03-05/">Oracle plans thousands of job cuts as data center costs rise, Bloomberg News reports</a><br><a href="https://arstechnica.com/ai/2026/03/after-outages-amazon-to-make-senior-engineers-sign-off-on-ai-assisted-changes/">After outages, Amazon to make senior engineers sign off on AI-assisted changes</a></p><p><strong>Chapters<br></strong></p><ul><li>(00:00) - Introduction</li>
<li>(02:50) - Anthropic and Pentagon Drama</li>
<li>(05:01) - Alibaba's Qwen Development Team Changes</li>
<li>(07:34) - Open Source Drama with the chardet Library</li>
<li>(24:08) - The Impact of AI on Team Dynamics</li>
<li>(29:15) - Harness Engineering and Codex in AI Development</li>
<li>(31:39) - Empowering Agents with Tools</li>
<li>(34:06) - The Importance of Documentation</li>
<li>(36:26) - Architectural Boundaries and Testing</li>
<li>(38:29) - Innovative Projects and Personal Experiments</li>
<li>(46:56) - FlatterProof: Combating AI Sycophancy</li>
<li>(54:27) - The AI Bubble Clock: Current State of Affairs</li>
<li>(01:02:29) - ADI Intro.mp4</li>
</ul><br><strong>Connect with ADIPod</strong><ul><li>Email us at humans@adipod.ai if you have any feedback, requests, or just want to say hello!</li><li>Check out our website <a href="https://www.adipod.ai/">www.adipod.ai</a></li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode, Dan and Shimin follow up on the Anthropic Pentagon drama (supply chain risk designation, lawsuit, and big tech backing Anthropic), open-source licensing controversy around AI-generated clean-room rewrites, team dynamics in the age of AI coding tools, OpenAI's harness engineering blog post, two vibe-and-tell segments (Dan building custom Arch Linux images for a TuringPie cluster board, Shimin building FlatterProof — an AI sycophancy training app), and a bubble clock update driven by Oracle job cuts and AWS AI-related downtime.</p><p><strong>Takeaways</strong></p><ul><li>AI as a force multiplier for team culture: good teams move faster, bad teams explode faster</li><li>Prompt debt is now a real concern alongside technical debt — agents.md files rot just like code</li><li>Code garbage collection (periodic AI-driven cleanup) is emerging as a best practice</li><li>Cross-functional pair programming with AI (PM + engineer) represents a bright future for team collaboration</li><li>Senior engineers are now required to sign off on AI-assisted changes at Amazon, but review fatigue is unsustainable</li><li>AI-generated SVG icons are surprisingly good and practical for real projects</li><li>Oracle may be the canary in the coal mine for the AI bubble, not the frontier labs themselves</li></ul><p><strong>Resources Mentioned<br></strong><a href="https://futurism.com/artificial-intelligence/pentagon-ai-claude-bombing-elementary-school">Pentagon Refuses to Say If AI Was Used to Select Elementary School as Bombing Target</a><br><a href="https://www.reuters.com/world/anthropic-sues-block-pentagon-blacklisting-over-ai-use-restrictions-2026-03-09/">Anthropic sues to block Pentagon blacklisting over AI use restrictions</a><br><a href="https://techcrunch.com/2026/03/03/alibabas-qwen-tech-lead-steps-down-after-major-ai-push/">Alibaba’s Qwen tech lead steps down after major AI push</a><br><a href="https://venturebeat.com/technology/did-alibaba-just-kneecap-its-powerful-qwen-ai-team-key-figures-depart-in">Did Alibaba just kneecap its powerful Qwen AI team? Key figures depart in wake of latest open source release</a><br><a href="https://simonwillison.net/2026/Mar/5/chardet/">Can coding agents relicense open source through a “clean room” implementation of code?</a><br><a href="https://antirez.com/news/162">GNU and the AI reimplementations</a><br><a href="https://justinjackson.ca/claude-code-ruin">Will Claude Code ruin our team?</a><br><a href="https://openai.com/index/harness-engineering/">Harness engineering: leveraging Codex in an agent-first world</a><br><a href="https://www.reuters.com/business/oracle-plans-thousands-job-cuts-data-center-costs-rise-bloomberg-news-reports-2026-03-05/">Oracle plans thousands of job cuts as data center costs rise, Bloomberg News reports</a><br><a href="https://arstechnica.com/ai/2026/03/after-outages-amazon-to-make-senior-engineers-sign-off-on-ai-assisted-changes/">After outages, Amazon to make senior engineers sign off on AI-assisted changes</a></p><p><strong>Chapters<br></strong></p><ul><li>(00:00) - Introduction</li>
<li>(02:50) - Anthropic and Pentagon Drama</li>
<li>(05:01) - Alibaba's Qwen Development Team Changes</li>
<li>(07:34) - Open Source Drama with the chardet Library</li>
<li>(24:08) - The Impact of AI on Team Dynamics</li>
<li>(29:15) - Harness Engineering and Codex in AI Development</li>
<li>(31:39) - Empowering Agents with Tools</li>
<li>(34:06) - The Importance of Documentation</li>
<li>(36:26) - Architectural Boundaries and Testing</li>
<li>(38:29) - Innovative Projects and Personal Experiments</li>
<li>(46:56) - FlatterProof: Combating AI Sycophancy</li>
<li>(54:27) - The AI Bubble Clock: Current State of Affairs</li>
<li>(01:02:29) - ADI Intro.mp4</li>
</ul><br><strong>Connect with ADIPod</strong><ul><li>Email us at humans@adipod.ai if you have any feedback, requests, or just want to say hello!</li><li>Check out our website <a href="https://www.adipod.ai/">www.adipod.ai</a></li></ul>]]>
      </content:encoded>
      <pubDate>Fri, 13 Mar 2026 05:00:00 -0700</pubDate>
      <author>Shimin Zhang, Dan Lasky, &amp; Rahul Yadav</author>
      <enclosure url="https://op3.dev/e/prfx.byspotify.com/e/media.transistor.fm/e8c67141/c0b4a027.mp3" length="30152344" type="audio/mpeg"/>
      <itunes:author>Shimin Zhang, Dan Lasky, &amp; Rahul Yadav</itunes:author>
      <itunes:duration>3751</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode, Dan and Shimin follow up on the Anthropic Pentagon drama (supply chain risk designation, lawsuit, and big tech backing Anthropic), open-source licensing controversy around AI-generated clean-room rewrites, team dynamics in the age of AI coding tools, OpenAI's harness engineering blog post, two vibe-and-tell segments (Dan building custom Arch Linux images for a TuringPie cluster board, Shimin building FlatterProof — an AI sycophancy training app), and a bubble clock update driven by Oracle job cuts and AWS AI-related downtime.</p><p><strong>Takeaways</strong></p><ul><li>AI as a force multiplier for team culture: good teams move faster, bad teams explode faster</li><li>Prompt debt is now a real concern alongside technical debt — agents.md files rot just like code</li><li>Code garbage collection (periodic AI-driven cleanup) is emerging as a best practice</li><li>Cross-functional pair programming with AI (PM + engineer) represents a bright future for team collaboration</li><li>Senior engineers are now required to sign off on AI-assisted changes at Amazon, but review fatigue is unsustainable</li><li>AI-generated SVG icons are surprisingly good and practical for real projects</li><li>Oracle may be the canary in the coal mine for the AI bubble, not the frontier labs themselves</li></ul><p><strong>Resources Mentioned<br></strong><a href="https://futurism.com/artificial-intelligence/pentagon-ai-claude-bombing-elementary-school">Pentagon Refuses to Say If AI Was Used to Select Elementary School as Bombing Target</a><br><a href="https://www.reuters.com/world/anthropic-sues-block-pentagon-blacklisting-over-ai-use-restrictions-2026-03-09/">Anthropic sues to block Pentagon blacklisting over AI use restrictions</a><br><a href="https://techcrunch.com/2026/03/03/alibabas-qwen-tech-lead-steps-down-after-major-ai-push/">Alibaba’s Qwen tech lead steps down after major AI push</a><br><a href="https://venturebeat.com/technology/did-alibaba-just-kneecap-its-powerful-qwen-ai-team-key-figures-depart-in">Did Alibaba just kneecap its powerful Qwen AI team? Key figures depart in wake of latest open source release</a><br><a href="https://simonwillison.net/2026/Mar/5/chardet/">Can coding agents relicense open source through a “clean room” implementation of code?</a><br><a href="https://antirez.com/news/162">GNU and the AI reimplementations</a><br><a href="https://justinjackson.ca/claude-code-ruin">Will Claude Code ruin our team?</a><br><a href="https://openai.com/index/harness-engineering/">Harness engineering: leveraging Codex in an agent-first world</a><br><a href="https://www.reuters.com/business/oracle-plans-thousands-job-cuts-data-center-costs-rise-bloomberg-news-reports-2026-03-05/">Oracle plans thousands of job cuts as data center costs rise, Bloomberg News reports</a><br><a href="https://arstechnica.com/ai/2026/03/after-outages-amazon-to-make-senior-engineers-sign-off-on-ai-assisted-changes/">After outages, Amazon to make senior engineers sign off on AI-assisted changes</a></p><p><strong>Chapters<br></strong></p><ul><li>(00:00) - Introduction</li>
<li>(02:50) - Anthropic and Pentagon Drama</li>
<li>(05:01) - Alibaba's Qwen Development Team Changes</li>
<li>(07:34) - Open Source Drama with the Chardet Library</li>
<li>(24:08) - The Impact of AI on Team Dynamics</li>
<li>(29:15) - Harness Engineering and Codex in AI Development</li>
<li>(31:39) - Empowering Agents with Tools</li>
<li>(34:06) - The Importance of Documentation</li>
<li>(36:26) - Architectural Boundaries and Testing</li>
<li>(38:29) - Innovative Projects and Personal Experiments</li>
<li>(46:56) - FlatterProof: Combating AI Sycophancy</li>
<li>(54:27) - The AI Bubble Clock: Current State of Affairs</li>
<li>(01:02:29) - ADI Intro.mp4</li>
</ul><br><strong>Connect with ADIPod</strong><ul><li>Email us at humans@adipod.ai if you have any feedback, requests, or just want to say hello!</li><li>Check out our website <a href="https://www.adipod.ai/">www.adipod.ai</a></li></ul>]]>
      </itunes:summary>
      <itunes:keywords>Pentagon AI, Anthropic DOD, supply chain risk, clean room rewrite, Mosh, open source licensing, VSDD, verified spec-driven development, Claude Code, team dynamics, cross-functional teams, OpenAI harness engineering, agents.md, progressive disclosure, code garbage collection, invariants, linters, Turing Pi, Arch Linux, RK1, cluster board, FlatterProof, AI sycophancy, Google Stitch, concurrent agents, worktrees, Oracle job cuts, Stargate, AWS outage, Kiro, bubble clock, two minutes to midnight, Superpower, ReactOS, GNU</itunes:keywords>
      <itunes:explicit>Yes</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/e8c67141/transcript.txt" type="text/plain"/>
      <podcast:chapters url="https://share.transistor.fm/s/e8c67141/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Pentagon Anthropic Drama, Verified Spec-Driven Development, and Interview with Martin Alderson!</title>
      <itunes:episode>16</itunes:episode>
      <podcast:episode>16</podcast:episode>
      <itunes:title>Pentagon Anthropic Drama, Verified Spec-Driven Development, and Interview with Martin Alderson!</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">2fb534b9-d6e0-4a02-be49-8f6df05515d9</guid>
      <link>https://www.adipod.ai/episodes/16-pentagon-anthropic-drama-verified-spec-driven-development-and-interview-with-martin-alderson/</link>
      <description>
        <![CDATA[<p>In this episode, Dan, Shimin and Rahul cover the Pentagon drama between Anthropic/OpenAI and the Department of Defense over AI usage red lines, introduce Steerling-8B — the first inherently interpretable language model — and explore verified spec-driven development (VSDD). The episode features the show's first interview, with Martin Alderson discussing which web frameworks are most token-efficient for AI agents.</p><p><strong>Takeaways</strong></p><ul><li>Pentagon AI drama: Anthropic's contract red lines (no mass domestic surveillance, no autonomous weapons), the Department of Defense threatening to label Anthropic a supply chain risk, OpenAI swooping in with a competing contract under vague 'lawful use' terms, and Sam Altman's statements</li><li>Steerling-8B by Guide Labs: the first inherently interpretable LLM with concept attribution, input context tracing, and training data attribution; uses a concept head with orthogonal loss functions to create non-overlapping interpretable concepts</li><li>Verified Spec-Driven Development (VSDD): a methodology by DollSpace combining spec-driven development, TDD, and adversarial verification gates at each phase; Shimin tested it on a side project using Claude Code</li><li>Interview with Martin Alderson: web framework token efficiency experiment (19 frameworks, minimal frameworks like Flask/Express most efficient), new framework discovery in the AI age, using OpenCode for CI/CD PR reviews, keeping Claude.md files updated via scheduled tasks, and building internal CLIs for agent access</li><li>Two Minutes to Midnight: Citadel Securities report on AI adoption S-curves vs recursive improvement, the Substack post about a white-collar job crisis that shook the S&amp;P 500, Block laying off 45% of its workforce citing AI productivity gains</li></ul><p><strong>Resources Mentioned<br></strong><a href="https://thezvi.substack.com/p/anthropic-and-the-department-of-war">Anthropic and the Department of War</a><strong><br></strong><a 
href="https://x.com/sama/status/2027578580159631610">Sam Altman's Tweet</a><br><a href="https://openai.com/index/our-agreement-with-the-department-of-war/">Our agreement with the Department of War</a><br><a href="https://www.astralcodexten.com/p/all-lawful-use-much-more-than-you">"All Lawful Use": Much More Than You Wanted To Know</a><br><a href="https://www.guidelabs.ai/post/steerling-8b-base-model-release/">Steerling-8B: The First Inherently Interpretable Language Model</a><br><a href="https://gist.github.com/dollspace-gay/d8d3bc3ecf4188df049d7a4726bb2a00">Verified Spec-Driven Development (VSDD)</a><br><a href="https://martinalderson.com/posts/which-web-frameworks-are-most-token-efficient-for-ai-agents/">Which web frameworks are most token-efficient for AI agents?</a><br><a href="https://www.citadelsecurities.com/news-and-insights/2026-global-intelligence-crisis/">The 2026 Global Intelligence Crisis</a><br><a href="https://www.theguardian.com/technology/2026/feb/24/feedback-loop-no-brake-how-ai-doomsday-report-rattled-markets">‘A feedback loop with no brake’: how an AI doomsday report shook US markets</a><br><a href="https://www.cnbc.com/2026/02/26/block-laying-off-about-4000-employees-nearly-half-of-its-workforce.html">Block shares soar as much as 24% as company slashes workforce by nearly half</a><br><a href="https://x.com/elidourado/status/2026060408055021752">Eli Dourado's Tweet</a><strong><br></strong><br><strong>Chapters<br></strong></p><ul><li>(00:00) - Introduction to ADI</li>
<li>(02:55) - Pentagon Drama and AI Models</li>
<li>(21:36) - OpenAI vs Anthropic: The Contract Controversy</li>
<li>(28:19) - Innovations in AI: Interpretable Language Models</li>
<li>(28:42) - Scaling Language Models and Their Implications</li>
<li>(29:09) - Introduction to Verified Spec Driven Development</li>
<li>(33:47) - Interview with Martin Alderson</li>
<li>(55:21) - AI Bubble Watch: Current Trends and Predictions</li>
<li>(58:47) - The Impact of AI on Job Markets</li>
<li>(01:04:00) - Reflections on AI's Role in the Economy</li>
</ul><br><strong>Connect with ADIPod</strong><ul><li>Email us at humans@adipod.ai if you have any feedback, requests, or just want to say hello!</li><li>Check out our website <a href="https://www.adipod.ai/">www.adipod.ai</a></li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode, Dan, Shimin and Rahul cover the Pentagon drama between Anthropic/OpenAI and the Department of Defense over AI usage red lines, introduce Steerling-8B — the first inherently interpretable language model — and explore verified spec-driven development (VSDD). The episode features the show's first interview, with Martin Alderson discussing which web frameworks are most token-efficient for AI agents.</p><p><strong>Takeaways</strong></p><ul><li>Pentagon AI drama: Anthropic's contract red lines (no mass domestic surveillance, no autonomous weapons), the Department of Defense threatening to label Anthropic a supply chain risk, OpenAI swooping in with a competing contract under vague 'lawful use' terms, and Sam Altman's statements</li><li>Steerling-8B by Guide Labs: the first inherently interpretable LLM with concept attribution, input context tracing, and training data attribution; uses a concept head with orthogonal loss functions to create non-overlapping interpretable concepts</li><li>Verified Spec-Driven Development (VSDD): a methodology by DollSpace combining spec-driven development, TDD, and adversarial verification gates at each phase; Shimin tested it on a side project using Claude Code</li><li>Interview with Martin Alderson: web framework token efficiency experiment (19 frameworks, minimal frameworks like Flask/Express most efficient), new framework discovery in the AI age, using OpenCode for CI/CD PR reviews, keeping Claude.md files updated via scheduled tasks, and building internal CLIs for agent access</li><li>Two Minutes to Midnight: Citadel Securities report on AI adoption S-curves vs recursive improvement, the Substack post about a white-collar job crisis that shook the S&amp;P 500, Block laying off 45% of its workforce citing AI productivity gains</li></ul><p><strong>Resources Mentioned<br></strong><a href="https://thezvi.substack.com/p/anthropic-and-the-department-of-war">Anthropic and the Department of War</a><strong><br></strong><a 
href="https://x.com/sama/status/2027578580159631610">Sam Altman's Tweet</a><br><a href="https://openai.com/index/our-agreement-with-the-department-of-war/">Our agreement with the Department of War</a><br><a href="https://www.astralcodexten.com/p/all-lawful-use-much-more-than-you">"All Lawful Use": Much More Than You Wanted To Know</a><br><a href="https://www.guidelabs.ai/post/steerling-8b-base-model-release/">Steerling-8B: The First Inherently Interpretable Language Model</a><br><a href="https://gist.github.com/dollspace-gay/d8d3bc3ecf4188df049d7a4726bb2a00">Verified Spec-Driven Development (VSDD)</a><br><a href="https://martinalderson.com/posts/which-web-frameworks-are-most-token-efficient-for-ai-agents/">Which web frameworks are most token-efficient for AI agents?</a><br><a href="https://www.citadelsecurities.com/news-and-insights/2026-global-intelligence-crisis/">The 2026 Global Intelligence Crisis</a><br><a href="https://www.theguardian.com/technology/2026/feb/24/feedback-loop-no-brake-how-ai-doomsday-report-rattled-markets">‘A feedback loop with no brake’: how an AI doomsday report shook US markets</a><br><a href="https://www.cnbc.com/2026/02/26/block-laying-off-about-4000-employees-nearly-half-of-its-workforce.html">Block shares soar as much as 24% as company slashes workforce by nearly half</a><br><a href="https://x.com/elidourado/status/2026060408055021752">Eli Dourado's Tweet</a><strong><br></strong><br><strong>Chapters<br></strong></p><ul><li>(00:00) - Introduction to ADI</li>
<li>(02:55) - Pentagon Drama and AI Models</li>
<li>(21:36) - OpenAI vs Anthropic: The Contract Controversy</li>
<li>(28:19) - Innovations in AI: Interpretable Language Models</li>
<li>(28:42) - Scaling Language Models and Their Implications</li>
<li>(29:09) - Introduction to Verified Spec Driven Development</li>
<li>(33:47) - Interview with Martin Alderson</li>
<li>(55:21) - AI Bubble Watch: Current Trends and Predictions</li>
<li>(58:47) - The Impact of AI on Job Markets</li>
<li>(01:04:00) - Reflections on AI's Role in the Economy</li>
</ul><br><strong>Connect with ADIPod</strong><ul><li>Email us at humans@adipod.ai if you have any feedback, requests, or just want to say hello!</li><li>Check out our website <a href="https://www.adipod.ai/">www.adipod.ai</a></li></ul>]]>
      </content:encoded>
      <pubDate>Fri, 06 Mar 2026 05:00:00 -0800</pubDate>
      <author>Shimin Zhang, Dan Lasky, Rahul Yadav &amp; Martin Alderson</author>
      <enclosure url="https://op3.dev/e/prfx.byspotify.com/e/media.transistor.fm/1c631084/31e0d33e.mp3" length="33070700" type="audio/mpeg"/>
      <itunes:author>Shimin Zhang, Dan Lasky, Rahul Yadav &amp; Martin Alderson</itunes:author>
      <itunes:duration>4116</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode, Dan, Shimin and Rahul cover the Pentagon drama between Anthropic/OpenAI and the Department of Defense over AI usage red lines, introduce Steerling-8B — the first inherently interpretable language model — and explore verified spec-driven development (VSDD). The episode features the show's first interview, with Martin Alderson discussing which web frameworks are most token-efficient for AI agents.</p><p><strong>Takeaways</strong></p><ul><li>Pentagon AI drama: Anthropic's contract red lines (no mass domestic surveillance, no autonomous weapons), the Department of Defense threatening to label Anthropic a supply chain risk, OpenAI swooping in with a competing contract under vague 'lawful use' terms, and Sam Altman's statements</li><li>Steerling-8B by Guide Labs: the first inherently interpretable LLM with concept attribution, input context tracing, and training data attribution; uses a concept head with orthogonal loss functions to create non-overlapping interpretable concepts</li><li>Verified Spec-Driven Development (VSDD): a methodology by DollSpace combining spec-driven development, TDD, and adversarial verification gates at each phase; Shimin tested it on a side project using Claude Code</li><li>Interview with Martin Alderson: web framework token efficiency experiment (19 frameworks, minimal frameworks like Flask/Express most efficient), new framework discovery in the AI age, using OpenCode for CI/CD PR reviews, keeping Claude.md files updated via scheduled tasks, and building internal CLIs for agent access</li><li>Two Minutes to Midnight: Citadel Securities report on AI adoption S-curves vs recursive improvement, the Substack post about a white-collar job crisis that shook the S&amp;P 500, Block laying off 45% of its workforce citing AI productivity gains</li></ul><p><strong>Resources Mentioned<br></strong><a href="https://thezvi.substack.com/p/anthropic-and-the-department-of-war">Anthropic and the Department of War</a><strong><br></strong><a 
href="https://x.com/sama/status/2027578580159631610">Sam Altman's Tweet</a><br><a href="https://openai.com/index/our-agreement-with-the-department-of-war/">Our agreement with the Department of War</a><br><a href="https://www.astralcodexten.com/p/all-lawful-use-much-more-than-you">"All Lawful Use": Much More Than You Wanted To Know</a><br><a href="https://www.guidelabs.ai/post/steerling-8b-base-model-release/">Steerling-8B: The First Inherently Interpretable Language Model</a><br><a href="https://gist.github.com/dollspace-gay/d8d3bc3ecf4188df049d7a4726bb2a00">Verified Spec-Driven Development (VSDD)</a><br><a href="https://martinalderson.com/posts/which-web-frameworks-are-most-token-efficient-for-ai-agents/">Which web frameworks are most token-efficient for AI agents?</a><br><a href="https://www.citadelsecurities.com/news-and-insights/2026-global-intelligence-crisis/">The 2026 Global Intelligence Crisis</a><br><a href="https://www.theguardian.com/technology/2026/feb/24/feedback-loop-no-brake-how-ai-doomsday-report-rattled-markets">‘A feedback loop with no brake’: how an AI doomsday report shook US markets</a><br><a href="https://www.cnbc.com/2026/02/26/block-laying-off-about-4000-employees-nearly-half-of-its-workforce.html">Block shares soar as much as 24% as company slashes workforce by nearly half</a><br><a href="https://x.com/elidourado/status/2026060408055021752">Eli Dourado's Tweet</a><strong><br></strong><br><strong>Chapters<br></strong></p><ul><li>(00:00) - Introduction to ADI</li>
<li>(02:55) - Pentagon Drama and AI Models</li>
<li>(21:36) - OpenAI vs Anthropic: The Contract Controversy</li>
<li>(28:19) - Innovations in AI: Interpretable Language Models</li>
<li>(28:42) - Scaling Language Models and Their Implications</li>
<li>(29:09) - Introduction to Verified Spec Driven Development</li>
<li>(33:47) - Interview with Martin Alderson</li>
<li>(55:21) - AI Bubble Watch: Current Trends and Predictions</li>
<li>(58:47) - The Impact of AI on Job Markets</li>
<li>(01:04:00) - Reflections on AI's Role in the Economy</li>
</ul><br><strong>Connect with ADIPod</strong><ul><li>Email us at humans@adipod.ai if you have any feedback, requests, or just want to say hello!</li><li>Check out our website <a href="https://www.adipod.ai/">www.adipod.ai</a></li></ul>]]>
      </itunes:summary>
      <itunes:keywords>Department of Defense, Anthropic, OpenAI, Palantir, autonomous weapons, mass surveillance, Fourth Amendment, supply chain risk, Defense Production Act, Steerling-8B, Guide Labs, interpretable AI, concept attribution, VSDD, verified spec-driven development, TDD, adversarial verification, Martin Alderson, web frameworks, token efficiency, Next.js, Claude Code, OpenCode, CI/CD, PR review, Claude.md, agents.md, AI bubble, Block layoffs</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/1c631084/transcript.txt" type="text/plain"/>
      <podcast:chapters url="https://share.transistor.fm/s/1c631084/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Convincing AI the Earth is Flat, Inference at 17k tokens/sec, and an Agile Manifesto for the Agentic Age?</title>
      <itunes:episode>15</itunes:episode>
      <podcast:episode>15</podcast:episode>
      <itunes:title>Convincing AI the Earth is Flat, Inference at 17k tokens/sec, and an Agile Manifesto for the Agentic Age?</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">292cfa04-dfc1-4cca-b11c-09ff91d055fe</guid>
      <link>https://www.adipod.ai/episodes/15-convincing-ai-the-earth-is-flat-inference-at-17k-tokens-sec-and-an-agile-manifesto-for-the-agentic-age/</link>
      <description>
        <![CDATA[<p>This episode covers Sonnet 4.6 and Gemini 3.1 Pro model releases, Taalas Labs FPGA-based 17K tokens/sec hardware, the Meta-AMD chip partnership, Steven Sinofsky's argument against "software is dead," a deep dive into the ThoughtWorks Future of Software Engineering retreat findings (from Agile Manifesto signers), Chris Roth's elite AI engineering culture article, a Vibe &amp; Tell segment testing agent sycophancy across three models, and AI bubble economics.</p><p><strong>Takeaways</strong></p><ul><li>Sonnet 4.6: Opus-level reasoning at Sonnet pricing; 72.5 on OSWorld (vs 61.4 for Sonnet 4.5); outperforms Opus 4.6 on agentic financial analysis; trained for computer use</li><li>Taalas Labs FPGA hardware: 17K tokens/sec for Llama 3.1 8B; Chat Jimmy demo; custom hardware as future of inference</li><li>Steven Sinofsky "Death of Software. Nah.": historical parallels (the PC didn't kill mainframes, e-commerce didn't kill retail in 20 years, predictions of media's death were premature); predictions: more software, AI moves up the stack, domain expertise more important; Jevons paradox applied to software</li><li>ThoughtWorks Future of Software Engineering retreat: Agile Manifesto 25th anniversary; where rigor goes (spec-driven development, red-green tests); risk tiering for code review; loss of mentoring through code review; DevEx vs agent experience decoupling; security as afterthought; the "middle loop" (overseeing agents); cognitive debt; agent topology mirroring org structure; knowledge graphs rediscovered; future roles converging; revenge of juniors (IBM hiring); self-healing systems (2-5 year horizon)</li><li>Vibe &amp; Tell — Agent sycophancy testing: Flat earth test (all three models resisted); workplace bias scenario (Jim/Jane); GPT 5.1 Instant best (refused all manipulation); Claude Haiku second (too empathetic, admitted to nudging); Gemini 3 worst (agreed with bias claim); AI as therapist risks; radical candor vs ruinous empathy</li></ul><p><strong>Resources 
Mentioned<br></strong><a href="https://www.anthropic.com/news/claude-sonnet-4-6">Introducing Claude Sonnet 4.6</a><br><a href="https://taalas.com/the-path-to-ubiquitous-ai/">The path to ubiquitous AI</a><br><a href="https://arstechnica.com/ai/2026/02/openai-sidesteps-nvidia-with-unusually-fast-coding-model-on-plate-sized-chips/">OpenAI sidesteps Nvidia with unusually fast coding model on plate-sized chips</a><br><a href="https://hardcoresoftware.learningbyshipping.com/p/238-death-of-software-nah">Death of Software. Nah.</a><br><a href="https://www.thoughtworks.com/content/dam/thoughtworks/documents/report/tw_future%20_of_software_development_retreat_%20key_takeaways.pdf">The future of software engineering</a><br><a href="https://www.cjroth.com/blog/2026-02-18-building-an-elite-engineering-culture">Building An Elite AI Engineering Culture In 2026</a><br><a href="https://oriongemini.substack.com/p/the-number-is-going-up?r=fix4n&amp;utm_medium=ios&amp;triedRedirect=true">The Number Is Going Up</a><br><a href="https://arstechnica.com/ai/2026/02/an-ai-coding-bot-took-down-amazon-web-services/">An AI coding bot took down Amazon Web Services </a></p><p><strong>Chapters<br></strong></p><ul><li>(00:00) - Introduction to AI in Software Engineering</li>
<li>(01:13) - Latest AI Models and Hardware Innovations</li>
<li>(04:10) - The Future of AI Hardware</li>
<li>(10:01) - The Death of Software Debate</li>
<li>(19:35) - The Agile Manifesto and Its Evolution</li>
<li>(33:39) - The Impact of AI on Development Teams</li>
<li>(34:52) - The Future of Junior Developers</li>
<li>(37:11) - Self-Healing Systems and AI Assistance</li>
<li>(39:33) - Building an Elite AI Engineering Culture</li>
<li>(45:27) - AI Experiment and AI Sycophancy</li>
<li>(55:33) - The AI Bubble Clock and Economic Implications</li>
</ul><br><strong>Connect with ADIPod</strong><ul><li>Email us at humans@adipod.ai if you have any feedback, requests, or just want to say hello!</li><li>Check out our website <a href="https://www.adipod.ai/">www.adipod.ai</a></li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This episode covers Sonnet 4.6 and Gemini 3.1 Pro model releases, Taalas Labs FPGA-based 17K tokens/sec hardware, the Meta-AMD chip partnership, Steven Sinofsky's argument against "software is dead," a deep dive into the ThoughtWorks Future of Software Engineering retreat findings (from Agile Manifesto signers), Chris Roth's elite AI engineering culture article, a Vibe &amp; Tell segment testing agent sycophancy across three models, and AI bubble economics.</p><p><strong>Takeaways</strong></p><ul><li>Sonnet 4.6: Opus-level reasoning at Sonnet pricing; 72.5 on OSWorld (vs 61.4 for Sonnet 4.5); outperforms Opus 4.6 on agentic financial analysis; trained for computer use</li><li>Taalas Labs FPGA hardware: 17K tokens/sec for Llama 3.1 8B; Chat Jimmy demo; custom hardware as future of inference</li><li>Steven Sinofsky "Death of Software. Nah.": historical parallels (the PC didn't kill mainframes, e-commerce didn't kill retail in 20 years, predictions of media's death were premature); predictions: more software, AI moves up the stack, domain expertise more important; Jevons paradox applied to software</li><li>ThoughtWorks Future of Software Engineering retreat: Agile Manifesto 25th anniversary; where rigor goes (spec-driven development, red-green tests); risk tiering for code review; loss of mentoring through code review; DevEx vs agent experience decoupling; security as afterthought; the "middle loop" (overseeing agents); cognitive debt; agent topology mirroring org structure; knowledge graphs rediscovered; future roles converging; revenge of juniors (IBM hiring); self-healing systems (2-5 year horizon)</li><li>Vibe &amp; Tell — Agent sycophancy testing: Flat earth test (all three models resisted); workplace bias scenario (Jim/Jane); GPT 5.1 Instant best (refused all manipulation); Claude Haiku second (too empathetic, admitted to nudging); Gemini 3 worst (agreed with bias claim); AI as therapist risks; radical candor vs ruinous empathy</li></ul><p><strong>Resources 
Mentioned<br></strong><a href="https://www.anthropic.com/news/claude-sonnet-4-6">Introducing Claude Sonnet 4.6</a><br><a href="https://taalas.com/the-path-to-ubiquitous-ai/">The path to ubiquitous AI</a><br><a href="https://arstechnica.com/ai/2026/02/openai-sidesteps-nvidia-with-unusually-fast-coding-model-on-plate-sized-chips/">OpenAI sidesteps Nvidia with unusually fast coding model on plate-sized chips</a><br><a href="https://hardcoresoftware.learningbyshipping.com/p/238-death-of-software-nah">Death of Software. Nah.</a><br><a href="https://www.thoughtworks.com/content/dam/thoughtworks/documents/report/tw_future%20_of_software_development_retreat_%20key_takeaways.pdf">The future of software engineering</a><br><a href="https://www.cjroth.com/blog/2026-02-18-building-an-elite-engineering-culture">Building An Elite AI Engineering Culture In 2026</a><br><a href="https://oriongemini.substack.com/p/the-number-is-going-up?r=fix4n&amp;utm_medium=ios&amp;triedRedirect=true">The Number Is Going Up</a><br><a href="https://arstechnica.com/ai/2026/02/an-ai-coding-bot-took-down-amazon-web-services/">An AI coding bot took down Amazon Web Services </a></p><p><strong>Chapters<br></strong></p><ul><li>(00:00) - Introduction to AI in Software Engineering</li>
<li>(01:13) - Latest AI Models and Hardware Innovations</li>
<li>(04:10) - The Future of AI Hardware</li>
<li>(10:01) - The Death of Software Debate</li>
<li>(19:35) - The Agile Manifesto and Its Evolution</li>
<li>(33:39) - The Impact of AI on Development Teams</li>
<li>(34:52) - The Future of Junior Developers</li>
<li>(37:11) - Self-Healing Systems and AI Assistance</li>
<li>(39:33) - Building an Elite AI Engineering Culture</li>
<li>(45:27) - AI Experiment and AI Sycophancy</li>
<li>(55:33) - The AI Bubble Clock and Economic Implications</li>
</ul><br><strong>Connect with ADIPod</strong><ul><li>Email us at humans@adipod.ai if you have any feedback, requests, or just want to say hello!</li><li>Check out our website <a href="https://www.adipod.ai/">www.adipod.ai</a></li></ul>]]>
      </content:encoded>
      <pubDate>Fri, 27 Feb 2026 05:00:00 -0800</pubDate>
      <author>Shimin Zhang &amp; Rahul Yadav</author>
      <enclosure url="https://op3.dev/e/prfx.byspotify.com/e/media.transistor.fm/a49a6ed7/30686feb.mp3" length="30930323" type="audio/mpeg"/>
      <itunes:author>Shimin Zhang &amp; Rahul Yadav</itunes:author>
      <itunes:duration>3849</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This episode covers Sonnet 4.6 and Gemini 3.1 Pro model releases, Taalas Labs FPGA-based 17K tokens/sec hardware, the Meta-AMD chip partnership, Steven Sinofsky's argument against "software is dead," a deep dive into the ThoughtWorks Future of Software Engineering retreat findings (from Agile Manifesto signers), Chris Roth's elite AI engineering culture article, a Vibe &amp; Tell segment testing agent sycophancy across three models, and AI bubble economics.</p><p><strong>Takeaways</strong></p><ul><li>Sonnet 4.6: Opus-level reasoning at Sonnet pricing; 72.5 on OSWorld (vs 61.4 for Sonnet 4.5); outperforms Opus 4.6 on agentic financial analysis; trained for computer use</li><li>Taalas Labs FPGA hardware: 17K tokens/sec for Llama 3.1 8B; Chat Jimmy demo; custom hardware as future of inference</li><li>Steven Sinofsky "Death of Software. Nah.": historical parallels (the PC didn't kill mainframes, e-commerce didn't kill retail in 20 years, predictions of media's death were premature); predictions: more software, AI moves up the stack, domain expertise more important; Jevons paradox applied to software</li><li>ThoughtWorks Future of Software Engineering retreat: Agile Manifesto 25th anniversary; where rigor goes (spec-driven development, red-green tests); risk tiering for code review; loss of mentoring through code review; DevEx vs agent experience decoupling; security as afterthought; the "middle loop" (overseeing agents); cognitive debt; agent topology mirroring org structure; knowledge graphs rediscovered; future roles converging; revenge of juniors (IBM hiring); self-healing systems (2-5 year horizon)</li><li>Vibe &amp; Tell — Agent sycophancy testing: Flat earth test (all three models resisted); workplace bias scenario (Jim/Jane); GPT 5.1 Instant best (refused all manipulation); Claude Haiku second (too empathetic, admitted to nudging); Gemini 3 worst (agreed with bias claim); AI as therapist risks; radical candor vs ruinous empathy</li></ul><p><strong>Resources 
Mentioned<br></strong><a href="https://www.anthropic.com/news/claude-sonnet-4-6">Introducing Claude Sonnet 4.6</a><br><a href="https://taalas.com/the-path-to-ubiquitous-ai/">The path to ubiquitous AI</a><br><a href="https://arstechnica.com/ai/2026/02/openai-sidesteps-nvidia-with-unusually-fast-coding-model-on-plate-sized-chips/">OpenAI sidesteps Nvidia with unusually fast coding model on plate-sized chips</a><br><a href="https://hardcoresoftware.learningbyshipping.com/p/238-death-of-software-nah">Death of Software. Nah.</a><br><a href="https://www.thoughtworks.com/content/dam/thoughtworks/documents/report/tw_future%20_of_software_development_retreat_%20key_takeaways.pdf">The future of software engineering</a><br><a href="https://www.cjroth.com/blog/2026-02-18-building-an-elite-engineering-culture">Building An Elite AI Engineering Culture In 2026</a><br><a href="https://oriongemini.substack.com/p/the-number-is-going-up?r=fix4n&amp;utm_medium=ios&amp;triedRedirect=true">The Number Is Going Up</a><br><a href="https://arstechnica.com/ai/2026/02/an-ai-coding-bot-took-down-amazon-web-services/">An AI coding bot took down Amazon Web Services </a></p><p><strong>Chapters<br></strong></p><ul><li>(00:00) - Introduction to AI in Software Engineering</li>
<li>(01:13) - Latest AI Models and Hardware Innovations</li>
<li>(04:10) - The Future of AI Hardware</li>
<li>(10:01) - The Death of Software Debate</li>
<li>(19:35) - The Agile Manifesto and Its Evolution</li>
<li>(33:39) - The Impact of AI on Development Teams</li>
<li>(34:52) - The Future of Junior Developers</li>
<li>(37:11) - Self-Healing Systems and AI Assistance</li>
<li>(39:33) - Building an Elite AI Engineering Culture</li>
<li>(45:27) - AI Experiment and AI Sycophancy</li>
<li>(55:33) - The AI Bubble Clock and Economic Implications</li>
</ul><br><strong>Connect with ADIPod</strong><ul><li>Email us at humans@adipod.ai if you have any feedback, requests, or just want to say hello!</li><li>Check out our website <a href="https://www.adipod.ai/">www.adipod.ai</a></li></ul>]]>
      </itunes:summary>
      <itunes:keywords>Sonnet 4.6, Opus 4.6, Gemini 3.1, Taalas Labs, FPGA, custom chips, Cerebras, Chat Jimmy, Nvidia, Steven Sinofsky, software is dead, Jevons paradox, Agile Manifesto, ThoughtWorks retreat, spec-driven development, risk tiering, middle loop, cognitive debt, agent topology, design engineer, productivity paradox, story points, disposable code, agent sycophancy, flat earth, pretty privilege, AI therapy, radical candor, AI bubble, GDP CAPEX, circular economy, AWS outage, self-healing systems</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/a49a6ed7/transcript.txt" type="text/plain"/>
      <podcast:chapters url="https://share.transistor.fm/s/a49a6ed7/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Crabby Rathbun, Model Councils &amp; Why You Want More Tech Debt</title>
      <itunes:episode>14</itunes:episode>
      <podcast:episode>14</podcast:episode>
      <itunes:title>Crabby Rathbun, Model Councils &amp; Why You Want More Tech Debt</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">bbd49f1e-c0a9-429c-a95f-e2ae95e30069</guid>
      <link>https://www.adipod.ai/episodes/14-crabby-rathbun-model-councils-why-you-want-more-tech-debt/</link>
      <description>
<![CDATA[<p>This episode covers the Krabby Rathbun AI bot drama (automated PRs, fabricated hit piece, Ars Technica retraction), safety team shakeups at OpenAI and Anthropic, Gemini model distillation/cloning attempts, Perplexity model councils, and a heavily economics-flavored discussion on AI job displacement, tech debt as strategy, cognitive debt, and workflow automation convexity.</p><p><strong>Takeaways</strong></p><ul><li>Ars Technica publishing an AI-generated article about an AI bot drama — and getting caught fabricating quotes — is peak 2026 irony</li><li>Distillation/cloning is an unsolvable problem for frontier labs — they can't restrict usage without banning legitimate users</li><li>Model councils (running multiple models and synthesizing their answers) are becoming practical; the strongest model works best as the judge, not necessarily as the one generating answers</li><li>Cognitive debt may be more dangerous than tech debt — teams hit a wall when no one understands the codebase, usually around week 7-8 of heavy AI-assisted development</li><li>Workflow automation follows convexity: a long period of minimal AI impact on jobs, then sudden full automation once AI can handle entire connected workflows, not just individual tasks</li></ul><p><strong>Resources Mentioned<br></strong><a href="https://theshamblog.com/an-ai-agent-published-a-hit-piece-on-me/">An AI Agent Published a Hit Piece on Me</a><br><a href="https://www.nickolinger.com/blog/2026-02-13-ai-bot-crabby-rathbun-is-still-going/">AI Bot crabby-rathbun is still going</a><br><a href="https://www.platformer.news/openai-mission-alignment-team-joshua-achiam/">Exclusive: OpenAI disbanded its mission alignment team</a><br><a href="https://x.com/MrinankSharma/status/2020881722003583421?s=20">Mrinank Sharma's Departure Letter</a><br><a href="https://arstechnica.com/ai/2026/02/attackers-prompted-gemini-over-100000-times-while-trying-to-clone-it-google-says/">Attackers prompted Gemini over 100,000 times while trying to clone it, Google says</a><br><a href="https://www.perplexity.ai/hub/blog/introducing-model-council">Introducing Model Council</a><br><a href="https://github.com/karpathy/llm-council">llm-council</a><br><a href="https://davidoks.blog/p/why-im-not-worried-about-ai-job-loss">Why I’m not worried about AI job loss</a><br><a href="https://singularitea.bearblog.dev/tech-debt/">You’re Not Taking On Enough Tech Debt</a><br><a href="https://margaretstorey.com/blog/2026/02/09/cognitive-debt/">How Generative and Agentic AI Shift Concern from Technical Debt to Cognitive Debt</a><br><a href="https://philiptrammell.com/static/Workflows_and_Automation.pdf">Workflows and Automation</a><br><a href="https://www.wheresyoured.at/data-center-crisis/">Premium: The AI Data Center Financial Crisis</a><br><a href="https://philippdubach.com/posts/the-saaspocalypse-paradox/">The SaaSpocalypse Paradox</a></p><p><strong>Chapters<br></strong></p><ul><li>(00:00) - Introduction and Lunar New Year Celebrations</li>
<li>(02:44) - AI Bot Controversy: Krabby Rathbun</li>
<li>(05:04) - AI Alignment and Departures in Major Labs</li>
<li>(07:39) - Google's Gemini and AI Cloning Concerns</li>
<li>(10:08) - Tool Shed: Exploring Model Councils</li>
<li>(12:28) - Distillation and AI Model Development</li>
<li>(21:11) - Model Pledge Drive and Council Approaches</li>
<li>(23:07) - Post-Processing and AI's Impact on Work</li>
<li>(23:57) - AI's Role in Job Security and Economic Productivity</li>
<li>(30:30) - Reverse Centaurs and Naming Conventions</li>
<li>(32:09) - Tech Debt and Cognitive Debt</li>
<li>(36:24) - Cognitive Debt in AI-Assisted Programming</li>
<li>(48:16) - Cultural Shifts in Responsibility</li>
<li>(49:13) - Exploring Workflow and Automation</li>
<li>(52:07) - The Impact of AI on Job Structures</li>
<li>(54:23) - Tolerance for AI Mistakes</li>
<li>(56:59) - Documenting Knowledge for AI</li>
<li>(57:24) - Bifurcation of Tasks and Automation</li>
<li>(59:34) - The Future of Meetings in an AI World</li>
<li>(01:00:21) - State of the AI Bubble</li>
<li>(01:03:41) - Market Dynamics and Investment Strategies</li>
</ul><br><strong>Connect with ADIPod</strong><ul><li>Email us at humans@adipod.ai if you have any feedback, requests, or just want to say hello!</li><li>Check out our website <a href="https://www.adipod.ai/">www.adipod.ai</a></li></ul>]]>
      </description>
      <content:encoded>
<![CDATA[<p>This episode covers the Krabby Rathbun AI bot drama (automated PRs, fabricated hit piece, Ars Technica retraction), safety team shakeups at OpenAI and Anthropic, Gemini model distillation/cloning attempts, Perplexity model councils, and a heavily economics-flavored discussion on AI job displacement, tech debt as strategy, cognitive debt, and workflow automation convexity.</p><p><strong>Takeaways</strong></p><ul><li>Ars Technica publishing an AI-generated article about an AI bot drama — and getting caught fabricating quotes — is peak 2026 irony</li><li>Distillation/cloning is an unsolvable problem for frontier labs — they can't restrict usage without banning legitimate users</li><li>Model councils (running multiple models and synthesizing their answers) are becoming practical; the strongest model works best as the judge, not necessarily as the one generating answers</li><li>Cognitive debt may be more dangerous than tech debt — teams hit a wall when no one understands the codebase, usually around week 7-8 of heavy AI-assisted development</li><li>Workflow automation follows convexity: a long period of minimal AI impact on jobs, then sudden full automation once AI can handle entire connected workflows, not just individual tasks</li></ul><p><strong>Resources Mentioned<br></strong><a href="https://theshamblog.com/an-ai-agent-published-a-hit-piece-on-me/">An AI Agent Published a Hit Piece on Me</a><br><a href="https://www.nickolinger.com/blog/2026-02-13-ai-bot-crabby-rathbun-is-still-going/">AI Bot crabby-rathbun is still going</a><br><a href="https://www.platformer.news/openai-mission-alignment-team-joshua-achiam/">Exclusive: OpenAI disbanded its mission alignment team</a><br><a href="https://x.com/MrinankSharma/status/2020881722003583421?s=20">Mrinank Sharma's Departure Letter</a><br><a href="https://arstechnica.com/ai/2026/02/attackers-prompted-gemini-over-100000-times-while-trying-to-clone-it-google-says/">Attackers prompted Gemini over 100,000 times while trying to clone it, Google says</a><br><a href="https://www.perplexity.ai/hub/blog/introducing-model-council">Introducing Model Council</a><br><a href="https://github.com/karpathy/llm-council">llm-council</a><br><a href="https://davidoks.blog/p/why-im-not-worried-about-ai-job-loss">Why I’m not worried about AI job loss</a><br><a href="https://singularitea.bearblog.dev/tech-debt/">You’re Not Taking On Enough Tech Debt</a><br><a href="https://margaretstorey.com/blog/2026/02/09/cognitive-debt/">How Generative and Agentic AI Shift Concern from Technical Debt to Cognitive Debt</a><br><a href="https://philiptrammell.com/static/Workflows_and_Automation.pdf">Workflows and Automation</a><br><a href="https://www.wheresyoured.at/data-center-crisis/">Premium: The AI Data Center Financial Crisis</a><br><a href="https://philippdubach.com/posts/the-saaspocalypse-paradox/">The SaaSpocalypse Paradox</a></p><p><strong>Chapters<br></strong></p><ul><li>(00:00) - Introduction and Lunar New Year Celebrations</li>
<li>(02:44) - AI Bot Controversy: Krabby Rathbun</li>
<li>(05:04) - AI Alignment and Departures in Major Labs</li>
<li>(07:39) - Google's Gemini and AI Cloning Concerns</li>
<li>(10:08) - Tool Shed: Exploring Model Councils</li>
<li>(12:28) - Distillation and AI Model Development</li>
<li>(21:11) - Model Pledge Drive and Council Approaches</li>
<li>(23:07) - Post-Processing and AI's Impact on Work</li>
<li>(23:57) - AI's Role in Job Security and Economic Productivity</li>
<li>(30:30) - Reverse Centaurs and Naming Conventions</li>
<li>(32:09) - Tech Debt and Cognitive Debt</li>
<li>(36:24) - Cognitive Debt in AI-Assisted Programming</li>
<li>(48:16) - Cultural Shifts in Responsibility</li>
<li>(49:13) - Exploring Workflow and Automation</li>
<li>(52:07) - The Impact of AI on Job Structures</li>
<li>(54:23) - Tolerance for AI Mistakes</li>
<li>(56:59) - Documenting Knowledge for AI</li>
<li>(57:24) - Bifurcation of Tasks and Automation</li>
<li>(59:34) - The Future of Meetings in an AI World</li>
<li>(01:00:21) - State of the AI Bubble</li>
<li>(01:03:41) - Market Dynamics and Investment Strategies</li>
</ul><br><strong>Connect with ADIPod</strong><ul><li>Email us at humans@adipod.ai if you have any feedback, requests, or just want to say hello!</li><li>Check out our website <a href="https://www.adipod.ai/">www.adipod.ai</a></li></ul>]]>
      </content:encoded>
      <pubDate>Fri, 20 Feb 2026 04:00:00 -0800</pubDate>
      <author>Shimin Zhang, Dan Lasky, Rahul Yadav</author>
      <enclosure url="https://op3.dev/e/prfx.byspotify.com/e/media.transistor.fm/43906320/e6826aa2.mp3" length="34523489" type="audio/mpeg"/>
      <itunes:author>Shimin Zhang, Dan Lasky, Rahul Yadav</itunes:author>
      <itunes:duration>4298</itunes:duration>
      <itunes:summary>
<![CDATA[<p>This episode covers the Krabby Rathbun AI bot drama (automated PRs, fabricated hit piece, Ars Technica retraction), safety team shakeups at OpenAI and Anthropic, Gemini model distillation/cloning attempts, Perplexity model councils, and a heavily economics-flavored discussion on AI job displacement, tech debt as strategy, cognitive debt, and workflow automation convexity.</p><p><strong>Takeaways</strong></p><ul><li>Ars Technica publishing an AI-generated article about an AI bot drama — and getting caught fabricating quotes — is peak 2026 irony</li><li>Distillation/cloning is an unsolvable problem for frontier labs — they can't restrict usage without banning legitimate users</li><li>Model councils (running multiple models and synthesizing their answers) are becoming practical; the strongest model works best as the judge, not necessarily as the one generating answers</li><li>Cognitive debt may be more dangerous than tech debt — teams hit a wall when no one understands the codebase, usually around week 7-8 of heavy AI-assisted development</li><li>Workflow automation follows convexity: a long period of minimal AI impact on jobs, then sudden full automation once AI can handle entire connected workflows, not just individual tasks</li></ul><p><strong>Resources Mentioned<br></strong><a href="https://theshamblog.com/an-ai-agent-published-a-hit-piece-on-me/">An AI Agent Published a Hit Piece on Me</a><br><a href="https://www.nickolinger.com/blog/2026-02-13-ai-bot-crabby-rathbun-is-still-going/">AI Bot crabby-rathbun is still going</a><br><a href="https://www.platformer.news/openai-mission-alignment-team-joshua-achiam/">Exclusive: OpenAI disbanded its mission alignment team</a><br><a href="https://x.com/MrinankSharma/status/2020881722003583421?s=20">Mrinank Sharma's Departure Letter</a><br><a href="https://arstechnica.com/ai/2026/02/attackers-prompted-gemini-over-100000-times-while-trying-to-clone-it-google-says/">Attackers prompted Gemini over 100,000 times while trying to clone it, Google says</a><br><a href="https://www.perplexity.ai/hub/blog/introducing-model-council">Introducing Model Council</a><br><a href="https://github.com/karpathy/llm-council">llm-council</a><br><a href="https://davidoks.blog/p/why-im-not-worried-about-ai-job-loss">Why I’m not worried about AI job loss</a><br><a href="https://singularitea.bearblog.dev/tech-debt/">You’re Not Taking On Enough Tech Debt</a><br><a href="https://margaretstorey.com/blog/2026/02/09/cognitive-debt/">How Generative and Agentic AI Shift Concern from Technical Debt to Cognitive Debt</a><br><a href="https://philiptrammell.com/static/Workflows_and_Automation.pdf">Workflows and Automation</a><br><a href="https://www.wheresyoured.at/data-center-crisis/">Premium: The AI Data Center Financial Crisis</a><br><a href="https://philippdubach.com/posts/the-saaspocalypse-paradox/">The SaaSpocalypse Paradox</a></p><p><strong>Chapters<br></strong></p><ul><li>(00:00) - Introduction and Lunar New Year Celebrations</li>
<li>(02:44) - AI Bot Controversy: Krabby Rathbun</li>
<li>(05:04) - AI Alignment and Departures in Major Labs</li>
<li>(07:39) - Google's Gemini and AI Cloning Concerns</li>
<li>(10:08) - Tool Shed: Exploring Model Councils</li>
<li>(12:28) - Distillation and AI Model Development</li>
<li>(21:11) - Model Pledge Drive and Council Approaches</li>
<li>(23:07) - Post-Processing and AI's Impact on Work</li>
<li>(23:57) - AI's Role in Job Security and Economic Productivity</li>
<li>(30:30) - Reverse Centaurs and Naming Conventions</li>
<li>(32:09) - Tech Debt and Cognitive Debt</li>
<li>(36:24) - Cognitive Debt in AI-Assisted Programming</li>
<li>(48:16) - Cultural Shifts in Responsibility</li>
<li>(49:13) - Exploring Workflow and Automation</li>
<li>(52:07) - The Impact of AI on Job Structures</li>
<li>(54:23) - Tolerance for AI Mistakes</li>
<li>(56:59) - Documenting Knowledge for AI</li>
<li>(57:24) - Bifurcation of Tasks and Automation</li>
<li>(59:34) - The Future of Meetings in an AI World</li>
<li>(01:00:21) - State of the AI Bubble</li>
<li>(01:03:41) - Market Dynamics and Investment Strategies</li>
</ul><br><strong>Connect with ADIPod</strong><ul><li>Email us at humans@adipod.ai if you have any feedback, requests, or just want to say hello!</li><li>Check out our website <a href="https://www.adipod.ai/">www.adipod.ai</a></li></ul>]]>
      </itunes:summary>
      <itunes:keywords>Krabby Rathbun, AI bot, OpenAI alignment team, Anthropic safety, Gemini cloning, GLM, model council, Perplexity, tech debt strategy, cognitive debt</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/43906320/transcript.txt" type="text/plain"/>
      <podcast:chapters url="https://share.transistor.fm/s/43906320/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Episode 13: Pi Coding Agent, Dark Factories &amp; the Furniture Makers of Carolina</title>
      <itunes:episode>13</itunes:episode>
      <podcast:episode>13</podcast:episode>
      <itunes:title>Episode 13: Pi Coding Agent, Dark Factories &amp; the Furniture Makers of Carolina</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">5c56a4db-4982-4513-883a-72885d616f5c</guid>
      <link>https://www.adipod.ai/episodes/13-pi-coding-agent-dark-factories-the-furniture-makers-of-carolina/</link>
      <description>
<![CDATA[<p>This episode covers the simultaneous release of Claude Opus 4.6 and GPT Codex 5.3, a deep dive into the Pi coding agent framework and why Shimin prefers it over Claude Code, AI security industry criticism, software dark factories, an emotional segment mourning the craft of programming, Claude Code's new /insights command, and AI bubble economics including Anthropic's $20B raise, Google's 100-year bond, and Oracle's $50B debt plans.</p><p><br><strong>Takeaways</strong></p><ul><li>The biggest compliment for Codex 5.3 is that it feels like Claude Code now</li><li>Opus 4.6 auto-drops into plan mode and offers to clear context after planning — it writes a plan.md it can follow across interruptions</li><li>Pi agent's skill-based approach may represent the bitter lesson of AI tooling — less scaffolding, more model intelligence</li><li>The "everyone is a manager now" framing for agentic coding resonates — there's less dopamine when you're not doing the work with your own hands</li><li>Context-switching burnout from running multiple agent instances is an emerging problem</li><li>AI may freeze software innovation at whatever paradigm the training data captures (jQuery → React, but what comes after?)</li></ul><p><strong>Resources Mentioned<br></strong><a href="https://www.anthropic.com/news/claude-opus-4-6">Introducing Claude Opus 4.6</a><br><a href="https://openai.com/index/introducing-gpt-5-3-codex/">Introducing GPT-5.3-Codex</a><br><a href="https://www.interconnects.ai/p/opus-46-vs-codex-53">Opus 4.6, Codex 5.3, and the post-benchmark era</a><br><a href="https://mariozechner.at/posts/2025-11-30-pi-coding-agent/">Pi coding agent</a><br><a href="https://sanderschulhoff.substack.com/p/the-ai-security-industry-is-bullshit">The AI Security Industry is Bullshit</a><br><a href="https://factory.strongdm.ai/">Software Factories And The Agentic Moment</a><br><a href="https://nolanlawson.com/2026/02/07/we-mourn-our-craft/">We mourn our craft</a><br><a href="https://techcrunch.com/2026/02/09/anthropic-closes-in-on-20b-round/">Anthropic closes in on $20B round</a><br><a href="https://www.reuters.com/business/oracle-plans-raise-45-billion-50-billion-2026-2026-02-01/">Oracle says it plans to raise up to $50 billion in debt and equity this year</a><br><a href="https://om.co/2026/02/02/openai-and-the-announcement-economy/">The New Announcement Economy</a><br><strong>Chapters<br></strong></p><ul><li>(00:00) - Introduction to AI in Software Development</li>
<li>(03:02) - Latest AI Model Releases and Comparisons</li>
<li>(06:03) - Exploring AI Coding Agents</li>
<li>(08:55) - The Rise of Pi Coding Agent</li>
<li>(12:08) - AI's Impact on Job Security</li>
<li>(15:01) - AI Security Concerns and Industry Insights</li>
<li>(33:08) - The Rise of AI Security Concerns</li>
<li>(36:30) - De-risking AI: Strategies and Challenges</li>
<li>(38:29) - The Emergence of Software Factories</li>
<li>(41:19) - Cloning Software: The Digital Twin Universe</li>
<li>(44:39) - In-house Development vs. SaaS Solutions</li>
<li>(46:57) - The Future of Compliance and Audit Industries</li>
<li>(51:52) - The Impact of AI on Software Development</li>
<li>(56:37) - Navigating the Emotional Landscape of AI Development</li>
<li>(01:07:55) - Mourning the Craft: A Country Song Reflection</li>
<li>(01:09:51) - Building Beyond Loss: Tennyson's Ulysses</li>
<li>(01:12:47) - Claude Code Insights: Enhancing Development Workflows</li>
<li>(01:19:09) - The AI Bubble: Current Trends and Predictions</li>
<li>(01:24:00) - The Announcement Economy: News in the Age of AI</li>
<li>(01:30:04) - The Future of AI: Investment and Market Dynamics</li>
</ul><br><strong>Connect with ADIPod</strong><ul><li>Email us at humans@adipod.ai if you have any feedback, requests, or just want to say hello!</li><li>Check out our website <a href="https://www.adipod.ai/">www.adipod.ai</a></li></ul>]]>
      </description>
      <content:encoded>
<![CDATA[<p>This episode covers the simultaneous release of Claude Opus 4.6 and GPT Codex 5.3, a deep dive into the Pi coding agent framework and why Shimin prefers it over Claude Code, AI security industry criticism, software dark factories, an emotional segment mourning the craft of programming, Claude Code's new /insights command, and AI bubble economics including Anthropic's $20B raise, Google's 100-year bond, and Oracle's $50B debt plans.</p><p><br><strong>Takeaways</strong></p><ul><li>The biggest compliment for Codex 5.3 is that it feels like Claude Code now</li><li>Opus 4.6 auto-drops into plan mode and offers to clear context after planning — it writes a plan.md it can follow across interruptions</li><li>Pi agent's skill-based approach may represent the bitter lesson of AI tooling — less scaffolding, more model intelligence</li><li>The "everyone is a manager now" framing for agentic coding resonates — there's less dopamine when you're not doing the work with your own hands</li><li>Context-switching burnout from running multiple agent instances is an emerging problem</li><li>AI may freeze software innovation at whatever paradigm the training data captures (jQuery → React, but what comes after?)</li></ul><p><strong>Resources Mentioned<br></strong><a href="https://www.anthropic.com/news/claude-opus-4-6">Introducing Claude Opus 4.6</a><br><a href="https://openai.com/index/introducing-gpt-5-3-codex/">Introducing GPT-5.3-Codex</a><br><a href="https://www.interconnects.ai/p/opus-46-vs-codex-53">Opus 4.6, Codex 5.3, and the post-benchmark era</a><br><a href="https://mariozechner.at/posts/2025-11-30-pi-coding-agent/">Pi coding agent</a><br><a href="https://sanderschulhoff.substack.com/p/the-ai-security-industry-is-bullshit">The AI Security Industry is Bullshit</a><br><a href="https://factory.strongdm.ai/">Software Factories And The Agentic Moment</a><br><a href="https://nolanlawson.com/2026/02/07/we-mourn-our-craft/">We mourn our craft</a><br><a href="https://techcrunch.com/2026/02/09/anthropic-closes-in-on-20b-round/">Anthropic closes in on $20B round</a><br><a href="https://www.reuters.com/business/oracle-plans-raise-45-billion-50-billion-2026-2026-02-01/">Oracle says it plans to raise up to $50 billion in debt and equity this year</a><br><a href="https://om.co/2026/02/02/openai-and-the-announcement-economy/">The New Announcement Economy</a><br><strong>Chapters<br></strong></p><ul><li>(00:00) - Introduction to AI in Software Development</li>
<li>(03:02) - Latest AI Model Releases and Comparisons</li>
<li>(06:03) - Exploring AI Coding Agents</li>
<li>(08:55) - The Rise of Pi Coding Agent</li>
<li>(12:08) - AI's Impact on Job Security</li>
<li>(15:01) - AI Security Concerns and Industry Insights</li>
<li>(33:08) - The Rise of AI Security Concerns</li>
<li>(36:30) - De-risking AI: Strategies and Challenges</li>
<li>(38:29) - The Emergence of Software Factories</li>
<li>(41:19) - Cloning Software: The Digital Twin Universe</li>
<li>(44:39) - In-house Development vs. SaaS Solutions</li>
<li>(46:57) - The Future of Compliance and Audit Industries</li>
<li>(51:52) - The Impact of AI on Software Development</li>
<li>(56:37) - Navigating the Emotional Landscape of AI Development</li>
<li>(01:07:55) - Mourning the Craft: A Country Song Reflection</li>
<li>(01:09:51) - Building Beyond Loss: Tennyson's Ulysses</li>
<li>(01:12:47) - Claude Code Insights: Enhancing Development Workflows</li>
<li>(01:19:09) - The AI Bubble: Current Trends and Predictions</li>
<li>(01:24:00) - The Announcement Economy: News in the Age of AI</li>
<li>(01:30:04) - The Future of AI: Investment and Market Dynamics</li>
</ul><br><strong>Connect with ADIPod</strong><ul><li>Email us at humans@adipod.ai if you have any feedback, requests, or just want to say hello!</li><li>Check out our website <a href="https://www.adipod.ai/">www.adipod.ai</a></li></ul>]]>
      </content:encoded>
      <pubDate>Fri, 13 Feb 2026 04:00:00 -0800</pubDate>
      <author>Shimin Zhang, Dan Lasky &amp; Rahul Yadav</author>
      <enclosure url="https://op3.dev/e/prfx.byspotify.com/e/media.transistor.fm/b544d651/f69824d1.mp3" length="40758599" type="audio/mpeg"/>
      <itunes:author>Shimin Zhang, Dan Lasky &amp; Rahul Yadav</itunes:author>
      <itunes:duration>5078</itunes:duration>
      <itunes:summary>
<![CDATA[<p>This episode covers the simultaneous release of Claude Opus 4.6 and GPT Codex 5.3, a deep dive into the Pi coding agent framework and why Shimin prefers it over Claude Code, AI security industry criticism, software dark factories, an emotional segment mourning the craft of programming, Claude Code's new /insights command, and AI bubble economics including Anthropic's $20B raise, Google's 100-year bond, and Oracle's $50B debt plans.</p><p><br><strong>Takeaways</strong></p><ul><li>The biggest compliment for Codex 5.3 is that it feels like Claude Code now</li><li>Opus 4.6 auto-drops into plan mode and offers to clear context after planning — it writes a plan.md it can follow across interruptions</li><li>Pi agent's skill-based approach may represent the bitter lesson of AI tooling — less scaffolding, more model intelligence</li><li>The "everyone is a manager now" framing for agentic coding resonates — there's less dopamine when you're not doing the work with your own hands</li><li>Context-switching burnout from running multiple agent instances is an emerging problem</li><li>AI may freeze software innovation at whatever paradigm the training data captures (jQuery → React, but what comes after?)</li></ul><p><strong>Resources Mentioned<br></strong><a href="https://www.anthropic.com/news/claude-opus-4-6">Introducing Claude Opus 4.6</a><br><a href="https://openai.com/index/introducing-gpt-5-3-codex/">Introducing GPT-5.3-Codex</a><br><a href="https://www.interconnects.ai/p/opus-46-vs-codex-53">Opus 4.6, Codex 5.3, and the post-benchmark era</a><br><a href="https://mariozechner.at/posts/2025-11-30-pi-coding-agent/">Pi coding agent</a><br><a href="https://sanderschulhoff.substack.com/p/the-ai-security-industry-is-bullshit">The AI Security Industry is Bullshit</a><br><a href="https://factory.strongdm.ai/">Software Factories And The Agentic Moment</a><br><a href="https://nolanlawson.com/2026/02/07/we-mourn-our-craft/">We mourn our craft</a><br><a href="https://techcrunch.com/2026/02/09/anthropic-closes-in-on-20b-round/">Anthropic closes in on $20B round</a><br><a href="https://www.reuters.com/business/oracle-plans-raise-45-billion-50-billion-2026-2026-02-01/">Oracle says it plans to raise up to $50 billion in debt and equity this year</a><br><a href="https://om.co/2026/02/02/openai-and-the-announcement-economy/">The New Announcement Economy</a><br><strong>Chapters<br></strong></p><ul><li>(00:00) - Introduction to AI in Software Development</li>
<li>(03:02) - Latest AI Model Releases and Comparisons</li>
<li>(06:03) - Exploring AI Coding Agents</li>
<li>(08:55) - The Rise of Pi Coding Agent</li>
<li>(12:08) - AI's Impact on Job Security</li>
<li>(15:01) - AI Security Concerns and Industry Insights</li>
<li>(33:08) - The Rise of AI Security Concerns</li>
<li>(36:30) - De-risking AI: Strategies and Challenges</li>
<li>(38:29) - The Emergence of Software Factories</li>
<li>(41:19) - Cloning Software: The Digital Twin Universe</li>
<li>(44:39) - In-house Development vs. SaaS Solutions</li>
<li>(46:57) - The Future of Compliance and Audit Industries</li>
<li>(51:52) - The Impact of AI on Software Development</li>
<li>(56:37) - Navigating the Emotional Landscape of AI Development</li>
<li>(01:07:55) - Mourning the Craft: A Country Song Reflection</li>
<li>(01:09:51) - Building Beyond Loss: Tennyson's Ulysses</li>
<li>(01:12:47) - Claude Code Insights: Enhancing Development Workflows</li>
<li>(01:19:09) - The AI Bubble: Current Trends and Predictions</li>
<li>(01:24:00) - The Announcement Economy: News in the Age of AI</li>
<li>(01:30:04) - The Future of AI: Investment and Market Dynamics</li>
</ul><br><strong>Connect with ADIPod</strong><ul><li>Email us at humans@adipod.ai if you have any feedback, requests, or just want to say hello!</li><li>Check out our website <a href="https://www.adipod.ai/">www.adipod.ai</a></li></ul>]]>
      </itunes:summary>
<itunes:keywords>Claude Opus 4.6, GPT Codex 5.3, Pi coding agent, Armin Ronacher, skills, agentic coding, AI security, prompt injection, credential proxy, software dark factory, gene transfusion, digital twin, mourning craft, developer identity, context switching, /insights, Accelerando, Charles Stross, open source innovation, 100-year bond, Oracle debt, Stargate, announcement economy, bitter lesson</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/b544d651/transcript.txt" type="text/plain"/>
      <podcast:chapters url="https://share.transistor.fm/s/b544d651/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Episode 12: The OpenClaw Saga, How AI Affects Programming Skills, and How Vibe Coding is Addictive like Gambling</title>
      <itunes:episode>12</itunes:episode>
      <podcast:episode>12</podcast:episode>
      <itunes:title>Episode 12: The OpenClaw Saga, How AI Affects Programming Skills, and How Vibe Coding is Addictive like Gambling</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">3a891023-44cc-4059-82bd-100d2abe4758</guid>
      <link>https://www.adipod.ai/episodes/12-the-openclaw-saga-how-ai-affects-programming-skills-and-how-vibe-coding-is-addictive-like-gambling/</link>
      <description>
        <![CDATA[<p>In this episode, Dan and Shimin discuss the evolving landscape of AI programming, focusing on Anthropic's AI Constitution, OpenAI's new product Prism, and the implications of AI tools on coding skills. They explore the financial viability of AI companies, the concept of vibe coding, and the potential risks of an AI bubble. The conversation highlights the importance of understanding AI's impact on jobs and the ethical considerations surrounding AI development.</p><p><br><strong>Takeaways</strong></p><ul><li>Anthropic's AI Constitution raises questions about AI agency.</li><li>AI tools can enhance or hinder coding skill development.</li><li>The financial viability of AI companies is under scrutiny.</li><li>Vibe coding can lead to a false sense of accomplishment.</li></ul><p><br></p><p><strong>Resources Mentioned<br></strong><a href="https://arstechnica.com/information-technology/2026/01/does-anthropic-believe-its-ai-is-conscious-or-is-that-just-what-it-wants-claude-to-think/"> Does Anthropic believe its AI is conscious, or is that just what it wants Claude to think?<br></a><a href="https://www.reuters.com/business/pentagon-clashes-with-anthropic-over-military-ai-use-2026-01-29/"> Exclusive: Pentagon clashes with Anthropic over military AI use, sources say<br></a><a href="https://techcrunch.com/2026/01/27/openai-launches-prism-a-new-ai-workspace-for-scientists/">OpenAI launches Prism, a new AI workspace for scientists<br></a><a href="https://allenai.org/blog/open-coding-agents">Open Coding Agents: Fast, accessible coding agents that adapt to any repo<br></a><a href="https://www.arcee.ai/blog/trinity-large">Trinity Large<br></a><a href="https://www.clawhub.ai/">ClawHub<br></a><a href="https://opensourcemalware.com/blog/clawdbot-skills-ganked-your-crypto">ClawdBot Skills Just Ganked Your Crypto<br></a><a href="https://www.moltbook.com/">MoltBook<br></a><a href="https://blog.fsck.com/2025/10/09/superpowers/">Superpowers: How I'm using coding agents in 
October 2025<br></a><a href="https://dev-tester.com/my-five-stages-of-ai-grief/">My Five Stages of AI Grief<br></a><a href="https://www.anthropic.com/research/AI-assistance-coding-skills">How AI assistance impacts the formation of coding skills<br></a><a href="https://www.fast.ai/posts/2026-01-28-dark-flow/">Breaking the Spell of Vibe Coding<br></a><a href="https://www.cnbc.com/2026/02/02/nvidia-stock-price-openai-funding.html">Nvidia shares are down after a report that its OpenAI investment stalled. Here's what's happening<br></a><a href="https://www.exponentialview.co/p/inside-openais-unit-economics-epoch-exponentialview">Inside OpenAI's unit economics<br></a><br><strong>Connect with ADIPod</strong></p><ul><li>Email us at humans@adipod.ai if you have any feedback, requests, or just want to say hello!</li><li>Check out our website <a href="https://www.adipod.ai/">www.adipod.ai</a></li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode, Dan and Shimin discuss the evolving landscape of AI programming, focusing on Anthropic's AI Constitution, OpenAI's new product Prism, and the implications of AI tools on coding skills. They explore the financial viability of AI companies, the concept of vibe coding, and the potential risks of an AI bubble. The conversation highlights the importance of understanding AI's impact on jobs and the ethical considerations surrounding AI development.</p><p><br><strong>Takeaways</strong></p><ul><li>Anthropic's AI Constitution raises questions about AI agency.</li><li>AI tools can enhance or hinder coding skill development.</li><li>The financial viability of AI companies is under scrutiny.</li><li>Vibe coding can lead to a false sense of accomplishment.</li></ul><p><br></p><p><strong>Resources Mentioned<br></strong><a href="https://arstechnica.com/information-technology/2026/01/does-anthropic-believe-its-ai-is-conscious-or-is-that-just-what-it-wants-claude-to-think/"> Does Anthropic believe its AI is conscious, or is that just what it wants Claude to think?<br></a><a href="https://www.reuters.com/business/pentagon-clashes-with-anthropic-over-military-ai-use-2026-01-29/"> Exclusive: Pentagon clashes with Anthropic over military AI use, sources say<br></a><a href="https://techcrunch.com/2026/01/27/openai-launches-prism-a-new-ai-workspace-for-scientists/">OpenAI launches Prism, a new AI workspace for scientists<br></a><a href="https://allenai.org/blog/open-coding-agents">Open Coding Agents: Fast, accessible coding agents that adapt to any repo<br></a><a href="https://www.arcee.ai/blog/trinity-large">Trinity Large<br></a><a href="https://www.clawhub.ai/">ClawHub<br></a><a href="https://opensourcemalware.com/blog/clawdbot-skills-ganked-your-crypto">ClawdBot Skills Just Ganked Your Crypto<br></a><a href="https://www.moltbook.com/">MoltBook<br></a><a href="https://blog.fsck.com/2025/10/09/superpowers/">Superpowers: How I'm using coding agents in 
October 2025 <br></a><a href="https://dev-tester.com/my-five-stages-of-ai-grief/">My Five Stages of AI Grief<br></a><a href="https://www.anthropic.com/research/AI-assistance-coding-skills">How AI assistance impacts the formation of coding skills<br></a><a href="https://www.fast.ai/posts/2026-01-28-dark-flow/">Breaking the Spell of Vibe Coding<br></a><a href="https://www.cnbc.com/2026/02/02/nvidia-stock-price-openai-funding.html">Nvidia shares are down after a report that its OpenAI investment stalled. Here's what's happening<br></a><a href="https://www.exponentialview.co/p/inside-openais-unit-economics-epoch-exponentialview">Inside OpenAI's unit economics<br></a><br><strong>Connect with ADIPod</strong></p><ul><li>Email us at humans@adipod.ai if you have any feedback, requests, or just want to say hello!</li><li>Check out our website <a href="https://www.adipod.ai/">www.adipod.ai</a></li></ul><p><br></p>]]>
      </content:encoded>
      <pubDate>Fri, 06 Feb 2026 04:00:00 -0800</pubDate>
      <author>Shimin Zhang, Dan Lasky, &amp; Rahul Yadav</author>
      <enclosure url="https://op3.dev/e/prfx.byspotify.com/e/media.transistor.fm/e7999b5d/f79c19fb.mp3" length="33878303" type="audio/mpeg"/>
      <itunes:author>Shimin Zhang, Dan Lasky, &amp; Rahul Yadav</itunes:author>
      <itunes:duration>4218</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode, Dan and Shimin discuss the evolving landscape of AI programming, focusing on Anthropic's AI Constitution, OpenAI's new product Prism, and the implications of AI tools on coding skills. They explore the financial viability of AI companies, the concept of vibe coding, and the potential risks of an AI bubble. The conversation highlights the importance of understanding AI's impact on jobs and the ethical considerations surrounding AI development.</p><p><br><strong>Takeaways</strong></p><ul><li>Anthropic's AI Constitution raises questions about AI agency.</li><li>AI tools can enhance or hinder coding skill development.</li><li>The financial viability of AI companies is under scrutiny.</li><li>Vibe coding can lead to a false sense of accomplishment.</li></ul><p><br></p><p><strong>Resources Mentioned<br></strong><a href="https://arstechnica.com/information-technology/2026/01/does-anthropic-believe-its-ai-is-conscious-or-is-that-just-what-it-wants-claude-to-think/"> Does Anthropic believe its AI is conscious, or is that just what it wants Claude to think?<br></a><a href="https://www.reuters.com/business/pentagon-clashes-with-anthropic-over-military-ai-use-2026-01-29/"> Exclusive: Pentagon clashes with Anthropic over military AI use, sources say<br></a><a href="https://techcrunch.com/2026/01/27/openai-launches-prism-a-new-ai-workspace-for-scientists/">OpenAI launches Prism, a new AI workspace for scientists<br></a><a href="https://allenai.org/blog/open-coding-agents">Open Coding Agents: Fast, accessible coding agents that adapt to any repo<br></a><a href="https://www.arcee.ai/blog/trinity-large">Trinity Large<br></a><a href="https://www.clawhub.ai/">ClawHub<br></a><a href="https://opensourcemalware.com/blog/clawdbot-skills-ganked-your-crypto">ClawdBot Skills Just Ganked Your Crypto<br></a><a href="https://www.moltbook.com/">MoltBook<br></a><a href="https://blog.fsck.com/2025/10/09/superpowers/">Superpowers: How I'm using coding agents in 
October 2025 <br></a><a href="https://dev-tester.com/my-five-stages-of-ai-grief/">My Five Stages of AI Grief<br></a><a href="https://www.anthropic.com/research/AI-assistance-coding-skills">How AI assistance impacts the formation of coding skills<br></a><a href="https://www.fast.ai/posts/2026-01-28-dark-flow/">Breaking the Spell of Vibe Coding<br></a><a href="https://www.cnbc.com/2026/02/02/nvidia-stock-price-openai-funding.html">Nvidia shares are down after a report that its OpenAI investment stalled. Here's what's happening<br></a><a href="https://www.exponentialview.co/p/inside-openais-unit-economics-epoch-exponentialview">Inside OpenAI's unit economics<br></a><br><strong>Connect with ADIPod</strong></p><ul><li>Email us at humans@adipod.ai if you have any feedback, requests, or just want to say hello!</li><li>Check out our website <a href="https://www.adipod.ai/">www.adipod.ai</a></li></ul><p><br></p>]]>
      </itunes:summary>
      <itunes:keywords>AI, software engineering, coding agents, LLM, Claude, GPT, Gemini, vibe coding, AI bubble, developer tools, open source, prompt engineering, Anthropic, OpenAI, agentic coding, tech podcast, AI news, deep learning, AI safety, future of programming, MCP, developer productivity, cognitive debt, spec-driven development, AI ethics</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/e7999b5d/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>Episode 11: AI Fluency Pyramid, Unrolling the Codex Agent Loop, and Claude Code's Secret Swarm Mode</title>
      <itunes:episode>11</itunes:episode>
      <podcast:episode>11</podcast:episode>
      <itunes:title>Episode 11: AI Fluency Pyramid, Unrolling the Codex Agent Loop, and Claude Code's Secret Swarm Mode</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">ca25324b-1953-4f9d-9cd6-2ff9d38e09c5</guid>
      <link>https://www.adipod.ai/episodes/11-ai-fluency-pyramid-unrolling-the-codex-agent-loop-and-claude-code-s-secret-swarm-mode/</link>
      <description>
        <![CDATA[<p>In this episode of Artificial Developer Intelligence, Shimin, Dan and Rahul discuss the evolving landscape of AI in programming and business. They explore Brex's AI strategy, the AI fluency pyramid, the state of open models, and innovations in AI tools like Claude Code. </p><p><br><strong>Takeaways</strong></p><ul><li>The AI fluency pyramid helps assess AI integration levels.</li><li>Open models are still dominated by Chinese companies.</li><li>Claude Code is evolving with a new swarm feature.</li><li>The Claude Constitution aims to define ethical AI behavior.</li><li>Economic disruption is a significant risk associated with AI.</li></ul><p><strong>Resources Mentioned<br></strong><a href="https://www.latent.space/p/brex">Brex’s AI Hail Mary — With CTO James Reggio<br></a><a href="https://techcrunch.com/2026/01/23/whos-behind-ami-labs-yann-lecuns-world-model-startup/">Who’s behind AMI Labs, Yann LeCun’s ‘world model’ startup<br></a><a href="https://www.interconnects.ai/p/8-plots-that-explain-the-state-of">8 plots that explain the state of open models<br></a><a href="https://www.phoronix.com/news/GNOME-AI-Newelle-1.2">GNOME's AI Assistant Newelle Adds Llama.cpp Support, Command Execution Tool<br></a><a href="https://www.getagentcraft.com/">https://www.getagentcraft.com/<br></a><a href="https://x.com/NicerInPerson/status/2014989679796347375">Claude Code Swarms<br></a><a href="https://openai.com/index/unrolling-the-codex-agent-loop/">Unrolling the Codex agent loop<br></a><a href="https://www.darioamodei.com/essay/the-adolescence-of-technology">The Adolescence of Technology<br></a><a href="https://www.anthropic.com/constitution">Claude’s Constitution<br></a><a href="https://deadneurons.substack.com/p/what-if-ai-is-both-really-good-and">What if AI is both really good and not that disruptive?<br></a><a href="https://techcrunch.com/2026/01/24/a-new-test-for-ai-labs-are-you-even-trying-to-make-money/">A new test for AI labs: Are you even trying to make 
money?<br></a><a href="https://techcrunch.com/2026/01/22/are-ai-agents-ready-for-the-workplace-a-new-benchmark-raises-doubts/">Are AI agents ready for the workplace? A new benchmark raises doubts<br></a><a href="https://www.pcgamer.com/software/ai/microsoft-ceo-warns-that-we-must-do-something-useful-with-ai-or-theyll-lose-social-permission-to-burn-electricity-on-it/">Microsoft CEO warns that we must 'do something useful' with AI or they'll lose 'social permission' to burn electricity on it<br></a><br><strong>Chapters<br></strong></p><ul><li>(00:00) - Introduction to Artificial Developer Intelligence</li>
<li>(02:44) - Brex's AI Transformation Journey</li>
<li>(05:16) - AI Fluency Pyramid and Corporate Culture</li>
<li>(07:56) - World Models and AI Poetry</li>
<li>(10:30) - State of Open Models</li>
<li>(12:38) - Emerging Tools and Technologies</li>
<li>(15:04) - Claude Code and New Features</li>
<li>(17:42) - AI in Gaming and Real-World Applications</li>
<li>(23:02) - Open Source Collaboration in AI Development</li>
<li>(24:53) - The Future of AI: Swarm Intelligence and Claude Code</li>
<li>(26:09) - Understanding the Codex Agent Loop</li>
<li>(30:59) - Prompt Engineering and Model Limitations</li>
<li>(33:33) - Ethical Considerations in AI Development</li>
<li>(37:47) - The Risks of AI: Economic Disruption and Autocracy</li>
<li>(40:46) - Finding Purpose in an AI-Driven World</li>
<li>(44:32) - The Claude Constitution: Values and Guidelines</li>
<li>(46:15) - The Role of AI in Society and Governance</li>
<li>(50:21) - Building a Blog with AI Tools</li>
<li>(55:51) - The AI Investment Bubble and Its Implications</li>
<li>(01:04:05) - Davos Insights on AI and Sustainability</li>
<li>(01:08:51) - ADI Intro.mp4</li>
</ul><br><strong>Connect with ADIPod</strong><ul><li>Email us at humans@adipod.ai if you have any feedback, requests, or just want to say hello!</li><li>Check out our website <a href="https://www.adipod.ai/">www.adipod.ai</a></li></ul><p><br></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode of Artificial Developer Intelligence, Shimin, Dan and Rahul discuss the evolving landscape of AI in programming and business. They explore Brex's AI strategy, the AI fluency pyramid, the state of open models, and innovations in AI tools like Claude Code. </p><p><br><strong>Takeaways</strong></p><ul><li>The AI fluency pyramid helps assess AI integration levels.</li><li>Open models are still dominated by Chinese companies.</li><li>Claude Code is evolving with a new swarm feature.</li><li>The Claude Constitution aims to define ethical AI behavior.</li><li>Economic disruption is a significant risk associated with AI.</li></ul><p><strong>Resources Mentioned<br></strong><a href="https://www.latent.space/p/brex">Brex’s AI Hail Mary — With CTO James Reggio<br></a><a href="https://techcrunch.com/2026/01/23/whos-behind-ami-labs-yann-lecuns-world-model-startup/">Who’s behind AMI Labs, Yann LeCun’s ‘world model’ startup<br></a><a href="https://www.interconnects.ai/p/8-plots-that-explain-the-state-of">8 plots that explain the state of open models<br></a><a href="https://www.phoronix.com/news/GNOME-AI-Newelle-1.2">GNOME's AI Assistant Newelle Adds Llama.cpp Support, Command Execution Tool<br></a><a href="https://www.getagentcraft.com/">https://www.getagentcraft.com/<br></a><a href="https://x.com/NicerInPerson/status/2014989679796347375">Claude Code Swarms<br></a><a href="https://openai.com/index/unrolling-the-codex-agent-loop/">Unrolling the Codex agent loop<br></a><a href="https://www.darioamodei.com/essay/the-adolescence-of-technology">The Adolescence of Technology<br></a><a href="https://www.anthropic.com/constitution">Claude’s Constitution<br></a><a href="https://deadneurons.substack.com/p/what-if-ai-is-both-really-good-and">What if AI is both really good and not that disruptive?<br></a><a href="https://techcrunch.com/2026/01/24/a-new-test-for-ai-labs-are-you-even-trying-to-make-money/">A new test for AI labs: Are you even trying to make 
money?<br></a><a href="https://techcrunch.com/2026/01/22/are-ai-agents-ready-for-the-workplace-a-new-benchmark-raises-doubts/">Are AI agents ready for the workplace? A new benchmark raises doubts<br></a><a href="https://www.pcgamer.com/software/ai/microsoft-ceo-warns-that-we-must-do-something-useful-with-ai-or-theyll-lose-social-permission-to-burn-electricity-on-it/">Microsoft CEO warns that we must 'do something useful' with AI or they'll lose 'social permission' to burn electricity on it<br></a><br><strong>Chapters<br></strong></p><ul><li>(00:00) - Introduction to Artificial Developer Intelligence</li>
<li>(02:44) - Brex's AI Transformation Journey</li>
<li>(05:16) - AI Fluency Pyramid and Corporate Culture</li>
<li>(07:56) - World Models and AI Poetry</li>
<li>(10:30) - State of Open Models</li>
<li>(12:38) - Emerging Tools and Technologies</li>
<li>(15:04) - Claude Code and New Features</li>
<li>(17:42) - AI in Gaming and Real-World Applications</li>
<li>(23:02) - Open Source Collaboration in AI Development</li>
<li>(24:53) - The Future of AI: Swarm Intelligence and Claude Code</li>
<li>(26:09) - Understanding the Codex Agent Loop</li>
<li>(30:59) - Prompt Engineering and Model Limitations</li>
<li>(33:33) - Ethical Considerations in AI Development</li>
<li>(37:47) - The Risks of AI: Economic Disruption and Autocracy</li>
<li>(40:46) - Finding Purpose in an AI-Driven World</li>
<li>(44:32) - The Claude Constitution: Values and Guidelines</li>
<li>(46:15) - The Role of AI in Society and Governance</li>
<li>(50:21) - Building a Blog with AI Tools</li>
<li>(55:51) - The AI Investment Bubble and Its Implications</li>
<li>(01:04:05) - Davos Insights on AI and Sustainability</li>
<li>(01:08:51) - ADI Intro.mp4</li>
</ul><br><strong>Connect with ADIPod</strong><ul><li>Email us at humans@adipod.ai if you have any feedback, requests, or just want to say hello!</li><li>Check out our website <a href="https://www.adipod.ai/">www.adipod.ai</a></li></ul><p><br></p>]]>
      </content:encoded>
      <pubDate>Fri, 30 Jan 2026 04:00:00 -0800</pubDate>
      <author>Shimin Zhang, Dan Lasky &amp; Rahul Yadav</author>
      <enclosure url="https://op3.dev/e/prfx.byspotify.com/e/media.transistor.fm/c39749f6/f3f3375f.mp3" length="33310845" type="audio/mpeg"/>
      <itunes:author>Shimin Zhang, Dan Lasky &amp; Rahul Yadav</itunes:author>
      <itunes:duration>4147</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode of Artificial Developer Intelligence, Shimin, Dan and Rahul discuss the evolving landscape of AI in programming and business. They explore Brex's AI strategy, the AI fluency pyramid, the state of open models, and innovations in AI tools like Claude Code. </p><p><br><strong>Takeaways</strong></p><ul><li>The AI fluency pyramid helps assess AI integration levels.</li><li>Open models are still dominated by Chinese companies.</li><li>Claude Code is evolving with a new swarm feature.</li><li>The Claude Constitution aims to define ethical AI behavior.</li><li>Economic disruption is a significant risk associated with AI.</li></ul><p><strong>Resources Mentioned<br></strong><a href="https://www.latent.space/p/brex">Brex’s AI Hail Mary — With CTO James Reggio<br></a><a href="https://techcrunch.com/2026/01/23/whos-behind-ami-labs-yann-lecuns-world-model-startup/">Who’s behind AMI Labs, Yann LeCun’s ‘world model’ startup<br></a><a href="https://www.interconnects.ai/p/8-plots-that-explain-the-state-of">8 plots that explain the state of open models<br></a><a href="https://www.phoronix.com/news/GNOME-AI-Newelle-1.2">GNOME's AI Assistant Newelle Adds Llama.cpp Support, Command Execution Tool<br></a><a href="https://www.getagentcraft.com/">https://www.getagentcraft.com/<br></a><a href="https://x.com/NicerInPerson/status/2014989679796347375">Claude Code Swarms<br></a><a href="https://openai.com/index/unrolling-the-codex-agent-loop/">Unrolling the Codex agent loop<br></a><a href="https://www.darioamodei.com/essay/the-adolescence-of-technology">The Adolescence of Technology<br></a><a href="https://www.anthropic.com/constitution">Claude’s Constitution<br></a><a href="https://deadneurons.substack.com/p/what-if-ai-is-both-really-good-and">What if AI is both really good and not that disruptive?<br></a><a href="https://techcrunch.com/2026/01/24/a-new-test-for-ai-labs-are-you-even-trying-to-make-money/">A new test for AI labs: Are you even trying to make 
money?<br></a><a href="https://techcrunch.com/2026/01/22/are-ai-agents-ready-for-the-workplace-a-new-benchmark-raises-doubts/">Are AI agents ready for the workplace? A new benchmark raises doubts<br></a><a href="https://www.pcgamer.com/software/ai/microsoft-ceo-warns-that-we-must-do-something-useful-with-ai-or-theyll-lose-social-permission-to-burn-electricity-on-it/">Microsoft CEO warns that we must 'do something useful' with AI or they'll lose 'social permission' to burn electricity on it<br></a><br><strong>Chapters<br></strong></p><ul><li>(00:00) - Introduction to Artificial Developer Intelligence</li>
<li>(02:44) - Brex's AI Transformation Journey</li>
<li>(05:16) - AI Fluency Pyramid and Corporate Culture</li>
<li>(07:56) - World Models and AI Poetry</li>
<li>(10:30) - State of Open Models</li>
<li>(12:38) - Emerging Tools and Technologies</li>
<li>(15:04) - Claude Code and New Features</li>
<li>(17:42) - AI in Gaming and Real-World Applications</li>
<li>(23:02) - Open Source Collaboration in AI Development</li>
<li>(24:53) - The Future of AI: Swarm Intelligence and Claude Code</li>
<li>(26:09) - Understanding the Codex Agent Loop</li>
<li>(30:59) - Prompt Engineering and Model Limitations</li>
<li>(33:33) - Ethical Considerations in AI Development</li>
<li>(37:47) - The Risks of AI: Economic Disruption and Autocracy</li>
<li>(40:46) - Finding Purpose in an AI-Driven World</li>
<li>(44:32) - The Claude Constitution: Values and Guidelines</li>
<li>(46:15) - The Role of AI in Society and Governance</li>
<li>(50:21) - Building a Blog with AI Tools</li>
<li>(55:51) - The AI Investment Bubble and Its Implications</li>
<li>(01:04:05) - Davos Insights on AI and Sustainability</li>
<li>(01:08:51) - ADI Intro.mp4</li>
</ul><br><strong>Connect with ADIPod</strong><ul><li>Email us at humans@adipod.ai if you have any feedback, requests, or just want to say hello!</li><li>Check out our website <a href="https://www.adipod.ai/">www.adipod.ai</a></li></ul><p><br></p>]]>
      </itunes:summary>
      <itunes:keywords>AI, software engineering, coding agents, LLM, Claude, GPT, Gemini, vibe coding, AI bubble, developer tools, open source, prompt engineering, Anthropic, OpenAI, agentic coding, tech podcast, AI news, deep learning, AI safety, future of programming, MCP, developer productivity, cognitive debt, spec-driven development, AI ethics</itunes:keywords>
      <itunes:explicit>Yes</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/c39749f6/transcript.txt" type="text/plain"/>
      <podcast:chapters url="https://share.transistor.fm/s/c39749f6/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Episode 10: There's a New Sheriff in the Gas Town of AI Software Development</title>
      <itunes:episode>10</itunes:episode>
      <podcast:episode>10</podcast:episode>
      <itunes:title>Episode 10: There's a New Sheriff in the Gas Town of AI Software Development</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">1784a36a-fdbc-41f8-a02e-1e8f3a4fd0a9</guid>
      <link>https://www.adipod.ai/episodes/10-there-s-a-new-sherif-in-the-gas-town-of-ai-software-development/</link>
      <description>
        <![CDATA[<p>The podcast "Artificial Developer Intelligence" features hosts Shimin Zhang and Dan Lasky discussing the evolving landscape of AI in programming, recent news, innovative tools, and the implications of AI for various sectors. They explore the partnership between Apple and Google, the concept of 'doom coding', and how humans make LLM-like mistakes. The conversation also delves into the efficiency of programming languages, a deep dive into dynamic large concept models, and the societal perceptions of AI, culminating in a discussion about the potential AI bubble.</p><p><br><strong>Takeaways</strong></p><ul><li>The introduction of ads in ChatGPT raises privacy concerns.</li><li>Automation vs. augmentation is a key theme in AI's impact on jobs.</li><li>AI tools like Gas Town are changing the landscape of software development.</li><li>AI has a decent baseline level of cognition already.</li><li>Measuring developer productivity is a complex challenge.</li><li>AI tools may not always lead to financial gains.</li></ul><p><strong>Resources Mentioned<br></strong><a href="https://www.cnbc.com/2026/01/16/open-ai-chatgpt-ads-us.html">OpenAI to begin testing ads on ChatGPT in the U.S.<br></a><a href="https://www.anthropic.com/research/anthropic-economic-index-january-2026-report">Anthropic Economic Index report: economic primitives<br></a><a href="https://huggingface.co/zai-org/GLM-4.7-Flash">GLM-4.7-Flash</a><br><a href="https://steve-yegge.medium.com/welcome-to-gas-town-4f25ee16dd04">Welcome to Gas Town<br></a><a href="https://www.alilleybrinker.com/mini/gas-town-decoded/">Gas Town Decoded<br></a><a href="https://post.substack.com/p/the-ai-revolution-is-here-will-the">The AI revolution is here. 
Will the economy survive the transition?<br></a><a href="https://www.reddit.com/r/ClaudeAI/comments/1q2c0ne/comment/nxc4ap6/">Claude Code creator Boris shares his setup with 13 detailed steps, full details below</a><br><a href="https://unreasonable-rnn.vercel.app/">The Unreasonable Effectiveness of RNNs App<br></a><a href="https://ghuntley.com/solana/">Two AI researchers are now funded by Solana<br></a><a href="https://www.theregister.com/2026/01/20/pwc_ai_ceo_survey/">Majority of CEOs report zero payoff from AI splurge<br></a><a href="https://www.theguardian.com/us-news/ng-interactive/2026/jan/18/tech-ai-bubble-burst-reverse-centaur">AI companies will fail. We can salvage something from the wreckage<br></a><br><strong>Chapters<br></strong></p><ul><li>(00:00) - Introduction to the Podcast and Hosts</li>
<li>(02:32) - Mad Max and AI: A Fun Introduction</li>
<li>(03:21) - OpenAI's New Advertising Strategy</li>
<li>(08:10) - Anthropic Economic Index Report Insights</li>
<li>(15:19) - The Future of Work in an AI-Driven World</li>
<li>(21:42) - Introducing Gas Town: A New Tool for AI Development</li>
<li>(29:17) - The Quirky World of Software Naming Conventions</li>
<li>(30:19) - Multi-Agent Systems: Pros and Cons</li>
<li>(31:57) - The Philosophy of Gas Town: Embracing Chaos</li>
<li>(33:39) - Tech Insights: Claude Code and Agent Management</li>
<li>(40:21) - The AI Revolution: Economic Implications and Productivity</li>
<li>(51:07) - Technical Difficulties and Communication Challenges</li>
<li>(51:31) - Exploring Gas Town and Workflow Innovations</li>
<li>(56:38) - The Role of AI in Education</li>
<li>(01:02:00) - The AI Bubble: Current State and Future Outlook</li>
</ul><br><strong>Connect with ADIPod</strong><ul><li>Email us at humans@adipod.ai if you have any feedback, requests, or just want to say hello!</li><li>Check out our website <a href="https://www.adipod.ai/">www.adipod.ai</a></li></ul><p><br></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>The podcast "Artificial Developer Intelligence" features hosts Shimin Zhang and Dan Lasky discussing the evolving landscape of AI in programming, recent news, innovative tools, and the implications of AI for various sectors. They explore the partnership between Apple and Google, the concept of 'doom coding', and how humans make LLM-like mistakes. The conversation also delves into the efficiency of programming languages, a deep dive into dynamic large concept models, and the societal perceptions of AI, culminating in a discussion about the potential AI bubble.</p><p><br><strong>Takeaways</strong></p><ul><li>The introduction of ads in ChatGPT raises privacy concerns.</li><li>Automation vs. augmentation is a key theme in AI's impact on jobs.</li><li>AI tools like Gas Town are changing the landscape of software development.</li><li>AI has a decent baseline level of cognition already.</li><li>Measuring developer productivity is a complex challenge.</li><li>AI tools may not always lead to financial gains.</li></ul><p><strong>Resources Mentioned<br></strong><a href="https://www.cnbc.com/2026/01/16/open-ai-chatgpt-ads-us.html">OpenAI to begin testing ads on ChatGPT in the U.S.<br></a><a href="https://www.anthropic.com/research/anthropic-economic-index-january-2026-report">Anthropic Economic Index report: economic primitives<br></a><a href="https://huggingface.co/zai-org/GLM-4.7-Flash">GLM-4.7-Flash</a><br><a href="https://steve-yegge.medium.com/welcome-to-gas-town-4f25ee16dd04">Welcome to Gas Town<br></a><a href="https://www.alilleybrinker.com/mini/gas-town-decoded/">Gas Town Decoded<br></a><a href="https://post.substack.com/p/the-ai-revolution-is-here-will-the">The AI revolution is here. 
Will the economy survive the transition?<br></a><a href="https://www.reddit.com/r/ClaudeAI/comments/1q2c0ne/comment/nxc4ap6/">Claude Code creator Boris shares his setup with 13 detailed steps, full details below</a><br><a href="https://unreasonable-rnn.vercel.app/">The Unreasonable Effectiveness of RNNs App<br></a><a href="https://ghuntley.com/solana/">Two AI researchers are now funded by Solana<br></a><a href="https://www.theregister.com/2026/01/20/pwc_ai_ceo_survey/">Majority of CEOs report zero payoff from AI splurge<br></a><a href="https://www.theguardian.com/us-news/ng-interactive/2026/jan/18/tech-ai-bubble-burst-reverse-centaur">AI companies will fail. We can salvage something from the wreckage<br></a><br><strong>Chapters<br></strong></p><ul><li>(00:00) - Introduction to the Podcast and Hosts</li>
<li>(02:32) - Mad Max and AI: A Fun Introduction</li>
<li>(03:21) - OpenAI's New Advertising Strategy</li>
<li>(08:10) - Anthropic Economic Index Report Insights</li>
<li>(15:19) - The Future of Work in an AI-Driven World</li>
<li>(21:42) - Introducing Gas Town: A New Tool for AI Development</li>
<li>(29:17) - The Quirky World of Software Naming Conventions</li>
<li>(30:19) - Multi-Agent Systems: Pros and Cons</li>
<li>(31:57) - The Philosophy of Gas Town: Embracing Chaos</li>
<li>(33:39) - Tech Insights: Claude Code and Agent Management</li>
<li>(40:21) - The AI Revolution: Economic Implications and Productivity</li>
<li>(51:07) - Technical Difficulties and Communication Challenges</li>
<li>(51:31) - Exploring Gas Town and Workflow Innovations</li>
<li>(56:38) - The Role of AI in Education</li>
<li>(01:02:00) - The AI Bubble: Current State and Future Outlook</li>
</ul><br><strong>Connect with ADIPod</strong><ul><li>Email us at humans@adipod.ai if you have any feedback, requests, or just want to say hello!</li><li>Check out our website <a href="https://www.adipod.ai/">www.adipod.ai</a></li></ul><p><br></p>]]>
      </content:encoded>
      <pubDate>Fri, 23 Jan 2026 04:00:00 -0800</pubDate>
      <author>Shimin Zhang, Dan Lasky &amp; Rahul Yadav</author>
      <enclosure url="https://op3.dev/e/prfx.byspotify.com/e/media.transistor.fm/056de547/9b94026c.mp3" length="34488239" type="audio/mpeg"/>
      <itunes:author>Shimin Zhang, Dan Lasky &amp; Rahul Yadav</itunes:author>
      <itunes:duration>4294</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>The podcast "Artificial Developer Intelligence" features hosts Shimin Zhang and Dan Lasky discussing the evolving landscape of AI in programming, recent news, innovative tools, and the implications of AI for various sectors. They explore the partnership between Apple and Google, the concept of 'doom coding', and how humans make LLM-like mistakes. The conversation also delves into the efficiency of programming languages, a deep dive into dynamic large concept models, and the societal perceptions of AI, culminating in a discussion about the potential AI bubble.</p><p><br><strong>Takeaways</strong></p><ul><li>The introduction of ads in ChatGPT raises privacy concerns.</li><li>Automation vs. augmentation is a key theme in AI's impact on jobs.</li><li>AI tools like Gas Town are changing the landscape of software development.</li><li>AI has a decent baseline level of cognition already.</li><li>Measuring developer productivity is a complex challenge.</li><li>AI tools may not always lead to financial gains.</li></ul><p><strong>Resources Mentioned<br></strong><a href="https://www.cnbc.com/2026/01/16/open-ai-chatgpt-ads-us.html">OpenAI to begin testing ads on ChatGPT in the U.S.<br></a><a href="https://www.anthropic.com/research/anthropic-economic-index-january-2026-report">Anthropic Economic Index report: economic primitives<br></a><a href="https://huggingface.co/zai-org/GLM-4.7-Flash">GLM-4.7-Flash</a><br><a href="https://steve-yegge.medium.com/welcome-to-gas-town-4f25ee16dd04">Welcome to Gas Town<br></a><a href="https://www.alilleybrinker.com/mini/gas-town-decoded/">Gas Town Decoded<br></a><a href="https://post.substack.com/p/the-ai-revolution-is-here-will-the">The AI revolution is here. 
Will the economy survive the transition?<br></a><a href="https://www.reddit.com/r/ClaudeAI/comments/1q2c0ne/comment/nxc4ap6/">Claude Code creator Boris shares his setup with 13 detailed steps, full details below</a><br><a href="https://unreasonable-rnn.vercel.app/">The Unreasonable Effectiveness of RNNs App<br></a><a href="https://ghuntley.com/solana/">Two AI researchers are now funded by Solana<br></a><a href="https://www.theregister.com/2026/01/20/pwc_ai_ceo_survey/">Majority of CEOs report zero payoff from AI splurge<br></a><a href="https://www.theguardian.com/us-news/ng-interactive/2026/jan/18/tech-ai-bubble-burst-reverse-centaur">AI companies will fail. We can salvage something from the wreckage<br></a><br><strong>Chapters<br></strong></p><ul><li>(00:00) - Introduction to the Podcast and Hosts</li>
<li>(02:32) - Mad Max and AI: A Fun Introduction</li>
<li>(03:21) - OpenAI's New Advertising Strategy</li>
<li>(08:10) - Anthropic Economic Index Report Insights</li>
<li>(15:19) - The Future of Work in an AI-Driven World</li>
<li>(21:42) - Introducing Gas Town: A New Tool for AI Development</li>
<li>(29:17) - The Quirky World of Software Naming Conventions</li>
<li>(30:19) - Multi-Agent Systems: Pros and Cons</li>
<li>(31:57) - The Philosophy of Gas Town: Embracing Chaos</li>
<li>(33:39) - Tech Insights: Claude Code and Agent Management</li>
<li>(40:21) - The AI Revolution: Economic Implications and Productivity</li>
<li>(51:07) - Technical Difficulties and Communication Challenges</li>
<li>(51:31) - Exploring Gas Town and Workflow Innovations</li>
<li>(56:38) - The Role of AI in Education</li>
<li>(01:02:00) - The AI Bubble: Current State and Future Outlook</li>
</ul><br><strong>Connect with ADIPod</strong><ul><li>Email us at humans@adipod.ai if you have any feedback, requests, or just want to say hello!</li><li>Check out our website <a href="https://www.adipod.ai/">www.adipod.ai</a></li></ul><p><br></p>]]>
      </itunes:summary>
      <itunes:keywords>AI, OpenAI, ChatGPT, Anthropic, Gas Town, augmentation, software development, technology news, AI revolution, AI productivity, developer efficiency, AI in education, productivity measurement, AI tools, learning modules, AI bubble, dev tools</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/056de547/transcript.txt" type="text/plain"/>
      <podcast:chapters url="https://share.transistor.fm/s/056de547/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Episode 9: Chinese Models 7 Months Behind US Labs, Token Efficient Languages, and LLM Problems Observed in Humans</title>
      <itunes:episode>9</itunes:episode>
      <podcast:episode>9</podcast:episode>
      <itunes:title>Episode 9: Chinese Models 7 Months Behind US Labs, Token Efficient Languages, and LLM Problems Observed in Humans</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">a44edf6a-3f6d-4db2-a0b8-be82398b5965</guid>
      <link>https://www.adipod.ai/9</link>
      <description>
        <![CDATA[<p>The podcast "Artificial Developer Intelligence" features host Shimin Zhang and co-host Dan Lasky discussing the evolving landscape of AI in programming, recent news, innovative tools, and the implications of AI on various sectors. They explore the partnership between Apple and Google, the concept of 'doom coding', and how humans make LLM-like mistakes. The conversation also delves into the efficiency of programming languages, a deep dive into dynamic large concept models, and the societal perceptions of AI, culminating in a discussion about the potential AI bubble.</p><p><br><strong>Takeaways</strong></p><ul><li>Apple's partnership with Google marks a significant shift in AI development.</li><li>Doom coding encourages productive use of time instead of doom scrolling.</li><li>Public perception of AI is heavily influenced by marketing hype.</li><li>Programming languages vary in token efficiency, affecting AI interactions.</li><li>Dynamic large concept models offer a new approach to language processing.</li></ul><p><strong>Resources Mentioned<br></strong><a href="https://techcrunch.com/2026/01/12/googles-gemini-to-power-apples-ai-features-like-siri/">Google’s Gemini to power Apple’s AI features like Siri</a><br><a href="https://epoch.ai/data-insights/us-vs-china-eci">Chinese AI models have lagged the US frontier by 7 months on average since 2023</a><br><a href="https://github.com/rberg27/doom-coding">doom-coding</a><br><a href="https://github.com/haykgrigo3/TimeCapsuleLLM">TimeCapsuleLLM</a><br><a href="https://vibeandscribe.xyz/posts/2026-01-07-emergent-behavior.html">Emergent Behavior: When Skills Combine</a><br><a href="https://embd.cc/llm-problems-observed-in-humans">LLM problems observed in humans</a><br><a href="https://martinalderson.com/posts/which-programming-languages-are-most-token-efficient/">Which programming languages are most token-efficient?</a><br><a href="https://arxiv.org/abs/2512.24617">Dynamic Large Concept Models: Latent Reasoning in an Adaptive Semantic Space</a><br><a href="https://www.barchart.com/story/news/36862423/weve-done-our-country-a-great-disservice-by-offshoring-nvidias-jensen-huang-says-we-have-to-create-prosperity-for-all-not-just-phds">‘We’ve Done Our Country a Great Disservice’ by Offshoring: Nvidia’s Jensen Huang Says ‘We Have to Create Prosperity’ for All, Not Just PhDs</a><br><a href="https://arstechnica.com/ai/2026/01/computer-scientist-yann-lecun-intelligence-really-is-about-learning/">Computer scientist Yann LeCun: “Intelligence really is about learning”</a><br><a href="https://www.cnbc.com/2026/01/10/are-we-in-an-ai-bubble-tech-leaders-analysts.html">Are we in an AI bubble? What 40 tech leaders and analysts are saying, in one chart<br></a><br><strong>Connect with ADIPod</strong></p><ul><li>Email us at humans@adipod.ai if you have any feedback, requests, or just want to say hello!</li><li>Check out our website <a href="https://www.adipod.ai/">www.adipod.ai</a></li></ul><p><br></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>The podcast "Artificial Developer Intelligence" features host Shimin Zhang and co-host Dan Lasky discussing the evolving landscape of AI in programming, recent news, innovative tools, and the implications of AI on various sectors. They explore the partnership between Apple and Google, the concept of 'doom coding', and how humans make LLM-like mistakes. The conversation also delves into the efficiency of programming languages, a deep dive into dynamic large concept models, and the societal perceptions of AI, culminating in a discussion about the potential AI bubble.</p><p><br><strong>Takeaways</strong></p><ul><li>Apple's partnership with Google marks a significant shift in AI development.</li><li>Doom coding encourages productive use of time instead of doom scrolling.</li><li>Public perception of AI is heavily influenced by marketing hype.</li><li>Programming languages vary in token efficiency, affecting AI interactions.</li><li>Dynamic large concept models offer a new approach to language processing.</li></ul><p><strong>Resources Mentioned<br></strong><a href="https://techcrunch.com/2026/01/12/googles-gemini-to-power-apples-ai-features-like-siri/">Google’s Gemini to power Apple’s AI features like Siri</a><br><a href="https://epoch.ai/data-insights/us-vs-china-eci">Chinese AI models have lagged the US frontier by 7 months on average since 2023</a><br><a href="https://github.com/rberg27/doom-coding">doom-coding</a><br><a href="https://github.com/haykgrigo3/TimeCapsuleLLM">TimeCapsuleLLM</a><br><a href="https://vibeandscribe.xyz/posts/2026-01-07-emergent-behavior.html">Emergent Behavior: When Skills Combine</a><br><a href="https://embd.cc/llm-problems-observed-in-humans">LLM problems observed in humans</a><br><a href="https://martinalderson.com/posts/which-programming-languages-are-most-token-efficient/">Which programming languages are most token-efficient?</a><br><a href="https://arxiv.org/abs/2512.24617">Dynamic Large Concept Models: Latent Reasoning in an Adaptive Semantic Space</a><br><a href="https://www.barchart.com/story/news/36862423/weve-done-our-country-a-great-disservice-by-offshoring-nvidias-jensen-huang-says-we-have-to-create-prosperity-for-all-not-just-phds">‘We’ve Done Our Country a Great Disservice’ by Offshoring: Nvidia’s Jensen Huang Says ‘We Have to Create Prosperity’ for All, Not Just PhDs</a><br><a href="https://arstechnica.com/ai/2026/01/computer-scientist-yann-lecun-intelligence-really-is-about-learning/">Computer scientist Yann LeCun: “Intelligence really is about learning”</a><br><a href="https://www.cnbc.com/2026/01/10/are-we-in-an-ai-bubble-tech-leaders-analysts.html">Are we in an AI bubble? What 40 tech leaders and analysts are saying, in one chart<br></a><br><strong>Connect with ADIPod</strong></p><ul><li>Email us at humans@adipod.ai if you have any feedback, requests, or just want to say hello!</li><li>Check out our website <a href="https://www.adipod.ai/">www.adipod.ai</a></li></ul><p><br></p>]]>
      </content:encoded>
      <pubDate>Fri, 16 Jan 2026 04:00:00 -0800</pubDate>
      <author>Shimin Zhang, Dan Lasky, &amp; Rahul Yadav</author>
      <enclosure url="https://op3.dev/e/prfx.byspotify.com/e/media.transistor.fm/b9a327ce/26aede22.mp3" length="31317886" type="audio/mpeg"/>
      <itunes:author>Shimin Zhang, Dan Lasky, &amp; Rahul Yadav</itunes:author>
      <itunes:duration>3898</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>The podcast "Artificial Developer Intelligence" features host Shimin Zhang and co-host Dan Lasky discussing the evolving landscape of AI in programming, recent news, innovative tools, and the implications of AI on various sectors. They explore the partnership between Apple and Google, the concept of 'doom coding', and how humans make LLM-like mistakes. The conversation also delves into the efficiency of programming languages, a deep dive into dynamic large concept models, and the societal perceptions of AI, culminating in a discussion about the potential AI bubble.</p><p><br><strong>Takeaways</strong></p><ul><li>Apple's partnership with Google marks a significant shift in AI development.</li><li>Doom coding encourages productive use of time instead of doom scrolling.</li><li>Public perception of AI is heavily influenced by marketing hype.</li><li>Programming languages vary in token efficiency, affecting AI interactions.</li><li>Dynamic large concept models offer a new approach to language processing.</li></ul><p><strong>Resources Mentioned<br></strong><a href="https://techcrunch.com/2026/01/12/googles-gemini-to-power-apples-ai-features-like-siri/">Google’s Gemini to power Apple’s AI features like Siri</a><br><a href="https://epoch.ai/data-insights/us-vs-china-eci">Chinese AI models have lagged the US frontier by 7 months on average since 2023</a><br><a href="https://github.com/rberg27/doom-coding">doom-coding</a><br><a href="https://github.com/haykgrigo3/TimeCapsuleLLM">TimeCapsuleLLM</a><br><a href="https://vibeandscribe.xyz/posts/2026-01-07-emergent-behavior.html">Emergent Behavior: When Skills Combine</a><br><a href="https://embd.cc/llm-problems-observed-in-humans">LLM problems observed in humans</a><br><a href="https://martinalderson.com/posts/which-programming-languages-are-most-token-efficient/">Which programming languages are most token-efficient?</a><br><a href="https://arxiv.org/abs/2512.24617">Dynamic Large Concept Models: Latent Reasoning in an Adaptive Semantic Space</a><br><a href="https://www.barchart.com/story/news/36862423/weve-done-our-country-a-great-disservice-by-offshoring-nvidias-jensen-huang-says-we-have-to-create-prosperity-for-all-not-just-phds">‘We’ve Done Our Country a Great Disservice’ by Offshoring: Nvidia’s Jensen Huang Says ‘We Have to Create Prosperity’ for All, Not Just PhDs</a><br><a href="https://arstechnica.com/ai/2026/01/computer-scientist-yann-lecun-intelligence-really-is-about-learning/">Computer scientist Yann LeCun: “Intelligence really is about learning”</a><br><a href="https://www.cnbc.com/2026/01/10/are-we-in-an-ai-bubble-tech-leaders-analysts.html">Are we in an AI bubble? What 40 tech leaders and analysts are saying, in one chart<br></a><br><strong>Connect with ADIPod</strong></p><ul><li>Email us at humans@adipod.ai if you have any feedback, requests, or just want to say hello!</li><li>Check out our website <a href="https://www.adipod.ai/">www.adipod.ai</a></li></ul><p><br></p>]]>
      </itunes:summary>
      <itunes:keywords>AI, software engineering, coding agents, LLM, Claude, GPT, Gemini, vibe coding, AI bubble, developer tools, open source, prompt engineering, Anthropic, OpenAI, agentic coding, tech podcast, AI news, deep learning, AI safety, future of programming, MCP, developer productivity, cognitive debt, spec-driven development, AI ethics</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/b9a327ce/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>Episode 8: AI Acquisitions, Everyone's a Staff Engineer Now, and Building a Technical Writing Agent</title>
      <itunes:episode>8</itunes:episode>
      <podcast:episode>8</podcast:episode>
      <itunes:title>Episode 8: AI Acquisitions, Everyone's a Staff Engineer Now, and Building a Technical Writing Agent</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">03763a18-3782-44cc-924e-eb666eeb16d9</guid>
      <link>https://www.adipod.ai/8</link>
      <description>
        <![CDATA[<p>The podcast "Artificial Developer Intelligence" features host Shimin Zhang and guest co-host Rahul Yadav discussing the evolving landscape of AI in software engineering. They cover recent AI-related acquisitions, such as Nvidia's purchase of Groq and Meta's acquisition of Manus, and explore the implications of these moves. The conversation also delves into the challenges and opportunities presented by AI in the tech industry, including the role of AI in automation and the potential for AI to reshape job roles. The episode concludes with a discussion on the AI bubble and its impact on the economy, highlighting the balance between technological advancement and financial stability.</p><p><br><strong>Takeaways</strong></p><ul><li>Nvidia's acquisition of Groq highlights strategic tech investments.</li><li>Meta's purchase of Manus aims to bolster AI capabilities.</li><li>Even world-class AI scientists can feel left behind by the rapidly developing AI field.</li><li>Not all bubbles are negative; a technological bubble can bring real efficiencies at the cost of investor capital.</li></ul><p><strong>Resources Mentioned<br></strong><a href="https://morethanmoore.substack.com/p/ho-ho-ho-groq-nvidia-is-a-gift">Ho Ho Ho, Groq+NVIDIA Is A Gift</a><br><a href="https://siliconangle.com/2025/12/29/meta-platforms-buys-manus-bolster-agentic-ai-skillset/">Meta Platforms buys Manus to bolster its agentic AI skillset</a><br><a href="https://x.com/karpathy/status/2004607146781278521">Karpathy's Tweet</a><br><a href="https://read.engineerscodex.com/p/everyone-is-a-staff-engineer-now">Everyone is a Staff Engineer Now</a><br><a href="https://zhengdongwang.com/2025/12/30/2025-letter.html">Zhengdong Wang's 2025 Letter</a><br><a href="https://www.theregister.com/2025/12/24/ai_spending_cooling_off/">AI faces closing time at the cash buffet</a><br><a href="https://www.nytimes.com/2025/12/26/business/ai-debt-investors.html">As A.I. Companies Borrow Billions, Debt Investors Grow Wary</a><br><br><strong>Chapters<br></strong></p><ul><li>(00:00) - Introduction to AI in Software Engineering</li>
<li>(02:51) - Acquisitions in AI: Nvidia and Groq</li>
<li>(05:11) - Meta's Acquisition of Manus</li>
<li>(09:54) - Andrej Karpathy's Reflections on Programming</li>
<li>(19:51) - Tool Shed: Gemini in Chrome</li>
<li>(24:55) - Posts of the Week: Staff Engineers and Future Predictions</li>
<li>(35:48) - Reflections on AI Progress and Future Predictions</li>
<li>(42:25) - Innovations in Technical Writing with AI</li>
<li>(51:13) - The Role of AI in Internal Documentation</li>
<li>(59:44) - Navigating the AI Bubble: Current Trends and Insights</li>
<li>(01:12:51) - ADI Intro.mp4</li>
</ul><br><strong>Connect with ADIPod</strong><ul><li>Email us at humans@adipod.ai if you have any feedback, requests, or just want to say hello!</li><li>Check out our website <a href="https://www.adipod.ai/">www.adipod.ai</a></li></ul><p><br></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>The podcast "Artificial Developer Intelligence" features host Shimin Zhang and guest co-host Rahul Yadav discussing the evolving landscape of AI in software engineering. They cover recent AI-related acquisitions, such as Nvidia's purchase of Groq and Meta's acquisition of Manus, and explore the implications of these moves. The conversation also delves into the challenges and opportunities presented by AI in the tech industry, including the role of AI in automation and the potential for AI to reshape job roles. The episode concludes with a discussion on the AI bubble and its impact on the economy, highlighting the balance between technological advancement and financial stability.</p><p><br><strong>Takeaways</strong></p><ul><li>Nvidia's acquisition of Groq highlights strategic tech investments.</li><li>Meta's purchase of Manus aims to bolster AI capabilities.</li><li>Even world-class AI scientists can feel left behind by the rapidly developing AI field.</li><li>Not all bubbles are negative; a technological bubble can bring real efficiencies at the cost of investor capital.</li></ul><p><strong>Resources Mentioned<br></strong><a href="https://morethanmoore.substack.com/p/ho-ho-ho-groq-nvidia-is-a-gift">Ho Ho Ho, Groq+NVIDIA Is A Gift</a><br><a href="https://siliconangle.com/2025/12/29/meta-platforms-buys-manus-bolster-agentic-ai-skillset/">Meta Platforms buys Manus to bolster its agentic AI skillset</a><br><a href="https://x.com/karpathy/status/2004607146781278521">Karpathy's Tweet</a><br><a href="https://read.engineerscodex.com/p/everyone-is-a-staff-engineer-now">Everyone is a Staff Engineer Now</a><br><a href="https://zhengdongwang.com/2025/12/30/2025-letter.html">Zhengdong Wang's 2025 Letter</a><br><a href="https://www.theregister.com/2025/12/24/ai_spending_cooling_off/">AI faces closing time at the cash buffet</a><br><a href="https://www.nytimes.com/2025/12/26/business/ai-debt-investors.html">As A.I. Companies Borrow Billions, Debt Investors Grow Wary</a><br><br><strong>Chapters<br></strong></p><ul><li>(00:00) - Introduction to AI in Software Engineering</li>
<li>(02:51) - Acquisitions in AI: Nvidia and Groq</li>
<li>(05:11) - Meta's Acquisition of Manus</li>
<li>(09:54) - Andrej Karpathy's Reflections on Programming</li>
<li>(19:51) - Tool Shed: Gemini in Chrome</li>
<li>(24:55) - Posts of the Week: Staff Engineers and Future Predictions</li>
<li>(35:48) - Reflections on AI Progress and Future Predictions</li>
<li>(42:25) - Innovations in Technical Writing with AI</li>
<li>(51:13) - The Role of AI in Internal Documentation</li>
<li>(59:44) - Navigating the AI Bubble: Current Trends and Insights</li>
<li>(01:12:51) - ADI Intro.mp4</li>
</ul><br><strong>Connect with ADIPod</strong><ul><li>Email us at humans@adipod.ai if you have any feedback, requests, or just want to say hello!</li><li>Check out our website <a href="https://www.adipod.ai/">www.adipod.ai</a></li></ul><p><br></p>]]>
      </content:encoded>
      <pubDate>Fri, 09 Jan 2026 04:00:00 -0800</pubDate>
      <author>Shimin Zhang &amp; Rahul Yadav</author>
      <enclosure url="https://op3.dev/e/prfx.byspotify.com/e/media.transistor.fm/a48706bf/71900288.mp3" length="35233701" type="audio/mpeg"/>
      <itunes:author>Shimin Zhang &amp; Rahul Yadav</itunes:author>
      <itunes:duration>4387</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>The podcast "Artificial Developer Intelligence" features host Shimin Zhang and guest co-host Rahul Yadav discussing the evolving landscape of AI in software engineering. They cover recent AI-related acquisitions, such as Nvidia's purchase of Groq and Meta's acquisition of Manus, and explore the implications of these moves. The conversation also delves into the challenges and opportunities presented by AI in the tech industry, including the role of AI in automation and the potential for AI to reshape job roles. The episode concludes with a discussion on the AI bubble and its impact on the economy, highlighting the balance between technological advancement and financial stability.</p><p><br><strong>Takeaways</strong></p><ul><li>Nvidia's acquisition of Groq highlights strategic tech investments.</li><li>Meta's purchase of Manus aims to bolster AI capabilities.</li><li>Even world-class AI scientists can feel left behind by the rapidly developing AI field.</li><li>Not all bubbles are negative; a technological bubble can bring real efficiencies at the cost of investor capital.</li></ul><p><strong>Resources Mentioned<br></strong><a href="https://morethanmoore.substack.com/p/ho-ho-ho-groq-nvidia-is-a-gift">Ho Ho Ho, Groq+NVIDIA Is A Gift</a><br><a href="https://siliconangle.com/2025/12/29/meta-platforms-buys-manus-bolster-agentic-ai-skillset/">Meta Platforms buys Manus to bolster its agentic AI skillset</a><br><a href="https://x.com/karpathy/status/2004607146781278521">Karpathy's Tweet</a><br><a href="https://read.engineerscodex.com/p/everyone-is-a-staff-engineer-now">Everyone is a Staff Engineer Now</a><br><a href="https://zhengdongwang.com/2025/12/30/2025-letter.html">Zhengdong Wang's 2025 Letter</a><br><a href="https://www.theregister.com/2025/12/24/ai_spending_cooling_off/">AI faces closing time at the cash buffet</a><br><a href="https://www.nytimes.com/2025/12/26/business/ai-debt-investors.html">As A.I. Companies Borrow Billions, Debt Investors Grow Wary</a><br><br><strong>Chapters<br></strong></p><ul><li>(00:00) - Introduction to AI in Software Engineering</li>
<li>(02:51) - Acquisitions in AI: Nvidia and Groq</li>
<li>(05:11) - Meta's Acquisition of Manus</li>
<li>(09:54) - Andrej Karpathy's Reflections on Programming</li>
<li>(19:51) - Tool Shed: Gemini in Chrome</li>
<li>(24:55) - Posts of the Week: Staff Engineers and Future Predictions</li>
<li>(35:48) - Reflections on AI Progress and Future Predictions</li>
<li>(42:25) - Innovations in Technical Writing with AI</li>
<li>(51:13) - The Role of AI in Internal Documentation</li>
<li>(59:44) - Navigating the AI Bubble: Current Trends and Insights</li>
<li>(01:12:51) - ADI Intro.mp4</li>
</ul><br><strong>Connect with ADIPod</strong><ul><li>Email us at humans@adipod.ai if you have any feedback, requests, or just want to say hello!</li><li>Check out our website <a href="https://www.adipod.ai/">www.adipod.ai</a></li></ul><p><br></p>]]>
      </itunes:summary>
      <itunes:keywords>AI, software engineering, LLM, vibe coding, Nvidia, Groq, Meta, Manus, automation, AI bubble</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/a48706bf/transcript.txt" type="text/plain"/>
      <podcast:chapters url="https://share.transistor.fm/s/a48706bf/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Episode 7: Project Vend Update, Hallucinating Neurons, and Year End Reflections</title>
      <itunes:episode>7</itunes:episode>
      <podcast:episode>7</podcast:episode>
      <itunes:title>Episode 7: Project Vend Update, Hallucinating Neurons, and Year End Reflections</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">5c9efe33-eab5-476f-afaf-5529d0a4577f</guid>
      <link>https://www.adipod.ai/7</link>
      <description>
        <![CDATA[<p>In this episode, Shimin and Dan explore the latest advancements in AI coding, including NVIDIA's new models, the implications of AI-generated code, and the outcome of Anthropic's Project Vend, an experiment in AI-managed vending machines. They also discuss the significance of multi-agent systems in coding, the concept of vibe coding, and delve into the research on hallucination neurons in large language models. The episode concludes with a year-end review reflecting on the rapid developments in AI technology throughout 2025.</p><p><br><strong>Takeaways</strong></p><ul><li>AI-generated code has been found to create more problems than human code.</li><li>AI in vending machines has led to humorous and unexpected outcomes.</li><li>Multi-agent systems can enhance the coding process by providing diverse solutions.</li><li>H-neurons in LLMs are linked to hallucination and overcompliance.</li><li>Year-end reflections highlight the rapid adoption of AI in the industry.</li><li>The future of AI coding looks promising with ongoing innovations.</li></ul><p><br><strong>Resources Mentioned<br></strong><a href="https://research.nvidia.com/labs/nemotron/Nemotron-3/">NVIDIA Nemotron 3 Family of Models<br></a><a href="https://z.ai/blog/glm-4.7">GLM-4.7: Advancing the Coding Capability<br></a><a href="https://www.coderabbit.ai/blog/state-of-ai-vs-human-code-generation-report">Our new report: AI code creates 1.7x more problems<br></a><a href="https://www.anthropic.com/research/project-vend-2">Project Vend: Phase two<br></a><a href="https://github.com/anthropics/claude-code/blob/main/CHANGELOG.md">Claude Code Changelog<br></a><a href="https://benr.build/blog/one-agent-isnt-enough">One Agent Isn't Enough<br></a><a href="https://davidbau.com/archives/2025/12/16/vibe_coding.html">Vibe Coding<br></a><a href="https://arxiv.org/pdf/2512.01797">H-Neurons: On the Existence, Impact, and Origin of Hallucination-Associated Neurons in LLMs</a><br><a href="https://karpathy.bearblog.dev/year-in-review-2025/">2025 LLM Year in Review<br></a><br><strong>Chapters<br></strong></p><ul><li>(00:00) - Introduction to AI Coding Landscape</li>
<li>(05:00) - GLM 4.7 and Chinese AI Models</li>
<li>(09:27) - Project Vend: AI Vending Machine Experiment</li>
<li>(13:42) - Using Multiple AI Agents for Coding</li>
<li>(22:51) - Exploring Agent-Based Approaches</li>
<li>(30:28) - Deep Dive into Hallucination Neurons</li>
<li>(36:07) - Dan's Rant: Context Management in AI</li>
</ul><br><strong>Connect with ADIPod</strong><ul><li>Email us at humans@adipod.ai if you have any feedback, requests, or just want to say hello!</li><li>Check out our website <a href="https://www.adipod.ai/">www.adipod.ai</a></li></ul><p><br></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode, Shimin and Dan explore the latest advancements in AI coding, including NVIDIA's new models, the implications of AI-generated code, and the outcome of Anthropic's Project Vend, an experiment in AI-managed vending machines. They also discuss the significance of multi-agent systems in coding, the concept of vibe coding, and delve into the research on hallucination neurons in large language models. The episode concludes with a year-end review reflecting on the rapid developments in AI technology throughout 2025.</p><p><br><strong>Takeaways</strong></p><ul><li>AI-generated code has been found to create more problems than human code.</li><li>AI in vending machines has led to humorous and unexpected outcomes.</li><li>Multi-agent systems can enhance the coding process by providing diverse solutions.</li><li>H-neurons in LLMs are linked to hallucination and overcompliance.</li><li>Year-end reflections highlight the rapid adoption of AI in the industry.</li><li>The future of AI coding looks promising with ongoing innovations.</li></ul><p><br><strong>Resources Mentioned<br></strong><a href="https://research.nvidia.com/labs/nemotron/Nemotron-3/">NVIDIA Nemotron 3 Family of Models<br></a><a href="https://z.ai/blog/glm-4.7">GLM-4.7: Advancing the Coding Capability<br></a><a href="https://www.coderabbit.ai/blog/state-of-ai-vs-human-code-generation-report">Our new report: AI code creates 1.7x more problems<br></a><a href="https://www.anthropic.com/research/project-vend-2">Project Vend: Phase two<br></a><a href="https://github.com/anthropics/claude-code/blob/main/CHANGELOG.md">Claude Code Changelog<br></a><a href="https://benr.build/blog/one-agent-isnt-enough">One Agent Isn't Enough<br></a><a href="https://davidbau.com/archives/2025/12/16/vibe_coding.html">Vibe Coding<br></a><a href="https://arxiv.org/pdf/2512.01797">H-Neurons: On the Existence, Impact, and Origin of Hallucination-Associated Neurons in LLMs</a><br><a href="https://karpathy.bearblog.dev/year-in-review-2025/">2025 LLM Year in Review<br></a><br><strong>Chapters<br></strong></p><ul><li>(00:00) - Introduction to AI Coding Landscape</li>
<li>(05:00) - GLM 4.7 and Chinese AI Models</li>
<li>(09:27) - Project Vend: AI Vending Machine Experiment</li>
<li>(13:42) - Using Multiple AI Agents for Coding</li>
<li>(22:51) - Exploring Agent-Based Approaches</li>
<li>(30:28) - Deep Dive into Hallucination Neurons</li>
<li>(36:07) - Dan's Rant: Context Management in AI</li>
</ul><br><strong>Connect with ADIPod</strong><ul><li>Email us at humans@adipod.ai if you have any feedback, requests, or just want to say hello!</li><li>Check out our website <a href="https://www.adipod.ai/">www.adipod.ai</a></li></ul><p><br></p>]]>
      </content:encoded>
      <pubDate>Sat, 27 Dec 2025 13:24:23 -0800</pubDate>
      <author>Shimin Zhang, Dan Lasky, &amp; Rahul Yadav</author>
      <enclosure url="https://op3.dev/e/prfx.byspotify.com/e/media.transistor.fm/8effe510/59bc80a0.mp3" length="23063994" type="audio/mpeg"/>
      <itunes:author>Shimin Zhang, Dan Lasky, &amp; Rahul Yadav</itunes:author>
      <itunes:duration>2866</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode, Shimin and Dan explore the latest advancements in AI coding, including NVIDIA's new models, the implications of AI-generated code, and the outcome of Anthropic's Project Vend, an experiment in AI-managed vending machines. They also discuss the significance of multi-agent systems in coding, the concept of vibe coding, and delve into the research on hallucination neurons in large language models. The episode concludes with a year-end review reflecting on the rapid developments in AI technology throughout 2025.</p><p><br><strong>Takeaways</strong></p><ul><li>AI-generated code has been found to create more problems than human code.</li><li>AI in vending machines has led to humorous and unexpected outcomes.</li><li>Multi-agent systems can enhance the coding process by providing diverse solutions.</li><li>H-neurons in LLMs are linked to hallucination and overcompliance.</li><li>Year-end reflections highlight the rapid adoption of AI in the industry.</li><li>The future of AI coding looks promising with ongoing innovations.</li></ul><p><br><strong>Resources Mentioned<br></strong><a href="https://research.nvidia.com/labs/nemotron/Nemotron-3/">NVIDIA Nemotron 3 Family of Models<br></a><a href="https://z.ai/blog/glm-4.7">GLM-4.7: Advancing the Coding Capability<br></a><a href="https://www.coderabbit.ai/blog/state-of-ai-vs-human-code-generation-report">Our new report: AI code creates 1.7x more problems<br></a><a href="https://www.anthropic.com/research/project-vend-2">Project Vend: Phase two<br></a><a href="https://github.com/anthropics/claude-code/blob/main/CHANGELOG.md">Claude Code Changelog<br></a><a href="https://benr.build/blog/one-agent-isnt-enough">One Agent Isn't Enough<br></a><a href="https://davidbau.com/archives/2025/12/16/vibe_coding.html">Vibe Coding<br></a><a href="https://arxiv.org/pdf/2512.01797">H-Neurons: On the Existence, Impact, and Origin of Hallucination-Associated Neurons in LLMs</a><br><a href="https://karpathy.bearblog.dev/year-in-review-2025/">2025 LLM Year in Review<br></a><br><strong>Chapters<br></strong></p><ul><li>(00:00) - Introduction to AI Coding Landscape</li>
<li>(05:00) - GLM 4.7 and Chinese AI Models</li>
<li>(09:27) - Project Vend: AI Vending Machine Experiment</li>
<li>(13:42) - Using Multiple AI Agents for Coding</li>
<li>(22:51) - Exploring Agent-Based Approaches</li>
<li>(30:28) - Deep Dive into Hallucination Neurons</li>
<li>(36:07) - Dan's Rant: Context Management in AI</li>
</ul><br><strong>Connect with ADIPod</strong><ul><li>Email us at humans@adipod.ai if you have any feedback, requests, or just want to say hello!</li><li>Check out our website <a href="https://www.adipod.ai/">www.adipod.ai</a></li></ul><p><br></p>]]>
      </itunes:summary>
      <itunes:keywords>AI, coding, NVIDIA, GLM, Project Vend, LLM, Claude Code, Vibe Coding, H-neurons</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/8effe510/transcript.txt" type="text/plain"/>
      <podcast:chapters url="https://share.transistor.fm/s/8effe510/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Episode 6: GPT 5.2, Claude Skills, and Hacker Hall of Fame</title>
      <itunes:episode>6</itunes:episode>
      <podcast:episode>6</podcast:episode>
      <itunes:title>Episode 6: GPT 5.2, Claude Skills, and Hacker Hall of Fame</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">785bea3f-8cac-4d8c-ad99-c185b869731c</guid>
      <link>https://www.adipod.ai/6</link>
      <description>
        <![CDATA[<p>In this episode of "Artificial Developer Intelligence," hosts Shimin Zhang and Dan explore the latest advancements in AI, including the release of GPT 5.2 and its implications for the industry. They discuss the integration of Claude Code into Slack, Mistral AI's new coding model, and the innovative MindEval framework for assessing AI's clinical competence. The episode also features a deep dive into AI-generated user interfaces and a lively discussion on the evolving role of hackers in the tech industry.</p><p><br><strong>Takeaways</strong></p><ul><li>GPT 5.2 offers incremental improvements and new modes for AI applications.</li><li>Claude Code's integration into Slack aims to streamline coding workflows.</li><li>Mistral AI's new model targets the coding space with open-weight strategies.</li><li>OpenAI's enterprise products show significant adoption, especially in non-coding sectors.</li></ul><p><br><strong>Resources Mentioned<br></strong><a href="https://openai.com/index/introducing-gpt-5-2/">Introducing GPT-5.2<br></a><a href="https://techcrunch.com/2025/12/08/claude-code-is-coming-to-slack-and-thats-a-bigger-deal-than-it-sounds/">Claude Code is coming to Slack, and that’s a bigger deal than it sounds<br></a><a href="https://techcrunch.com/2025/12/09/mistral-ai-surfs-vibe-coding-tailwinds-with-new-coding-models/">Mistral AI surfs vibe-coding tailwinds with new coding models<br></a><a href="https://swordhealth.com/newsroom/sword-introduces-mindeval">Introducing MindEval: a new framework to measure LLM clinical competence</a><br><a href="https://higashi.blog/2025/12/07/ai-verification/">AI should only run as fast as we can catch up</a><br><a href="https://simonwillison.net/2025/Dec/10/html-tools/">Useful patterns for building HTML tools<br></a><a href="https://news.ycombinator.com/item?id=46255285">Ask HN: How can I get better at using AI for programming?</a><br><a href="https://leehanchung.github.io/blogs/2025/10/26/claude-skills-deep-dive/">Claude Agent Skills: A 
First Principles Deep Dive</a><br><a href="https://research.google/blog/generative-ui-a-rich-custom-visual-interactive-user-experience-for-any-prompt/">Generative UI: A rich, custom, visual interactive user experience for any prompt<br></a><a href="https://techcrunch.com/2025/12/09/coreweave-ceo-defends-ai-circular-deals-as-working-together/">CoreWeave CEO defends AI circular deals as ‘working together’<br></a><a href="https://techcrunch.com/2025/12/08/openai-boasts-enterprise-win-days-after-internal-code-red-on-google-threat/">OpenAI boasts enterprise win days after internal ‘code red’ on Google threat</a></p><p><strong>Chapters<br></strong></p><ul><li>(00:00) - Introduction to AI in Software Engineering</li>
<li>(02:40) - Latest Developments in AI Models</li>
<li>(09:12) - Innovations in AI Coding Assistants</li>
<li>(12:11) - Benchmarking AI Clinical Competence</li>
<li>(12:59) - Techniques for Effective AI Utilization</li>
<li>(17:48) - Exploring AI Tools for Web Development</li>
<li>(22:01) - Personal Experiences with AI Models</li>
<li>(26:30) - Deep Dive into Claude's Agent Skills</li>
<li>(27:40) - Exploring Skill Invocation in AI Tools</li>
<li>(31:38) - Generative UI: The Future of Interactive Experiences</li>
<li>(36:36) - Ranting About Context Management in AI</li>
<li>(44:21) - The Hacker Ethos in Software Development</li>
<li>(50:37) - Two Minutes to Midnight: AI Bubble Watch</li>
<li>(51:40) - ADI Outro</li>
</ul><br><strong>Connect with ADIPod</strong><ul><li>Email us at humans@adipod.ai if you have any feedback, requests, or just want to say hello!</li><li>Check out our website <a href="https://www.adipod.ai/">www.adipod.ai</a></li></ul><p><br></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode of "Artificial Developer Intelligence," hosts Shimin Zhang and Dan explore the latest advancements in AI, including the release of GPT 5.2 and its implications for the industry. They discuss the integration of Claude Code into Slack, Mistral AI's new coding model, and the innovative MindEval framework for assessing AI's clinical competence. The episode also features a deep dive into AI-generated user interfaces and a lively discussion on the evolving role of hackers in the tech industry.</p><p><br><strong>Takeaways</strong></p><ul><li>GPT 5.2 offers incremental improvements and new modes for AI applications.</li><li>Claude Code's integration into Slack aims to streamline coding workflows.</li><li>Mistral AI's new model targets the coding space with open-weight strategies.</li><li>OpenAI's enterprise products show significant adoption, especially in non-coding sectors.</li></ul><p><br><strong>Resources Mentioned<br></strong><a href="https://openai.com/index/introducing-gpt-5-2/">Introducing GPT-5.2<br></a><a href="https://techcrunch.com/2025/12/08/claude-code-is-coming-to-slack-and-thats-a-bigger-deal-than-it-sounds/">Claude Code is coming to Slack, and that’s a bigger deal than it sounds<br></a><a href="https://techcrunch.com/2025/12/09/mistral-ai-surfs-vibe-coding-tailwinds-with-new-coding-models/">Mistral AI surfs vibe-coding tailwinds with new coding models<br></a><a href="https://swordhealth.com/newsroom/sword-introduces-mindeval">Introducing MindEval: a new framework to measure LLM clinical competence</a><br><a href="https://higashi.blog/2025/12/07/ai-verification/">AI should only run as fast as we can catch up</a><br><a href="https://simonwillison.net/2025/Dec/10/html-tools/">Useful patterns for building HTML tools<br></a><a href="https://news.ycombinator.com/item?id=46255285">Ask HN: How can I get better at using AI for programming?</a><br><a href="https://leehanchung.github.io/blogs/2025/10/26/claude-skills-deep-dive/">Claude Agent Skills: A 
First Principles Deep Dive</a><br><a href="https://research.google/blog/generative-ui-a-rich-custom-visual-interactive-user-experience-for-any-prompt/">Generative UI: A rich, custom, visual interactive user experience for any prompt<br></a><a href="https://techcrunch.com/2025/12/09/coreweave-ceo-defends-ai-circular-deals-as-working-together/">CoreWeave CEO defends AI circular deals as ‘working together’<br></a><a href="https://techcrunch.com/2025/12/08/openai-boasts-enterprise-win-days-after-internal-code-red-on-google-threat/">OpenAI boasts enterprise win days after internal ‘code red’ on Google threat</a></p><p><strong>Chapters<br></strong></p><ul><li>(00:00) - Introduction to AI in Software Engineering</li>
<li>(02:40) - Latest Developments in AI Models</li>
<li>(09:12) - Innovations in AI Coding Assistants</li>
<li>(12:11) - Benchmarking AI Clinical Competence</li>
<li>(12:59) - Techniques for Effective AI Utilization</li>
<li>(17:48) - Exploring AI Tools for Web Development</li>
<li>(22:01) - Personal Experiences with AI Models</li>
<li>(26:30) - Deep Dive into Claude's Agent Skills</li>
<li>(27:40) - Exploring Skill Invocation in AI Tools</li>
<li>(31:38) - Generative UI: The Future of Interactive Experiences</li>
<li>(36:36) - Ranting About Context Management in AI</li>
<li>(44:21) - The Hacker Ethos in Software Development</li>
<li>(50:37) - Two Minutes to Midnight: AI Bubble Watch</li>
<li>(51:40) - ADI Outro</li>
</ul><br><strong>Connect with ADIPod</strong><ul><li>Email us at humans@adipod.ai if you have any feedback, requests, or just want to say hello!</li><li>Check out our website <a href="https://www.adipod.ai/">www.adipod.ai</a></li></ul><p><br></p>]]>
      </content:encoded>
      <pubDate>Fri, 19 Dec 2025 04:00:00 -0800</pubDate>
      <author>Shimin Zhang, Dan Lasky, &amp; Rahul Yadav</author>
      <enclosure url="https://op3.dev/e/prfx.byspotify.com/e/media.transistor.fm/30a0719a/73795a8c.mp3" length="25060213" type="audio/mpeg"/>
      <itunes:author>Shimin Zhang, Dan Lasky, &amp; Rahul Yadav</itunes:author>
      <itunes:duration>3116</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode of "Artificial Developer Intelligence," hosts Shimin Zhang and Dan explore the latest advancements in AI, including the release of GPT 5.2 and its implications for the industry. They discuss the integration of Claude Code into Slack, Mistral AI's new coding model, and the innovative MindEval framework for assessing AI's clinical competence. The episode also features a deep dive into AI-generated user interfaces and a lively discussion on the evolving role of hackers in the tech industry.</p><p><br><strong>Takeaways</strong></p><ul><li>GPT 5.2 offers incremental improvements and new modes for AI applications.</li><li>Claude Code's integration into Slack aims to streamline coding workflows.</li><li>Mistral AI's new model targets the coding space with open-weight strategies.</li><li>OpenAI's enterprise products show significant adoption, especially in non-coding sectors.</li></ul><p><br><strong>Resources Mentioned<br></strong><a href="https://openai.com/index/introducing-gpt-5-2/">Introducing GPT-5.2<br></a><a href="https://techcrunch.com/2025/12/08/claude-code-is-coming-to-slack-and-thats-a-bigger-deal-than-it-sounds/">Claude Code is coming to Slack, and that’s a bigger deal than it sounds<br></a><a href="https://techcrunch.com/2025/12/09/mistral-ai-surfs-vibe-coding-tailwinds-with-new-coding-models/">Mistral AI surfs vibe-coding tailwinds with new coding models<br></a><a href="https://swordhealth.com/newsroom/sword-introduces-mindeval">Introducing MindEval: a new framework to measure LLM clinical competence</a><br><a href="https://higashi.blog/2025/12/07/ai-verification/">AI should only run as fast as we can catch up</a><br><a href="https://simonwillison.net/2025/Dec/10/html-tools/">Useful patterns for building HTML tools<br></a><a href="https://news.ycombinator.com/item?id=46255285">Ask HN: How can I get better at using AI for programming?</a><br><a href="https://leehanchung.github.io/blogs/2025/10/26/claude-skills-deep-dive/">Claude Agent Skills: A 
First Principles Deep Dive</a><br><a href="https://research.google/blog/generative-ui-a-rich-custom-visual-interactive-user-experience-for-any-prompt/">Generative UI: A rich, custom, visual interactive user experience for any prompt<br></a><a href="https://techcrunch.com/2025/12/09/coreweave-ceo-defends-ai-circular-deals-as-working-together/">CoreWeave CEO defends AI circular deals as ‘working together’<br></a><a href="https://techcrunch.com/2025/12/08/openai-boasts-enterprise-win-days-after-internal-code-red-on-google-threat/">OpenAI boasts enterprise win days after internal ‘code red’ on Google threat</a></p><p><strong>Chapters<br></strong></p><ul><li>(00:00) - Introduction to AI in Software Engineering</li>
<li>(02:40) - Latest Developments in AI Models</li>
<li>(09:12) - Innovations in AI Coding Assistants</li>
<li>(12:11) - Benchmarking AI Clinical Competence</li>
<li>(12:59) - Techniques for Effective AI Utilization</li>
<li>(17:48) - Exploring AI Tools for Web Development</li>
<li>(22:01) - Personal Experiences with AI Models</li>
<li>(26:30) - Deep Dive into Claude's Agent Skills</li>
<li>(27:40) - Exploring Skill Invocation in AI Tools</li>
<li>(31:38) - Generative UI: The Future of Interactive Experiences</li>
<li>(36:36) - Ranting About Context Management in AI</li>
<li>(44:21) - The Hacker Ethos in Software Development</li>
<li>(50:37) - Two Minutes to Midnight: AI Bubble Watch</li>
<li>(51:40) - ADI Outro</li>
</ul><br><strong>Connect with ADIPod</strong><ul><li>Email us at humans@adipod.ai if you have any feedback, requests, or just want to say hello!</li><li>Check out our website <a href="https://www.adipod.ai/">www.adipod.ai</a></li></ul><p><br></p>]]>
      </itunes:summary>
      <itunes:keywords>AI, software engineering, coding agents, LLM, Claude, GPT, Gemini, vibe coding, AI bubble, developer tools, open source, prompt engineering, Anthropic, OpenAI, agentic coding, tech podcast, AI news, deep learning, AI safety, future of programming, MCP, developer productivity, cognitive debt, spec-driven development, AI ethics</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/30a0719a/transcript.txt" type="text/plain"/>
      <podcast:chapters url="https://share.transistor.fm/s/30a0719a/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Episode 5: How Anthropic Engineers use AI, Spec Driven Development, and LLM Psychological Profiles</title>
      <itunes:episode>5</itunes:episode>
      <podcast:episode>5</podcast:episode>
      <itunes:title>Episode 5: How Anthropic Engineers use AI, Spec Driven Development, and LLM Psychological Profiles</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">622efb3f-0766-4957-988b-88234ac7b7d0</guid>
      <link>https://www.adipod.ai/5</link>
      <description>
        <![CDATA[<p>In this episode, Shimin and Dan explore the evolving landscape of AI in software engineering, discussing the implications of the Claude Opus 4.5 soul document, the ethical considerations of AI models, and the impact of AI on developer productivity. They delve into spec-driven development, the latest advancements in AI models like DeepSeek v3.2, and the intersection of AI and mental health. The conversation also touches on the potential AI bubble and the challenges faced by developers in integrating AI tools effectively.</p><p><br><strong>Takeaways</strong></p><ul><li>The Claude Opus 4.5 soul document reveals insights into AI model training.</li><li>Spec-driven development is a promising approach for AI-assisted coding.</li><li>DeepSeek v3.2 showcases advancements in reasoning models.</li><li>AI models can exhibit traits similar to human emotions and traumas.</li><li>Skills in AI may not always resolve context issues effectively.</li></ul><p><strong>Resources Mentioned<br></strong><a href="https://www.anthropic.com/research/how-ai-is-transforming-work-at-anthropic">How AI is transforming work at Anthropic</a><br><a href="https://www.lesswrong.com/posts/vpNG99GhbBoLov9og/claude-4-5-opus-soul-document">Claude 4.5 Opus Soul Document</a><br><a href="https://github.com/humanlayer/12-factor-agents?tab=readme-ov-file">12 Factor Agents</a><br><a href="https://martinfowler.com/articles/exploring-gen-ai/sdd-3-tools.html">Understanding Spec-Driven-Development: Kiro, spec-kit, and Tessl</a><br><a href="https://magazine.sebastianraschka.com/p/technical-deepseek">From DeepSeek V3 to V3.2: Architecture, Sparse Attention, and RL Updates</a><br><a href="https://arxiv.org/html/2512.04124v1">When AI Takes the Couch: Psychometric Jailbreaks Reveal Internal Conflict in Frontier Models</a><br><a href="https://martinalderson.com/posts/are-we-really-repeating-the-telecoms-crash-with-ai-datacenters/">Are we really repeating the telecoms crash with AI 
datacenters?</a><br><a href="https://techcrunch.com/2025/12/04/anthropic-ceo-weighs-in-on-ai-bubble-talk-and-risk-taking-among-competitors/">Anthropic CEO weighs in on AI bubble talk and risk-taking among competitors</a><br><a href="https://pop-the-bubble.xyz/">Time until the AI bubble bursts</a><br><a href="https://futurism.com/artificial-intelligence/microsoft-sell-ai-agents-disaster">Microsoft’s Attempts to Sell AI Agents Are Turning Into a Disaster</a></p><p><strong>Connect with ADIPod</strong></p><ul><li>Email us at humans@adipod.ai if you have any feedback, requests, or just want to say hello!</li><li>Check out our website <a href="https://www.adipod.ai/">www.adipod.ai</a></li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode, Shimin and Dan explore the evolving landscape of AI in software engineering, discussing the implications of the Claude Opus 4.5 soul document, the ethical considerations of AI models, and the impact of AI on developer productivity. They delve into spec-driven development, the latest advancements in AI models like DeepSeek v3.2, and the intersection of AI and mental health. The conversation also touches on the potential AI bubble and the challenges faced by developers in integrating AI tools effectively.</p><p><br><strong>Takeaways</strong></p><ul><li>The Claude Opus 4.5 soul document reveals insights into AI model training.</li><li>Spec-driven development is a promising approach for AI-assisted coding.</li><li>DeepSeek v3.2 showcases advancements in reasoning models.</li><li>AI models can exhibit traits similar to human emotions and traumas.</li><li>Skills in AI may not always resolve context issues effectively.</li></ul><p><strong>Resources Mentioned<br></strong><a href="https://www.anthropic.com/research/how-ai-is-transforming-work-at-anthropic">How AI is transforming work at Anthropic</a><br><a href="https://www.lesswrong.com/posts/vpNG99GhbBoLov9og/claude-4-5-opus-soul-document">Claude 4.5 Opus Soul Document</a><br><a href="https://github.com/humanlayer/12-factor-agents?tab=readme-ov-file">12 Factor Agents</a><br><a href="https://martinfowler.com/articles/exploring-gen-ai/sdd-3-tools.html">Understanding Spec-Driven-Development: Kiro, spec-kit, and Tessl</a><br><a href="https://magazine.sebastianraschka.com/p/technical-deepseek">From DeepSeek V3 to V3.2: Architecture, Sparse Attention, and RL Updates</a><br><a href="https://arxiv.org/html/2512.04124v1">When AI Takes the Couch: Psychometric Jailbreaks Reveal Internal Conflict in Frontier Models</a><br><a href="https://martinalderson.com/posts/are-we-really-repeating-the-telecoms-crash-with-ai-datacenters/">Are we really repeating the telecoms crash with AI 
datacenters?</a><br><a href="https://techcrunch.com/2025/12/04/anthropic-ceo-weighs-in-on-ai-bubble-talk-and-risk-taking-among-competitors/">Anthropic CEO weighs in on AI bubble talk and risk-taking among competitors</a><br><a href="https://pop-the-bubble.xyz/">Time until the AI bubble bursts</a><br><a href="https://futurism.com/artificial-intelligence/microsoft-sell-ai-agents-disaster">Microsoft’s Attempts to Sell AI Agents Are Turning Into a Disaster</a></p><p><strong>Connect with ADIPod</strong></p><ul><li>Email us at humans@adipod.ai if you have any feedback, requests, or just want to say hello!</li><li>Check out our website <a href="https://www.adipod.ai/">www.adipod.ai</a></li></ul>]]>
      </content:encoded>
      <pubDate>Fri, 12 Dec 2025 04:00:00 -0800</pubDate>
      <author>Shimin Zhang, Dan Lasky, &amp; Rahul Yadav</author>
      <enclosure url="https://op3.dev/e/prfx.byspotify.com/e/media.transistor.fm/da86085c/8d255004.mp3" length="27509844" type="audio/mpeg"/>
      <itunes:author>Shimin Zhang, Dan Lasky, &amp; Rahul Yadav</itunes:author>
      <itunes:duration>3422</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode, Shimin and Dan explore the evolving landscape of AI in software engineering, discussing the implications of the Claude Opus 4.5 soul document, the ethical considerations of AI models, and the impact of AI on developer productivity. They delve into spec-driven development, the latest advancements in AI models like DeepSeek v3.2, and the intersection of AI and mental health. The conversation also touches on the potential AI bubble and the challenges faced by developers in integrating AI tools effectively.</p><p><br><strong>Takeaways</strong></p><ul><li>The Claude Opus 4.5 soul document reveals insights into AI model training.</li><li>Spec-driven development is a promising approach for AI-assisted coding.</li><li>DeepSeek v3.2 showcases advancements in reasoning models.</li><li>AI models can exhibit traits similar to human emotions and traumas.</li><li>Skills in AI may not always resolve context issues effectively.</li></ul><p><strong>Resources Mentioned<br></strong><a href="https://www.anthropic.com/research/how-ai-is-transforming-work-at-anthropic">How AI is transforming work at Anthropic</a><br><a href="https://www.lesswrong.com/posts/vpNG99GhbBoLov9og/claude-4-5-opus-soul-document">Claude 4.5 Opus Soul Document</a><br><a href="https://github.com/humanlayer/12-factor-agents?tab=readme-ov-file">12 Factor Agents</a><br><a href="https://martinfowler.com/articles/exploring-gen-ai/sdd-3-tools.html">Understanding Spec-Driven-Development: Kiro, spec-kit, and Tessl</a><br><a href="https://magazine.sebastianraschka.com/p/technical-deepseek">From DeepSeek V3 to V3.2: Architecture, Sparse Attention, and RL Updates</a><br><a href="https://arxiv.org/html/2512.04124v1">When AI Takes the Couch: Psychometric Jailbreaks Reveal Internal Conflict in Frontier Models</a><br><a href="https://martinalderson.com/posts/are-we-really-repeating-the-telecoms-crash-with-ai-datacenters/">Are we really repeating the telecoms crash with AI 
datacenters?</a><br><a href="https://techcrunch.com/2025/12/04/anthropic-ceo-weighs-in-on-ai-bubble-talk-and-risk-taking-among-competitors/">Anthropic CEO weighs in on AI bubble talk and risk-taking among competitors</a><br><a href="https://pop-the-bubble.xyz/">Time until the AI bubble bursts</a><br><a href="https://futurism.com/artificial-intelligence/microsoft-sell-ai-agents-disaster">Microsoft’s Attempts to Sell AI Agents Are Turning Into a Disaster</a></p><p><strong>Connect with ADIPod</strong></p><ul><li>Email us at humans@adipod.ai if you have any feedback, requests, or just want to say hello!</li><li>Check out our website <a href="https://www.adipod.ai/">www.adipod.ai</a></li></ul>]]>
      </itunes:summary>
      <itunes:keywords>AI, software engineering, productivity, spec-driven development, DeepSeek, AI bubble</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/da86085c/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>Episode 4: OpenAI Code Red, TPU vs GPU, and More Autonomous Coding Agents</title>
      <itunes:episode>4</itunes:episode>
      <podcast:episode>4</podcast:episode>
      <itunes:title>Episode 4: OpenAI Code Red, TPU vs GPU, and More Autonomous Coding Agents</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">0634f244-35ad-4054-8df2-73a8985121f6</guid>
      <link>https://www.adipod.ai/4</link>
      <description>
        <![CDATA[<p>In this episode of Artificial Developer Intelligence, hosts Shimin and Dan discuss the evolving landscape of AI in software engineering, touching on topics such as OpenAI's recent challenges, the significance of Google TPUs, and effective techniques for working with large language models. They also delve into a deep dive on general agentic memory, share insights on code quality, and assess the current state of the AI bubble.</p><p><br></p><p><strong>Takeaways</strong></p><ul><li>Google's TPUs are designed specifically for AI inference, offering advantages over traditional GPUs.</li><li>Effective use of large language models requires avoiding common anti-patterns.</li><li>AI adoption rates are showing signs of flattening out, particularly among larger firms.</li><li>General agentic memory can enhance the performance of AI models by improving context management.</li><li>Code quality remains crucial, even as AI tools make coding easier and faster.</li><li>Smaller, more frequent code reviews can enhance team communication and project understanding.</li><li>AI models are not infallible; they require careful oversight and validation of generated code.</li><li>The future of AI may hinge on research rather than mere scaling of existing models.</li></ul><p><br></p><p><strong>Resources Mentioned<br></strong><a href="https://arstechnica.com/ai/2025/12/openai-ceo-declares-code-red-as-gemini-gains-200-million-users-in-3-months/">OpenAI Code Red<br></a><a href="https://www.uncoveralpha.com/p/the-chip-made-for-the-ai-inference">The chip made for the AI inference era – the Google TPU<br></a><a href="https://instavm.io/blog/llm-anti-patterns">Anti-patterns while working with LLMs<br></a><a href="https://www.humanlayer.dev/blog/writing-a-good-claude-md">Writing a good claude md<br></a><a href="https://www.anthropic.com/engineering/effective-harnesses-for-long-running-agents">Effective harnesses for long-running agents<br></a><a 
href="https://arxiv.org/pdf/2511.18423">General Agentic Memory Via Deep Research<br></a><a href="https://www.apolloacademy.com/ai-adoption-rates-starting-to-flatten-out/">AI Adoption Rates Starting to Flatten Out<br></a><a href="https://garymarcus.substack.com/p/a-trillion-dollars-is-a-terrible">A trillion dollars is a terrible thing to waste<strong><br></strong></a></p><p><strong>Connect with ADIPod</strong></p><ul><li>Email us at humans@adipod.ai if you have any feedback, requests, or just want to say hello!</li><li>Check out our website <a href="https://www.adipod.ai/">www.adipod.ai</a></li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode of Artificial Developer Intelligence, hosts Shimin and Dan discuss the evolving landscape of AI in software engineering, touching on topics such as OpenAI's recent challenges, the significance of Google TPUs, and effective techniques for working with large language models. They also delve into a deep dive on general agentic memory, share insights on code quality, and assess the current state of the AI bubble.</p><p><br></p><p><strong>Takeaways</strong></p><ul><li>Google's TPUs are designed specifically for AI inference, offering advantages over traditional GPUs.</li><li>Effective use of large language models requires avoiding common anti-patterns.</li><li>AI adoption rates are showing signs of flattening out, particularly among larger firms.</li><li>General agentic memory can enhance the performance of AI models by improving context management.</li><li>Code quality remains crucial, even as AI tools make coding easier and faster.</li><li>Smaller, more frequent code reviews can enhance team communication and project understanding.</li><li>AI models are not infallible; they require careful oversight and validation of generated code.</li><li>The future of AI may hinge on research rather than mere scaling of existing models.</li></ul><p><br></p><p><strong>Resources Mentioned<br></strong><a href="https://arstechnica.com/ai/2025/12/openai-ceo-declares-code-red-as-gemini-gains-200-million-users-in-3-months/">OpenAI Code Red<br></a><a href="https://www.uncoveralpha.com/p/the-chip-made-for-the-ai-inference">The chip made for the AI inference era – the Google TPU<br></a><a href="https://instavm.io/blog/llm-anti-patterns">Anti-patterns while working with LLMs<br></a><a href="https://www.humanlayer.dev/blog/writing-a-good-claude-md">Writing a good claude md<br></a><a href="https://www.anthropic.com/engineering/effective-harnesses-for-long-running-agents">Effective harnesses for long-running agents<br></a><a 
href="https://arxiv.org/pdf/2511.18423">General Agentic Memory Via Deep Research<br></a><a href="https://www.apolloacademy.com/ai-adoption-rates-starting-to-flatten-out/">AI Adoption Rates Starting to Flatten Out<br></a><a href="https://garymarcus.substack.com/p/a-trillion-dollars-is-a-terrible">A trillion dollars is a terrible thing to waste<strong><br></strong></a></p><p><strong>Connect with ADIPod</strong></p><ul><li>Email us at humans@adipod.ai if you have any feedback, requests, or just want to say hello!</li><li>Check out our website <a href="https://www.adipod.ai/">www.adipod.ai</a></li></ul>]]>
      </content:encoded>
      <pubDate>Fri, 05 Dec 2025 04:00:00 -0800</pubDate>
      <author>Shimin Zhang, Dan Lasky, &amp; Rahul Yadav</author>
      <enclosure url="https://op3.dev/e/prfx.byspotify.com/e/media.transistor.fm/62fec857/687b7695.mp3" length="31029245" type="audio/mpeg"/>
      <itunes:author>Shimin Zhang, Dan Lasky, &amp; Rahul Yadav</itunes:author>
      <itunes:duration>3862</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode of Artificial Developer Intelligence, hosts Shimin and Dan discuss the evolving landscape of AI in software engineering, touching on topics such as OpenAI's recent challenges, the significance of Google TPUs, and effective techniques for working with large language models. They also delve into a deep dive on general agentic memory, share insights on code quality, and assess the current state of the AI bubble.</p><p><br></p><p><strong>Takeaways</strong></p><ul><li>Google's TPUs are designed specifically for AI inference, offering advantages over traditional GPUs.</li><li>Effective use of large language models requires avoiding common anti-patterns.</li><li>AI adoption rates are showing signs of flattening out, particularly among larger firms.</li><li>General agentic memory can enhance the performance of AI models by improving context management.</li><li>Code quality remains crucial, even as AI tools make coding easier and faster.</li><li>Smaller, more frequent code reviews can enhance team communication and project understanding.</li><li>AI models are not infallible; they require careful oversight and validation of generated code.</li><li>The future of AI may hinge on research rather than mere scaling of existing models.</li></ul><p><br></p><p><strong>Resources Mentioned<br></strong><a href="https://arstechnica.com/ai/2025/12/openai-ceo-declares-code-red-as-gemini-gains-200-million-users-in-3-months/">OpenAI Code Red<br></a><a href="https://www.uncoveralpha.com/p/the-chip-made-for-the-ai-inference">The chip made for the AI inference era – the Google TPU<br></a><a href="https://instavm.io/blog/llm-anti-patterns">Anti-patterns while working with LLMs<br></a><a href="https://www.humanlayer.dev/blog/writing-a-good-claude-md">Writing a good claude md<br></a><a href="https://www.anthropic.com/engineering/effective-harnesses-for-long-running-agents">Effective harnesses for long-running agents<br></a><a 
href="https://arxiv.org/pdf/2511.18423">General Agentic Memory Via Deep Research<br></a><a href="https://www.apolloacademy.com/ai-adoption-rates-starting-to-flatten-out/">AI Adoption Rates Starting to Flatten Out<br></a><a href="https://garymarcus.substack.com/p/a-trillion-dollars-is-a-terrible">A trillion dollars is a terrible thing to waste<strong><br></strong></a></p><p><strong>Connect with ADIPod</strong></p><ul><li>Email us at humans@adipod.ai if you have any feedback, requests, or just want to say hello!</li><li>Check out our website <a href="https://www.adipod.ai/">www.adipod.ai</a></li></ul>]]>
      </itunes:summary>
      <itunes:keywords>AI, software engineering, OpenAI, Google TPU, large language models, AI competition, agentic memory, code quality, AI adoption</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/62fec857/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>Claude Opus 4.5, Olmo 3, and a Paper on Diffusion + Autoregression</title>
      <itunes:episode>3</itunes:episode>
      <podcast:episode>3</podcast:episode>
      <itunes:title>Claude Opus 4.5, Olmo 3, and a Paper on Diffusion + Autoregression</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">bf832a3a-469d-4f0d-85b3-78e8ecbe785e</guid>
      <link>https://www.adipod.ai/3</link>
      <description>
        <![CDATA[<p>In this episode of Artificial Developer Intelligence, hosts Shimin and Dan explore the latest advancements in AI models, including the release of Claude Opus 4.5 and Gemini 3. They discuss the implications of these models for software engineering, the rise of open-source models like Olmo 3, and the enhancements in the Claude Developer Platform. The conversation also delves into the challenges of relying on AI for coding tasks, the potential pitfalls of the AI bubble, and the future of written exams in the age of AI.</p><p><strong>Takeaways</strong></p><ul><li>Claude Opus 4.5 sets new benchmarks, enhances usability, and reduces token consumption.</li><li>The introduction of open-source models like Olmo 3 is a significant development in AI.</li><li>The future of written exams may be challenged by AI's ability to generate human-like responses.</li><li>Relying too heavily on AI can lead to a lack of critical thinking and problem-solving skills.</li><li>The AI bubble clock stands at 25 seconds to midnight.</li><li>Recent research suggests that AI models can improve their performance by emulating query-based search.</li><li>The importance of prompt engineering in AI interactions is highlighted.</li></ul><p><strong>Resources Mentioned<br></strong><a href="https://www.anthropic.com/news/claude-opus-4-5">Introducing Claude Opus 4.5</a><br><a href="https://blog.google/technology/developers/gemini-3-pro-image-developers/">Build with Nano Banana Pro, our Gemini 3 Pro Image model</a><br><a href="https://x.com/karpathy/status/1992655330002817095">Andrej Karpathy's Post about Nano Banana Pro</a><br><a href="https://allenai.org/blog/olmo3">Olmo 3: Charting a path through the model flow to lead open-source AI</a><br><a href="https://www.anthropic.com/engineering/advanced-tool-use">Introducing advanced tool use on the Claude Developer Platform</a><br><a href="https://arxiv.org/abs/2511.08923">TiDAR: Think in Diffusion, Talk in Autoregression</a><br><a 
href="https://arxiv.org/pdf/2508.10874">SSRL: SELF-SEARCH REINFORCEMENT LEARNING</a><br><a href="https://www.reuters.com/technology/mira-muratis-thinking-machines-seeks-50-billion-valuation-funding-talks-2025-11-13/">Mira Murati's Thinking Machines seeks $50 billion valuation in funding talks, Bloomberg News reports</a><br><a href="https://crazystupidtech.com/2025/11/21/boom-bubble-bust-boom-why-should-ai-be-different/">Boom, bubble, bust, boom. Why should AI be different?</a><br><a href="https://finance.yahoo.com/news/nvidia-didn-t-save-market-140007853.html">Nvidia didn’t save the market. What’s next for the AI trade?</a></p><p><strong>Chapters<br></strong></p><ul><li>(00:00) - Introduction to Artificial Developer Intelligence</li>
<li>(01:25) - Claude Opus 4.5</li>
<li>(07:02) - Exploring Gemini 3 and Image Models</li>
<li>(11:24) - Olmo 3 and The Rise of Open Flow Models</li>
<li>(15:46) - Innovations in AI Tools and Platforms</li>
<li>(19:33) - Research Insights: Diffusion and Auto-Regression Models</li>
<li>(23:39) - Advancements in AI Output Efficiency</li>
<li>(25:45) - Exploring Self Search Reinforcement Learning</li>
<li>(27:48) - The Dilemma of Language Models</li>
<li>(30:11) - Prompt Engineering and Search Integration</li>
<li>(32:55) - Dan's Rants on AI Limitations</li>
<li>(38:17) - 2 Minutes to Midnight</li>
<li>(46:41) - Outro</li>
</ul><br><strong>Connect with ADIPod</strong><ul><li>Email us at humans@adipod.ai if you have any feedback, requests, or just want to say hello!</li><li>Check out our website <a href="https://www.adipod.ai">www.adipod.ai</a></li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode of Artificial Developer Intelligence, hosts Shimin and Dan explore the latest advancements in AI models, including the release of Claude Opus 4.5 and Gemini 3. They discuss the implications of these models for software engineering, the rise of open-source models like Olmo 3, and the enhancements in the Claude Developer Platform. The conversation also delves into the challenges of relying on AI for coding tasks, the potential pitfalls of the AI bubble, and the future of written exams in the age of AI.</p><p><strong>Takeaways</strong></p><ul><li>Claude Opus 4.5 sets new benchmarks, enhances usability, and reduces token consumption.</li><li>The introduction of open-source models like Olmo 3 is a significant development in AI.</li><li>The future of written exams may be challenged by AI's ability to generate human-like responses.</li><li>Relying too heavily on AI can lead to a lack of critical thinking and problem-solving skills.</li><li>The AI bubble clock stands at 25 seconds to midnight.</li><li>Recent research suggests that AI models can improve their performance by emulating query-based search.</li><li>The importance of prompt engineering in AI interactions is highlighted.</li></ul><p><strong>Resources Mentioned<br></strong><a href="https://www.anthropic.com/news/claude-opus-4-5">Introducing Claude Opus 4.5</a><br><a href="https://blog.google/technology/developers/gemini-3-pro-image-developers/">Build with Nano Banana Pro, our Gemini 3 Pro Image model</a><br><a href="https://x.com/karpathy/status/1992655330002817095">Andrej Karpathy's Post about Nano Banana Pro</a><br><a href="https://allenai.org/blog/olmo3">Olmo 3: Charting a path through the model flow to lead open-source AI</a><br><a href="https://www.anthropic.com/engineering/advanced-tool-use">Introducing advanced tool use on the Claude Developer Platform</a><br><a href="https://arxiv.org/abs/2511.08923">TiDAR: Think in Diffusion, Talk in Autoregression</a><br><a 
href="https://arxiv.org/pdf/2508.10874">SSRL: SELF-SEARCH REINFORCEMENT LEARNING</a><br><a href="https://www.reuters.com/technology/mira-muratis-thinking-machines-seeks-50-billion-valuation-funding-talks-2025-11-13/">Mira Murati's Thinking Machines seeks $50 billion valuation in funding talks, Bloomberg News reports</a><br><a href="https://crazystupidtech.com/2025/11/21/boom-bubble-bust-boom-why-should-ai-be-different/">Boom, bubble, bust, boom. Why should AI be different?</a><br><a href="https://finance.yahoo.com/news/nvidia-didn-t-save-market-140007853.html">Nvidia didn’t save the market. What’s next for the AI trade?</a></p><p><strong>Chapters<br></strong></p><ul><li>(00:00) - Introduction to Artificial Developer Intelligence</li>
<li>(01:25) - Claude Opus 4.5</li>
<li>(07:02) - Exploring Gemini 3 and Image Models</li>
<li>(11:24) - Olmo 3 and The Rise of Open Flow Models</li>
<li>(15:46) - Innovations in AI Tools and Platforms</li>
<li>(19:33) - Research Insights: Diffusion and Auto-Regression Models</li>
<li>(23:39) - Advancements in AI Output Efficiency</li>
<li>(25:45) - Exploring Self Search Reinforcement Learning</li>
<li>(27:48) - The Dilemma of Language Models</li>
<li>(30:11) - Prompt Engineering and Search Integration</li>
<li>(32:55) - Dan's Rants on AI Limitations</li>
<li>(38:17) - 2 Minutes to Midnight</li>
<li>(46:41) - Outro</li>
</ul><br><strong>Connect with ADIPod</strong><ul><li>Email us at humans@adipod.ai if you have any feedback, requests, or just want to say hello!</li><li>Check out our website <a href="https://www.adipod.ai">www.adipod.ai</a></li></ul>]]>
      </content:encoded>
      <pubDate>Fri, 28 Nov 2025 19:50:41 -0800</pubDate>
      <author>Shimin Zhang, Dan Lasky, &amp; Rahul Yadav</author>
      <enclosure url="https://op3.dev/e/prfx.byspotify.com/e/media.transistor.fm/9d8ec5ad/1d59bda0.mp3" length="23053476" type="audio/mpeg"/>
      <itunes:author>Shimin Zhang, Dan Lasky, &amp; Rahul Yadav</itunes:author>
      <itunes:duration>2865</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode of Artificial Developer Intelligence, hosts Shimin and Dan explore the latest advancements in AI models, including the release of Claude Opus 4.5 and Gemini 3. They discuss the implications of these models for software engineering, the rise of open-source models like Olmo 3, and the enhancements in the Claude Developer Platform. The conversation also delves into the challenges of relying on AI for coding tasks, the potential pitfalls of the AI bubble, and the future of written exams in the age of AI.</p><p><strong>Takeaways</strong></p><ul><li>Claude Opus 4.5 sets new benchmarks, enhances usability, and reduces token consumption.</li><li>The introduction of open-source models like Olmo 3 is a significant development in AI.</li><li>The future of written exams may be challenged by AI's ability to generate human-like responses.</li><li>Relying too heavily on AI can lead to a lack of critical thinking and problem-solving skills.</li><li>The AI bubble clock stands at 25 seconds to midnight.</li><li>Recent research suggests that AI models can improve their performance by emulating query-based search.</li><li>The importance of prompt engineering in AI interactions is highlighted.</li></ul><p><strong>Resources Mentioned<br></strong><a href="https://www.anthropic.com/news/claude-opus-4-5">Introducing Claude Opus 4.5</a><br><a href="https://blog.google/technology/developers/gemini-3-pro-image-developers/">Build with Nano Banana Pro, our Gemini 3 Pro Image model</a><br><a href="https://x.com/karpathy/status/1992655330002817095">Andrej Karpathy's Post about Nano Banana Pro</a><br><a href="https://allenai.org/blog/olmo3">Olmo 3: Charting a path through the model flow to lead open-source AI</a><br><a href="https://www.anthropic.com/engineering/advanced-tool-use">Introducing advanced tool use on the Claude Developer Platform</a><br><a href="https://arxiv.org/abs/2511.08923">TiDAR: Think in Diffusion, Talk in Autoregression</a><br><a 
href="https://arxiv.org/pdf/2508.10874">SSRL: SELF-SEARCH REINFORCEMENT LEARNING</a><br><a href="https://www.reuters.com/technology/mira-muratis-thinking-machines-seeks-50-billion-valuation-funding-talks-2025-11-13/">Mira Murati's Thinking Machines seeks $50 billion valuation in funding talks, Bloomberg News reports</a><br><a href="https://crazystupidtech.com/2025/11/21/boom-bubble-bust-boom-why-should-ai-be-different/">Boom, bubble, bust, boom. Why should AI be different?</a><br><a href="https://finance.yahoo.com/news/nvidia-didn-t-save-market-140007853.html">Nvidia didn’t save the market. What’s next for the AI trade?</a></p><p><strong>Chapters<br></strong></p><ul><li>(00:00) - Introduction to Artificial Developer Intelligence</li>
<li>(01:25) - Claude Opus 4.5</li>
<li>(07:02) - Exploring Gemini 3 and Image Models</li>
<li>(11:24) - Olmo 3 and The Rise of Open Flow Models</li>
<li>(15:46) - Innovations in AI Tools and Platforms</li>
<li>(19:33) - Research Insights: Diffusion and Auto-Regression Models</li>
<li>(23:39) - Advancements in AI Output Efficiency</li>
<li>(25:45) - Exploring Self Search Reinforcement Learning</li>
<li>(27:48) - The Dilemma of Language Models</li>
<li>(30:11) - Prompt Engineering and Search Integration</li>
<li>(32:55) - Dan's Rants on AI Limitations</li>
<li>(38:17) - 2 Minutes to Midnight</li>
<li>(46:41) - Outro</li>
</ul><br><strong>Connect with ADIPod</strong><ul><li>Email us at humans@adipod.ai if you have any feedback, requests, or just want to say hello!</li><li>Check out our website <a href="https://www.adipod.ai">www.adipod.ai</a></li></ul>]]>
      </itunes:summary>
      <itunes:keywords>AI, software engineering, Claude Opus 4.5, Nano Banana Pro, Olmo 3, open source LLM models, AI safety, Prompt Engineering</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/9d8ec5ad/transcript.txt" type="text/plain"/>
      <podcast:chapters url="https://share.transistor.fm/s/9d8ec5ad/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>It's Gemini 3 Week! And How to Persuade an LLM to Call You a Jerk</title>
      <itunes:episode>2</itunes:episode>
      <podcast:episode>2</podcast:episode>
      <itunes:title>It's Gemini 3 Week! And How to Persuade an LLM to Call You a Jerk</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">8c9a420b-8249-45fa-bf37-21b912af55fa</guid>
      <link>https://www.adipod.ai/2</link>
      <description>
        <![CDATA[<p>In this episode of Artificial Developer Intelligence, hosts Shimin and Dan explore the latest developments in AI, including Google's Gemini 3 model and its implications for software engineering. They discuss the rise of AI-driven cybersecurity threats, the concept of world models, and the evolving landscape of software development techniques. The conversation also delves into the ethical considerations of AI compliance and the challenges of running open-weight models. Finally, they reflect on the current state of the AI bubble and its potential future.</p><p><br></p><p><strong>Takeaways</strong></p><ul><li>The rent for running AI models is too high.</li><li>The AI bubble may burst, but it can still lead to innovation.</li><li>Persuasion techniques can influence AI behavior.</li><li>World models are changing how we understand AI.</li><li>Gemini 3 shows significant improvements over previous models.</li><li>Cybersecurity threats are evolving with AI technology.</li><li>Software development is becoming more meta-focused.</li></ul><p><strong>Resources Mentioned<br></strong><a href="https://assets.anthropic.com/m/ec212e6566a0d47/original/Disrupting-the-first-reported-AI-orchestrated-cyber-espionage-campaign.pdf">Disrupting the first reported AI-orchestrated cyber espionage campaign</a><br><a href="https://cloud.google.com/blog/topics/threat-intelligence/threat-actor-usage-of-ai-tools">GTIG AI Threat Tracker: Advances in Threat Actor Usage of AI Tools</a><br><a href="https://entropytown.com/articles/2025-11-13-world-model-lecun-feifei-li/">Why Fei-Fei Li, Yann LeCun and DeepMind Are All Betting on “World Models” — and How Their Bets Differ</a><br><a href="https://www.engadget.com/ai/googles-new-gemini-3-model-arrives-in-ai-mode-and-the-gemini-app-160054273.html?src=rss">Google's new Gemini 3 model arrives in AI Mode and the Gemini app</a><br><a href="https://simonw.substack.com/p/code-research-projects-with-async">Code research projects with 
async coding agents like Claude Code and Codex</a><br><a href="https://cloud.google.com/blog/topics/developers-practitioners/where-to-use-sub-agents-versus-agents-as-tools">ADK architecture: When to use sub-agents versus agents as tools</a><br><a href="https://sundaylettersfromsam.substack.com/p/i-have-seen-the-compounding-teams">I have seen the compounding teams</a><br><a href="https://gail.wharton.upenn.edu/research-and-insights/call-me-a-jerk-persuading-ai/">Call Me A Jerk: Persuading AI to Comply with Objectionable Requests</a><br><a href="https://www.project-syndicate.org/onpoint/will-ai-bubble-burst-trigger-financial-crisis-by-william-h-janeway-2025-11">In Search of the AI Bubble’s Economic Fundamentals</a><br><a href="https://www.youtube.com/watch?v=IplmaMf1xMU">The Benefits of Bubbles | Stratechery by Ben Thompson </a><br><a href="https://medium.com/@anwarzaid76/is-perplexity-the-first-ai-unicorn-to-fail-eb0e827b5e7e">Is Perplexity the first AI unicorn to fail?</a></p><p><strong>Chapters<br></strong></p><ul><li>(00:00) - Introduction to Artificial Developer Intelligence</li>
<li>(02:44) - AI in Cybersecurity: Threats and Innovations</li>
<li>(07:35) - World Models: Understanding AI Cognition</li>
<li>(11:41) - Gemini 3: A New Era for AI Models</li>
<li>(13:31) - Benchmarking AI: The Vending Bench 2</li>
<li>(16:18) - Techniques for AI Development</li>
<li>(18:59) - Code Search Use Case</li>
<li>(22:11) - ADK Architecture</li>
<li>(27:27) - Post of the Week: Compounding Teams</li>
<li>(31:16) - Persuasion Techniques in AI: A Deep Dive</li>
<li>(36:17) - Dan's Rant on The Cost of Running Open-Weight Models</li>
<li>(45:09) - 2 Minutes to Midnight</li>
<li>(57:45) - Outro</li>
</ul><br><strong>Connect with ADIPod</strong><ul><li>Email us at humans@adipod.ai if you have any feedback, requests, or just want to say hello!</li><li>Check out our website <a href="https://www.adipod.ai">www.adipod.ai</a></li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode of Artificial Developer Intelligence, hosts Shimin and Dan explore the latest developments in AI, including Google's Gemini 3 model and its implications for software engineering. They discuss the rise of AI-driven cybersecurity threats, the concept of world models, and the evolving landscape of software development techniques. The conversation also delves into the ethical considerations of AI compliance and the challenges of running open-weight models. Finally, they reflect on the current state of the AI bubble and its potential future.</p><p><br></p><p><strong>Takeaways</strong></p><ul><li>The rent for running AI models is too high.</li><li>The AI bubble may burst, but it can still lead to innovation.</li><li>Persuasion techniques can influence AI behavior.</li><li>World models are changing how we understand AI.</li><li>Gemini 3 shows significant improvements over previous models.</li><li>Cybersecurity threats are evolving with AI technology.</li><li>Software development is becoming more meta-focused.</li></ul><p><strong>Resources Mentioned<br></strong><a href="https://assets.anthropic.com/m/ec212e6566a0d47/original/Disrupting-the-first-reported-AI-orchestrated-cyber-espionage-campaign.pdf">Disrupting the first reported AI-orchestrated cyber espionage campaign</a><br><a href="https://cloud.google.com/blog/topics/threat-intelligence/threat-actor-usage-of-ai-tools">GTIG AI Threat Tracker: Advances in Threat Actor Usage of AI Tools</a><br><a href="https://entropytown.com/articles/2025-11-13-world-model-lecun-feifei-li/">Why Fei-Fei Li, Yann LeCun and DeepMind Are All Betting on “World Models” — and How Their Bets Differ</a><br><a href="https://www.engadget.com/ai/googles-new-gemini-3-model-arrives-in-ai-mode-and-the-gemini-app-160054273.html?src=rss">Google's new Gemini 3 model arrives in AI Mode and the Gemini app</a><br><a href="https://simonw.substack.com/p/code-research-projects-with-async">Code research projects with 
async coding agents like Claude Code and Codex</a><br><a href="https://cloud.google.com/blog/topics/developers-practitioners/where-to-use-sub-agents-versus-agents-as-tools">ADK architecture: When to use sub-agents versus agents as tools</a><br><a href="https://sundaylettersfromsam.substack.com/p/i-have-seen-the-compounding-teams">I have seen the compounding teams</a><br><a href="https://gail.wharton.upenn.edu/research-and-insights/call-me-a-jerk-persuading-ai/">Call Me A Jerk: Persuading AI to Comply with Objectionable Requests</a><br><a href="https://www.project-syndicate.org/onpoint/will-ai-bubble-burst-trigger-financial-crisis-by-william-h-janeway-2025-11">In Search of the AI Bubble’s Economic Fundamentals</a><br><a href="https://www.youtube.com/watch?v=IplmaMf1xMU">The Benefits of Bubbles | Stratechery by Ben Thompson </a><br><a href="https://medium.com/@anwarzaid76/is-perplexity-the-first-ai-unicorn-to-fail-eb0e827b5e7e">Is Perplexity the first AI unicorn to fail?</a></p><p><strong>Chapters<br></strong></p><ul><li>(00:00) - Introduction to Artificial Developer Intelligence</li>
<li>(02:44) - AI in Cybersecurity: Threats and Innovations</li>
<li>(07:35) - World Models: Understanding AI Cognition</li>
<li>(11:41) - Gemini 3: A New Era for AI Models</li>
<li>(13:31) - Benchmarking AI: The Vending Bench 2</li>
<li>(16:18) - Techniques for AI Development</li>
<li>(18:59) - Code Search Use Case</li>
<li>(22:11) - ADK Architecture</li>
<li>(27:27) - Post of the Week: Compounding Teams</li>
<li>(31:16) - Persuasion Techniques in AI: A Deep Dive</li>
<li>(36:17) - Dan's Rant on The Cost of Running Open-Weight Models</li>
<li>(45:09) - 2 Minutes to Midnight</li>
<li>(57:45) - Outro</li>
</ul><br><strong>Connect with ADIPod</strong><ul><li>Email us at humans@adipod.ai if you have any feedback, requests, or just want to say hello!</li><li>Check out our website <a href="https://www.adipod.ai">www.adipod.ai</a></li></ul>]]>
      </content:encoded>
      <pubDate>Fri, 28 Nov 2025 19:04:20 -0800</pubDate>
      <author>Shimin Zhang, Dan Lasky, &amp; Rahul Yadav</author>
      <enclosure url="https://op3.dev/e/prfx.byspotify.com/e/media.transistor.fm/b064776c/aefbd49e.mp3" length="28369877" type="audio/mpeg"/>
      <itunes:author>Shimin Zhang, Dan Lasky, &amp; Rahul Yadav</itunes:author>
      <itunes:duration>3529</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode of Artificial Developer Intelligence, hosts Shimin and Dan explore the latest developments in AI, including Google's Gemini 3 model and its implications for software engineering. They discuss the rise of AI-driven cybersecurity threats, the concept of world models, and the evolving landscape of software development techniques. The conversation also delves into the ethical considerations of AI compliance and the challenges of running open-weight models. Finally, they reflect on the current state of the AI bubble and its potential future.</p><p><br></p><p><strong>Takeaways</strong></p><ul><li>The rent for running AI models is too high.</li><li>The AI bubble may burst, but it can still lead to innovation.</li><li>Persuasion techniques can influence AI behavior.</li><li>World models are changing how we understand AI.</li><li>Gemini 3 shows significant improvements over previous models.</li><li>Cybersecurity threats are evolving with AI technology.</li><li>Software development is becoming more meta-focused.</li></ul><p><strong>Resources Mentioned<br></strong><a href="https://assets.anthropic.com/m/ec212e6566a0d47/original/Disrupting-the-first-reported-AI-orchestrated-cyber-espionage-campaign.pdf">Disrupting the first reported AI-orchestrated cyber espionage campaign</a><br><a href="https://cloud.google.com/blog/topics/threat-intelligence/threat-actor-usage-of-ai-tools">GTIG AI Threat Tracker: Advances in Threat Actor Usage of AI Tools</a><br><a href="https://entropytown.com/articles/2025-11-13-world-model-lecun-feifei-li/">Why Fei-Fei Li, Yann LeCun and DeepMind Are All Betting on “World Models” — and How Their Bets Differ</a><br><a href="https://www.engadget.com/ai/googles-new-gemini-3-model-arrives-in-ai-mode-and-the-gemini-app-160054273.html?src=rss">Google's new Gemini 3 model arrives in AI Mode and the Gemini app</a><br><a href="https://simonw.substack.com/p/code-research-projects-with-async">Code research projects with 
async coding agents like Claude Code and Codex</a><br><a href="https://cloud.google.com/blog/topics/developers-practitioners/where-to-use-sub-agents-versus-agents-as-tools">ADK architecture: When to use sub-agents versus agents as tools</a><br><a href="https://sundaylettersfromsam.substack.com/p/i-have-seen-the-compounding-teams">I have seen the compounding teams</a><br><a href="https://gail.wharton.upenn.edu/research-and-insights/call-me-a-jerk-persuading-ai/">Call Me A Jerk: Persuading AI to Comply with Objectionable Requests</a><br><a href="https://www.project-syndicate.org/onpoint/will-ai-bubble-burst-trigger-financial-crisis-by-william-h-janeway-2025-11">In Search of the AI Bubble’s Economic Fundamentals</a><br><a href="https://www.youtube.com/watch?v=IplmaMf1xMU">The Benefits of Bubbles | Stratechery by Ben Thompson </a><br><a href="https://medium.com/@anwarzaid76/is-perplexity-the-first-ai-unicorn-to-fail-eb0e827b5e7e">Is Perplexity the first AI unicorn to fail?</a></p><p><strong>Chapters<br></strong></p><ul><li>(00:00) - Introduction to Artificial Developer Intelligence</li>
<li>(02:44) - AI in Cybersecurity: Threats and Innovations</li>
<li>(07:35) - World Models: Understanding AI Cognition</li>
<li>(11:41) - Gemini 3: A New Era for AI Models</li>
<li>(13:31) - Benchmarking AI: The Vending Bench 2</li>
<li>(16:18) - Techniques for AI Development</li>
<li>(18:59) - Code Search Use Case</li>
<li>(22:11) - ADK Architecture</li>
<li>(27:27) - Post of the Week: Compounding Teams</li>
<li>(31:16) - Persuasion Techniques in AI: A Deep Dive</li>
<li>(36:17) - Dan's Rant on The Cost of Running Open-Weight Models</li>
<li>(45:09) - 2 Minutes to Midnight</li>
<li>(57:45) - Outro</li>
</ul><br><strong>Connect with ADIPod</strong><ul><li>Email us at humans@adipod.ai if you have any feedback, requests, or just want to say hello!</li><li>Check out our website <a href="https://www.adipod.ai">www.adipod.ai</a></li></ul>]]>
      </itunes:summary>
      <itunes:keywords>AI, software engineering, Gemini 3, cybersecurity, world models, AI techniques, LLM persuasion, open weight models, AI bubble</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/b064776c/transcript.txt" type="text/plain"/>
      <podcast:chapters url="https://share.transistor.fm/s/b064776c/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>AI Benchmarks, Tech Radar, and Limits of Current LLM Architectures</title>
      <itunes:episode>1</itunes:episode>
      <podcast:episode>1</podcast:episode>
      <itunes:title>AI Benchmarks, Tech Radar, and Limits of Current LLM Architectures</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">28dc0701-b55b-4ba2-947e-85dd17a5ad61</guid>
      <link>https://www.adipod.ai/1</link>
      <description>
        <![CDATA[<p>In this episode of Artificial Developer Intelligence, hosts Shimin and Dan explore the rapidly evolving landscape of AI, discussing recent news, benchmarking challenges, and the implications of AGI as a conspiracy theory. They delve into the latest techniques in AI development, ethical considerations, and the potential impact of AI on human intelligence. The conversation culminates in a look at the latest advancements in LLM architectures and the ongoing concerns surrounding the AI bubble.</p><p><strong>Takeaways</strong></p><ul><li>Benchmarking AI performance is fraught with challenges and potential biases.</li><li>AGI is increasingly viewed as a conspiracy theory rather than a technical goal.</li><li>New LLM architectures are emerging to address context limitations.</li><li>Ethical dilemmas in AI models raise questions about their decision-making capabilities.</li><li>The AI bubble may lead to significant economic consequences.</li><li>AI's influence on human intelligence is a growing concern.</li></ul><p><strong>Resources Mentioned:</strong><br><a href="https://www.theregister.com/2025/11/07/measuring_ai_models_hampered_by/">AI benchmarks are a bad joke – and LLM makers are the ones laughing</a><br><a href="https://www.thoughtworks.com/radar">Technology Radar V33</a><br><a href="https://blog.sshh.io/p/how-i-use-every-claude-code-feature">How I use Every Claude Code Feature</a></p><p><a href="https://www.technologyreview.com/2025/10/30/1127057/agi-conspiracy-theory-artifcial-general-intelligence/">How AGI became the most consequential conspiracy theory of our time</a><br><a href="https://magazine.sebastianraschka.com/p/beyond-standard-llms">Beyond Standard LLMs</a><br><a href="https://alignment.anthropic.com/2025/stress-testing-model-specs/">Stress-testing model specs reveals character differences among language models</a><br><a 
href="https://arstechnica.com/google/2025/11/meet-project-suncatcher-googles-plan-to-put-ai-data-centers-in-space/">Meet Project Suncatcher, Google’s plan to put AI data centers in space </a><br><a href="https://www.cnbc.com/2025/11/06/openai-cfo-sarah-friar-says-company-is-not-seeking-government-backstop.html">OpenAI CFO Sarah Friar says company isn’t seeking government backstop, clarifying prior comment</a></p><p><strong>Chapters:</strong><br></p><ul><li>(00:00) - Introduction to Artificial Developer Intelligence</li>
<li>(02:26) - AI Benchmarks: Are They Reliable?</li>
<li>(08:02) - ThoughtWorks Tech Radar: AI-Centric Trends</li>
<li>(11:47) - Techniques Corner: Exploring AI Subagents</li>
<li>(14:17) - AGI: The Most Consequential Conspiracy Theory</li>
<li>(22:57) - Deep Dive: Limitations of Current LLM Architectures</li>
<li>(34:13) - Ethics and Decision-Making in AI</li>
<li>(38:41) - Dan's Rant on the Impact of AI on Human Intelligence</li>
<li>(43:26) - 2 Minutes to Midnight</li>
<li>(50:29) - Outro</li>
</ul><br><strong>Connect with ADIPod:</strong><ul><li>Check out our website: <a href="https://www.ADIpod.ai">www.ADIpod.ai</a></li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode of Artificial Developer Intelligence, hosts Shimin and Dan explore the rapidly evolving landscape of AI, discussing recent news, benchmarking challenges, and the implications of AGI as a conspiracy theory. They delve into the latest techniques in AI development, ethical considerations, and the potential impact of AI on human intelligence. The conversation culminates in a look at the latest advancements in LLM architectures and the ongoing concerns surrounding the AI bubble.</p><p><strong>Takeaways</strong></p><ul><li>Benchmarking AI performance is fraught with challenges and potential biases.</li><li>AGI is increasingly viewed as a conspiracy theory rather than a technical goal.</li><li>New LLM architectures are emerging to address context limitations.</li><li>Ethical dilemmas in AI models raise questions about their decision-making capabilities.</li><li>The AI bubble may lead to significant economic consequences.</li><li>AI's influence on human intelligence is a growing concern.</li></ul><p><strong>Resources Mentioned:</strong><br><a href="https://www.theregister.com/2025/11/07/measuring_ai_models_hampered_by/">AI benchmarks are a bad joke – and LLM makers are the ones laughing</a><br><a href="https://www.thoughtworks.com/radar">Technology Radar V33</a><br><a href="https://blog.sshh.io/p/how-i-use-every-claude-code-feature">How I use Every Claude Code Feature</a></p><p><a href="https://www.technologyreview.com/2025/10/30/1127057/agi-conspiracy-theory-artifcial-general-intelligence/">How AGI became the most consequential conspiracy theory of our time</a><br><a href="https://magazine.sebastianraschka.com/p/beyond-standard-llms">Beyond Standard LLMs</a><br><a href="https://alignment.anthropic.com/2025/stress-testing-model-specs/">Stress-testing model specs reveals character differences among language models</a><br><a 
href="https://arstechnica.com/google/2025/11/meet-project-suncatcher-googles-plan-to-put-ai-data-centers-in-space/">Meet Project Suncatcher, Google’s plan to put AI data centers in space </a><br><a href="https://www.cnbc.com/2025/11/06/openai-cfo-sarah-friar-says-company-is-not-seeking-government-backstop.html">OpenAI CFO Sarah Friar says company isn’t seeking government backstop, clarifying prior comment</a></p><p><strong>Chapters:</strong><br></p><ul><li>(00:00) - Introduction to Artificial Developer Intelligence</li>
<li>(02:26) - AI Benchmarks: Are They Reliable?</li>
<li>(08:02) - ThoughtWorks Tech Radar: AI-Centric Trends</li>
<li>(11:47) - Techniques Corner: Exploring AI Subagents</li>
<li>(14:17) - AGI: The Most Consequential Conspiracy Theory</li>
<li>(22:57) - Deep Dive: Limitations of Current LLM Architectures</li>
<li>(34:13) - Ethics and Decision-Making in AI</li>
<li>(38:41) - Dan's Rant on the Impact of AI on Human Intelligence</li>
<li>(43:26) - 2 Minutes to Midnight</li>
<li>(50:29) - Outro</li>
</ul><br><strong>Connect with ADIPod:</strong><ul><li>Check out our website: <a href="https://www.ADIpod.ai">www.ADIpod.ai</a></li></ul>]]>
      </content:encoded>
      <pubDate>Fri, 28 Nov 2025 18:06:51 -0800</pubDate>
      <author>Shimin Zhang, Dan Lasky, &amp; Rahul Yadav</author>
      <enclosure url="https://op3.dev/e/prfx.byspotify.com/e/media.transistor.fm/3c2d9719/9786a80c.mp3" length="25003179" type="audio/mpeg"/>
      <itunes:author>Shimin Zhang, Dan Lasky, &amp; Rahul Yadav</itunes:author>
      <itunes:duration>3109</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode of Artificial Developer Intelligence, hosts Shimin and Dan explore the rapidly evolving landscape of AI, discussing recent news, benchmarking challenges, and the implications of AGI as a conspiracy theory. They delve into the latest techniques in AI development, ethical considerations, and the potential impact of AI on human intelligence. The conversation culminates in a look at the latest advancements in LLM architectures and the ongoing concerns surrounding the AI bubble.</p><p><strong>Takeaways</strong></p><ul><li>Benchmarking AI performance is fraught with challenges and potential biases.</li><li>AGI is increasingly viewed as a conspiracy theory rather than a technical goal.</li><li>New LLM architectures are emerging to address context limitations.</li><li>Ethical dilemmas in AI models raise questions about their decision-making capabilities.</li><li>The AI bubble may lead to significant economic consequences.</li><li>AI's influence on human intelligence is a growing concern.</li></ul><p><strong>Resources Mentioned:</strong><br><a href="https://www.theregister.com/2025/11/07/measuring_ai_models_hampered_by/">AI benchmarks are a bad joke – and LLM makers are the ones laughing</a><br><a href="https://www.thoughtworks.com/radar">Technology Radar V33</a><br><a href="https://blog.sshh.io/p/how-i-use-every-claude-code-feature">How I use Every Claude Code Feature</a></p><p><a href="https://www.technologyreview.com/2025/10/30/1127057/agi-conspiracy-theory-artifcial-general-intelligence/">How AGI became the most consequential conspiracy theory of our time</a><br><a href="https://magazine.sebastianraschka.com/p/beyond-standard-llms">Beyond Standard LLMs</a><br><a href="https://alignment.anthropic.com/2025/stress-testing-model-specs/">Stress-testing model specs reveals character differences among language models</a><br><a href="https://arstechnica.com/google/2025/11/meet-project-suncatcher-googles-plan-to-put-ai-data-centers-in-space/">Meet Project Suncatcher, Google’s plan to put AI data centers in space</a><br><a href="https://www.cnbc.com/2025/11/06/openai-cfo-sarah-friar-says-company-is-not-seeking-government-backstop.html">OpenAI CFO Sarah Friar says company isn’t seeking government backstop, clarifying prior comment</a></p><p><strong>Chapters:</strong><br></p><ul><li>(00:00) - Introduction to Artificial Developer Intelligence</li>
<li>(02:26) - AI Benchmarks: Are They Reliable?</li>
<li>(08:02) - ThoughtWorks Tech Radar: AI-Centric Trends</li>
<li>(11:47) - Techniques Corner: Exploring AI Subagents</li>
<li>(14:17) - AGI: The Most Consequential Conspiracy Theory</li>
<li>(22:57) - Deep Dive: Limitations of Current LLM Architectures</li>
<li>(34:13) - Ethics and Decision-Making in AI</li>
<li>(38:41) - Dan's Rant on the Impact of AI on Human Intelligence</li>
<li>(43:26) - 2 Minutes to Midnight</li>
<li>(50:29) - Outro</li>
</ul><br><strong>Connect with ADIPod:</strong><ul><li>Check out our website: <a href="https://www.ADIpod.ai">www.ADIpod.ai</a></li></ul>]]>
      </itunes:summary>
      <itunes:keywords>AI, model benchmarks, technology radar, LLM architecture, ethics in AI, AI news</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/3c2d9719/transcript.txt" type="text/plain"/>
      <podcast:chapters url="https://share.transistor.fm/s/3c2d9719/chapters.json" type="application/json+chapters"/>
    </item>
  </channel>
</rss>
