<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet href="/stylesheet.xsl" type="text/xsl"?>
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:sy="http://purl.org/rss/1.0/modules/syndication/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:podcast="https://podcastindex.org/namespace/1.0">
  <channel>
    <atom:link rel="self" type="application/rss+xml" href="https://feeds.transistor.fm/the-engineering-enablement-podcast" title="MP3 Audio"/>
    <atom:link rel="hub" href="https://pubsubhubbub.appspot.com/"/>
    <podcast:podping usesPodping="true"/>
    <title>Engineering Enablement by DX</title>
    <generator>Transistor (https://transistor.fm)</generator>
    <itunes:new-feed-url>https://feeds.transistor.fm/the-engineering-enablement-podcast</itunes:new-feed-url>
    <description>The show focused on developer productivity and the teams and leaders dedicated to improving it. Each episode features in-depth interviews with Platform and DevEx teams, along with the latest research and approaches for measuring developer productivity. Presented by DX (getdx.com), the developer intelligence platform designed by researchers.</description>
    <copyright>© 2026 DX</copyright>
    <podcast:guid>585377eb-51f9-5356-8b85-b1e8e04e1c52</podcast:guid>
    <podcast:locked>yes</podcast:locked>
    <podcast:person role="Host" href="https://www.linkedin.com/in/abinoda/" img="https://img.transistorcdn.com/3JLLtbOaHGiInF_ixHI50J6hrbPvsmkptOyXbjqu2gw/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vMDNiMDZmMjkt/ZmQxMy00NmNlLWE5/YzEtOTAzODQ4YTE2/YjgzLzE2NjU0MzA2/ODktaW1hZ2UuanBn.jpg">Abi Noda</podcast:person>
    <language>en</language>
    <pubDate>Fri, 03 Apr 2026 06:00:11 -0600</pubDate>
    <lastBuildDate>Fri, 03 Apr 2026 06:02:32 -0600</lastBuildDate>
    <link>https://getdx.com/engineering-enablement-podcast</link>
    <image>
      <url>https://img.transistorcdn.com/X8HviCHMiUIeY9D4VtdUwvBxKbDyJ0xOs13Z4RF37Bc/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9iNjU2/NjdiYzQ1NmMxNTgx/Y2VhZDcyNWQ1M2Uy/YThlOC5wbmc.jpg</url>
      <title>Engineering Enablement by DX</title>
      <link>https://getdx.com/engineering-enablement-podcast</link>
    </image>
    <itunes:category text="Technology"/>
    <itunes:category text="Business">
      <itunes:category text="Management"/>
    </itunes:category>
    <itunes:type>episodic</itunes:type>
    <itunes:author>DX</itunes:author>
    <itunes:image href="https://img.transistorcdn.com/X8HviCHMiUIeY9D4VtdUwvBxKbDyJ0xOs13Z4RF37Bc/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9iNjU2/NjdiYzQ1NmMxNTgx/Y2VhZDcyNWQ1M2Uy/YThlOC5wbmc.jpg"/>
    <itunes:summary>The show focused on developer productivity and the teams and leaders dedicated to improving it. Each episode features in-depth interviews with Platform and DevEx teams, along with the latest research and approaches for measuring developer productivity. Presented by DX (getdx.com), the developer intelligence platform designed by researchers.</itunes:summary>
    <itunes:subtitle>The show focused on developer productivity and the teams and leaders dedicated to improving it.</itunes:subtitle>
    <itunes:keywords>Software Engineering, Developer Experience, Developer Productivity, Engineering Leadership</itunes:keywords>
    <itunes:owner>
      <itunes:name>Brook Perry</itunes:name>
      <itunes:email>brook@getdx.com</itunes:email>
    </itunes:owner>
    <itunes:complete>No</itunes:complete>
    <itunes:explicit>No</itunes:explicit>
    <item>
      <title>Measuring AI impact, assessing readiness, and new data trends</title>
      <itunes:episode>97</itunes:episode>
      <podcast:episode>97</podcast:episode>
      <itunes:title>Measuring AI impact, assessing readiness, and new data trends</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">98c43e5f-6ede-45bd-b604-64e183f54ea3</guid>
      <link>https://share.transistor.fm/s/820b39d5</link>
      <description>
        <![CDATA[<p>In this episode of <em>Engineering Enablement</em>, Jesse Adametz joins Abi Noda, this time as host. </p><p><br>Together, they explore how AI is showing up across the SDLC, not just in code generation, and how it is shifting bottlenecks across the development process. They unpack what “AI readiness” actually means in practice, and why it often comes down to developer experience fundamentals like documentation, environments, and feedback loops.</p><p>They also discuss why enablement matters more than tool choice, how teams are thinking about measuring ROI, and what changes as background agents become more common. Finally, they explore how the role of the engineer may evolve, the open questions teams are still grappling with, and the challenges of non-engineers contributing to codebases.</p><p><br></p><p><strong>Where to find Jesse Adametz: </strong></p><p>• LinkedIn: <a href="https://www.linkedin.com/in/jesseadametz">https://www.linkedin.com/in/jesseadametz</a> </p><p>• X: <a href="https://x.com/jesseadametz">https://x.com/jesseadametz</a> </p><p>• Website: <a href="https://www.jesseadametz.com/">https://www.jesseadametz.com/</a></p><p><br></p><p><strong>Where to find Abi Noda:</strong></p><p>• LinkedIn: <a href="https://www.linkedin.com/in/abinoda">https://www.linkedin.com/in/abinoda</a> </p><p><br></p><p><strong>In this episode, we cover:</strong></p><p>(00:00) Intro</p><p>(02:12) Where AI is showing up across the SDLC</p><p>(05:53) AI readiness and its link to developer experience</p><p>(08:23) Why enablement, education, and experimentation matter more than tool choice</p><p>(13:05) The case for a dedicated enablement team</p><p>(14:50) Measuring AI ROI: challenges and tradeoffs</p><p>(19:46) Background agents and token spend</p><p>(24:12) Measuring agent output with PR throughput</p><p>(26:58) How the engineer role might change</p><p>(31:01) Specs and documentation in the age of AI</p><p>(33:11) Non-engineers writing code</p><p>(35:30) What’s changing in the SDLC and open questions</p><p><br></p><p><strong>Referenced:</strong></p><p>• <a href="https://getdx.com/whitepaper/ai-measurement-framework">Measuring AI code assistants and agents</a></p><p>• <a href="https://getdx.com/podcast/jesse-aldametz-twilio-platform-consolidation/">Lessons from Twilio’s multi-year platform consolidation</a></p><p>• <a href="https://www.amazon.com/Phoenix-Project-DevOps-Helping-Business/dp/0988262592">The Phoenix Project: A Novel About IT, DevOps, and Helping Your Business Win</a></p><p>• <a href="https://code.claude.com/docs/en/memory">How Claude remembers your project - Claude Code Docs</a></p><p>• <a href="https://www.reddit.com/r/ProgrammerHumor/comments/1p70bk8/specisjustcode/#lightbox">specIsJustCode : r/ProgrammerHumor</a></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode of <em>Engineering Enablement</em>, Jesse Adametz joins Abi Noda, this time as host. </p><p><br>Together, they explore how AI is showing up across the SDLC, not just in code generation, and how it is shifting bottlenecks across the development process. They unpack what “AI readiness” actually means in practice, and why it often comes down to developer experience fundamentals like documentation, environments, and feedback loops.</p><p>They also discuss why enablement matters more than tool choice, how teams are thinking about measuring ROI, and what changes as background agents become more common. Finally, they explore how the role of the engineer may evolve, the open questions teams are still grappling with, and the challenges of non-engineers contributing to codebases.</p><p><br></p><p><strong>Where to find Jesse Adametz: </strong></p><p>• LinkedIn: <a href="https://www.linkedin.com/in/jesseadametz">https://www.linkedin.com/in/jesseadametz</a> </p><p>• X: <a href="https://x.com/jesseadametz">https://x.com/jesseadametz</a> </p><p>• Website: <a href="https://www.jesseadametz.com/">https://www.jesseadametz.com/</a></p><p><br></p><p><strong>Where to find Abi Noda:</strong></p><p>• LinkedIn: <a href="https://www.linkedin.com/in/abinoda">https://www.linkedin.com/in/abinoda</a> </p><p><br></p><p><strong>In this episode, we cover:</strong></p><p>(00:00) Intro</p><p>(02:12) Where AI is showing up across the SDLC</p><p>(05:53) AI readiness and its link to developer experience</p><p>(08:23) Why enablement, education, and experimentation matter more than tool choice</p><p>(13:05) The case for a dedicated enablement team</p><p>(14:50) Measuring AI ROI: challenges and tradeoffs</p><p>(19:46) Background agents and token spend</p><p>(24:12) Measuring agent output with PR throughput</p><p>(26:58) How the engineer role might change</p><p>(31:01) Specs and documentation in the age of AI</p><p>(33:11) Non-engineers writing code</p><p>(35:30) What’s changing in the SDLC and open questions</p><p><br></p><p><strong>Referenced:</strong></p><p>• <a href="https://getdx.com/whitepaper/ai-measurement-framework">Measuring AI code assistants and agents</a></p><p>• <a href="https://getdx.com/podcast/jesse-aldametz-twilio-platform-consolidation/">Lessons from Twilio’s multi-year platform consolidation</a></p><p>• <a href="https://www.amazon.com/Phoenix-Project-DevOps-Helping-Business/dp/0988262592">The Phoenix Project: A Novel About IT, DevOps, and Helping Your Business Win</a></p><p>• <a href="https://code.claude.com/docs/en/memory">How Claude remembers your project - Claude Code Docs</a></p><p>• <a href="https://www.reddit.com/r/ProgrammerHumor/comments/1p70bk8/specisjustcode/#lightbox">specIsJustCode : r/ProgrammerHumor</a></p>]]>
      </content:encoded>
      <pubDate>Fri, 03 Apr 2026 06:00:00 -0600</pubDate>
      <author>DX</author>
      <enclosure url="https://media.transistor.fm/820b39d5/0c059388.mp3" length="36712836" type="audio/mpeg"/>
      <itunes:author>DX</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/xBL8juPuadzFYnRNmC3REkAEs4SND7uDPPbNcCkyg48/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS84MmFl/YmJhZGFlMWJiZTYx/OGEzOWUxMjhmMGNj/ZGRhOC5wbmc.jpg"/>
      <itunes:duration>2294</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode of <em>Engineering Enablement</em>, Jesse Adametz joins Abi Noda, this time as host. </p><p><br>Together, they explore how AI is showing up across the SDLC, not just in code generation, and how it is shifting bottlenecks across the development process. They unpack what “AI readiness” actually means in practice, and why it often comes down to developer experience fundamentals like documentation, environments, and feedback loops.</p><p>They also discuss why enablement matters more than tool choice, how teams are thinking about measuring ROI, and what changes as background agents become more common. Finally, they explore how the role of the engineer may evolve, the open questions teams are still grappling with, and the challenges of non-engineers contributing to codebases.</p><p><br></p><p><strong>Where to find Jesse Adametz: </strong></p><p>• LinkedIn: <a href="https://www.linkedin.com/in/jesseadametz">https://www.linkedin.com/in/jesseadametz</a> </p><p>• X: <a href="https://x.com/jesseadametz">https://x.com/jesseadametz</a> </p><p>• Website: <a href="https://www.jesseadametz.com/">https://www.jesseadametz.com/</a></p><p><br></p><p><strong>Where to find Abi Noda:</strong></p><p>• LinkedIn: <a href="https://www.linkedin.com/in/abinoda">https://www.linkedin.com/in/abinoda</a> </p><p><br></p><p><strong>In this episode, we cover:</strong></p><p>(00:00) Intro</p><p>(02:12) Where AI is showing up across the SDLC</p><p>(05:53) AI readiness and its link to developer experience</p><p>(08:23) Why enablement, education, and experimentation matter more than tool choice</p><p>(13:05) The case for a dedicated enablement team</p><p>(14:50) Measuring AI ROI: challenges and tradeoffs</p><p>(19:46) Background agents and token spend</p><p>(24:12) Measuring agent output with PR throughput</p><p>(26:58) How the engineer role might change</p><p>(31:01) Specs and documentation in the age of AI</p><p>(33:11) Non-engineers writing code</p><p>(35:30) What’s changing in the SDLC and open questions</p><p><br></p><p><strong>Referenced:</strong></p><p>• <a href="https://getdx.com/whitepaper/ai-measurement-framework">Measuring AI code assistants and agents</a></p><p>• <a href="https://getdx.com/podcast/jesse-aldametz-twilio-platform-consolidation/">Lessons from Twilio’s multi-year platform consolidation</a></p><p>• <a href="https://www.amazon.com/Phoenix-Project-DevOps-Helping-Business/dp/0988262592">The Phoenix Project: A Novel About IT, DevOps, and Helping Your Business Win</a></p><p>• <a href="https://code.claude.com/docs/en/memory">How Claude remembers your project - Claude Code Docs</a></p><p>• <a href="https://www.reddit.com/r/ProgrammerHumor/comments/1p70bk8/specisjustcode/#lightbox">specIsJustCode : r/ProgrammerHumor</a></p>]]>
      </itunes:summary>
      <itunes:keywords>Software Engineering, Developer Experience, Developer Productivity, Engineering Leadership</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.linkedin.com/in/abinoda/" img="https://img.transistorcdn.com/3JLLtbOaHGiInF_ixHI50J6hrbPvsmkptOyXbjqu2gw/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vMDNiMDZmMjkt/ZmQxMy00NmNlLWE5/YzEtOTAzODQ4YTE2/YjgzLzE2NjU0MzA2/ODktaW1hZ2UuanBn.jpg">Abi Noda</podcast:person>
    </item>
    <item>
      <title>Scaling developer experience across 1,000 engineers at Dropbox</title>
      <itunes:episode>96</itunes:episode>
      <podcast:episode>96</podcast:episode>
      <itunes:title>Scaling developer experience across 1,000 engineers at Dropbox</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">9beda533-7f2f-492d-bb69-3b7e75c6256e</guid>
      <link>https://share.transistor.fm/s/32c2c2fb</link>
      <description>
        <![CDATA[<p>Developer productivity is often framed as a tooling initiative or a morale issue. At scale, it’s a more complex socio-technical systems challenge that spans engineering foundations, leadership alignment, organizational structure, and culture.</p><p><br>In this episode, Laura Tacho sits down with Uma Namasivayam, Senior Director, Engineering Productivity at Dropbox, to discuss how the company approaches developer experience across an organization of nearly 1,000 engineers. Uma explains why productivity must be treated as a business problem, how executive alignment enables sustained progress, and what it means to run developer experience like a product.</p><p>The conversation also explores the intersection of AI and developer experience. Uma shares how Dropbox prepared its engineering systems to support AI adoption, why daily AI use depends more on habits than access, and how the company evaluates build-versus-buy decisions as AI tools struggle to scale in large environments.</p><p><br>The episode concludes with a candid discussion of the open questions facing engineering leaders today: how to understand where AI-driven capacity actually goes, and how to connect improvements in developer experience to meaningful business outcomes in 2026.</p><p><br></p><p><strong>Where to find Uma Namasivayam:</strong></p><p>• LinkedIn: <a href="https://www.linkedin.com/in/unamasivayam">https://www.linkedin.com/in/unamasivayam</a></p><p><br></p><p><strong>Where to find Laura Tacho: </strong></p><p>• LinkedIn: <a href="https://www.linkedin.com/in/lauratacho/">https://www.linkedin.com/in/lauratacho/</a></p><p>• X: <a href="https://x.com/rhein_wein">https://x.com/rhein_wein</a></p><p>• Website: <a href="https://lauratacho.com/">https://lauratacho.com/</a></p><p>• Laura’s course (Measuring Engineering Performance and AI Impact) <a href="https://lauratacho.com/developer-productivity-metrics-course">https://lauratacho.com/developer-productivity-metrics-course</a></p><p><br></p><p><strong>In this episode, we cover:</strong></p><p>(00:00) Intro</p><p>(00:45) Dropbox’s engineering org</p><p>(01:59) Why developer productivity is a business problem</p><p>(04:08) The role of executive sponsorship in developer productivity</p><p>(06:02) How DX’s Core Four framework created a shared language</p><p>(08:13) Treating developer experience as a product</p><p>(11:30) How Dropbox prioritizes developer experience work</p><p>(14:20) The challenge of tying developer experience to business outcomes</p><p>(16:38) How AI and developer experience intersect at Dropbox</p><p>(18:35) The prerequisites for AI adoption to accelerate work</p><p>(20:26) How Dropbox encourages daily AI use</p><p>(23:12) AI use beyond code completion</p><p>(25:00) Managing AI tool demand at scale</p><p>(27:56) Early results from Dropbox’s AI efforts</p><p>(30:05) Progress on developer experience at Dropbox</p><p>(32:55) Advice for organizations investing in developer experience</p><p>(34:25) Capacity tradeoffs for developer experience</p><p>(35:59) The unanswered questions around AI and capacity in 2026</p><p><br></p><p><strong>Referenced:</strong></p><p>• <a href="https://getdx.com/corefour">DX Core 4 Productivity Framework</a></p><p>• <a href="https://www.dropbox.com/">Dropbox.com</a></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Developer productivity is often framed as a tooling initiative or a morale issue. At scale, it’s a more complex socio-technical systems challenge that spans engineering foundations, leadership alignment, organizational structure, and culture.</p><p><br>In this episode, Laura Tacho sits down with Uma Namasivayam, Senior Director, Engineering Productivity at Dropbox, to discuss how the company approaches developer experience across an organization of nearly 1,000 engineers. Uma explains why productivity must be treated as a business problem, how executive alignment enables sustained progress, and what it means to run developer experience like a product.</p><p>The conversation also explores the intersection of AI and developer experience. Uma shares how Dropbox prepared its engineering systems to support AI adoption, why daily AI use depends more on habits than access, and how the company evaluates build-versus-buy decisions as AI tools struggle to scale in large environments.</p><p><br>The episode concludes with a candid discussion of the open questions facing engineering leaders today: how to understand where AI-driven capacity actually goes, and how to connect improvements in developer experience to meaningful business outcomes in 2026.</p><p><br></p><p><strong>Where to find Uma Namasivayam:</strong></p><p>• LinkedIn: <a href="https://www.linkedin.com/in/unamasivayam">https://www.linkedin.com/in/unamasivayam</a></p><p><br></p><p><strong>Where to find Laura Tacho: </strong></p><p>• LinkedIn: <a href="https://www.linkedin.com/in/lauratacho/">https://www.linkedin.com/in/lauratacho/</a></p><p>• X: <a href="https://x.com/rhein_wein">https://x.com/rhein_wein</a></p><p>• Website: <a href="https://lauratacho.com/">https://lauratacho.com/</a></p><p>• Laura’s course (Measuring Engineering Performance and AI Impact) <a href="https://lauratacho.com/developer-productivity-metrics-course">https://lauratacho.com/developer-productivity-metrics-course</a></p><p><br></p><p><strong>In this episode, we cover:</strong></p><p>(00:00) Intro</p><p>(00:45) Dropbox’s engineering org</p><p>(01:59) Why developer productivity is a business problem</p><p>(04:08) The role of executive sponsorship in developer productivity</p><p>(06:02) How DX’s Core Four framework created a shared language</p><p>(08:13) Treating developer experience as a product</p><p>(11:30) How Dropbox prioritizes developer experience work</p><p>(14:20) The challenge of tying developer experience to business outcomes</p><p>(16:38) How AI and developer experience intersect at Dropbox</p><p>(18:35) The prerequisites for AI adoption to accelerate work</p><p>(20:26) How Dropbox encourages daily AI use</p><p>(23:12) AI use beyond code completion</p><p>(25:00) Managing AI tool demand at scale</p><p>(27:56) Early results from Dropbox’s AI efforts</p><p>(30:05) Progress on developer experience at Dropbox</p><p>(32:55) Advice for organizations investing in developer experience</p><p>(34:25) Capacity tradeoffs for developer experience</p><p>(35:59) The unanswered questions around AI and capacity in 2026</p><p><br></p><p><strong>Referenced:</strong></p><p>• <a href="https://getdx.com/corefour">DX Core 4 Productivity Framework</a></p><p>• <a href="https://www.dropbox.com/">Dropbox.com</a></p>]]>
      </content:encoded>
      <pubDate>Fri, 06 Feb 2026 06:00:00 -0700</pubDate>
      <author>DX</author>
      <enclosure url="https://media.transistor.fm/32c2c2fb/e5adfcfd.mp3" length="37489453" type="audio/mpeg"/>
      <itunes:author>DX</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/nCypnlkXgUvPlPyoCdztr7usSPGAcXHbPLj0dOhlEhg/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS85MDlh/YTc5MzM1YTJhMTIx/Y2Q5MjZhM2ViMmEw/M2ExYS5wbmc.jpg"/>
      <itunes:duration>2342</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Developer productivity is often framed as a tooling initiative or a morale issue. At scale, it’s a more complex socio-technical systems challenge that spans engineering foundations, leadership alignment, organizational structure, and culture.</p><p><br>In this episode, Laura Tacho sits down with Uma Namasivayam, Senior Director, Engineering Productivity at Dropbox, to discuss how the company approaches developer experience across an organization of nearly 1,000 engineers. Uma explains why productivity must be treated as a business problem, how executive alignment enables sustained progress, and what it means to run developer experience like a product.</p><p>The conversation also explores the intersection of AI and developer experience. Uma shares how Dropbox prepared its engineering systems to support AI adoption, why daily AI use depends more on habits than access, and how the company evaluates build-versus-buy decisions as AI tools struggle to scale in large environments.</p><p><br>The episode concludes with a candid discussion of the open questions facing engineering leaders today: how to understand where AI-driven capacity actually goes, and how to connect improvements in developer experience to meaningful business outcomes in 2026.</p><p><br></p><p><strong>Where to find Uma Namasivayam:</strong></p><p>• LinkedIn: <a href="https://www.linkedin.com/in/unamasivayam">https://www.linkedin.com/in/unamasivayam</a></p><p><br></p><p><strong>Where to find Laura Tacho: </strong></p><p>• LinkedIn: <a href="https://www.linkedin.com/in/lauratacho/">https://www.linkedin.com/in/lauratacho/</a></p><p>• X: <a href="https://x.com/rhein_wein">https://x.com/rhein_wein</a></p><p>• Website: <a href="https://lauratacho.com/">https://lauratacho.com/</a></p><p>• Laura’s course (Measuring Engineering Performance and AI Impact) <a href="https://lauratacho.com/developer-productivity-metrics-course">https://lauratacho.com/developer-productivity-metrics-course</a></p><p><br></p><p><strong>In this episode, we cover:</strong></p><p>(00:00) Intro</p><p>(00:45) Dropbox’s engineering org</p><p>(01:59) Why developer productivity is a business problem</p><p>(04:08) The role of executive sponsorship in developer productivity</p><p>(06:02) How DX’s Core Four framework created a shared language</p><p>(08:13) Treating developer experience as a product</p><p>(11:30) How Dropbox prioritizes developer experience work</p><p>(14:20) The challenge of tying developer experience to business outcomes</p><p>(16:38) How AI and developer experience intersect at Dropbox</p><p>(18:35) The prerequisites for AI adoption to accelerate work</p><p>(20:26) How Dropbox encourages daily AI use</p><p>(23:12) AI use beyond code completion</p><p>(25:00) Managing AI tool demand at scale</p><p>(27:56) Early results from Dropbox’s AI efforts</p><p>(30:05) Progress on developer experience at Dropbox</p><p>(32:55) Advice for organizations investing in developer experience</p><p>(34:25) Capacity tradeoffs for developer experience</p><p>(35:59) The unanswered questions around AI and capacity in 2026</p><p><br></p><p><strong>Referenced:</strong></p><p>• <a href="https://getdx.com/corefour">DX Core 4 Productivity Framework</a></p><p>• <a href="https://www.dropbox.com/">Dropbox.com</a></p>]]>
      </itunes:summary>
      <itunes:keywords>Software Engineering, Developer Experience, Developer Productivity, Engineering Leadership</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.linkedin.com/in/abinoda/" img="https://img.transistorcdn.com/3JLLtbOaHGiInF_ixHI50J6hrbPvsmkptOyXbjqu2gw/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vMDNiMDZmMjkt/ZmQxMy00NmNlLWE5/YzEtOTAzODQ4YTE2/YjgzLzE2NjU0MzA2/ODktaW1hZ2UuanBn.jpg">Abi Noda</podcast:person>
    </item>
    <item>
      <title>AI and productivity: A year-in-review with Microsoft, Google, and GitHub researchers</title>
      <itunes:episode>95</itunes:episode>
      <podcast:episode>95</podcast:episode>
      <itunes:title>AI and productivity: A year-in-review with Microsoft, Google, and GitHub researchers</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">9478b760-26d4-4370-95ef-6c2ea7248e6a</guid>
      <link>https://share.transistor.fm/s/a00360b6</link>
      <description>
        <![CDATA[<p>As AI adoption accelerates across the software industry, engineering leaders are increasingly focused on a harder question: how to understand whether these tools are actually improving developer experience and organizational outcomes.</p><p>In this year-end episode of the <em>Engineering Enablement</em> podcast, host Laura Tacho is joined by Brian Houck from Microsoft, Collin Green and Ciera Jaspan from Google, and Eirini Kalliamvakou from GitHub to examine what 2025 research reveals about AI impact in engineering teams. The panel discusses why measuring AI’s effectiveness is inherently complex, why familiar metrics like lines of code continue to resurface despite their limitations, and how multidimensional frameworks such as SPACE and DORA provide a more accurate view of developer productivity.</p><p><br>The conversation also looks ahead to 2026, exploring how AI is beginning to reshape the role of the developer, how junior engineers’ skill sets may evolve, where agentic workflows are emerging, and why some widely shared AI studies were misunderstood. Together, the panel offers a grounded perspective on moving beyond hype toward more thoughtful, evidence-based AI adoption.</p><p><strong>Where to find Brian Houck:</strong></p><p>• LinkedIn: <a href="https://www.linkedin.com/in/brianhouck/">https://www.linkedin.com/in/brianhouck/</a> </p><p>• Website: <a href="https://www.microsoft.com/en-us/research/people/bhouck/">https://www.microsoft.com/en-us/research/people/bhouck/</a> </p><p><br></p><p><strong>Where to find Collin Green: </strong></p><p>• LinkedIn: <a href="https://www.linkedin.com/in/collin-green-97720378">https://www.linkedin.com/in/collin-green-97720378</a> </p><p>• Website: <a href="https://research.google/people/107023">https://research.google/people/107023</a></p><p><br></p><p><strong>Where to find Ciera Jaspan: </strong></p><p>• LinkedIn: <a href="https://www.linkedin.com/in/ciera">https://www.linkedin.com/in/ciera</a> </p><p>• Website: <a href="https://research.google/people/cierajaspan/">https://research.google/people/cierajaspan/</a></p><p><br></p><p><strong>Where to find Eirini Kalliamvakou: </strong></p><p>• LinkedIn: <a href="https://www.linkedin.com/in/eirini-kalliamvakou-1016865/?originalSubdomain=ca">https://www.linkedin.com/in/eirini-kalliamvakou-1016865/</a></p><p>• X: <a href="https://x.com/irina_kAl">https://x.com/irina_kAl</a> </p><p>• Website: <a href="https://www.microsoft.com/en-us/research/people/eikalli">https://www.microsoft.com/en-us/research/people/eikalli</a></p><p><br></p><p><strong>Where to find Laura Tacho: </strong></p><p>• LinkedIn: <a href="https://www.linkedin.com/in/lauratacho/">https://www.linkedin.com/in/lauratacho/</a></p><p>• X: <a href="https://x.com/rhein_wein">https://x.com/rhein_wein</a></p><p>• Website: <a href="https://lauratacho.com/">https://lauratacho.com/</a></p><p>• Laura’s course (Measuring Engineering Performance and AI Impact) <a href="https://lauratacho.com/developer-productivity-metrics-course">https://lauratacho.com/developer-productivity-metrics-course</a></p><p><br></p><p><strong>In this episode, we cover:</strong></p><p>(00:00) Intro</p><p>(02:35) Introducing the panel and the focus of the discussion</p><p>(04:43) Why measuring AI’s impact is such a hard problem</p><p>(05:30) How Microsoft approaches AI impact measurement</p><p>(06:40) How Google thinks about measuring AI impact</p><p>(07:28) GitHub’s perspective on measurement and insights from the DORA report</p><p>(10:35) Why lines of code is a misleading metric</p><p>(14:27) The limitations of measuring the percentage of code generated by AI</p><p>(18:24) GitHub’s research on how AI is shaping the identity of the developer</p><p>(21:39) How AI may change junior engineers’ skill sets</p><p>(24:42) Google’s research on using AI and creativity </p><p>(26:24) High-leverage AI use cases that improve developer experience</p><p>(32:38) Open research questions for AI and developer productivity in 2026</p><p>(35:33) How leading organizations approach change and agentic workflows</p><p>(38:02) Why the METR paper resonated and how it was misunderstood</p><p><br></p><p><strong>Referenced:</strong></p><p>• <a href="https://getdx.com/whitepaper/ai-measurement-framework">Measuring AI code assistants and agents</a></p><p>• <a href="https://kiro.dev/">Kiro</a></p><p>• <a href="https://code.claude.com/">Claude Code - AI coding agent for terminal &amp; IDE</a></p><p>• <a href="https://getdx.com/blog/space-framework-primer/">SPACE framework: a quick primer</a></p><p>• <a href="https://dora.dev/research/2025/dora-report/">DORA | State of AI-assisted Software Development 2025</a></p><p>• <a href="https://newsletter.pragmaticengineer.com/p/martin-fowler">Martin Fowler - by Gergely Orosz - The Pragmatic Engineer</a></p><p>• <a href="https://ieeexplore.ieee.org/document/10857384">Seamful AI for Creative Software Engineering: Use in Software Development Workflows | IEEE Journals &amp; Magazine | IEEE Xplore</a></p><p>• <a href="https://www.microsoft.com/en-us/research/publication/ai-where-it-matters-where-why-and-how-developers-want-ai-support-in-daily-work/">AI Where It Matters: Where, Why, and How Developers Want AI Support in Daily Work - Microsoft Research</a></p><p>• <a href="https://getdx.com/blog/unpacking-metri-findings-does-ai-slow-developers-down/">Unpacking METR’s findings: Does AI slow developers down?</a></p><p>• <a href="https://dxannual.com/">DX Annual 2026</a></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>As AI adoption accelerates across the software industry, engineering leaders are increasingly focused on a harder question: how to understand whether these tools are actually improving developer experience and organizational outcomes.</p><p>In this year-end episode of the <em>Engineering Enablement</em> podcast, host Laura Tacho is joined by Brian Houck from Microsoft, Collin Green and Ciera Jaspan from Google, and Eirini Kalliamvakou from GitHub to examine what 2025 research reveals about AI impact in engineering teams. The panel discusses why measuring AI’s effectiveness is inherently complex, why familiar metrics like lines of code continue to resurface despite their limitations, and how multidimensional frameworks such as SPACE and DORA provide a more accurate view of developer productivity.</p><p><br>The conversation also looks ahead to 2026, exploring how AI is beginning to reshape the role of the developer, how junior engineers’ skill sets may evolve, where agentic workflows are emerging, and why some widely shared AI studies were misunderstood. Together, the panel offers a grounded perspective on moving beyond hype toward more thoughtful, evidence-based AI adoption.</p><p><strong>Where to find Brian Houck:</strong></p><p>• LinkedIn: <a href="https://www.linkedin.com/in/brianhouck/">https://www.linkedin.com/in/brianhouck/</a> </p><p>• Website: <a href="https://www.microsoft.com/en-us/research/people/bhouck/">https://www.microsoft.com/en-us/research/people/bhouck/</a> </p><p><br></p><p><strong>Where to find Collin Green: </strong></p><p>• LinkedIn: <a href="https://www.linkedin.com/in/collin-green-97720378">https://www.linkedin.com/in/collin-green-97720378</a> </p><p>• Website: <a href="https://research.google/people/107023">https://research.google/people/107023</a></p><p><br></p><p><strong>Where to find Ciera Jaspan: </strong></p><p>• LinkedIn: <a href="https://www.linkedin.com/in/ciera">https://www.linkedin.com/in/ciera</a> </p><p>• Website: <a href="https://research.google/people/cierajaspan/">https://research.google/people/cierajaspan/</a></p><p><br></p><p><strong>Where to find Eirini Kalliamvakou: </strong></p><p>• LinkedIn: <a href="https://www.linkedin.com/in/eirini-kalliamvakou-1016865/?originalSubdomain=ca">https://www.linkedin.com/in/eirini-kalliamvakou-1016865/</a></p><p>• X: <a href="https://x.com/irina_kAl">https://x.com/irina_kAl</a> </p><p>• Website: <a href="https://www.microsoft.com/en-us/research/people/eikalli">https://www.microsoft.com/en-us/research/people/eikalli</a></p><p><br></p><p><strong>Where to find Laura Tacho: </strong></p><p>• LinkedIn: <a href="https://www.linkedin.com/in/lauratacho/">https://www.linkedin.com/in/lauratacho/</a></p><p>• X: <a href="https://x.com/rhein_wein">https://x.com/rhein_wein</a></p><p>• Website: <a href="https://lauratacho.com/">https://lauratacho.com/</a></p><p>• Laura’s course (Measuring Engineering Performance and AI Impact) <a href="https://lauratacho.com/developer-productivity-metrics-course">https://lauratacho.com/developer-productivity-metrics-course</a></p><p><br></p><p><strong>In this episode, we cover:</strong></p><p>(00:00) Intro</p><p>(02:35) Introducing the panel and the focus of the discussion</p><p>(04:43) Why measuring AI’s impact is such a hard problem</p><p>(05:30) How Microsoft approaches AI impact measurement</p><p>(06:40) How Google thinks about measuring AI impact</p><p>(07:28) GitHub’s perspective on measurement and insights from the DORA report</p><p>(10:35) Why lines of code is a misleading metric</p><p>(14:27) The limitations of measuring the percentage of code generated by AI</p><p>(18:24) GitHub’s research on how AI is shaping the identity of the developer</p><p>(21:39) How AI may change junior engineers’ skill sets</p><p>(24:42) Google’s research on using AI and creativity </p><p>(26:24) High-leverage AI use cases that improve developer experience</p><p>(32:38) Open research questions for AI and developer productivity in 2026</p><p>(35:33) How leading organizations approach change and agentic workflows</p><p>(38:02) Why the METR paper resonated and how it was misunderstood</p><p><br></p><p><strong>Referenced:</strong></p><p>• <a href="https://getdx.com/whitepaper/ai-measurement-framework">Measuring AI code assistants and agents</a></p><p>• <a href="https://kiro.dev/">Kiro</a></p><p>• <a href="https://code.claude.com/">Claude Code - AI coding agent for terminal &amp; IDE</a></p><p>• <a href="https://getdx.com/blog/space-framework-primer/">SPACE framework: a quick primer</a></p><p>• <a href="https://dora.dev/research/2025/dora-report/">DORA | State of AI-assisted Software Development 2025</a></p><p>• <a href="https://newsletter.pragmaticengineer.com/p/martin-fowler">Martin Fowler - by Gergely Orosz - The Pragmatic Engineer</a></p><p>• <a href="https://ieeexplore.ieee.org/document/10857384">Seamful AI for Creative Software Engineering: Use in Software Development 
Workflows | IEEE Journals &amp; Magazine | IEEE Xplore</a></p><p>• <a href="https://www.microsoft.com/en-us/research/publication/ai-where-it-matters-where-why-and-how-developers-want-ai-support-in-daily-work/">AI Where It Matters: Where, Why, and How Developers Want AI Support in Daily Work - Microsoft Research</a></p><p>• <a href="https://getdx.com/blog/unpacking-metri-findings-does-ai-slow-developers-down/">Unpacking METR’s findings: Does AI slow developers down?</a></p><p>• <a href="https://dxannual.com/">DX Annual 2026</a></p>]]>
      </content:encoded>
      <pubDate>Mon, 29 Dec 2025 06:00:00 -0700</pubDate>
      <author>DX</author>
      <enclosure url="https://media.transistor.fm/a00360b6/d9c68c44.mp3" length="40386604" type="audio/mpeg"/>
      <itunes:author>DX</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/OiSDPSO4ysLZFchkBY1-I8gbm6SHIA4tWkEsxh9vlc0/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8wNjJl/ODFhNjFlNTk0MGNj/YjZiNDE2NGRhNTA0/ZDAzZi5wbmc.jpg"/>
      <itunes:duration>2520</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>As AI adoption accelerates across the software industry, engineering leaders are increasingly focused on a harder question: how to understand whether these tools are actually improving developer experience and organizational outcomes.</p><p>In this year-end episode of the <em>Engineering Enablement</em> podcast, host Laura Tacho is joined by Brian Houck from Microsoft, Collin Green and Ciera Jaspan from Google, and Eirini Kalliamvakou from GitHub to examine what 2025 research reveals about AI impact in engineering teams. The panel discusses why measuring AI’s effectiveness is inherently complex, why familiar metrics like lines of code continue to resurface despite their limitations, and how multidimensional frameworks such as SPACE and DORA provide a more accurate view of developer productivity.</p><p><br>The conversation also looks ahead to 2026, exploring how AI is beginning to reshape the role of the developer, how junior engineers’ skill sets may evolve, where agentic workflows are emerging, and why some widely shared AI studies were misunderstood. 
Together, the panel offers a grounded perspective on moving beyond hype toward more thoughtful, evidence-based AI adoption.</p><p><strong>Where to find Brian Houck:</strong></p><p>• LinkedIn: <a href="https://www.linkedin.com/in/brianhouck/">https://www.linkedin.com/in/brianhouck/</a> </p><p>• Website: <a href="https://www.microsoft.com/en-us/research/people/bhouck/">https://www.microsoft.com/en-us/research/people/bhouck/</a> </p><p><br></p><p><strong>Where to find Collin Green: </strong></p><p>• LinkedIn: <a href="https://www.linkedin.com/in/collin-green-97720378">https://www.linkedin.com/in/collin-green-97720378</a> </p><p>• Website: <a href="https://research.google/people/107023">https://research.google/people/107023</a></p><p><br></p><p><strong>Where to find Ciera Jaspan: </strong></p><p>• LinkedIn: <a href="https://www.linkedin.com/in/ciera">https://www.linkedin.com/in/ciera</a> </p><p>• Website: <a href="https://research.google/people/cierajaspan/">https://research.google/people/cierajaspan/</a></p><p><br></p><p><strong>Where to find Eirini Kalliamvakou: </strong></p><p>• LinkedIn: <a href="https://www.linkedin.com/in/eirini-kalliamvakou-1016865/?originalSubdomain=ca">https://www.linkedin.com/in/eirini-kalliamvakou-1016865/</a></p><p>• X: <a href="https://x.com/irina_kAl">https://x.com/irina_kAl</a> </p><p>• Website: <a href="https://www.microsoft.com/en-us/research/people/eikalli">https://www.microsoft.com/en-us/research/people/eikalli</a></p><p><br></p><p><strong>Where to find Laura Tacho: </strong></p><p>• LinkedIn: <a href="https://www.linkedin.com/in/lauratacho/">https://www.linkedin.com/in/lauratacho/</a></p><p>• X: <a href="https://x.com/rhein_wein">https://x.com/rhein_wein</a></p><p>• Website: <a href="https://lauratacho.com/">https://lauratacho.com/</a></p><p>• Laura’s course (Measuring Engineering Performance and AI Impact) <a 
href="https://lauratacho.com/developer-productivity-metrics-course">https://lauratacho.com/developer-productivity-metrics-course</a></p><p><br></p><p><strong>In this episode, we cover:</strong></p><p>(00:00) Intro</p><p>(02:35) Introducing the panel and the focus of the discussion</p><p>(04:43) Why measuring AI’s impact is such a hard problem</p><p>(05:30) How Microsoft approaches AI impact measurement</p><p>(06:40) How Google thinks about measuring AI impact</p><p>(07:28) GitHub’s perspective on measurement and insights from the DORA report</p><p>(10:35) Why lines of code is a misleading metric</p><p>(14:27) The limitations of measuring the percentage of code generated by AI</p><p>(18:24) GitHub’s research on how AI is shaping the identity of the developer</p><p>(21:39) How AI may change junior engineers’ skill sets</p><p>(24:42) Google’s research on using AI and creativity </p><p>(26:24) High-leverage AI use cases that improve developer experience</p><p>(32:38) Open research questions for AI and developer productivity in 2026</p><p>(35:33) How leading organizations approach change and agentic workflows</p><p>(38:02) Why the METR paper resonated and how it was misunderstood</p><p><br></p><p><strong>Referenced:</strong></p><p>• <a href="https://getdx.com/whitepaper/ai-measurement-framework">Measuring AI code assistants and agents</a></p><p>• <a href="https://kiro.dev/">Kiro</a></p><p>• <a href="https://code.claude.com/">Claude Code - AI coding agent for terminal &amp; IDE</a></p><p>• <a href="https://getdx.com/blog/space-framework-primer/">SPACE framework: a quick primer</a></p><p>• <a href="https://dora.dev/research/2025/dora-report/">DORA | State of AI-assisted Software Development 2025</a></p><p>• <a href="https://newsletter.pragmaticengineer.com/p/martin-fowler">Martin Fowler - by Gergely Orosz - The Pragmatic Engineer</a></p><p>• <a href="https://ieeexplore.ieee.org/document/10857384">Seamful AI for Creative Software Engineering: Use in Software Development 
Workflows | IEEE Journals &amp; Magazine | IEEE Xplore</a></p><p>• <a href="https://www.microsoft.com/en-us/research/publication/ai-where-it-matters-where-why-and-how-developers-want-ai-support-in-daily-work/">AI Where It Matters: Where, Why, and How Developers Want AI Support in Daily Work - Microsoft Research</a></p><p>• <a href="https://getdx.com/blog/unpacking-metri-findings-does-ai-slow-developers-down/">Unpacking METR’s findings: Does AI slow developers down?</a></p><p>• <a href="https://dxannual.com/">DX Annual 2026</a></p>]]>
      </itunes:summary>
      <itunes:keywords>Software Engineering, Developer Experience, Developer Productivity, Engineering Leadership</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.linkedin.com/in/abinoda/" img="https://img.transistorcdn.com/3JLLtbOaHGiInF_ixHI50J6hrbPvsmkptOyXbjqu2gw/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vMDNiMDZmMjkt/ZmQxMy00NmNlLWE5/YzEtOTAzODQ4YTE2/YjgzLzE2NjU0MzA2/ODktaW1hZ2UuanBn.jpg">Abi Noda</podcast:person>
    </item>
    <item>
      <title>Running data-driven evaluations of AI engineering tools</title>
      <itunes:episode>94</itunes:episode>
      <podcast:episode>94</podcast:episode>
      <itunes:title>Running data-driven evaluations of AI engineering tools</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">d28a936e-387d-41bd-a47b-6157029fc115</guid>
      <link>https://share.transistor.fm/s/5f3753bb</link>
      <description>
        <![CDATA[<p>AI engineering tools are evolving fast. New coding assistants, debugging agents, and automation platforms emerge every month. Engineering leaders want to take advantage of these innovations while avoiding costly experiments that create more distraction than impact.</p><p><br>In this episode of the <em>Engineering Enablement</em> podcast, host Laura Tacho and Abi Noda outline a practical model for evaluating AI tools with data. They explain how to shortlist tools by use case, run trials that mirror real development work, select representative cohorts, and ensure consistent support and enablement. They also highlight why baselines and frameworks like DX’s Core 4 and the AI Measurement Framework are essential for measuring impact.</p><p><br></p><p><strong>Where to find Laura Tacho: </strong></p><p>• LinkedIn: <a href="https://www.linkedin.com/in/lauratacho/">https://www.linkedin.com/in/lauratacho/</a></p><p>• X: <a href="https://x.com/rhein_wein">https://x.com/rhein_wein</a></p><p>• Website: <a href="https://lauratacho.com/">https://lauratacho.com/</a></p><p>• Laura’s course (Measuring Engineering Performance and AI Impact): <a href="https://lauratacho.com/developer-productivity-metrics-course">https://lauratacho.com/developer-productivity-metrics-course</a></p><p><strong>Where to find Abi Noda:</strong></p><p>• LinkedIn: <a href="https://www.linkedin.com/in/abinoda">https://www.linkedin.com/in/abinoda</a></p><p>• Substack: <a href="https://substack.com/@abinoda">https://substack.com/@abinoda</a></p><p><br></p><p><strong>In this episode, we cover:</strong></p><p>(00:00) Intro: Running a data-driven evaluation of AI tools</p><p>(02:36) Challenges in evaluating AI tools</p><p>(06:11) How often to reevaluate AI tools</p><p>(07:02) Incumbent tools vs challenger tools</p><p>(07:40) Why organizations need disciplined evaluations before rolling out tools</p><p>(09:28) How to size your tool shortlist based on developer population</p><p>(12:44) Why 
tools must be grouped by use case and interaction mode</p><p>(13:30) How to structure trials around a clear research question</p><p>(16:45) Best practices for selecting trial participants</p><p>(19:22) Why support and enablement are essential for success</p><p>(21:10) How to choose the right duration for evaluations</p><p>(22:52) How to measure impact using baselines and the AI Measurement Framework</p><p>(25:28) Key considerations for an AI tool evaluation</p><p>(28:52) Q&amp;A: How reliable is self-reported time savings from AI tools?</p><p>(32:22) Q&amp;A: Why not adopt multiple tools instead of choosing just one?</p><p>(33:27) Q&amp;A: Tool performance differences and avoiding vendor lock-in</p><p><br></p><p><strong>Referenced:</strong></p><ul><li><a href="https://getdx.com/whitepaper/ai-measurement-framework/">Measuring AI code assistants and agents</a></li><li><a href="https://qconferences.com/">QCon conferences</a></li><li><a href="https://getdx.com/dx-core-4/">DX Core 4 engineering metrics</a></li><li><a href="https://getdx.com/podcast/doras-2025-research-on-the-impact-of-ai/">DORA’s 2025 research on the impact of AI</a></li><li><a href="https://getdx.com/blog/unpacking-metri-findings-does-ai-slow-developers-down/">Unpacking METR’s findings: Does AI slow developers down?</a></li><li><a href="https://newsletter.getdx.com/p/metr-study-on-how-ai-affects-developer-productivity">METR’s study on how AI affects developer productivity</a></li><li><a href="https://www.claude.com/product/claude-code">Claude Code</a></li><li><a href="https://cursor.com/">Cursor</a></li><li><a href="https://windsurf.com/">Windsurf</a></li><li><a href="https://newsletter.getdx.com/p/do-newer-ai-native-ides-outperform-other-ai-coding-assistants">Do newer AI-native IDEs outperform other AI coding assistants?</a></li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>AI engineering tools are evolving fast. New coding assistants, debugging agents, and automation platforms emerge every month. Engineering leaders want to take advantage of these innovations while avoiding costly experiments that create more distraction than impact.</p><p><br>In this episode of the <em>Engineering Enablement</em> podcast, host Laura Tacho and Abi Noda outline a practical model for evaluating AI tools with data. They explain how to shortlist tools by use case, run trials that mirror real development work, select representative cohorts, and ensure consistent support and enablement. They also highlight why baselines and frameworks like DX’s Core 4 and the AI Measurement Framework are essential for measuring impact.</p><p><br></p><p><strong>Where to find Laura Tacho: </strong></p><p>• LinkedIn: <a href="https://www.linkedin.com/in/lauratacho/">https://www.linkedin.com/in/lauratacho/</a></p><p>• X: <a href="https://x.com/rhein_wein">https://x.com/rhein_wein</a></p><p>• Website: <a href="https://lauratacho.com/">https://lauratacho.com/</a></p><p>• Laura’s course (Measuring Engineering Performance and AI Impact): <a href="https://lauratacho.com/developer-productivity-metrics-course">https://lauratacho.com/developer-productivity-metrics-course</a></p><p><strong>Where to find Abi Noda:</strong></p><p>• LinkedIn: <a href="https://www.linkedin.com/in/abinoda">https://www.linkedin.com/in/abinoda</a></p><p>• Substack: <a href="https://substack.com/@abinoda">https://substack.com/@abinoda</a></p><p><br></p><p><strong>In this episode, we cover:</strong></p><p>(00:00) Intro: Running a data-driven evaluation of AI tools</p><p>(02:36) Challenges in evaluating AI tools</p><p>(06:11) How often to reevaluate AI tools</p><p>(07:02) Incumbent tools vs challenger tools</p><p>(07:40) Why organizations need disciplined evaluations before rolling out tools</p><p>(09:28) How to size your tool shortlist based on developer population</p><p>(12:44) Why 
tools must be grouped by use case and interaction mode</p><p>(13:30) How to structure trials around a clear research question</p><p>(16:45) Best practices for selecting trial participants</p><p>(19:22) Why support and enablement are essential for success</p><p>(21:10) How to choose the right duration for evaluations</p><p>(22:52) How to measure impact using baselines and the AI Measurement Framework</p><p>(25:28) Key considerations for an AI tool evaluation</p><p>(28:52) Q&amp;A: How reliable is self-reported time savings from AI tools?</p><p>(32:22) Q&amp;A: Why not adopt multiple tools instead of choosing just one?</p><p>(33:27) Q&amp;A: Tool performance differences and avoiding vendor lock-in</p><p><br></p><p><strong>Referenced:</strong></p><ul><li><a href="https://getdx.com/whitepaper/ai-measurement-framework/">Measuring AI code assistants and agents</a></li><li><a href="https://qconferences.com/">QCon conferences</a></li><li><a href="https://getdx.com/dx-core-4/">DX Core 4 engineering metrics</a></li><li><a href="https://getdx.com/podcast/doras-2025-research-on-the-impact-of-ai/">DORA’s 2025 research on the impact of AI</a></li><li><a href="https://getdx.com/blog/unpacking-metri-findings-does-ai-slow-developers-down/">Unpacking METR’s findings: Does AI slow developers down?</a></li><li><a href="https://newsletter.getdx.com/p/metr-study-on-how-ai-affects-developer-productivity">METR’s study on how AI affects developer productivity</a></li><li><a href="https://www.claude.com/product/claude-code">Claude Code</a></li><li><a href="https://cursor.com/">Cursor</a></li><li><a href="https://windsurf.com/">Windsurf</a></li><li><a href="https://newsletter.getdx.com/p/do-newer-ai-native-ides-outperform-other-ai-coding-assistants">Do newer AI-native IDEs outperform other AI coding assistants?</a></li></ul>]]>
      </content:encoded>
      <pubDate>Fri, 12 Dec 2025 06:00:00 -0700</pubDate>
      <author>DX</author>
      <enclosure url="https://media.transistor.fm/5f3753bb/0fe4d68d.mp3" length="36155156" type="audio/mpeg"/>
      <itunes:author>DX</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/zQNY1vXH_mOPclUYimCG_lNS9jkobUmEKiH0sG0-UnQ/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9iNjkx/MTg3ZDZiYmQ5ZTkz/NjA5NGY2OWM4NzE3/OGNhOS5wbmc.jpg"/>
      <itunes:duration>2255</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>AI engineering tools are evolving fast. New coding assistants, debugging agents, and automation platforms emerge every month. Engineering leaders want to take advantage of these innovations while avoiding costly experiments that create more distraction than impact.</p><p><br>In this episode of the <em>Engineering Enablement</em> podcast, host Laura Tacho and Abi Noda outline a practical model for evaluating AI tools with data. They explain how to shortlist tools by use case, run trials that mirror real development work, select representative cohorts, and ensure consistent support and enablement. They also highlight why baselines and frameworks like DX’s Core 4 and the AI Measurement Framework are essential for measuring impact.</p><p><br></p><p><strong>Where to find Laura Tacho: </strong></p><p>• LinkedIn: <a href="https://www.linkedin.com/in/lauratacho/">https://www.linkedin.com/in/lauratacho/</a></p><p>• X: <a href="https://x.com/rhein_wein">https://x.com/rhein_wein</a></p><p>• Website: <a href="https://lauratacho.com/">https://lauratacho.com/</a></p><p>• Laura’s course (Measuring Engineering Performance and AI Impact): <a href="https://lauratacho.com/developer-productivity-metrics-course">https://lauratacho.com/developer-productivity-metrics-course</a></p><p><strong>Where to find Abi Noda:</strong></p><p>• LinkedIn: <a href="https://www.linkedin.com/in/abinoda">https://www.linkedin.com/in/abinoda</a></p><p>• Substack: <a href="https://substack.com/@abinoda">https://substack.com/@abinoda</a></p><p><br></p><p><strong>In this episode, we cover:</strong></p><p>(00:00) Intro: Running a data-driven evaluation of AI tools</p><p>(02:36) Challenges in evaluating AI tools</p><p>(06:11) How often to reevaluate AI tools</p><p>(07:02) Incumbent tools vs challenger tools</p><p>(07:40) Why organizations need disciplined evaluations before rolling out tools</p><p>(09:28) How to size your tool shortlist based on developer population</p><p>(12:44) Why 
tools must be grouped by use case and interaction mode</p><p>(13:30) How to structure trials around a clear research question</p><p>(16:45) Best practices for selecting trial participants</p><p>(19:22) Why support and enablement are essential for success</p><p>(21:10) How to choose the right duration for evaluations</p><p>(22:52) How to measure impact using baselines and the AI Measurement Framework</p><p>(25:28) Key considerations for an AI tool evaluation</p><p>(28:52) Q&amp;A: How reliable is self-reported time savings from AI tools?</p><p>(32:22) Q&amp;A: Why not adopt multiple tools instead of choosing just one?</p><p>(33:27) Q&amp;A: Tool performance differences and avoiding vendor lock-in</p><p><br></p><p><strong>Referenced:</strong></p><ul><li><a href="https://getdx.com/whitepaper/ai-measurement-framework/">Measuring AI code assistants and agents</a></li><li><a href="https://qconferences.com/">QCon conferences</a></li><li><a href="https://getdx.com/dx-core-4/">DX Core 4 engineering metrics</a></li><li><a href="https://getdx.com/podcast/doras-2025-research-on-the-impact-of-ai/">DORA’s 2025 research on the impact of AI</a></li><li><a href="https://getdx.com/blog/unpacking-metri-findings-does-ai-slow-developers-down/">Unpacking METR’s findings: Does AI slow developers down?</a></li><li><a href="https://newsletter.getdx.com/p/metr-study-on-how-ai-affects-developer-productivity">METR’s study on how AI affects developer productivity</a></li><li><a href="https://www.claude.com/product/claude-code">Claude Code</a></li><li><a href="https://cursor.com/">Cursor</a></li><li><a href="https://windsurf.com/">Windsurf</a></li><li><a href="https://newsletter.getdx.com/p/do-newer-ai-native-ides-outperform-other-ai-coding-assistants">Do newer AI-native IDEs outperform other AI coding assistants?</a></li></ul>]]>
      </itunes:summary>
      <itunes:keywords>Software Engineering, Developer Experience, Developer Productivity, Engineering Leadership</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.linkedin.com/in/abinoda/" img="https://img.transistorcdn.com/3JLLtbOaHGiInF_ixHI50J6hrbPvsmkptOyXbjqu2gw/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vMDNiMDZmMjkt/ZmQxMy00NmNlLWE5/YzEtOTAzODQ4YTE2/YjgzLzE2NjU0MzA2/ODktaW1hZ2UuanBn.jpg">Abi Noda</podcast:person>
    </item>
    <item>
      <title>DORA’s 2025 research on the impact of AI</title>
      <itunes:episode>93</itunes:episode>
      <podcast:episode>93</podcast:episode>
      <itunes:title>DORA’s 2025 research on the impact of AI</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">9ae36452-e1f8-48e7-b93f-cb512e1e474b</guid>
      <link>https://share.transistor.fm/s/03e5c3d0</link>
      <description>
        <![CDATA[<p>Nathen Harvey leads research at DORA, focused on how teams measure and improve software delivery. In today’s episode of Engineering Enablement, Nathen sits down with host Laura Tacho to explore how AI is changing the way teams think about productivity, quality, and performance.</p><p><br>Together, they examine findings from the 2025 DORA research on AI-assisted software development and DX’s Q4 AI Impact report, comparing where the data aligns and where important gaps emerge. They discuss why relying on traditional delivery metrics can give leaders a false sense of confidence and why AI acts as an amplifier, accelerating healthy systems while intensifying existing friction and failure.</p><p>The conversation focuses on how AI is reshaping engineering systems themselves. Rather than treating AI as a standalone tool, they explore how it changes workflows, feedback loops, team dynamics, and organizational decision-making, and why leaders need better system-level visibility to understand its real impact.</p><p><br><strong>Where to find Nathen Harvey:</strong></p><p>• LinkedIn: <a href="https://www.linkedin.com/in/nathen">https://www.linkedin.com/in/nathen</a></p><p><br></p><p><strong>Where to find Laura Tacho: </strong></p><p>• LinkedIn: <a href="https://www.linkedin.com/in/lauratacho/">https://www.linkedin.com/in/lauratacho/</a></p><p>• X: <a href="https://x.com/rhein_wein">https://x.com/rhein_wein</a></p><p>• Website: <a href="https://lauratacho.com/">https://lauratacho.com/</a></p><p>• Laura’s course (Measuring Engineering Performance and AI Impact): <a href="https://lauratacho.com/developer-productivity-metrics-course">https://lauratacho.com/developer-productivity-metrics-course</a></p><p><br></p><p><strong>In this episode, we cover:</strong></p><p>(00:00) Intro</p><p>(00:55) Why the four key DORA metrics aren’t enough to measure AI impact</p><p>(03:44) The shift from four to five DORA metrics and why leaders need more than 
dashboards</p><p>(06:20) The one-sentence takeaway from the 2025 DORA report</p><p>(07:38) How AI amplifies both strengths and bottlenecks inside engineering systems</p><p>(08:58) What DX data reveals about how junior and senior engineers use AI differently</p><p>(10:33) The DORA AI Capabilities Model and why AI success depends on how it’s used</p><p>(18:24) How a clear and communicated AI stance improves adoption and reduces friction</p><p>(23:02) Why talking to your teams still matters </p><p><br></p><p><strong>Referenced:<br></strong>• <a href="https://dora.dev/research/2025/dora-report/">DORA | State of AI-assisted Software Development 2025</a><br>• <a href="https://www.linkedin.com/in/stevefenton/">Steve Fenton - Octonaut | LinkedIn</a><br>• <a href="https://getdx.com/report/ai-assisted-engineering-q4-impact-report/?utm_source=podcast">AI-assisted engineering: Q4 impact report</a></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Nathen Harvey leads research at DORA, focused on how teams measure and improve software delivery. In today’s episode of Engineering Enablement, Nathen sits down with host Laura Tacho to explore how AI is changing the way teams think about productivity, quality, and performance.</p><p><br>Together, they examine findings from the 2025 DORA research on AI-assisted software development and DX’s Q4 AI Impact report, comparing where the data aligns and where important gaps emerge. They discuss why relying on traditional delivery metrics can give leaders a false sense of confidence and why AI acts as an amplifier, accelerating healthy systems while intensifying existing friction and failure.</p><p>The conversation focuses on how AI is reshaping engineering systems themselves. Rather than treating AI as a standalone tool, they explore how it changes workflows, feedback loops, team dynamics, and organizational decision-making, and why leaders need better system-level visibility to understand its real impact.</p><p><br><strong>Where to find Nathen Harvey:</strong></p><p>• LinkedIn: <a href="https://www.linkedin.com/in/nathen">https://www.linkedin.com/in/nathen</a></p><p><br></p><p><strong>Where to find Laura Tacho: </strong></p><p>• LinkedIn: <a href="https://www.linkedin.com/in/lauratacho/">https://www.linkedin.com/in/lauratacho/</a></p><p>• X: <a href="https://x.com/rhein_wein">https://x.com/rhein_wein</a></p><p>• Website: <a href="https://lauratacho.com/">https://lauratacho.com/</a></p><p>• Laura’s course (Measuring Engineering Performance and AI Impact): <a href="https://lauratacho.com/developer-productivity-metrics-course">https://lauratacho.com/developer-productivity-metrics-course</a></p><p><br></p><p><strong>In this episode, we cover:</strong></p><p>(00:00) Intro</p><p>(00:55) Why the four key DORA metrics aren’t enough to measure AI impact</p><p>(03:44) The shift from four to five DORA metrics and why leaders need more than 
dashboards</p><p>(06:20) The one-sentence takeaway from the 2025 DORA report</p><p>(07:38) How AI amplifies both strengths and bottlenecks inside engineering systems</p><p>(08:58) What DX data reveals about how junior and senior engineers use AI differently</p><p>(10:33) The DORA AI Capabilities Model and why AI success depends on how it’s used</p><p>(18:24) How a clear and communicated AI stance improves adoption and reduces friction</p><p>(23:02) Why talking to your teams still matters </p><p><br></p><p><strong>Referenced:<br></strong>• <a href="https://dora.dev/research/2025/dora-report/">DORA | State of AI-assisted Software Development 2025</a><br>• <a href="https://www.linkedin.com/in/stevefenton/">Steve Fenton - Octonaut | LinkedIn</a><br>• <a href="https://getdx.com/report/ai-assisted-engineering-q4-impact-report/?utm_source=podcast">AI-assisted engineering: Q4 impact report</a></p>]]>
      </content:encoded>
      <pubDate>Fri, 21 Nov 2025 06:00:00 -0700</pubDate>
      <author>DX</author>
      <enclosure url="https://media.transistor.fm/03e5c3d0/edf51144.mp3" length="25213779" type="audio/mpeg"/>
      <itunes:author>DX</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/kZ5IQNFrPZICTr6fC_TVfGpdRM85hyKJkXuuZERlhXc/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS80MTY2/NzNlNTZlMjk4ZjFi/Njc0ODUxZWRhMGY5/MzliOS5wbmc.jpg"/>
      <itunes:duration>1571</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Nathen Harvey leads research at DORA, focused on how teams measure and improve software delivery. In today’s episode of Engineering Enablement, Nathen sits down with host Laura Tacho to explore how AI is changing the way teams think about productivity, quality, and performance.</p><p><br>Together, they examine findings from the 2025 DORA research on AI-assisted software development and DX’s Q4 AI Impact report, comparing where the data aligns and where important gaps emerge. They discuss why relying on traditional delivery metrics can give leaders a false sense of confidence and why AI acts as an amplifier, accelerating healthy systems while intensifying existing friction and failure.</p><p>The conversation focuses on how AI is reshaping engineering systems themselves. Rather than treating AI as a standalone tool, they explore how it changes workflows, feedback loops, team dynamics, and organizational decision-making, and why leaders need better system-level visibility to understand its real impact.</p><p><br><strong>Where to find Nathen Harvey:</strong></p><p>• LinkedIn: <a href="https://www.linkedin.com/in/nathen">https://www.linkedin.com/in/nathen</a></p><p><br></p><p><strong>Where to find Laura Tacho: </strong></p><p>• LinkedIn: <a href="https://www.linkedin.com/in/lauratacho/">https://www.linkedin.com/in/lauratacho/</a></p><p>• X: <a href="https://x.com/rhein_wein">https://x.com/rhein_wein</a></p><p>• Website: <a href="https://lauratacho.com/">https://lauratacho.com/</a></p><p>• Laura’s course (Measuring Engineering Performance and AI Impact): <a href="https://lauratacho.com/developer-productivity-metrics-course">https://lauratacho.com/developer-productivity-metrics-course</a></p><p><br></p><p><strong>In this episode, we cover:</strong></p><p>(00:00) Intro</p><p>(00:55) Why the four key DORA metrics aren’t enough to measure AI impact</p><p>(03:44) The shift from four to five DORA metrics and why leaders need more than 
dashboards</p><p>(06:20) The one-sentence takeaway from the 2025 DORA report</p><p>(07:38) How AI amplifies both strengths and bottlenecks inside engineering systems</p><p>(08:58) What DX data reveals about how junior and senior engineers use AI differently</p><p>(10:33) The DORA AI Capabilities Model and why AI success depends on how it’s used</p><p>(18:24) How a clear and communicated AI stance improves adoption and reduces friction</p><p>(23:02) Why talking to your teams still matters </p><p><br></p><p><strong>Referenced:<br></strong>• <a href="https://dora.dev/research/2025/dora-report/">DORA | State of AI-assisted Software Development 2025</a><br>• <a href="https://www.linkedin.com/in/stevefenton/">Steve Fenton - Octonaut | LinkedIn</a><br>• <a href="https://getdx.com/report/ai-assisted-engineering-q4-impact-report/?utm_source=podcast">AI-assisted engineering: Q4 impact report</a></p>]]>
      </itunes:summary>
      <itunes:keywords>Software Engineering, Developer Experience, Developer Productivity, Engineering Leadership</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.linkedin.com/in/abinoda/" img="https://img.transistorcdn.com/3JLLtbOaHGiInF_ixHI50J6hrbPvsmkptOyXbjqu2gw/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vMDNiMDZmMjkt/ZmQxMy00NmNlLWE5/YzEtOTAzODQ4YTE2/YjgzLzE2NjU0MzA2/ODktaW1hZ2UuanBn.jpg">Abi Noda</podcast:person>
    </item>
    <item>
      <title>How Monzo runs data-driven AI experimentation</title>
      <itunes:episode>92</itunes:episode>
      <podcast:episode>92</podcast:episode>
      <itunes:title>How Monzo runs data-driven AI experimentation</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">8ed27c39-b842-43b1-bf61-4aeb59507193</guid>
      <link>https://share.transistor.fm/s/ee9bc51a</link>
      <description>
        <![CDATA[<p>In this episode of <em>Engineering Enablement,</em> host Laura Tacho talks with Fabien Deshayes, who leads multiple platform engineering teams at Monzo Bank. Fabien explains how Monzo is adopting AI responsibly within a highly regulated industry, balancing innovation with structure, control, and data-driven decision-making.</p><p><br>They discuss how Monzo runs structured AI trials, measures adoption and satisfaction, and uses metrics to guide investment and training. Fabien shares why the company moved from broad rollouts to small, focused cohorts, how they are addressing existing PR review bottlenecks that AI has intensified, and what they have learned from empowering product managers and designers to use AI tools directly.</p><p><br>He also offers insights into budgeting and experimentation, the results Monzo is seeing from AI-assisted engineering, and his outlook on what comes next, from agent orchestration to more seamless collaboration across roles.</p><p><strong>Where to find Fabien Deshayes: </strong></p><p>• LinkedIn: <a href="https://www.linkedin.com/in/fabiendeshayes">https://www.linkedin.com/in/fabiendeshayes</a></p><p><br></p><p><strong>Where to find Laura Tacho: </strong></p><p>• LinkedIn: <a href="https://www.linkedin.com/in/lauratacho/">https://www.linkedin.com/in/lauratacho/</a></p><p>• X: <a href="https://x.com/rhein_wein">https://x.com/rhein_wein</a></p><p>• Website: <a href="https://lauratacho.com/">https://lauratacho.com/</a></p><p>• Laura’s course (Measuring Engineering Performance and AI Impact): <a href="https://lauratacho.com/developer-productivity-metrics-course">https://lauratacho.com/developer-productivity-metrics-course</a></p><p><br></p><p><strong>In this episode, we cover:</strong></p><p>(00:00) Intro  </p><p>(01:01) An overview of Monzo Bank and Fabien’s role  </p><p>(02:05) Monzo’s careful, structured approach to AI experimentation  </p><p>(05:30) How Monzo’s AI journey began  </p><p>(06:26) Why Monzo chose a 
structured approach to experimentation and what criteria they used  </p><p>(09:21) How Monzo selected AI tools for experimentation  </p><p>(11:51) Why individual tool stipends don’t work for large, regulated organizations  </p><p>(15:32) How Monzo measures the impact of AI tools and uses the data  </p><p>(18:10) Why Monzo limits AI tool trials to small, focused cohorts  </p><p>(20:54) The phases of Monzo’s AI rollout and how learnings are shared across the organization  </p><p>(22:43) What Monzo’s data reveals about AI usage and spending  </p><p>(24:30) How Monzo balances AI budgeting with innovation  </p><p>(26:45) Results from DX’s spending poll and general advice on AI budgeting  </p><p>(28:03) What Monzo’s data shows about AI’s impact on engineering performance  </p><p>(29:50) The growing bottleneck in PR reviews and how Monzo is solving it with tenancies  </p><p>(33:54) How product managers and designers are using AI at Monzo  </p><p>(36:36) Fabien’s advice for moving the needle with AI adoption  </p><p>(38:42) The biggest changes coming next in AI engineering </p><p><br></p><p><strong>Referenced:</strong></p><ul><li><a href="https://monzo.com/">Monzo</a> </li><li><a href="https://go.dev/">The Go Programming Language</a></li><li><a href="https://www.swift.org/">Swift.org</a></li><li><a href="https://kotlinlang.org/">Kotlin</a></li><li><a href="https://code.visualstudio.com/docs/copilot/overview">GitHub Copilot in VS Code</a> </li><li><a href="https://cursor.com/">Cursor</a></li><li><a href="https://windsurf.com/">Windsurf</a></li><li><a href="https://www.claude.com/product/claude-code">Claude Code</a></li><li><a href="https://getdx.com/podcast/planning-2026-ai-tooling-budget/">Planning your 2026 AI tooling budget: guidance for engineering leaders</a></li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode of <em>Engineering Enablement,</em> host Laura Tacho talks with Fabien Deshayes, who leads multiple platform engineering teams at Monzo Bank. Fabien explains how Monzo is adopting AI responsibly within a highly regulated industry, balancing innovation with structure, control, and data-driven decision-making.</p><p><br>They discuss how Monzo runs structured AI trials, measures adoption and satisfaction, and uses metrics to guide investment and training. Fabien shares why the company moved from broad rollouts to small, focused cohorts, how they are addressing existing PR review bottlenecks that AI has intensified, and what they have learned from empowering product managers and designers to use AI tools directly.</p><p><br>He also offers insights into budgeting and experimentation, the results Monzo is seeing from AI-assisted engineering, and his outlook on what comes next, from agent orchestration to more seamless collaboration across roles.</p><p><strong>Where to find Fabien Deshayes: </strong></p><p>• LinkedIn: <a href="https://www.linkedin.com/in/fabiendeshayes">https://www.linkedin.com/in/fabiendeshayes</a></p><p><br></p><p><strong>Where to find Laura Tacho: </strong></p><p>• LinkedIn: <a href="https://www.linkedin.com/in/lauratacho/">https://www.linkedin.com/in/lauratacho/</a></p><p>• X: <a href="https://x.com/rhein_wein">https://x.com/rhein_wein</a></p><p>• Website: <a href="https://lauratacho.com/">https://lauratacho.com/</a></p><p>• Laura’s course (Measuring Engineering Performance and AI Impact): <a href="https://lauratacho.com/developer-productivity-metrics-course">https://lauratacho.com/developer-productivity-metrics-course</a></p><p><br></p><p><strong>In this episode, we cover:</strong></p><p>(00:00) Intro  </p><p>(01:01) An overview of Monzo Bank and Fabien’s role  </p><p>(02:05) Monzo’s careful, structured approach to AI experimentation  </p><p>(05:30) How Monzo’s AI journey began  </p><p>(06:26) Why Monzo chose a 
structured approach to experimentation and what criteria they used  </p><p>(09:21) How Monzo selected AI tools for experimentation  </p><p>(11:51) Why individual tool stipends don’t work for large, regulated organizations  </p><p>(15:32) How Monzo measures the impact of AI tools and uses the data  </p><p>(18:10) Why Monzo limits AI tool trials to small, focused cohorts  </p><p>(20:54) The phases of Monzo’s AI rollout and how learnings are shared across the organization  </p><p>(22:43) What Monzo’s data reveals about AI usage and spending  </p><p>(24:30) How Monzo balances AI budgeting with innovation  </p><p>(26:45) Results from DX’s spending poll and general advice on AI budgeting  </p><p>(28:03) What Monzo’s data shows about AI’s impact on engineering performance  </p><p>(29:50) The growing bottleneck in PR reviews and how Monzo is solving it with tenancies  </p><p>(33:54) How product managers and designers are using AI at Monzo  </p><p>(36:36) Fabien’s advice for moving the needle with AI adoption  </p><p>(38:42) The biggest changes coming next in AI engineering </p><p><br></p><p><strong>Referenced:</strong></p><ul><li><a href="https://monzo.com/">Monzo</a> </li><li><a href="https://go.dev/">The Go Programming Language</a></li><li><a href="https://www.swift.org/">Swift.org</a></li><li><a href="https://kotlinlang.org/">Kotlin</a></li><li><a href="https://code.visualstudio.com/docs/copilot/overview">GitHub Copilot in VS Code</a> </li><li><a href="https://cursor.com/">Cursor</a></li><li><a href="https://windsurf.com/">Windsurf</a></li><li><a href="https://www.claude.com/product/claude-code">Claude Code</a></li><li><a href="https://getdx.com/podcast/planning-2026-ai-tooling-budget/">Planning your 2026 AI tooling budget: guidance for engineering leaders</a></li></ul>]]>
      </content:encoded>
      <pubDate>Fri, 31 Oct 2025 06:00:00 -0600</pubDate>
      <author>DX</author>
      <enclosure url="https://media.transistor.fm/ee9bc51a/760576bf.mp3" length="39738689" type="audio/mpeg"/>
      <itunes:author>DX</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/ecqCfPtUOnW8bzlliTISWFv7nJV83DZm--do4eK8XMg/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS82YTli/YzA3ODdhZGQ0Mzg5/YmVhYjAyN2QwM2Zm/NjU5MC5wbmc.jpg"/>
      <itunes:duration>2479</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode of <em>Engineering Enablement,</em> host Laura Tacho talks with Fabien Deshayes, who leads multiple platform engineering teams at Monzo Bank. Fabien explains how Monzo is adopting AI responsibly within a highly regulated industry, balancing innovation with structure, control, and data-driven decision-making.</p><p><br>They discuss how Monzo runs structured AI trials, measures adoption and satisfaction, and uses metrics to guide investment and training. Fabien shares why the company moved from broad rollouts to small, focused cohorts, how they are addressing existing PR review bottlenecks that AI has intensified, and what they have learned from empowering product managers and designers to use AI tools directly.</p><p><br>He also offers insights into budgeting and experimentation, the results Monzo is seeing from AI-assisted engineering, and his outlook on what comes next, from agent orchestration to more seamless collaboration across roles.</p><p><strong>Where to find Fabien Deshayes: </strong></p><p>• LinkedIn: <a href="https://www.linkedin.com/in/fabiendeshayes">https://www.linkedin.com/in/fabiendeshayes</a></p><p><br></p><p><strong>Where to find Laura Tacho: </strong></p><p>• LinkedIn: <a href="https://www.linkedin.com/in/lauratacho/">https://www.linkedin.com/in/lauratacho/</a></p><p>• X: <a href="https://x.com/rhein_wein">https://x.com/rhein_wein</a></p><p>• Website: <a href="https://lauratacho.com/">https://lauratacho.com/</a></p><p>• Laura’s course (Measuring Engineering Performance and AI Impact): <a href="https://lauratacho.com/developer-productivity-metrics-course">https://lauratacho.com/developer-productivity-metrics-course</a></p><p><br></p><p><strong>In this episode, we cover:</strong></p><p>(00:00) Intro  </p><p>(01:01) An overview of Monzo Bank and Fabien’s role  </p><p>(02:05) Monzo’s careful, structured approach to AI experimentation  </p><p>(05:30) How Monzo’s AI journey began  </p><p>(06:26) Why Monzo chose a 
structured approach to experimentation and what criteria they used  </p><p>(09:21) How Monzo selected AI tools for experimentation  </p><p>(11:51) Why individual tool stipends don’t work for large, regulated organizations  </p><p>(15:32) How Monzo measures the impact of AI tools and uses the data  </p><p>(18:10) Why Monzo limits AI tool trials to small, focused cohorts  </p><p>(20:54) The phases of Monzo’s AI rollout and how learnings are shared across the organization  </p><p>(22:43) What Monzo’s data reveals about AI usage and spending  </p><p>(24:30) How Monzo balances AI budgeting with innovation  </p><p>(26:45) Results from DX’s spending poll and general advice on AI budgeting  </p><p>(28:03) What Monzo’s data shows about AI’s impact on engineering performance  </p><p>(29:50) The growing bottleneck in PR reviews and how Monzo is solving it with tenancies  </p><p>(33:54) How product managers and designers are using AI at Monzo  </p><p>(36:36) Fabien’s advice for moving the needle with AI adoption  </p><p>(38:42) The biggest changes coming next in AI engineering </p><p><br></p><p><strong>Referenced:</strong></p><ul><li><a href="https://monzo.com/">Monzo</a> </li><li><a href="https://go.dev/">The Go Programming Language</a></li><li><a href="https://www.swift.org/">Swift.org</a></li><li><a href="https://kotlinlang.org/">Kotlin</a></li><li><a href="https://code.visualstudio.com/docs/copilot/overview">GitHub Copilot in VS Code</a> </li><li><a href="https://cursor.com/">Cursor</a></li><li><a href="https://windsurf.com/">Windsurf</a></li><li><a href="https://www.claude.com/product/claude-code">Claude Code</a></li><li><a href="https://getdx.com/podcast/planning-2026-ai-tooling-budget/">Planning your 2026 AI tooling budget: guidance for engineering leaders</a></li></ul>]]>
      </itunes:summary>
      <itunes:keywords>Software Engineering, Developer Experience, Developer Productivity, Engineering Leadership</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.linkedin.com/in/abinoda/" img="https://img.transistorcdn.com/3JLLtbOaHGiInF_ixHI50J6hrbPvsmkptOyXbjqu2gw/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vMDNiMDZmMjkt/ZmQxMy00NmNlLWE5/YzEtOTAzODQ4YTE2/YjgzLzE2NjU0MzA2/ODktaW1hZ2UuanBn.jpg">Abi Noda</podcast:person>
    </item>
    <item>
      <title>Planning your 2026 AI tooling budget: guidance for engineering leaders</title>
      <itunes:episode>91</itunes:episode>
      <podcast:episode>91</podcast:episode>
      <itunes:title>Planning your 2026 AI tooling budget: guidance for engineering leaders</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">e1316594-54e7-45d1-a306-7a0d23e1333a</guid>
      <link>https://share.transistor.fm/s/e2980215</link>
      <description>
        <![CDATA[<p>In this episode of <em>Engineering Enablement</em>, Laura Tacho and Abi Noda discuss how engineering leaders can plan their 2026 AI budgets effectively amid rapid change and rising costs. Drawing on data from DX’s recent poll and industry benchmarks, they explore how much organizations should expect to spend per developer, how to allocate budgets across AI tools, and how to balance innovation with cost control.</p><p>Laura and Abi also share practical insights on building a multi-vendor strategy, evaluating ROI through the right metrics, and ensuring continuous measurement before and after adoption. They discuss how to communicate AI’s value to executives, avoid the trap of cost-cutting narratives, and invest in enablement and training to make adoption stick.</p><p><br></p><p><strong>Where to find Abi Noda:</strong></p><p>• LinkedIn: <a href="https://www.linkedin.com/in/abinoda">https://www.linkedin.com/in/abinoda</a>  </p><p>• Substack: ​​<a href="https://substack.com/@abinoda">https://substack.com/@abinoda</a>  </p><p><br></p><p><strong>Where to find Laura Tacho: </strong></p><p>• LinkedIn: <a href="https://www.linkedin.com/in/lauratacho/">https://www.linkedin.com/in/lauratacho/</a></p><p>• X: <a href="https://x.com/rhein_wein">https://x.com/rhein_wein</a></p><p>• Website: <a href="https://lauratacho.com/">https://lauratacho.com/</a></p><p>• Laura’s course (Measuring Engineering Performance and AI Impact): <a href="https://lauratacho.com/developer-productivity-metrics-course">https://lauratacho.com/developer-productivity-metrics-course</a></p><p><br></p><p><strong>In this episode, we cover:</strong></p><p>(00:00) Intro: Setting the stage for AI budgeting in 2026</p><p>(01:45) Results from DX’s AI spending poll and early trends</p><p>(03:30) How companies are currently spending and what to watch in 2026</p><p>(04:52) Why clear definitions for AI tools matter and how Laura and Abi think about them</p><p>(07:12) The entry point for 2026 AI 
tooling budgets and emerging spending patterns</p><p>(10:14) Why 2026 is the year to prove ROI on AI investments</p><p>(11:10) How organizations should approach AI budgeting and allocation</p><p>(15:08) Best practices for managing AI vendors and enterprise licensing</p><p>(17:02) How to define and choose metrics before and after adopting AI tools</p><p>(19:30) How to identify bottlenecks and AI use cases with the highest ROI</p><p>(21:58) Key considerations for AI budgeting </p><p>(25:10) Why AI investments are about competitiveness, not cost-cutting</p><p>(27:19) How to use the right language to build trust and executive buy-in</p><p>(28:18) Why training and enablement are essential parts of AI investment</p><p>(31:40) How AI add-ons may increase your tool costs</p><p>(32:47) Why custom and fine-tuned models aren’t relevant for most companies today</p><p>(34:00) The tradeoffs between stipend models and enterprise AI licenses</p><p><br></p><p><strong>Referenced:</strong></p><ul><li><a href="https://getdx.com/corefour">DX Core 4 Productivity Framework</a></li><li><a href="https://getdx.com/research/measuring-ai-code-assistants-and-agents/">Measuring AI code assistants and agents</a></li><li><a href="https://www.iconiqcapital.com/growth/reports/2025-state-of-ai">2025 State of AI Report: The Builder's Playbook</a></li><li><a href="https://github.com/features/copilot">GitHub Copilot · Your AI pair programmer</a></li><li><a href="https://cursor.com/">Cursor</a></li><li><a href="https://www.glean.com/">Glean</a></li><li><a href="https://www.claude.com/product/claude-code">Claude Code</a></li><li><a href="https://chatgpt.com/">ChatGPT</a></li><li><a href="https://windsurf.com/">Windsurf</a></li><li><a href="https://getdx.com/blog/dx-releases-integration-with-claude-code/">Track Claude Code adoption, impact, and ROI, directly in DX</a></li><li><a href="https://getdx.com/podcast/measuring-ai-code-assistants-ai-framework/">Measuring AI code assistants and agents with the AI 
Measurement Framework</a></li><li><a href="https://getdx.com/podcast/enterprise-wide-ai-tool-adoption/">Driving enterprise-wide AI tool adoption</a></li><li><a href="https://sentry.io/welcome/">Sentry</a></li><li><a href="https://poolside.ai/">Poolside</a></li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode of <em>Engineering Enablement</em>, Laura Tacho and Abi Noda discuss how engineering leaders can plan their 2026 AI budgets effectively amid rapid change and rising costs. Drawing on data from DX’s recent poll and industry benchmarks, they explore how much organizations should expect to spend per developer, how to allocate budgets across AI tools, and how to balance innovation with cost control.</p><p>Laura and Abi also share practical insights on building a multi-vendor strategy, evaluating ROI through the right metrics, and ensuring continuous measurement before and after adoption. They discuss how to communicate AI’s value to executives, avoid the trap of cost-cutting narratives, and invest in enablement and training to make adoption stick.</p><p><br></p><p><strong>Where to find Abi Noda:</strong></p><p>• LinkedIn: <a href="https://www.linkedin.com/in/abinoda">https://www.linkedin.com/in/abinoda</a>  </p><p>• Substack: ​​<a href="https://substack.com/@abinoda">https://substack.com/@abinoda</a>  </p><p><br></p><p><strong>Where to find Laura Tacho: </strong></p><p>• LinkedIn: <a href="https://www.linkedin.com/in/lauratacho/">https://www.linkedin.com/in/lauratacho/</a></p><p>• X: <a href="https://x.com/rhein_wein">https://x.com/rhein_wein</a></p><p>• Website: <a href="https://lauratacho.com/">https://lauratacho.com/</a></p><p>• Laura’s course (Measuring Engineering Performance and AI Impact): <a href="https://lauratacho.com/developer-productivity-metrics-course">https://lauratacho.com/developer-productivity-metrics-course</a></p><p><br></p><p><strong>In this episode, we cover:</strong></p><p>(00:00) Intro: Setting the stage for AI budgeting in 2026</p><p>(01:45) Results from DX’s AI spending poll and early trends</p><p>(03:30) How companies are currently spending and what to watch in 2026</p><p>(04:52) Why clear definitions for AI tools matter and how Laura and Abi think about them</p><p>(07:12) The entry point for 2026 AI 
tooling budgets and emerging spending patterns</p><p>(10:14) Why 2026 is the year to prove ROI on AI investments</p><p>(11:10) How organizations should approach AI budgeting and allocation</p><p>(15:08) Best practices for managing AI vendors and enterprise licensing</p><p>(17:02) How to define and choose metrics before and after adopting AI tools</p><p>(19:30) How to identify bottlenecks and AI use cases with the highest ROI</p><p>(21:58) Key considerations for AI budgeting </p><p>(25:10) Why AI investments are about competitiveness, not cost-cutting</p><p>(27:19) How to use the right language to build trust and executive buy-in</p><p>(28:18) Why training and enablement are essential parts of AI investment</p><p>(31:40) How AI add-ons may increase your tool costs</p><p>(32:47) Why custom and fine-tuned models aren’t relevant for most companies today</p><p>(34:00) The tradeoffs between stipend models and enterprise AI licenses</p><p><br></p><p><strong>Referenced:</strong></p><ul><li><a href="https://getdx.com/corefour">DX Core 4 Productivity Framework</a></li><li><a href="https://getdx.com/research/measuring-ai-code-assistants-and-agents/">Measuring AI code assistants and agents</a></li><li><a href="https://www.iconiqcapital.com/growth/reports/2025-state-of-ai">2025 State of AI Report: The Builder's Playbook</a></li><li><a href="https://github.com/features/copilot">GitHub Copilot · Your AI pair programmer</a></li><li><a href="https://cursor.com/">Cursor</a></li><li><a href="https://www.glean.com/">Glean</a></li><li><a href="https://www.claude.com/product/claude-code">Claude Code</a></li><li><a href="https://chatgpt.com/">ChatGPT</a></li><li><a href="https://windsurf.com/">Windsurf</a></li><li><a href="https://getdx.com/blog/dx-releases-integration-with-claude-code/">Track Claude Code adoption, impact, and ROI, directly in DX</a></li><li><a href="https://getdx.com/podcast/measuring-ai-code-assistants-ai-framework/">Measuring AI code assistants and agents with the AI 
Measurement Framework</a></li><li><a href="https://getdx.com/podcast/enterprise-wide-ai-tool-adoption/">Driving enterprise-wide AI tool adoption</a></li><li><a href="https://sentry.io/welcome/">Sentry</a></li><li><a href="https://poolside.ai/">Poolside</a></li></ul>]]>
      </content:encoded>
      <pubDate>Fri, 17 Oct 2025 07:00:00 -0600</pubDate>
      <author>DX</author>
      <enclosure url="https://media.transistor.fm/e2980215/3fe8bca6.mp3" length="37498019" type="audio/mpeg"/>
      <itunes:author>DX</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/PcSnqUo4VXPHKOeoTdY2qKeUPD52Y4x8aSjFl00Y75o/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS80NGNi/ZmJjNDk3ZjhjMGI5/ODQ2OTVkY2UxNTUy/ZTQyOC5wbmc.jpg"/>
      <itunes:duration>2339</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode of <em>Engineering Enablement</em>, Laura Tacho and Abi Noda discuss how engineering leaders can plan their 2026 AI budgets effectively amid rapid change and rising costs. Drawing on data from DX’s recent poll and industry benchmarks, they explore how much organizations should expect to spend per developer, how to allocate budgets across AI tools, and how to balance innovation with cost control.</p><p>Laura and Abi also share practical insights on building a multi-vendor strategy, evaluating ROI through the right metrics, and ensuring continuous measurement before and after adoption. They discuss how to communicate AI’s value to executives, avoid the trap of cost-cutting narratives, and invest in enablement and training to make adoption stick.</p><p><br></p><p><strong>Where to find Abi Noda:</strong></p><p>• LinkedIn: <a href="https://www.linkedin.com/in/abinoda">https://www.linkedin.com/in/abinoda</a>  </p><p>• Substack: ​​<a href="https://substack.com/@abinoda">https://substack.com/@abinoda</a>  </p><p><br></p><p><strong>Where to find Laura Tacho: </strong></p><p>• LinkedIn: <a href="https://www.linkedin.com/in/lauratacho/">https://www.linkedin.com/in/lauratacho/</a></p><p>• X: <a href="https://x.com/rhein_wein">https://x.com/rhein_wein</a></p><p>• Website: <a href="https://lauratacho.com/">https://lauratacho.com/</a></p><p>• Laura’s course (Measuring Engineering Performance and AI Impact): <a href="https://lauratacho.com/developer-productivity-metrics-course">https://lauratacho.com/developer-productivity-metrics-course</a></p><p><br></p><p><strong>In this episode, we cover:</strong></p><p>(00:00) Intro: Setting the stage for AI budgeting in 2026</p><p>(01:45) Results from DX’s AI spending poll and early trends</p><p>(03:30) How companies are currently spending and what to watch in 2026</p><p>(04:52) Why clear definitions for AI tools matter and how Laura and Abi think about them</p><p>(07:12) The entry point for 2026 AI 
tooling budgets and emerging spending patterns</p><p>(10:14) Why 2026 is the year to prove ROI on AI investments</p><p>(11:10) How organizations should approach AI budgeting and allocation</p><p>(15:08) Best practices for managing AI vendors and enterprise licensing</p><p>(17:02) How to define and choose metrics before and after adopting AI tools</p><p>(19:30) How to identify bottlenecks and AI use cases with the highest ROI</p><p>(21:58) Key considerations for AI budgeting </p><p>(25:10) Why AI investments are about competitiveness, not cost-cutting</p><p>(27:19) How to use the right language to build trust and executive buy-in</p><p>(28:18) Why training and enablement are essential parts of AI investment</p><p>(31:40) How AI add-ons may increase your tool costs</p><p>(32:47) Why custom and fine-tuned models aren’t relevant for most companies today</p><p>(34:00) The tradeoffs between stipend models and enterprise AI licenses</p><p><br></p><p><strong>Referenced:</strong></p><ul><li><a href="https://getdx.com/corefour">DX Core 4 Productivity Framework</a></li><li><a href="https://getdx.com/research/measuring-ai-code-assistants-and-agents/">Measuring AI code assistants and agents</a></li><li><a href="https://www.iconiqcapital.com/growth/reports/2025-state-of-ai">2025 State of AI Report: The Builder's Playbook</a></li><li><a href="https://github.com/features/copilot">GitHub Copilot · Your AI pair programmer</a></li><li><a href="https://cursor.com/">Cursor</a></li><li><a href="https://www.glean.com/">Glean</a></li><li><a href="https://www.claude.com/product/claude-code">Claude Code</a></li><li><a href="https://chatgpt.com/">ChatGPT</a></li><li><a href="https://windsurf.com/">Windsurf</a></li><li><a href="https://getdx.com/blog/dx-releases-integration-with-claude-code/">Track Claude Code adoption, impact, and ROI, directly in DX</a></li><li><a href="https://getdx.com/podcast/measuring-ai-code-assistants-ai-framework/">Measuring AI code assistants and agents with the AI 
Measurement Framework</a></li><li><a href="https://getdx.com/podcast/enterprise-wide-ai-tool-adoption/">Driving enterprise-wide AI tool adoption</a></li><li><a href="https://sentry.io/welcome/">Sentry</a></li><li><a href="https://poolside.ai/">Poolside</a></li></ul>]]>
      </itunes:summary>
      <itunes:keywords>Software Engineering, Developer Experience, Developer Productivity, Engineering Leadership</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.linkedin.com/in/abinoda/" img="https://img.transistorcdn.com/3JLLtbOaHGiInF_ixHI50J6hrbPvsmkptOyXbjqu2gw/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vMDNiMDZmMjkt/ZmQxMy00NmNlLWE5/YzEtOTAzODQ4YTE2/YjgzLzE2NjU0MzA2/ODktaW1hZ2UuanBn.jpg">Abi Noda</podcast:person>
    </item>
    <item>
      <title>The evolving role of DevProd teams in the AI era</title>
      <itunes:episode>90</itunes:episode>
      <podcast:episode>90</podcast:episode>
      <itunes:title>The evolving role of DevProd teams in the AI era</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">f84d030a-de0a-4e38-ab71-96e4734d3fa8</guid>
      <link>https://share.transistor.fm/s/b09ef3b3</link>
      <description>
        <![CDATA[<p>CEO Abi Noda is joined by DX CTO Laura Tacho to discuss the evolving role of Platform and DevProd teams in the AI era. Together, they unpack how AI is reshaping platform responsibilities, from evaluation and rollout to measurement, tool standardization, and guardrails. They explore why fundamentals like documentation and feedback loops matter more than ever for both developers and AI agents. They also share insights on reducing tool sprawl, hardening systems for higher throughput, and leveraging AI to tackle tech debt, modernize legacy code, and improve workflows across the SDLC.</p><p><strong>Where to find Abi Noda:</strong></p><p>• LinkedIn: <a href="https://www.linkedin.com/in/abinoda">https://www.linkedin.com/in/abinoda</a></p><p>• Substack: <a href="https://substack.com/@abinoda">https://substack.com/@abinoda</a></p><p><br></p><p><strong>Where to find Laura Tacho:</strong></p><p>• LinkedIn: <a href="https://www.linkedin.com/in/lauratacho/">https://www.linkedin.com/in/lauratacho/</a></p><p>• X: <a href="https://x.com/rhein_wein">https://x.com/rhein_wein</a></p><p>• Website: <a href="https://lauratacho.com/">https://lauratacho.com/</a></p><p>• Laura’s course (Measuring Engineering Performance and AI Impact): <a href="https://lauratacho.com/developer-productivity-metrics-course">https://lauratacho.com/developer-productivity-metrics-course</a></p><p><br></p><p><strong>In this episode, we cover:</strong></p><p>(00:00) Intro: Why platform teams need to evolve</p><p>(02:34) The challenge of defining platform teams and how AI is changing expectations</p><p>(04:44) Why evaluating and rolling out AI tools is becoming a core platform responsibility</p><p>(07:14) Why platform teams need solid measurement frameworks to evaluate AI tools</p><p>(08:56) Why platform leaders should champion education and advocacy on measurement</p><p>(11:20) How AI code stresses pipelines and why platform teams must harden systems</p><p>(12:24) Why platform teams must go beyond training to standardize tools and create workflows</p><p>(14:31) How platform teams control tool sprawl</p><p>(16:22) Why platform teams need strong guardrails and safety checks</p><p>(18:41) The importance of standardizing tools and knowledge</p><p>(19:44) The opportunity for platform teams to apply AI at scale across the organization</p><p>(23:40) Quick recap of the key points so far</p><p>(24:33) How AI helps modernize legacy code and handle migrations</p><p>(25:45) Why focusing on fundamentals benefits both developers and AI agents</p><p>(27:42) Identifying SDLC bottlenecks beyond AI code generation</p><p>(30:08) Techniques for optimizing legacy code bases</p><p>(32:47) How AI helps tackle tech debt and large-scale code migrations</p><p>(35:40) Tools across the SDLC</p><p><br></p><p><strong>Referenced:</strong></p><ul><li><a href="https://getdx.com/corefour">DX Core 4 Productivity Framework</a></li><li><a href="https://getdx.com/whitepaper/ai-measurement-framework/">Measuring AI code assistants and agents</a></li><li><a href="https://www.linkedin.com/posts/abinoda_many-platform-teams-are-stuck-optimizing-activity-7356687949532483586-W-zG?utm_source=share&amp;utm_medium=member_desktop&amp;rcm=ACoAABf37PYBgozFf00ihr4fkqjRtMnFajHkQ5E">Abi Noda's LinkedIn post</a></li><li><a href="https://getdx.com/podcast/measuring-ai-code-assistants-ai-framework/">Measuring AI code assistants and agents with the AI Measurement Framework</a></li><li><a href="https://getdx.com/blog/space-metrics/">The SPACE framework: A comprehensive guide to developer productivity</a></li><li><a href="https://docs.anthropic.com/en/docs/claude-code/common-workflows">Common workflows - Anthropic</a></li><li><a href="https://itrevolution.com/product/enterprise-tech-leadership-summit-las-vegas/">Enterprise Tech Leadership Summit Las Vegas 2025</a></li><li><a href="https://getdx.com/podcast/enterprise-wide-ai-tool-adoption/">Driving enterprise-wide AI tool adoption with Bruno Passos</a></li><li><a href="https://medium.com/airbnb-engineering/accelerating-large-scale-test-migration-with-llms-9565c208023b">Accelerating Large-Scale Test Migration with LLMs | by Charles Covey-Brandt | The Airbnb Tech Blog | Medium</a></li><li><a href="https://www.linkedin.com/in/justinreock/">Justin Reock - DX | LinkedIn</a></li><li><a href="https://www.businessinsider.com/devgen-ai-tool-saved-morgan-stanley-280-000-hours-jobs-2025-7">A New Tool Saved Morgan Stanley More Than 280,000 Hours This Year - Business Insider</a></li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>CEO Abi Noda is joined by DX CTO Laura Tacho to discuss the evolving role of Platform and DevProd teams in the AI era. Together, they unpack how AI is reshaping platform responsibilities, from evaluation and rollout to measurement, tool standardization, and guardrails. They explore why fundamentals like documentation and feedback loops matter more than ever for both developers and AI agents. They also share insights on reducing tool sprawl, hardening systems for higher throughput, and leveraging AI to tackle tech debt, modernize legacy code, and improve workflows across the SDLC.</p><p><strong>Where to find Abi Noda:</strong></p><p>• LinkedIn: <a href="https://www.linkedin.com/in/abinoda">https://www.linkedin.com/in/abinoda</a></p><p>• Substack: <a href="https://substack.com/@abinoda">https://substack.com/@abinoda</a></p><p><br></p><p><strong>Where to find Laura Tacho:</strong></p><p>• LinkedIn: <a href="https://www.linkedin.com/in/lauratacho/">https://www.linkedin.com/in/lauratacho/</a></p><p>• X: <a href="https://x.com/rhein_wein">https://x.com/rhein_wein</a></p><p>• Website: <a href="https://lauratacho.com/">https://lauratacho.com/</a></p><p>• Laura’s course (Measuring Engineering Performance and AI Impact): <a href="https://lauratacho.com/developer-productivity-metrics-course">https://lauratacho.com/developer-productivity-metrics-course</a></p><p><br></p><p><strong>In this episode, we cover:</strong></p><p>(00:00) Intro: Why platform teams need to evolve</p><p>(02:34) The challenge of defining platform teams and how AI is changing expectations</p><p>(04:44) Why evaluating and rolling out AI tools is becoming a core platform responsibility</p><p>(07:14) Why platform teams need solid measurement frameworks to evaluate AI tools</p><p>(08:56) Why platform leaders should champion education and advocacy on measurement</p><p>(11:20) How AI code stresses pipelines and why platform teams must harden systems</p><p>(12:24) Why platform teams must go beyond training to standardize tools and create workflows</p><p>(14:31) How platform teams control tool sprawl</p><p>(16:22) Why platform teams need strong guardrails and safety checks</p><p>(18:41) The importance of standardizing tools and knowledge</p><p>(19:44) The opportunity for platform teams to apply AI at scale across the organization</p><p>(23:40) Quick recap of the key points so far</p><p>(24:33) How AI helps modernize legacy code and handle migrations</p><p>(25:45) Why focusing on fundamentals benefits both developers and AI agents</p><p>(27:42) Identifying SDLC bottlenecks beyond AI code generation</p><p>(30:08) Techniques for optimizing legacy code bases</p><p>(32:47) How AI helps tackle tech debt and large-scale code migrations</p><p>(35:40) Tools across the SDLC</p><p><br></p><p><strong>Referenced:</strong></p><ul><li><a href="https://getdx.com/corefour">DX Core 4 Productivity Framework</a></li><li><a href="https://getdx.com/whitepaper/ai-measurement-framework/">Measuring AI code assistants and agents</a></li><li><a href="https://www.linkedin.com/posts/abinoda_many-platform-teams-are-stuck-optimizing-activity-7356687949532483586-W-zG?utm_source=share&amp;utm_medium=member_desktop&amp;rcm=ACoAABf37PYBgozFf00ihr4fkqjRtMnFajHkQ5E">Abi Noda's LinkedIn post</a></li><li><a href="https://getdx.com/podcast/measuring-ai-code-assistants-ai-framework/">Measuring AI code assistants and agents with the AI Measurement Framework</a></li><li><a href="https://getdx.com/blog/space-metrics/">The SPACE framework: A comprehensive guide to developer productivity</a></li><li><a href="https://docs.anthropic.com/en/docs/claude-code/common-workflows">Common workflows - Anthropic</a></li><li><a href="https://itrevolution.com/product/enterprise-tech-leadership-summit-las-vegas/">Enterprise Tech Leadership Summit Las Vegas 2025</a></li><li><a href="https://getdx.com/podcast/enterprise-wide-ai-tool-adoption/">Driving enterprise-wide AI tool adoption with Bruno Passos</a></li><li><a href="https://medium.com/airbnb-engineering/accelerating-large-scale-test-migration-with-llms-9565c208023b">Accelerating Large-Scale Test Migration with LLMs | by Charles Covey-Brandt | The Airbnb Tech Blog | Medium</a></li><li><a href="https://www.linkedin.com/in/justinreock/">Justin Reock - DX | LinkedIn</a></li><li><a href="https://www.businessinsider.com/devgen-ai-tool-saved-morgan-stanley-280-000-hours-jobs-2025-7">A New Tool Saved Morgan Stanley More Than 280,000 Hours This Year - Business Insider</a></li></ul>]]>
      </content:encoded>
      <pubDate>Fri, 26 Sep 2025 06:00:00 -0600</pubDate>
      <author>DX</author>
      <enclosure url="https://media.transistor.fm/b09ef3b3/36766927.mp3" length="35763076" type="audio/mpeg"/>
      <itunes:author>DX</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/pXP_O0Fe7zD3NFzttczPWA6nnMVuQh3cA-wG6Yz02kw/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9hZWI0/ZjE1ODA0ZWJlNjJl/MGZjOWY3YzVjNWUw/NWY3Yi5wbmc.jpg"/>
      <itunes:duration>2231</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>CEO Abi Noda is joined by DX CTO Laura Tacho to discuss the evolving role of Platform and DevProd teams in the AI era. Together, they unpack how AI is reshaping platform responsibilities, from evaluation and rollout to measurement, tool standardization, and guardrails. They explore why fundamentals like documentation and feedback loops matter more than ever for both developers and AI agents. They also share insights on reducing tool sprawl, hardening systems for higher throughput, and leveraging AI to tackle tech debt, modernize legacy code, and improve workflows across the SDLC.</p><p><strong>Where to find Abi Noda:</strong></p><p>• LinkedIn: <a href="https://www.linkedin.com/in/abinoda">https://www.linkedin.com/in/abinoda</a></p><p>• Substack: <a href="https://substack.com/@abinoda">https://substack.com/@abinoda</a></p><p><br></p><p><strong>Where to find Laura Tacho:</strong></p><p>• LinkedIn: <a href="https://www.linkedin.com/in/lauratacho/">https://www.linkedin.com/in/lauratacho/</a></p><p>• X: <a href="https://x.com/rhein_wein">https://x.com/rhein_wein</a></p><p>• Website: <a href="https://lauratacho.com/">https://lauratacho.com/</a></p><p>• Laura’s course (Measuring Engineering Performance and AI Impact): <a href="https://lauratacho.com/developer-productivity-metrics-course">https://lauratacho.com/developer-productivity-metrics-course</a></p><p><br></p><p><strong>In this episode, we cover:</strong></p><p>(00:00) Intro: Why platform teams need to evolve</p><p>(02:34) The challenge of defining platform teams and how AI is changing expectations</p><p>(04:44) Why evaluating and rolling out AI tools is becoming a core platform responsibility</p><p>(07:14) Why platform teams need solid measurement frameworks to evaluate AI tools</p><p>(08:56) Why platform leaders should champion education and advocacy on measurement</p><p>(11:20) How AI code stresses pipelines and why platform teams must harden systems</p><p>(12:24) Why platform teams must go beyond training to standardize tools and create workflows</p><p>(14:31) How platform teams control tool sprawl</p><p>(16:22) Why platform teams need strong guardrails and safety checks</p><p>(18:41) The importance of standardizing tools and knowledge</p><p>(19:44) The opportunity for platform teams to apply AI at scale across the organization</p><p>(23:40) Quick recap of the key points so far</p><p>(24:33) How AI helps modernize legacy code and handle migrations</p><p>(25:45) Why focusing on fundamentals benefits both developers and AI agents</p><p>(27:42) Identifying SDLC bottlenecks beyond AI code generation</p><p>(30:08) Techniques for optimizing legacy code bases</p><p>(32:47) How AI helps tackle tech debt and large-scale code migrations</p><p>(35:40) Tools across the SDLC</p><p><br></p><p><strong>Referenced:</strong></p><ul><li><a href="https://getdx.com/corefour">DX Core 4 Productivity Framework</a></li><li><a href="https://getdx.com/whitepaper/ai-measurement-framework/">Measuring AI code assistants and agents</a></li><li><a href="https://www.linkedin.com/posts/abinoda_many-platform-teams-are-stuck-optimizing-activity-7356687949532483586-W-zG?utm_source=share&amp;utm_medium=member_desktop&amp;rcm=ACoAABf37PYBgozFf00ihr4fkqjRtMnFajHkQ5E">Abi Noda's LinkedIn post</a></li><li><a href="https://getdx.com/podcast/measuring-ai-code-assistants-ai-framework/">Measuring AI code assistants and agents with the AI Measurement Framework</a></li><li><a href="https://getdx.com/blog/space-metrics/">The SPACE framework: A comprehensive guide to developer productivity</a></li><li><a href="https://docs.anthropic.com/en/docs/claude-code/common-workflows">Common workflows - Anthropic</a></li><li><a href="https://itrevolution.com/product/enterprise-tech-leadership-summit-las-vegas/">Enterprise Tech Leadership Summit Las Vegas 2025</a></li><li><a href="https://getdx.com/podcast/enterprise-wide-ai-tool-adoption/">Driving enterprise-wide AI tool adoption with Bruno Passos</a></li><li><a href="https://medium.com/airbnb-engineering/accelerating-large-scale-test-migration-with-llms-9565c208023b">Accelerating Large-Scale Test Migration with LLMs | by Charles Covey-Brandt | The Airbnb Tech Blog | Medium</a></li><li><a href="https://www.linkedin.com/in/justinreock/">Justin Reock - DX | LinkedIn</a></li><li><a href="https://www.businessinsider.com/devgen-ai-tool-saved-morgan-stanley-280-000-hours-jobs-2025-7">A New Tool Saved Morgan Stanley More Than 280,000 Hours This Year - Business Insider</a></li></ul>]]>
      </itunes:summary>
      <itunes:keywords>Software Engineering, Developer Experience, Developer Productivity, Engineering Leadership</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.linkedin.com/in/abinoda/" img="https://img.transistorcdn.com/3JLLtbOaHGiInF_ixHI50J6hrbPvsmkptOyXbjqu2gw/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vMDNiMDZmMjkt/ZmQxMy00NmNlLWE5/YzEtOTAzODQ4YTE2/YjgzLzE2NjU0MzA2/ODktaW1hZ2UuanBn.jpg">Abi Noda</podcast:person>
    </item>
    <item>
      <title>Lessons from Twilio’s multi-year platform consolidation</title>
      <itunes:episode>89</itunes:episode>
      <podcast:episode>89</podcast:episode>
      <itunes:title>Lessons from Twilio’s multi-year platform consolidation</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">d3296f7f-4e07-4783-8462-d30d59625ace</guid>
      <link>https://share.transistor.fm/s/4069bd5a</link>
      <description>
        <![CDATA[<p>In this episode, host Laura Tacho speaks with Jesse Adametz, Senior Engineering Leader on the Developer Platform at Twilio. Jesse is leading Twilio’s multi-year platform consolidation, unifying tech stacks across large acquisitions and driving migrations at enterprise scale. He discusses platform adoption, the limits of Kubernetes, and how Twilio balances modernization with pragmatism. The conversation also explores treating developer experience as a product, offering “change as a service,” and Twilio’s evolving approach to AI adoption and platform support.</p><p><strong>Where to find Jesse Adametz: </strong></p><p>• LinkedIn: <a href="https://www.linkedin.com/in/jesseadametz/">https://www.linkedin.com/in/jesseadametz/</a></p><p>• X: <a href="https://x.com/jesseadametz">https://x.com/jesseadametz</a></p><p>• Website: <a href="https://www.jesseadametz.com/">https://www.jesseadametz.com/</a></p><p><br></p><p><strong>Where to find Laura Tacho:</strong></p><p>• LinkedIn: <a href="https://www.linkedin.com/in/lauratacho/">https://www.linkedin.com/in/lauratacho/</a></p><p>• X: <a href="https://x.com/rhein_wein">https://x.com/rhein_wein</a></p><p>• Website: <a href="https://lauratacho.com/">https://lauratacho.com/</a></p><p>• Laura’s course (Measuring Engineering Performance and AI Impact) <a href="https://lauratacho.com/developer-productivity-metrics-course">https://lauratacho.com/developer-productivity-metrics-course</a></p><p><br></p><p><strong>In this episode, we cover:</strong></p><p>(00:00) Intro</p><p>(01:30) Jesse’s background and how he ended up at Twilio</p><p>(04:00) What SRE teaches leaders and ICs</p><p>(06:06) Where Twilio started the post-acquisition integration</p><p>(08:22) Why platform migrations can’t follow a straight-line plan</p><p>(10:05) How Twilio balances multiple strategies for migrations</p><p>(12:30) The human side of change: advocacy, training, and alignment</p><p>(17:46) Treating developer experience as a first-class 
product</p><p>(21:40) What “change as a service” looks like in practice</p><p>(24:57) A mandateless approach: creating voluntary adoption through value</p><p>(28:50) How Twilio demonstrates value with metrics and reviews</p><p>(30:41) Why Kubernetes wasn’t the right fit for all Twilio workloads </p><p>(36:12) How Twilio decides when to expose complexity</p><p>(38:23) Lessons from Kubernetes hype and how AI demands more experimentation</p><p>(44:48) Where AI fits into Twilio’s platform strategy</p><p>(49:45) How guilds fill needs the platform team hasn’t yet met</p><p>(51:17) The future of platform in centralizing knowledge and standards</p><p>(54:32) How Twilio evaluates tools for fit, pricing, and reliability </p><p>(57:53) Where Twilio applies AI in reliability, and where Jesse is skeptical</p><p>(59:26) Laura’s vibe-coded side project built on Twilio</p><p>(1:01:11) How external lessons shape Twilio’s approach to platform support and docs</p><p><br></p><p><strong>Referenced:</strong></p><ul><li><a href="https://getdx.com/whitepaper/ai-measurement-framework">The AI Measurement Framework</a></li><li><a href="https://www.experian.com/">Experian</a></li><li><a href="https://en.wikipedia.org/wiki/Transact-SQL">Transact-SQL - Wikipedia</a></li><li><a href="https://www.twilio.com/">Twilio</a></li><li><a href="https://kubernetes.io/">Kubernetes</a></li><li><a href="http://copilot.microsoft.com">Copilot</a></li><li><a href="https://www.anthropic.com/claude-code">Claude Code</a></li><li><a href="https://windsurf.com/">Windsurf</a></li><li><a href="https://cursor.com/">Cursor</a></li><li><a href="https://aws.amazon.com/bedrock/">Bedrock</a></li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode, host Laura Tacho speaks with Jesse Adametz, Senior Engineering Leader on the Developer Platform at Twilio. Jesse is leading Twilio’s multi-year platform consolidation, unifying tech stacks across large acquisitions and driving migrations at enterprise scale. He discusses platform adoption, the limits of Kubernetes, and how Twilio balances modernization with pragmatism. The conversation also explores treating developer experience as a product, offering “change as a service,” and Twilio’s evolving approach to AI adoption and platform support.</p><p><strong>Where to find Jesse Adametz: </strong></p><p>• LinkedIn: <a href="https://www.linkedin.com/in/jesseadametz/">https://www.linkedin.com/in/jesseadametz/</a></p><p>• X: <a href="https://x.com/jesseadametz">https://x.com/jesseadametz</a></p><p>• Website: <a href="https://www.jesseadametz.com/">https://www.jesseadametz.com/</a></p><p><br></p><p><strong>Where to find Laura Tacho:</strong></p><p>• LinkedIn: <a href="https://www.linkedin.com/in/lauratacho/">https://www.linkedin.com/in/lauratacho/</a></p><p>• X: <a href="https://x.com/rhein_wein">https://x.com/rhein_wein</a></p><p>• Website: <a href="https://lauratacho.com/">https://lauratacho.com/</a></p><p>• Laura’s course (Measuring Engineering Performance and AI Impact) <a href="https://lauratacho.com/developer-productivity-metrics-course">https://lauratacho.com/developer-productivity-metrics-course</a></p><p><br></p><p><strong>In this episode, we cover:</strong></p><p>(00:00) Intro</p><p>(01:30) Jesse’s background and how he ended up at Twilio</p><p>(04:00) What SRE teaches leaders and ICs</p><p>(06:06) Where Twilio started the post-acquisition integration</p><p>(08:22) Why platform migrations can’t follow a straight-line plan</p><p>(10:05) How Twilio balances multiple strategies for migrations</p><p>(12:30) The human side of change: advocacy, training, and alignment</p><p>(17:46) Treating developer experience as a first-class 
product</p><p>(21:40) What “change as a service” looks like in practice</p><p>(24:57) A mandateless approach: creating voluntary adoption through value</p><p>(28:50) How Twilio demonstrates value with metrics and reviews</p><p>(30:41) Why Kubernetes wasn’t the right fit for all Twilio workloads </p><p>(36:12) How Twilio decides when to expose complexity</p><p>(38:23) Lessons from Kubernetes hype and how AI demands more experimentation</p><p>(44:48) Where AI fits into Twilio’s platform strategy</p><p>(49:45) How guilds fill needs the platform team hasn’t yet met</p><p>(51:17) The future of platform in centralizing knowledge and standards</p><p>(54:32) How Twilio evaluates tools for fit, pricing, and reliability </p><p>(57:53) Where Twilio applies AI in reliability, and where Jesse is skeptical</p><p>(59:26) Laura’s vibe-coded side project built on Twilio</p><p>(1:01:11) How external lessons shape Twilio’s approach to platform support and docs</p><p><br></p><p><strong>Referenced:</strong></p><ul><li><a href="https://getdx.com/whitepaper/ai-measurement-framework">The AI Measurement Framework</a></li><li><a href="https://www.experian.com/">Experian</a></li><li><a href="https://en.wikipedia.org/wiki/Transact-SQL">Transact-SQL - Wikipedia</a></li><li><a href="https://www.twilio.com/">Twilio</a></li><li><a href="https://kubernetes.io/">Kubernetes</a></li><li><a href="http://copilot.microsoft.com">Copilot</a></li><li><a href="https://www.anthropic.com/claude-code">Claude Code</a></li><li><a href="https://windsurf.com/">Windsurf</a></li><li><a href="https://cursor.com/">Cursor</a></li><li><a href="https://aws.amazon.com/bedrock/">Bedrock</a></li></ul>]]>
      </content:encoded>
      <pubDate>Fri, 12 Sep 2025 06:45:00 -0600</pubDate>
      <author>DX</author>
      <enclosure url="https://media.transistor.fm/4069bd5a/59ea9370.mp3" length="63672717" type="audio/mpeg"/>
      <itunes:author>DX</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/y_6W1FJstKp0pD25VuSX4xfknJcz5HayR2bHYm3rKlE/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS84N2Mx/MGIwNTk4YzVlNjlj/YThmODgxM2VmZmY0/NDQwNi5wbmc.jpg"/>
      <itunes:duration>3975</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode, host Laura Tacho speaks with Jesse Adametz, Senior Engineering Leader on the Developer Platform at Twilio. Jesse is leading Twilio’s multi-year platform consolidation, unifying tech stacks across large acquisitions and driving migrations at enterprise scale. He discusses platform adoption, the limits of Kubernetes, and how Twilio balances modernization with pragmatism. The conversation also explores treating developer experience as a product, offering “change as a service,” and Twilio’s evolving approach to AI adoption and platform support.</p><p><strong>Where to find Jesse Adametz: </strong></p><p>• LinkedIn: <a href="https://www.linkedin.com/in/jesseadametz/">https://www.linkedin.com/in/jesseadametz/</a></p><p>• X: <a href="https://x.com/jesseadametz">https://x.com/jesseadametz</a></p><p>• Website: <a href="https://www.jesseadametz.com/">https://www.jesseadametz.com/</a></p><p><br></p><p><strong>Where to find Laura Tacho:</strong></p><p>• LinkedIn: <a href="https://www.linkedin.com/in/lauratacho/">https://www.linkedin.com/in/lauratacho/</a></p><p>• X: <a href="https://x.com/rhein_wein">https://x.com/rhein_wein</a></p><p>• Website: <a href="https://lauratacho.com/">https://lauratacho.com/</a></p><p>• Laura’s course (Measuring Engineering Performance and AI Impact) <a href="https://lauratacho.com/developer-productivity-metrics-course">https://lauratacho.com/developer-productivity-metrics-course</a></p><p><br></p><p><strong>In this episode, we cover:</strong></p><p>(00:00) Intro</p><p>(01:30) Jesse’s background and how he ended up at Twilio</p><p>(04:00) What SRE teaches leaders and ICs</p><p>(06:06) Where Twilio started the post-acquisition integration</p><p>(08:22) Why platform migrations can’t follow a straight-line plan</p><p>(10:05) How Twilio balances multiple strategies for migrations</p><p>(12:30) The human side of change: advocacy, training, and alignment</p><p>(17:46) Treating developer experience as a first-class 
product</p><p>(21:40) What “change as a service” looks like in practice</p><p>(24:57) A mandateless approach: creating voluntary adoption through value</p><p>(28:50) How Twilio demonstrates value with metrics and reviews</p><p>(30:41) Why Kubernetes wasn’t the right fit for all Twilio workloads </p><p>(36:12) How Twilio decides when to expose complexity</p><p>(38:23) Lessons from Kubernetes hype and how AI demands more experimentation</p><p>(44:48) Where AI fits into Twilio’s platform strategy</p><p>(49:45) How guilds fill needs the platform team hasn’t yet met</p><p>(51:17) The future of platform in centralizing knowledge and standards</p><p>(54:32) How Twilio evaluates tools for fit, pricing, and reliability </p><p>(57:53) Where Twilio applies AI in reliability, and where Jesse is skeptical</p><p>(59:26) Laura’s vibe-coded side project built on Twilio</p><p>(1:01:11) How external lessons shape Twilio’s approach to platform support and docs</p><p><br></p><p><strong>Referenced:</strong></p><ul><li><a href="https://getdx.com/whitepaper/ai-measurement-framework">The AI Measurement Framework</a></li><li><a href="https://www.experian.com/">Experian</a></li><li><a href="https://en.wikipedia.org/wiki/Transact-SQL">Transact-SQL - Wikipedia</a></li><li><a href="https://www.twilio.com/">Twilio</a></li><li><a href="https://kubernetes.io/">Kubernetes</a></li><li><a href="http://copilot.microsoft.com">Copilot</a></li><li><a href="https://www.anthropic.com/claude-code">Claude Code</a></li><li><a href="https://windsurf.com/">Windsurf</a></li><li><a href="https://cursor.com/">Cursor</a></li><li><a href="https://aws.amazon.com/bedrock/">Bedrock</a></li></ul>]]>
      </itunes:summary>
      <itunes:keywords>Software Engineering, Developer Experience, Developer Productivity, Engineering Leadership</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.linkedin.com/in/abinoda/" img="https://img.transistorcdn.com/3JLLtbOaHGiInF_ixHI50J6hrbPvsmkptOyXbjqu2gw/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vMDNiMDZmMjkt/ZmQxMy00NmNlLWE5/YzEtOTAzODQ4YTE2/YjgzLzE2NjU0MzA2/ODktaW1hZ2UuanBn.jpg">Abi Noda</podcast:person>
    </item>
    <item>
      <title>Driving enterprise-wide AI tool adoption</title>
      <itunes:episode>88</itunes:episode>
      <podcast:episode>88</podcast:episode>
      <itunes:title>Driving enterprise-wide AI tool adoption</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">6e7b3036-8707-404a-8802-30e1d01a271e</guid>
      <link>https://share.transistor.fm/s/0a4501a8</link>
      <description>
        <![CDATA[<p>In this episode of <em>Engineering Enablement</em>, host Laura Tacho talks with Bruno Passos, Product Lead for Developer Experience at Booking.com, about how the company is rolling out AI tools across a 3,000-person engineering team.</p><p><br>Bruno shares how Booking.com set ambitious innovation goals, why cultural change mattered as much as technology, and the education practices that turned hesitant developers into daily users. He also reflects on the early barriers, from low adoption and knowledge gaps to procurement hurdles, and explains the interventions that worked, including learning paths, hackathon-style workshops, Slack communities, and centralized procurement. The result is that Booking.com now sits in the top 25 percent of companies for AI adoption.</p><p><strong>Where to find Bruno Passos:</strong></p><p>• LinkedIn: <a href="https://www.linkedin.com/in/brpassos/">https://www.linkedin.com/in/brpassos/</a></p><p>• X: <a href="https://x.com/brunopassos">https://x.com/brunopassos</a></p><p><br></p><p><strong>Where to find Laura Tacho:</strong></p><p>• LinkedIn: <a href="https://www.linkedin.com/in/lauratacho/">https://www.linkedin.com/in/lauratacho/</a></p><p>• X: <a href="https://x.com/rhein_wein">https://x.com/rhein_wein</a></p><p>• Website: <a href="https://lauratacho.com/">https://lauratacho.com/</a></p><p>• Laura’s course (Measuring Engineering Performance and AI Impact) <a href="https://lauratacho.com/developer-productivity-metrics-course">https://lauratacho.com/developer-productivity-metrics-course</a></p><p><br></p><p><strong>In this episode, we cover:</strong></p><p>(00:00) Intro</p><p>(01:09) Bruno’s role at <a href="http://booking.com">Booking.com</a> and an overview of the business </p><p>(02:19) <a href="http://booking.com">Booking.com</a>’s goals when introducing AI tooling</p><p>(03:26) Why <a href="http://booking.com">Booking.com</a> made such an ambitious innovation ratio goal </p><p>(06:46) The beginning of <a 
href="http://booking.com">Booking.com</a>’s journey with AI</p><p>(08:54) Why the initial adoption of Cody was low</p><p>(13:17) How education and enablement fueled adoption</p><p>(15:48) The importance of a top-down cultural change for AI adoption</p><p>(17:38) The ongoing journey of determining the right metrics</p><p>(21:44) Measuring the longer-term impact of AI </p><p>(27:04) How Booking.com solved internal bottlenecks to testing new tools</p><p>(32:10) Booking.com’s framework for evaluating new tools</p><p>(35:50) The state of adoption at Booking.com and efforts to expand AI use</p><p>(37:07) What’s still undetermined about AI’s impact on PR/MR quality</p><p>(39:48) How Booking.com is addressing lagging adoption and monitoring churn</p><p>(43:24) How Booking.com’s Slack community lowers friction for questions and support</p><p>(44:35) Closing thoughts on what’s next for Booking.com’s AI plan</p><p><br></p><p><strong>Referenced:</strong></p><ul><li><a href="https://getdx.com/research/measuring-ai-code-assistants-and-agents/?utm_source=podcast">Measuring AI code assistants and agents</a></li><li><a href="https://getdx.com/core-4-reporting/">DX Core 4 Framework</a></li><li><a href="https://www.booking.com/">Booking.com</a></li><li><a href="https://sourcegraph.com/search">Sourcegraph Search</a></li><li><a href="https://sourcegraph.com/cody">Cody | AI coding assistant from Sourcegraph</a></li><li><a href="https://www.linkedin.com/in/greysonjunggren/">Greyson Junggren - DX | LinkedIn</a></li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode of <em>Engineering Enablement</em>, host Laura Tacho talks with Bruno Passos, Product Lead for Developer Experience at Booking.com, about how the company is rolling out AI tools across a 3,000-person engineering team.</p><p><br>Bruno shares how Booking.com set ambitious innovation goals, why cultural change mattered as much as technology, and the education practices that turned hesitant developers into daily users. He also reflects on the early barriers, from low adoption and knowledge gaps to procurement hurdles, and explains the interventions that worked, including learning paths, hackathon-style workshops, Slack communities, and centralized procurement. The result is that Booking.com now sits in the top 25 percent of companies for AI adoption.</p><p><strong>Where to find Bruno Passos:</strong></p><p>• LinkedIn: <a href="https://www.linkedin.com/in/brpassos/">https://www.linkedin.com/in/brpassos/</a></p><p>• X: <a href="https://x.com/brunopassos">https://x.com/brunopassos</a></p><p><br></p><p><strong>Where to find Laura Tacho:</strong></p><p>• LinkedIn: <a href="https://www.linkedin.com/in/lauratacho/">https://www.linkedin.com/in/lauratacho/</a></p><p>• X: <a href="https://x.com/rhein_wein">https://x.com/rhein_wein</a></p><p>• Website: <a href="https://lauratacho.com/">https://lauratacho.com/</a></p><p>• Laura’s course (Measuring Engineering Performance and AI Impact) <a href="https://lauratacho.com/developer-productivity-metrics-course">https://lauratacho.com/developer-productivity-metrics-course</a></p><p><br></p><p><strong>In this episode, we cover:</strong></p><p>(00:00) Intro</p><p>(01:09) Bruno’s role at <a href="http://booking.com">Booking.com</a> and an overview of the business </p><p>(02:19) <a href="http://booking.com">Booking.com</a>’s goals when introducing AI tooling</p><p>(03:26) Why <a href="http://booking.com">Booking.com</a> made such an ambitious innovation ratio goal </p><p>(06:46) The beginning of <a 
href="http://booking.com">Booking.com</a>’s journey with AI</p><p>(08:54) Why the initial adoption of Cody was low</p><p>(13:17) How education and enablement fueled adoption</p><p>(15:48) The importance of a top-down cultural change for AI adoption</p><p>(17:38) The ongoing journey of determining the right metrics</p><p>(21:44) Measuring the longer-term impact of AI </p><p>(27:04) How Booking.com solved internal bottlenecks to testing new tools</p><p>(32:10) Booking.com’s framework for evaluating new tools</p><p>(35:50) The state of adoption at Booking.com and efforts to expand AI use</p><p>(37:07) What’s still undetermined about AI’s impact on PR/MR quality</p><p>(39:48) How Booking.com is addressing lagging adoption and monitoring churn</p><p>(43:24) How Booking.com’s Slack community lowers friction for questions and support</p><p>(44:35) Closing thoughts on what’s next for Booking.com’s AI plan</p><p><br></p><p><strong>Referenced:</strong></p><ul><li><a href="https://getdx.com/research/measuring-ai-code-assistants-and-agents/?utm_source=podcast">Measuring AI code assistants and agents</a></li><li><a href="https://getdx.com/core-4-reporting/">DX Core 4 Framework</a></li><li><a href="https://www.booking.com/">Booking.com</a></li><li><a href="https://sourcegraph.com/search">Sourcegraph Search</a></li><li><a href="https://sourcegraph.com/cody">Cody | AI coding assistant from Sourcegraph</a></li><li><a href="https://www.linkedin.com/in/greysonjunggren/">Greyson Junggren - DX | LinkedIn</a></li></ul>]]>
      </content:encoded>
      <pubDate>Fri, 05 Sep 2025 07:30:00 -0600</pubDate>
      <author>DX</author>
      <enclosure url="https://media.transistor.fm/0a4501a8/49a8ae01.mp3" length="45032182" type="audio/mpeg"/>
      <itunes:author>DX</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/Ayw2Pzf3kihbhDKz1RIXKP5gU5IwReT3IDlY6x35q2U/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS83YWQ0/MTVlZDJhZTM3OTk0/MDc3MjA0MDhjNDUz/MzdjNS5wbmc.jpg"/>
      <itunes:duration>2810</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode of <em>Engineering Enablement</em>, host Laura Tacho talks with Bruno Passos, Product Lead for Developer Experience at Booking.com, about how the company is rolling out AI tools across a 3,000-person engineering team.</p><p><br>Bruno shares how Booking.com set ambitious innovation goals, why cultural change mattered as much as technology, and the education practices that turned hesitant developers into daily users. He also reflects on the early barriers, from low adoption and knowledge gaps to procurement hurdles, and explains the interventions that worked, including learning paths, hackathon-style workshops, Slack communities, and centralized procurement. The result is that Booking.com now sits in the top 25 percent of companies for AI adoption.</p><p><strong>Where to find Bruno Passos:</strong></p><p>• LinkedIn: <a href="https://www.linkedin.com/in/brpassos/">https://www.linkedin.com/in/brpassos/</a></p><p>• X: <a href="https://x.com/brunopassos">https://x.com/brunopassos</a></p><p><br></p><p><strong>Where to find Laura Tacho:</strong></p><p>• LinkedIn: <a href="https://www.linkedin.com/in/lauratacho/">https://www.linkedin.com/in/lauratacho/</a></p><p>• X: <a href="https://x.com/rhein_wein">https://x.com/rhein_wein</a></p><p>• Website: <a href="https://lauratacho.com/">https://lauratacho.com/</a></p><p>• Laura’s course (Measuring Engineering Performance and AI Impact) <a href="https://lauratacho.com/developer-productivity-metrics-course">https://lauratacho.com/developer-productivity-metrics-course</a></p><p><br></p><p><strong>In this episode, we cover:</strong></p><p>(00:00) Intro</p><p>(01:09) Bruno’s role at <a href="http://booking.com">Booking.com</a> and an overview of the business </p><p>(02:19) <a href="http://booking.com">Booking.com</a>’s goals when introducing AI tooling</p><p>(03:26) Why <a href="http://booking.com">Booking.com</a> made such an ambitious innovation ratio goal </p><p>(06:46) The beginning of <a 
href="http://booking.com">Booking.com</a>’s journey with AI</p><p>(08:54) Why the initial adoption of Cody was low</p><p>(13:17) How education and enablement fueled adoption</p><p>(15:48) The importance of a top-down cultural change for AI adoption</p><p>(17:38) The ongoing journey of determining the right metrics</p><p>(21:44) Measuring the longer-term impact of AI </p><p>(27:04) How Booking.com solved internal bottlenecks to testing new tools</p><p>(32:10) Booking.com’s framework for evaluating new tools</p><p>(35:50) The state of adoption at Booking.com and efforts to expand AI use</p><p>(37:07) What’s still undetermined about AI’s impact on PR/MR quality</p><p>(39:48) How Booking.com is addressing lagging adoption and monitoring churn</p><p>(43:24) How Booking.com’s Slack community lowers friction for questions and support</p><p>(44:35) Closing thoughts on what’s next for Booking.com’s AI plan</p><p><br></p><p><strong>Referenced:</strong></p><ul><li><a href="https://getdx.com/research/measuring-ai-code-assistants-and-agents/?utm_source=podcast">Measuring AI code assistants and agents</a></li><li><a href="https://getdx.com/core-4-reporting/">DX Core 4 Framework</a></li><li><a href="https://www.booking.com/">Booking.com</a></li><li><a href="https://sourcegraph.com/search">Sourcegraph Search</a></li><li><a href="https://sourcegraph.com/cody">Cody | AI coding assistant from Sourcegraph</a></li><li><a href="https://www.linkedin.com/in/greysonjunggren/">Greyson Junggren - DX | LinkedIn</a></li></ul>]]>
      </itunes:summary>
      <itunes:keywords>Software Engineering, Developer Experience, Developer Productivity, Engineering Leadership</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.linkedin.com/in/abinoda/" img="https://img.transistorcdn.com/3JLLtbOaHGiInF_ixHI50J6hrbPvsmkptOyXbjqu2gw/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vMDNiMDZmMjkt/ZmQxMy00NmNlLWE5/YzEtOTAzODQ4YTE2/YjgzLzE2NjU0MzA2/ODktaW1hZ2UuanBn.jpg">Abi Noda</podcast:person>
    </item>
    <item>
      <title>Measuring AI code assistants and agents with the AI Measurement Framework</title>
      <itunes:episode>87</itunes:episode>
      <podcast:episode>87</podcast:episode>
      <itunes:title>Measuring AI code assistants and agents with the AI Measurement Framework</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">219757d7-59d6-4e50-b6c3-ef6fdda75080</guid>
      <link>https://share.transistor.fm/s/a501f4b2</link>
      <description>
        <![CDATA[<p>In this episode of <em>Engineering Enablement</em>, DX CTO Laura Tacho and CEO Abi Noda break down how to measure developer productivity in the age of AI using DX’s AI Measurement Framework. Drawing on research with industry leaders, vendors, and hundreds of organizations, they explain how to move beyond vendor hype and headlines to make data-driven decisions about AI adoption.</p><p><br>They cover why some fundamentals of productivity measurement remain constant, the pitfalls of over-relying on flawed metrics like acceptance rate, and how to track AI’s real impact across utilization, quality, and cost. The conversation also explores measuring agentic workflows, expanding the definition of “developer” to include new AI-enabled contributors, and avoiding second-order effects like technical debt and slowed PR throughput.</p><p>Whether you’re rolling out AI coding tools, experimenting with autonomous agents, or just trying to separate signal from noise, this episode offers a practical roadmap for understanding AI’s role in your organization—and ensuring it delivers sustainable, long-term gains.</p><p><br></p><p><strong>Where to find Laura Tacho:</strong></p><p>• X: <a href="https://x.com/rhein_wein">https://x.com/rhein_wein</a></p><p>• LinkedIn: <a href="https://www.linkedin.com/in/lauratacho/">https://www.linkedin.com/in/lauratacho/</a></p><p>• Website: <a href="https://lauratacho.com/">https://lauratacho.com/</a></p><p><strong>Where to find Abi Noda:</strong></p><p>• LinkedIn: <a href="https://www.linkedin.com/in/abinoda">https://www.linkedin.com/in/abinoda</a></p><p>• Substack: <a href="https://substack.com/@abinoda">https://substack.com/@abinoda</a></p><p><br></p><p><strong>In this episode, we cover:</strong></p><p>(00:00) Intro</p><p>(01:26) The challenge of measuring developer productivity in the AI age</p><p>(04:17) Measuring productivity in the AI era — what stays the same and what changes</p><p>(07:25) How to use DX’s AI Measurement Framework</p><p>(13:10) Measuring AI’s true impact from adoption rates to long-term quality and maintainability</p><p>(16:31) Why acceptance rate is flawed — and DX’s approach to tracking AI-authored code</p><p>(18:25) Three ways to gather measurement data</p><p>(21:55) How Google measures time savings and why self-reported data is misleading</p><p>(24:25) How to measure agentic workflows and a case for expanding the definition of developer</p><p>(28:50) A case for not overemphasizing AI’s role</p><p>(30:31) Measuring second-order effects</p><p>(32:26) Audience Q&amp;A: applying metrics in practice</p><p>(36:45) Wrap up: best practices for rollout and communication</p><p><br></p><p><strong>Referenced:</strong></p><ul><li><a href="https://getdx.com/corefour">DX Core 4 Productivity Framework</a></li><li><a href="https://getdx.com/whitepaper/ai-measurement-framework/">Measuring AI code assistants and agents</a></li><li><a href="https://www.businessinsider.com/ai-google-engineers-coding-productive-sundar-pichai-alphabet-2025-6">AI is making Google engineers 10% more productive, says Sundar Pichai - Business Insider</a></li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode of <em>Engineering Enablement</em>, DX CTO Laura Tacho and CEO Abi Noda break down how to measure developer productivity in the age of AI using DX’s AI Measurement Framework. Drawing on research with industry leaders, vendors, and hundreds of organizations, they explain how to move beyond vendor hype and headlines to make data-driven decisions about AI adoption.</p><p><br>They cover why some fundamentals of productivity measurement remain constant, the pitfalls of over-relying on flawed metrics like acceptance rate, and how to track AI’s real impact across utilization, quality, and cost. The conversation also explores measuring agentic workflows, expanding the definition of “developer” to include new AI-enabled contributors, and avoiding second-order effects like technical debt and slowed PR throughput.</p><p>Whether you’re rolling out AI coding tools, experimenting with autonomous agents, or just trying to separate signal from noise, this episode offers a practical roadmap for understanding AI’s role in your organization—and ensuring it delivers sustainable, long-term gains.</p><p><br></p><p><strong>Where to find Laura Tacho:</strong></p><p>• X: <a href="https://x.com/rhein_wein">https://x.com/rhein_wein</a></p><p>• LinkedIn: <a href="https://www.linkedin.com/in/lauratacho/">https://www.linkedin.com/in/lauratacho/</a></p><p>• Website: <a href="https://lauratacho.com/">https://lauratacho.com/</a></p><p><strong>Where to find Abi Noda:</strong></p><p>• LinkedIn: <a href="https://www.linkedin.com/in/abinoda">https://www.linkedin.com/in/abinoda</a></p><p>• Substack: <a href="https://substack.com/@abinoda">https://substack.com/@abinoda</a></p><p><br></p><p><strong>In this episode, we cover:</strong></p><p>(00:00) Intro</p><p>(01:26) The challenge of measuring developer productivity in the AI age</p><p>(04:17) Measuring productivity in the AI era — what stays the same and what changes</p><p>(07:25) How to use DX’s AI Measurement Framework</p><p>(13:10) Measuring AI’s true impact from adoption rates to long-term quality and maintainability</p><p>(16:31) Why acceptance rate is flawed — and DX’s approach to tracking AI-authored code</p><p>(18:25) Three ways to gather measurement data</p><p>(21:55) How Google measures time savings and why self-reported data is misleading</p><p>(24:25) How to measure agentic workflows and a case for expanding the definition of developer</p><p>(28:50) A case for not overemphasizing AI’s role</p><p>(30:31) Measuring second-order effects</p><p>(32:26) Audience Q&amp;A: applying metrics in practice</p><p>(36:45) Wrap up: best practices for rollout and communication</p><p><br></p><p><strong>Referenced:</strong></p><ul><li><a href="https://getdx.com/corefour">DX Core 4 Productivity Framework</a></li><li><a href="https://getdx.com/whitepaper/ai-measurement-framework/">Measuring AI code assistants and agents</a></li><li><a href="https://www.businessinsider.com/ai-google-engineers-coding-productive-sundar-pichai-alphabet-2025-6">AI is making Google engineers 10% more productive, says Sundar Pichai - Business Insider</a></li></ul>]]>
      </content:encoded>
      <pubDate>Fri, 15 Aug 2025 08:00:00 -0600</pubDate>
      <author>DX</author>
      <enclosure url="https://media.transistor.fm/a501f4b2/6feac0e8.mp3" length="39651827" type="audio/mpeg"/>
      <itunes:author>DX</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/7OOuVzNYwUXNgg90Og7bfPsxW7dFFWOOnormx3ct_dM/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9hOGVk/MjdkZTM0YWNmNDk5/YWZjN2IwODk2NDVj/YTMxZC5wbmc.jpg"/>
      <itunes:duration>2474</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode of <em>Engineering Enablement</em>, DX CTO Laura Tacho and CEO Abi Noda break down how to measure developer productivity in the age of AI using DX’s AI Measurement Framework. Drawing on research with industry leaders, vendors, and hundreds of organizations, they explain how to move beyond vendor hype and headlines to make data-driven decisions about AI adoption.</p><p><br>They cover why some fundamentals of productivity measurement remain constant, the pitfalls of over-relying on flawed metrics like acceptance rate, and how to track AI’s real impact across utilization, quality, and cost. The conversation also explores measuring agentic workflows, expanding the definition of “developer” to include new AI-enabled contributors, and avoiding second-order effects like technical debt and slowed PR throughput.</p><p>Whether you’re rolling out AI coding tools, experimenting with autonomous agents, or just trying to separate signal from noise, this episode offers a practical roadmap for understanding AI’s role in your organization—and ensuring it delivers sustainable, long-term gains.</p><p><br></p><p><strong>Where to find Laura Tacho:</strong></p><p>• X: <a href="https://x.com/rhein_wein">https://x.com/rhein_wein</a></p><p>• LinkedIn: <a href="https://www.linkedin.com/in/lauratacho/">https://www.linkedin.com/in/lauratacho/</a></p><p>• Website: <a href="https://lauratacho.com/">https://lauratacho.com/</a></p><p><strong>Where to find Abi Noda:</strong></p><p>• LinkedIn: <a href="https://www.linkedin.com/in/abinoda">https://www.linkedin.com/in/abinoda</a></p><p>• Substack: <a href="https://substack.com/@abinoda">https://substack.com/@abinoda</a></p><p><br></p><p><strong>In this episode, we cover:</strong></p><p>(00:00) Intro</p><p>(01:26) The challenge of measuring developer productivity in the AI age</p><p>(04:17) Measuring productivity in the AI era — what stays the same and what changes</p><p>(07:25) How to use DX’s AI Measurement Framework</p><p>(13:10) Measuring AI’s true impact from adoption rates to long-term quality and maintainability</p><p>(16:31) Why acceptance rate is flawed — and DX’s approach to tracking AI-authored code</p><p>(18:25) Three ways to gather measurement data</p><p>(21:55) How Google measures time savings and why self-reported data is misleading</p><p>(24:25) How to measure agentic workflows and a case for expanding the definition of developer</p><p>(28:50) A case for not overemphasizing AI’s role</p><p>(30:31) Measuring second-order effects</p><p>(32:26) Audience Q&amp;A: applying metrics in practice</p><p>(36:45) Wrap up: best practices for rollout and communication</p><p><br></p><p><strong>Referenced:</strong></p><ul><li><a href="https://getdx.com/corefour">DX Core 4 Productivity Framework</a></li><li><a href="https://getdx.com/whitepaper/ai-measurement-framework/">Measuring AI code assistants and agents</a></li><li><a href="https://www.businessinsider.com/ai-google-engineers-coding-productive-sundar-pichai-alphabet-2025-6">AI is making Google engineers 10% more productive, says Sundar Pichai - Business Insider</a></li></ul>]]>
      </itunes:summary>
      <itunes:keywords>Software Engineering, Developer Experience, Developer Productivity, Engineering Leadership</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.linkedin.com/in/abinoda/" img="https://img.transistorcdn.com/3JLLtbOaHGiInF_ixHI50J6hrbPvsmkptOyXbjqu2gw/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vMDNiMDZmMjkt/ZmQxMy00NmNlLWE5/YzEtOTAzODQ4YTE2/YjgzLzE2NjU0MzA2/ODktaW1hZ2UuanBn.jpg">Abi Noda</podcast:person>
    </item>
    <item>
      <title>How to cut through the hype and measure AI’s real impact (Live from LeadDev London)</title>
      <itunes:episode>86</itunes:episode>
      <podcast:episode>86</podcast:episode>
      <itunes:title>How to cut through the hype and measure AI’s real impact (Live from LeadDev London)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">30aa9427-ceb7-48a5-bb7c-58b6bfe0a231</guid>
      <link>https://share.transistor.fm/s/7b7a28ff</link>
      <description>
        <![CDATA[<p>In this special episode of the Engineering Enablement podcast, recorded live at LeadDev London, DX CTO Laura Tacho explores the growing gap between AI headlines and the reality inside engineering teams—and what leaders can do to close it.</p><p><br></p><p>Laura shares data from nearly 39,000 developers across 184 companies, highlights the Core 4 and introduces the AI Measurement Framework, and offers a practical playbook for using data to improve developer experience, measure AI’s true impact, and build better software without compromising long-term performance.</p><p><br></p><p><strong>Where to find Laura Tacho:</strong></p><p>• X: <a href="https://x.com/rhein_wein">https://x.com/rhein_wein</a></p><p>• LinkedIn: <a href="https://www.linkedin.com/in/lauratacho/">https://www.linkedin.com/in/lauratacho/</a></p><p>• Website: <a href="https://lauratacho.com/">https://lauratacho.com/</a></p><p><br></p><p><strong>In this episode, we cover:</strong></p><p>(00:00) Intro: Laura’s keynote from LDX3</p><p>(01:44) The problem with asking <em>how much faster can we go with AI?</em></p><p>(03:02) How the disappointment gap creates barriers to AI adoption</p><p>(06:20) What AI adoption looks like at top-performing organizations</p><p>(07:53) What leaders must do to turn AI into meaningful impact</p><p>(10:50) Why building better software with AI still depends on fundamentals</p><p>(12:03) An overview of the DX Core 4 Framework</p><p>(13:22) Why developer experience is the biggest performance lever</p><p>(15:12) How Block used Core 4 and DXI to identify 500,000 hours in time savings</p><p>(16:08) How to get started with Core 4</p><p>(17:32) Measuring AI with the AI Measurement Framework</p><p>(21:45) Final takeaways and how to get started with confidence</p><p><br></p><p><strong>Referenced:</strong></p><ul><li><a href="https://leaddev.com/leaddev-london/">LDX3 by LeadDev | The Festival of Software Engineering Leadership | London</a></li><li><a 
href="https://www.youtube.com/watch?v=EO3_qN_Ynsk">Software engineering with LLMs in 2025: reality check</a></li><li><a href="https://getdx.com/podcast/developer-productivity-at-microsoft/">SPACE framework, PRs per engineer, AI research</a></li><li><a href="https://getdx.com/podcast/brian-houck-ai-adoption-playbook/">The AI adoption playbook: Lessons from Microsoft's internal strategy</a></li><li><a href="https://getdx.com/corefour">DX Core 4 Productivity Framework</a></li><li><a href="https://nicolefv.com/">Nicole Forsgren</a></li><li><a href="https://www.margaretstorey.com/">Margaret-Anne Storey</a></li><li><a href="https://www.dropbox.com/">Dropbox.com</a></li><li><a href="https://www.etsy.com/">Etsy</a></li><li><a href="https://www.pfizer.com/">Pfizer</a></li><li><a href="https://www.linkedin.com/in/drewhouston/">Drew Houston - Dropbox | LinkedIn</a></li><li><a href="https://block.xyz/">Block</a></li><li><a href="https://cursor.com/">Cursor</a></li><li><a href="https://dora.dev/">Dora.dev</a></li><li><a href="https://sourcegraph.com/">Sourcegraph</a></li><li><a href="https://www.booking.com/">Booking.com</a></li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this special episode of the Engineering Enablement podcast, recorded live at LeadDev London, DX CTO Laura Tacho explores the growing gap between AI headlines and the reality inside engineering teams—and what leaders can do to close it.</p><p><br></p><p>Laura shares data from nearly 39,000 developers across 184 companies, highlights the Core 4 and introduces the AI Measurement Framework, and offers a practical playbook for using data to improve developer experience, measure AI’s true impact, and build better software without compromising long-term performance.</p><p><br></p><p><strong>Where to find Laura Tacho:</strong></p><p>• X: <a href="https://x.com/rhein_wein">https://x.com/rhein_wein</a></p><p>• LinkedIn: <a href="https://www.linkedin.com/in/lauratacho/">https://www.linkedin.com/in/lauratacho/</a></p><p>• Website: <a href="https://lauratacho.com/">https://lauratacho.com/</a></p><p><br></p><p><strong>In this episode, we cover:</strong></p><p>(00:00) Intro: Laura’s keynote from LDX3</p><p>(01:44) The problem with asking <em>how much faster can we go with AI?</em></p><p>(03:02) How the disappointment gap creates barriers to AI adoption</p><p>(06:20) What AI adoption looks like at top-performing organizations</p><p>(07:53) What leaders must do to turn AI into meaningful impact</p><p>(10:50) Why building better software with AI still depends on fundamentals</p><p>(12:03) An overview of the DX Core 4 Framework</p><p>(13:22) Why developer experience is the biggest performance lever</p><p>(15:12) How Block used Core 4 and DXI to identify 500,000 hours in time savings</p><p>(16:08) How to get started with Core 4</p><p>(17:32) Measuring AI with the AI Measurement Framework</p><p>(21:45) Final takeaways and how to get started with confidence</p><p><br></p><p><strong>Referenced:</strong></p><ul><li><a href="https://leaddev.com/leaddev-london/">LDX3 by LeadDev | The Festival of Software Engineering Leadership | London</a></li><li><a 
href="https://www.youtube.com/watch?v=EO3_qN_Ynsk">Software engineering with LLMs in 2025: reality check</a></li><li><a href="https://getdx.com/podcast/developer-productivity-at-microsoft/">SPACE framework, PRs per engineer, AI research</a></li><li><a href="https://getdx.com/podcast/brian-houck-ai-adoption-playbook/">The AI adoption playbook: Lessons from Microsoft's internal strategy</a></li><li><a href="https://getdx.com/corefour">DX Core 4 Productivity Framework</a></li><li><a href="https://nicolefv.com/">Nicole Forsgren</a></li><li><a href="https://www.margaretstorey.com/">Margaret-Anne Storey</a></li><li><a href="https://www.dropbox.com/">Dropbox.com</a></li><li><a href="https://www.etsy.com/">Etsy</a></li><li><a href="https://www.pfizer.com/">Pfizer</a></li><li><a href="https://www.linkedin.com/in/drewhouston/">Drew Houston - Dropbox | LinkedIn</a></li><li><a href="https://block.xyz/">Block</a></li><li><a href="https://cursor.com/">Cursor</a></li><li><a href="https://dora.dev/">Dora.dev</a></li><li><a href="https://sourcegraph.com/">Sourcegraph</a></li><li><a href="https://www.booking.com/">Booking.com</a></li></ul>]]>
      </content:encoded>
      <pubDate>Fri, 08 Aug 2025 08:29:00 -0600</pubDate>
      <author>DX</author>
      <enclosure url="https://media.transistor.fm/7b7a28ff/1753fa4e.mp3" length="22570612" type="audio/mpeg"/>
      <itunes:author>DX</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/T1Wq99zuJsIf4v6sJcqnERTz0udijntYGuqOAOODmIs/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8yNTU2/ZGIyZTM0YzZhMjg2/NWYxMGYwNTUxMzJh/NjFhOC5wbmc.jpg"/>
      <itunes:duration>1406</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this special episode of the Engineering Enablement podcast, recorded live at LeadDev London, DX CTO Laura Tacho explores the growing gap between AI headlines and the reality inside engineering teams—and what leaders can do to close it.</p><p><br></p><p>Laura shares data from nearly 39,000 developers across 184 companies, highlights the Core 4 and introduces the AI Measurement Framework, and offers a practical playbook for using data to improve developer experience, measure AI’s true impact, and build better software without compromising long-term performance.</p><p><br></p><p><strong>Where to find Laura Tacho:</strong></p><p>• X: <a href="https://x.com/rhein_wein">https://x.com/rhein_wein</a></p><p>• LinkedIn: <a href="https://www.linkedin.com/in/lauratacho/">https://www.linkedin.com/in/lauratacho/</a></p><p>• Website: <a href="https://lauratacho.com/">https://lauratacho.com/</a></p><p><br></p><p><strong>In this episode, we cover:</strong></p><p>(00:00) Intro: Laura’s keynote from LDX3</p><p>(01:44) The problem with asking <em>how much faster can we go with AI?</em></p><p>(03:02) How the disappointment gap creates barriers to AI adoption</p><p>(06:20) What AI adoption looks like at top-performing organizations</p><p>(07:53) What leaders must do to turn AI into meaningful impact</p><p>(10:50) Why building better software with AI still depends on fundamentals</p><p>(12:03) An overview of the DX Core 4 Framework</p><p>(13:22) Why developer experience is the biggest performance lever</p><p>(15:12) How Block used Core 4 and DXI to identify 500,000 hours in time savings</p><p>(16:08) How to get started with Core 4</p><p>(17:32) Measuring AI with the AI Measurement Framework</p><p>(21:45) Final takeaways and how to get started with confidence</p><p><br></p><p><strong>Referenced:</strong></p><ul><li><a href="https://leaddev.com/leaddev-london/">LDX3 by LeadDev | The Festival of Software Engineering Leadership | London</a></li><li><a 
href="https://www.youtube.com/watch?v=EO3_qN_Ynsk">Software engineering with LLMs in 2025: reality check</a></li><li><a href="https://getdx.com/podcast/developer-productivity-at-microsoft/">SPACE framework, PRs per engineer, AI research</a></li><li><a href="https://getdx.com/podcast/brian-houck-ai-adoption-playbook/">The AI adoption playbook: Lessons from Microsoft's internal strategy</a></li><li><a href="https://getdx.com/corefour">DX Core 4 Productivity Framework</a></li><li><a href="https://nicolefv.com/">Nicole Forsgren</a></li><li><a href="https://www.margaretstorey.com/">Margaret-Anne Storey</a></li><li><a href="https://www.dropbox.com/">Dropbox.com</a></li><li><a href="https://www.etsy.com/">Etsy</a></li><li><a href="https://www.pfizer.com/">Pfizer</a></li><li><a href="https://www.linkedin.com/in/drewhouston/">Drew Houston - Dropbox | LinkedIn</a></li><li><a href="https://block.xyz/">Block</a></li><li><a href="https://cursor.com/">Cursor</a></li><li><a href="https://dora.dev/">Dora.dev</a></li><li><a href="https://sourcegraph.com/">Sourcegraph</a></li><li><a href="https://www.booking.com/">Booking.com</a></li></ul>]]>
      </itunes:summary>
      <itunes:keywords>Software Engineering, Developer Experience, Developer Productivity, Engineering Leadership</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.linkedin.com/in/abinoda/" img="https://img.transistorcdn.com/3JLLtbOaHGiInF_ixHI50J6hrbPvsmkptOyXbjqu2gw/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vMDNiMDZmMjkt/ZmQxMy00NmNlLWE5/YzEtOTAzODQ4YTE2/YjgzLzE2NjU0MzA2/ODktaW1hZ2UuanBn.jpg">Abi Noda</podcast:person>
    </item>
    <item>
      <title>Unpacking METR’s findings: Does AI slow developers down?</title>
      <itunes:episode>85</itunes:episode>
      <podcast:episode>85</podcast:episode>
      <itunes:title>Unpacking METR’s findings: Does AI slow developers down?</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">2b405144-1680-42ec-ad3b-fe72d8094f34</guid>
      <link>https://share.transistor.fm/s/da9f90f4</link>
      <description>
        <![CDATA[<p>In this episode of the Engineering Enablement podcast, host Abi Noda is joined by <a href="https://www.linkedin.com/in/quentin-anthony/">Quentin Anthony</a>, Head of Model Training at <strong>Zyphra</strong> and a contributor at EleutherAI. Quentin participated in METR’s recent study on AI coding tools, which revealed that developers often slowed down when using AI—despite feeling more productive. He and Abi unpack the unexpected results of the study, which tasks AI tools actually help with, and how engineering teams can adopt them more effectively by focusing on task-level fit and developing better digital hygiene.</p><p><br><strong>Where to find Quentin Anthony: </strong></p><p>• LinkedIn: <a href="https://www.linkedin.com/in/quentin-anthony/">https://www.linkedin.com/in/quentin-anthony/</a></p><p>• X: <a href="https://x.com/QuentinAnthon15">https://x.com/QuentinAnthon15</a></p><p><br></p><p><strong>Where to find Abi Noda:</strong></p><p>• LinkedIn: <a href="https://www.linkedin.com/in/abinoda">https://www.linkedin.com/in/abinoda</a> </p><p><br></p><p><strong>In this episode, we cover:</strong></p><p>(00:00) Intro</p><p>(01:32) A brief overview of Quentin’s background and current work</p><p>(02:05) An explanation of METR and the study Quentin participated in </p><p>(11:02) Surprising results of the METR study </p><p>(12:47) Quentin’s takeaways from the study’s results </p><p>(16:30) How developers can avoid bloated code bases through self-reflection</p><p>(19:31) Signs that you’re not making progress with a model </p><p>(21:25) What is “context rot”?</p><p>(23:04) Advice for combating context rot</p><p>(25:34) How to make the most of your idle time as a developer</p><p>(28:13) Developer hygiene: the case for selectively using AI tools</p><p>(33:28) How to interact effectively with new models</p><p>(35:28) Why organizations should focus on tasks that AI handles well</p><p>(38:01) Where AI fits in the software development lifecycle</p><p>(39:40) 
How to approach testing with models</p><p>(40:31) What makes models different </p><p>(42:05) Quentin’s thoughts on agents </p><p><br></p><p><strong>Referenced:</strong></p><ul><li><a href="https://getdx.com/corefour">DX Core 4 Productivity Framework</a></li><li><a href="https://www.zyphra.com/">Zyphra</a></li><li><a href="https://www.eleuther.ai/">EleutherAI</a></li><li><a href="https://metr.org/">METR</a></li><li><a href="https://cursor.com/">Cursor</a></li><li><a href="https://claude.ai/">Claude</a></li><li><a href="https://www.librechat.ai/">LibreChat</a></li><li><a href="https://gemini.google.com/">Google Gemini</a></li><li><a href="https://openai.com/index/introducing-o3-and-o4-mini/">Introducing OpenAI o3 and o4-mini</a></li><li><a href="https://newsletter.getdx.com/p/metr-study-on-how-ai-affects-developer-productivity">METR’s study on how AI affects developer productivity</a></li><li><a href="https://x.com/QuentinAnthon15/status/1943948791775998069">Quentin Anthony on X: "I was one of the 16 devs in this study."</a></li><li><a href="https://news.ycombinator.com/item?id=44310054">Context rot from Hacker News</a></li><li><a href="https://www.anthropic.com/research/tracing-thoughts-language-model">Tracing the thoughts of a large language model</a></li><li><a href="https://www.kimi.com/">Kimi</a></li><li><a href="https://x.ai/news/grok-4">Grok 4 | xAI</a></li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode of the Engineering Enablement podcast, host Abi Noda is joined by <a href="https://www.linkedin.com/in/quentin-anthony/">Quentin Anthony</a>, Head of Model Training at <strong>Zyphra</strong> and a contributor at EleutherAI. Quentin participated in METR’s recent study on AI coding tools, which revealed that developers often slowed down when using AI—despite feeling more productive. He and Abi unpack the unexpected results of the study, which tasks AI tools actually help with, and how engineering teams can adopt them more effectively by focusing on task-level fit and developing better digital hygiene.</p><p><br><strong>Where to find Quentin Anthony: </strong></p><p>• LinkedIn: <a href="https://www.linkedin.com/in/quentin-anthony/">https://www.linkedin.com/in/quentin-anthony/</a></p><p>• X: <a href="https://x.com/QuentinAnthon15">https://x.com/QuentinAnthon15</a></p><p><br></p><p><strong>Where to find Abi Noda:</strong></p><p>• LinkedIn: <a href="https://www.linkedin.com/in/abinoda">https://www.linkedin.com/in/abinoda</a> </p><p><br></p><p><strong>In this episode, we cover:</strong></p><p>(00:00) Intro</p><p>(01:32) A brief overview of Quentin’s background and current work</p><p>(02:05) An explanation of METR and the study Quentin participated in </p><p>(11:02) Surprising results of the METR study </p><p>(12:47) Quentin’s takeaways from the study’s results </p><p>(16:30) How developers can avoid bloated code bases through self-reflection</p><p>(19:31) Signs that you’re not making progress with a model </p><p>(21:25) What is “context rot”?</p><p>(23:04) Advice for combating context rot</p><p>(25:34) How to make the most of your idle time as a developer</p><p>(28:13) Developer hygiene: the case for selectively using AI tools</p><p>(33:28) How to interact effectively with new models</p><p>(35:28) Why organizations should focus on tasks that AI handles well</p><p>(38:01) Where AI fits in the software development lifecycle</p><p>(39:40) 
How to approach testing with models</p><p>(40:31) What makes models different </p><p>(42:05) Quentin’s thoughts on agents </p><p><br></p><p><strong>Referenced:</strong></p><ul><li><a href="https://getdx.com/corefour">DX Core 4 Productivity Framework</a></li><li><a href="https://www.zyphra.com/">Zyphra</a></li><li><a href="https://www.eleuther.ai/">EleutherAI</a></li><li><a href="https://metr.org/">METR</a></li><li><a href="https://cursor.com/">Cursor</a></li><li><a href="https://claude.ai/">Claude</a></li><li><a href="https://www.librechat.ai/">LibreChat</a></li><li><a href="https://gemini.google.com/">Google Gemini</a></li><li><a href="https://openai.com/index/introducing-o3-and-o4-mini/">Introducing OpenAI o3 and o4-mini</a></li><li><a href="https://newsletter.getdx.com/p/metr-study-on-how-ai-affects-developer-productivity">METR’s study on how AI affects developer productivity</a></li><li><a href="https://x.com/QuentinAnthon15/status/1943948791775998069">Quentin Anthony on X: "I was one of the 16 devs in this study."</a></li><li><a href="https://news.ycombinator.com/item?id=44310054">Context rot from Hacker News</a></li><li><a href="https://www.anthropic.com/research/tracing-thoughts-language-model">Tracing the thoughts of a large language model</a></li><li><a href="https://www.kimi.com/">Kimi</a></li><li><a href="https://x.ai/news/grok-4">Grok 4 | xAI</a></li></ul>]]>
      </content:encoded>
      <pubDate>Fri, 01 Aug 2025 07:59:00 -0600</pubDate>
      <author>DX</author>
      <enclosure url="https://media.transistor.fm/da9f90f4/95ed962f.mp3" length="42071753" type="audio/mpeg"/>
      <itunes:author>DX</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/Uu0a2L7LYF29-Hi6d7_AsK2G9vScLLFMzbo4LKTk8tY/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS84ODAx/NGY3NGI5Mjg3OWRk/YTM5YWU5NWQzNDdj/YTMxMi5wbmc.jpg"/>
      <itunes:duration>2625</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode of the Engineering Enablement podcast, host Abi Noda is joined by <a href="https://www.linkedin.com/in/quentin-anthony/">Quentin Anthony</a>, Head of Model Training at <strong>Zyphra</strong> and a contributor at EleutherAI. Quentin participated in METR’s recent study on AI coding tools, which revealed that developers often slowed down when using AI—despite feeling more productive. He and Abi unpack the unexpected results of the study, which tasks AI tools actually help with, and how engineering teams can adopt them more effectively by focusing on task-level fit and developing better digital hygiene.</p><p><br><strong>Where to find Quentin Anthony: </strong></p><p>• LinkedIn: <a href="https://www.linkedin.com/in/quentin-anthony/">https://www.linkedin.com/in/quentin-anthony/</a></p><p>• X: <a href="https://x.com/QuentinAnthon15">https://x.com/QuentinAnthon15</a></p><p><br></p><p><strong>Where to find Abi Noda:</strong></p><p>• LinkedIn: <a href="https://www.linkedin.com/in/abinoda">https://www.linkedin.com/in/abinoda</a> </p><p><br></p><p><strong>In this episode, we cover:</strong></p><p>(00:00) Intro</p><p>(01:32) A brief overview of Quentin’s background and current work</p><p>(02:05) An explanation of METR and the study Quentin participated in </p><p>(11:02) Surprising results of the METR study </p><p>(12:47) Quentin’s takeaways from the study’s results </p><p>(16:30) How developers can avoid bloated code bases through self-reflection</p><p>(19:31) Signs that you’re not making progress with a model </p><p>(21:25) What is “context rot”?</p><p>(23:04) Advice for combating context rot</p><p>(25:34) How to make the most of your idle time as a developer</p><p>(28:13) Developer hygiene: the case for selectively using AI tools</p><p>(33:28) How to interact effectively with new models</p><p>(35:28) Why organizations should focus on tasks that AI handles well</p><p>(38:01) Where AI fits in the software development lifecycle</p><p>(39:40) 
How to approach testing with models</p><p>(40:31) What makes models different </p><p>(42:05) Quentin’s thoughts on agents </p><p><br></p><p><strong>Referenced:</strong></p><ul><li><a href="https://getdx.com/corefour">DX Core 4 Productivity Framework</a></li><li><a href="https://www.zyphra.com/">Zyphra</a></li><li><a href="https://www.eleuther.ai/">EleutherAI</a></li><li><a href="https://metr.org/">METR</a></li><li><a href="https://cursor.com/">Cursor</a></li><li><a href="https://claude.ai/">Claude</a></li><li><a href="https://www.librechat.ai/">LibreChat</a></li><li><a href="https://gemini.google.com/">Google Gemini</a></li><li><a href="https://openai.com/index/introducing-o3-and-o4-mini/">Introducing OpenAI o3 and o4-mini</a></li><li><a href="https://newsletter.getdx.com/p/metr-study-on-how-ai-affects-developer-productivity">METR’s study on how AI affects developer productivity</a></li><li><a href="https://x.com/QuentinAnthon15/status/1943948791775998069">Quentin Anthony on X: "I was one of the 16 devs in this study."</a></li><li><a href="https://news.ycombinator.com/item?id=44310054">Context rot from Hacker News</a></li><li><a href="https://www.anthropic.com/research/tracing-thoughts-language-model">Tracing the thoughts of a large language model</a></li><li><a href="https://www.kimi.com/">Kimi</a></li><li><a href="https://x.ai/news/grok-4">Grok 4 | xAI</a></li></ul>]]>
      </itunes:summary>
      <itunes:keywords>Software Engineering, Developer Experience, Developer Productivity, Engineering Leadership</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.linkedin.com/in/abinoda/" img="https://img.transistorcdn.com/3JLLtbOaHGiInF_ixHI50J6hrbPvsmkptOyXbjqu2gw/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vMDNiMDZmMjkt/ZmQxMy00NmNlLWE5/YzEtOTAzODQ4YTE2/YjgzLzE2NjU0MzA2/ODktaW1hZ2UuanBn.jpg">Abi Noda</podcast:person>
    </item>
    <item>
      <title>CarGurus’ journey building a developer portal and increasing AI adoption</title>
      <itunes:episode>83</itunes:episode>
      <podcast:episode>83</podcast:episode>
      <itunes:title>CarGurus’ journey building a developer portal and increasing AI adoption</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">fe53db53-a36e-4b1b-930b-043054cf5ee5</guid>
      <link>https://share.transistor.fm/s/40392c5f</link>
      <description>
        <![CDATA[<p>In this episode, Abi Noda talks with Frank Fodera, Director of Engineering for Developer Experience at CarGurus. Frank shares the story behind CarGurus’ transition from a monolithic architecture to microservices, and how that journey led to the creation of their internal developer portal, Showroom. He outlines the five pillars of the IDP, how it integrates with infrastructure, and why they chose to build rather than buy. The conversation also explores how CarGurus is approaching AI tool adoption across the engineering team, from experiments and metrics to culture change and leadership buy-in.</p><p><br><strong>Where to find Frank Fodera: </strong></p><p>• LinkedIn: <a href="https://www.linkedin.com/in/frankfodera/">https://www.linkedin.com/in/frankfodera/</a></p><p><br></p><p><strong>Where to find Abi Noda:</strong></p><p>• LinkedIn: <a href="https://www.linkedin.com/in/abinoda">https://www.linkedin.com/in/abinoda</a> </p><p><br></p><p><strong>In this episode, we cover:</strong></p><p>(00:00) Intro: IDPs (Internal Developer Portals) and AI </p><p>(02:07) The IDP journey at CarGurus</p><p>(05:53) A breakdown of the people responsible for building the IDP</p><p>(07:05) The five pillars of the Showroom IDP</p><p>(09:12) How DevX worked with infrastructure</p><p>(11:13) The business impact of Showroom</p><p>(13:57) The transition from monolith to microservices and struggles along the way</p><p>(15:54) The benefits of building a custom IDP</p><p>(19:10) How CarGurus drives AI coding tool adoption </p><p>(28:48) Getting started with an AI initiative</p><p>(31:50) Metrics to track </p><p>(34:06) Tips for driving AI adoption</p><p><br></p><p><strong>Referenced:</strong></p><ul><li><a href="https://getdx.com/corefour">DX Core 4 Productivity Framework</a> </li><li><a href="https://getdx.com/webinar/internal-developer-portals-overview/">Internal Developer Portals: Use Cases and Key Components</a></li><li><a 
href="https://learn.microsoft.com/en-us/azure/architecture/patterns/strangler-fig">Strangler Fig Pattern - Azure Architecture Center | Microsoft Learn</a></li><li><a href="https://backstage.spotify.com/">Backstage by Spotify</a></li><li><a href="https://getdx.com/podcast/brian-houck-ai-adoption-playbook/">The AI adoption playbook: Lessons from Microsoft's internal strategy</a></li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode, Abi Noda talks with Frank Fodera, Director of Engineering for Developer Experience at CarGurus. Frank shares the story behind CarGurus’ transition from a monolithic architecture to microservices, and how that journey led to the creation of their internal developer portal, Showroom. He outlines the five pillars of the IDP, how it integrates with infrastructure, and why they chose to build rather than buy. The conversation also explores how CarGurus is approaching AI tool adoption across the engineering team, from experiments and metrics to culture change and leadership buy-in.</p><p><br><strong>Where to find Frank Fodera: </strong></p><p>• LinkedIn: <a href="https://www.linkedin.com/in/frankfodera/">https://www.linkedin.com/in/frankfodera/</a></p><p><br></p><p><strong>Where to find Abi Noda:</strong></p><p>• LinkedIn: <a href="https://www.linkedin.com/in/abinoda">https://www.linkedin.com/in/abinoda</a> </p><p><br></p><p><strong>In this episode, we cover:</strong></p><p>(00:00) Intro: IDPs (Internal Developer Portals) and AI </p><p>(02:07) The IDP journey at CarGurus</p><p>(05:53) A breakdown of the people responsible for building the IDP</p><p>(07:05) The five pillars of the Showroom IDP</p><p>(09:12) How DevX worked with infrastructure</p><p>(11:13) The business impact of Showroom</p><p>(13:57) The transition from monolith to microservices and struggles along the way</p><p>(15:54) The benefits of building a custom IDP</p><p>(19:10) How CarGurus drives AI coding tool adoption </p><p>(28:48) Getting started with an AI initiative</p><p>(31:50) Metrics to track </p><p>(34:06) Tips for driving AI adoption</p><p><br></p><p><strong>Referenced:</strong></p><ul><li><a href="https://getdx.com/corefour">DX Core 4 Productivity Framework</a> </li><li><a href="https://getdx.com/webinar/internal-developer-portals-overview/">Internal Developer Portals: Use Cases and Key Components</a></li><li><a 
href="https://learn.microsoft.com/en-us/azure/architecture/patterns/strangler-fig">Strangler Fig Pattern - Azure Architecture Center | Microsoft Learn</a></li><li><a href="https://backstage.spotify.com/">Backstage by Spotify</a></li><li><a href="https://getdx.com/podcast/brian-houck-ai-adoption-playbook/">The AI adoption playbook: Lessons from Microsoft's internal strategy</a></li></ul>]]>
      </content:encoded>
      <pubDate>Fri, 11 Jul 2025 08:00:00 -0600</pubDate>
      <author>DX</author>
      <enclosure url="https://media.transistor.fm/40392c5f/f74e0b2c.mp3" length="37605154" type="audio/mpeg"/>
      <itunes:author>DX</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/wq8pz2u9bgaYPl2vJYC2VkPi23mdEcPg7DjZRPSgs4w/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS80Nzcy/YjIxNmQ0OTI0NTg4/YjBiNmFjMDA3OGJk/ZGYyMS5wbmc.jpg"/>
      <itunes:duration>2346</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode, Abi Noda talks with Frank Fodera, Director of Engineering for Developer Experience at CarGurus. Frank shares the story behind CarGurus’ transition from a monolithic architecture to microservices, and how that journey led to the creation of their internal developer portal, Showroom. He outlines the five pillars of the IDP, how it integrates with infrastructure, and why they chose to build rather than buy. The conversation also explores how CarGurus is approaching AI tool adoption across the engineering team, from experiments and metrics to culture change and leadership buy-in.</p><p><br><strong>Where to find Frank Fodera: </strong></p><p>• LinkedIn: <a href="https://www.linkedin.com/in/frankfodera/">https://www.linkedin.com/in/frankfodera/</a></p><p><br></p><p><strong>Where to find Abi Noda:</strong></p><p>• LinkedIn: <a href="https://www.linkedin.com/in/abinoda">https://www.linkedin.com/in/abinoda</a> </p><p><br></p><p><strong>In this episode, we cover:</strong></p><p>(00:00) Intro: IDPs (Internal Developer Portals) and AI </p><p>(02:07) The IDP journey at CarGurus</p><p>(05:53) A breakdown of the people responsible for building the IDP</p><p>(07:05) The five pillars of the Showroom IDP</p><p>(09:12) How DevX worked with infrastructure</p><p>(11:13) The business impact of Showroom</p><p>(13:57) The transition from monolith to microservices and struggles along the way</p><p>(15:54) The benefits of building a custom IDP</p><p>(19:10) How CarGurus drives AI coding tool adoption </p><p>(28:48) Getting started with an AI initiative</p><p>(31:50) Metrics to track </p><p>(34:06) Tips for driving AI adoption</p><p><br></p><p><strong>Referenced:</strong></p><ul><li><a href="https://getdx.com/corefour">DX Core 4 Productivity Framework</a> </li><li><a href="https://getdx.com/webinar/internal-developer-portals-overview/">Internal Developer Portals: Use Cases and Key Components</a></li><li><a 
href="https://learn.microsoft.com/en-us/azure/architecture/patterns/strangler-fig">Strangler Fig Pattern - Azure Architecture Center | Microsoft Learn</a></li><li><a href="https://backstage.spotify.com/">Backstage by Spotify</a></li><li><a href="https://getdx.com/podcast/brian-houck-ai-adoption-playbook/">The AI adoption playbook: Lessons from Microsoft's internal strategy</a></li></ul>]]>
      </itunes:summary>
      <itunes:keywords>Software Engineering, Developer Experience, Developer Productivity, Engineering Leadership</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.linkedin.com/in/abinoda/" img="https://img.transistorcdn.com/3JLLtbOaHGiInF_ixHI50J6hrbPvsmkptOyXbjqu2gw/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vMDNiMDZmMjkt/ZmQxMy00NmNlLWE5/YzEtOTAzODQ4YTE2/YjgzLzE2NjU0MzA2/ODktaW1hZ2UuanBn.jpg">Abi Noda</podcast:person>
    </item>
    <item>
      <title>Snowflake’s playbook for operational excellence</title>
      <itunes:episode>80</itunes:episode>
      <podcast:episode>80</podcast:episode>
      <itunes:title>Snowflake’s playbook for operational excellence</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">74a0b052-a839-4384-92b7-c3d4cb13abab</guid>
      <link>https://share.transistor.fm/s/341816a1</link>
      <description>
        <![CDATA[<p>In this episode, Abi Noda speaks with Gilad Turbahn, Head of Developer Productivity, and Amy Yuan, Director of Engineering at Snowflake, about how their team builds and sustains operational excellence. They break down the practices and principles that guide their work—from creating two-way communication channels to treating engineers as customers. The conversation explores how Snowflake fosters trust, uses feedback loops to shape priorities, and maintains alignment through thoughtful planning. You’ll also hear how they engage with teams across the org, convert detractors, and use Customer Advisory Boards to bring voices from across the company into the decision-making process.</p><p><br><strong>Where to find Amy Yuan: </strong></p><p>• LinkedIn: <a href="https://www.linkedin.com/in/amy-yuan-a8ba783/">https://www.linkedin.com/in/amy-yuan-a8ba783/</a></p><p><br></p><p><strong>Where to find Gilad Turbahn:</strong></p><p>• LinkedIn: <a href="https://www.linkedin.com/in/giladturbahn/">https://www.linkedin.com/in/giladturbahn/</a></p><p><br></p><p><strong>Where to find Abi Noda:</strong></p><p>• LinkedIn: <a href="https://www.linkedin.com/in/abinoda">https://www.linkedin.com/in/abinoda</a> </p><p><br></p><p><strong>In this episode, we cover:</strong></p><p>(00:00) Intro: an overview of operational excellence</p><p>(04:13) Obstacles to executing with operational excellence</p><p>(05:51) An overview of the Snowflake playbook for operational excellence</p><p>(08:25) Who does the work of reaching out to customers</p><p>(09:06) The importance of customer engagement</p><p>(10:19) How Snowflake does customer engagement </p><p>(14:13) The types of feedback received and the two camps (supporters and detractors)</p><p>(16:55) How to influence detractors and how detractors actually help </p><p>(18:27) Using insiders as messengers</p><p>(22:48) An overview of Snowflake’s customer advisory board</p><p>(26:10) The importance of meeting in person (learnings from 
Warsaw and Berlin office visits)</p><p>(28:08) Managing up</p><p>(30:07) How planning is done at Snowflake</p><p>(36:25) Setting targets for OKRs, and Snowflake’s philosophy on metrics </p><p>(39:22) The annual plan and how it’s shared </p><p><br></p><p><strong>Referenced:</strong></p><ul><li><a href="https://getdx.com/podcast/developer-productivity-at-snowflake/">CTO buy-in, measuring sentiment, and customer focus</a></li><li><a href="https://www.snowflake.com/">Snowflake</a></li><li><a href="https://www.linkedin.com/in/benoit-dageville-3011845/">Benoit Dageville - Snowflake Computing | LinkedIn</a></li><li><a href="https://www.linkedin.com/in/thierry-cruanes-3927363/">Thierry Cruanes - Snowflake Computing | LinkedIn</a></li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode, Abi Noda speaks with Gilad Turbahn, Head of Developer Productivity, and Amy Yuan, Director of Engineering at Snowflake, about how their team builds and sustains operational excellence. They break down the practices and principles that guide their work—from creating two-way communication channels to treating engineers as customers. The conversation explores how Snowflake fosters trust, uses feedback loops to shape priorities, and maintains alignment through thoughtful planning. You’ll also hear how they engage with teams across the org, convert detractors, and use Customer Advisory Boards to bring voices from across the company into the decision-making process.</p><p><br><strong>Where to find Amy Yuan: </strong></p><p>• LinkedIn: <a href="https://www.linkedin.com/in/amy-yuan-a8ba783/">https://www.linkedin.com/in/amy-yuan-a8ba783/</a></p><p><br></p><p><strong>Where to find Gilad Turbahn:</strong></p><p>• LinkedIn: <a href="https://www.linkedin.com/in/giladturbahn/">https://www.linkedin.com/in/giladturbahn/</a></p><p><br></p><p><strong>Where to find Abi Noda:</strong></p><p>• LinkedIn: <a href="https://www.linkedin.com/in/abinoda">https://www.linkedin.com/in/abinoda</a> </p><p><br></p><p><strong>In this episode, we cover:</strong></p><p>(00:00) Intro: an overview of operational excellence</p><p>(04:13) Obstacles to executing with operational excellence</p><p>(05:51) An overview of the Snowflake playbook for operational excellence</p><p>(08:25) Who does the work of reaching out to customers</p><p>(09:06) The importance of customer engagement</p><p>(10:19) How Snowflake does customer engagement </p><p>(14:13) The types of feedback received and the two camps (supporters and detractors)</p><p>(16:55) How to influence detractors and how detractors actually help </p><p>(18:27) Using insiders as messengers</p><p>(22:48) An overview of Snowflake’s customer advisory board</p><p>(26:10) The importance of meeting in person (learnings from 
Warsaw and Berlin office visits)</p><p>(28:08) Managing up</p><p>(30:07) How planning is done at Snowflake</p><p>(36:25) Setting targets for OKRs, and Snowflake’s philosophy on metrics </p><p>(39:22) The annual plan and how it’s shared </p><p><br></p><p><strong>Referenced:</strong></p><ul><li><a href="https://getdx.com/podcast/developer-productivity-at-snowflake/">CTO buy-in, measuring sentiment, and customer focus</a></li><li><a href="https://www.snowflake.com/">Snowflake</a></li><li><a href="https://www.linkedin.com/in/benoit-dageville-3011845/">Benoit Dageville - Snowflake Computing | LinkedIn</a></li><li><a href="https://www.linkedin.com/in/thierry-cruanes-3927363/">Thierry Cruanes - Snowflake Computing | LinkedIn</a></li></ul>]]>
      </content:encoded>
      <pubDate>Fri, 20 Jun 2025 06:58:48 -0600</pubDate>
      <author>DX</author>
      <enclosure url="https://media.transistor.fm/341816a1/e7cc2139.mp3" length="43323323" type="audio/mpeg"/>
      <itunes:author>DX</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/r_cK1ErK9znlcXYPpw7o9uaD2kHA3qbalbOUAxSS3y0/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9mZmIw/MDlkYWFmZmViYjU3/NmY2MjU5YjVlZmQ0/NDE0OC5wbmc.jpg"/>
      <itunes:duration>2705</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode, Abi Noda speaks with Gilad Turbahn, Head of Developer Productivity, and Amy Yuan, Director of Engineering at Snowflake, about how their team builds and sustains operational excellence. They break down the practices and principles that guide their work—from creating two-way communication channels to treating engineers as customers. The conversation explores how Snowflake fosters trust, uses feedback loops to shape priorities, and maintains alignment through thoughtful planning. You’ll also hear how they engage with teams across the org, convert detractors, and use Customer Advisory Boards to bring voices from across the company into the decision-making process.</p><p><br><strong>Where to find Amy Yuan: </strong></p><p>• LinkedIn: <a href="https://www.linkedin.com/in/amy-yuan-a8ba783/">https://www.linkedin.com/in/amy-yuan-a8ba783/</a></p><p><br></p><p><strong>Where to find Gilad Turbahn:</strong></p><p>• LinkedIn: <a href="https://www.linkedin.com/in/giladturbahn/">https://www.linkedin.com/in/giladturbahn/</a></p><p><br></p><p><strong>Where to find Abi Noda:</strong></p><p>• LinkedIn: <a href="https://www.linkedin.com/in/abinoda">https://www.linkedin.com/in/abinoda</a> </p><p><br></p><p><strong>In this episode, we cover:</strong></p><p>(00:00) Intro: an overview of operational excellence</p><p>(04:13) Obstacles to executing with operational excellence</p><p>(05:51) An overview of the Snowflake playbook for operational excellence</p><p>(08:25) Who does the work of reaching out to customers</p><p>(09:06) The importance of customer engagement</p><p>(10:19) How Snowflake does customer engagement </p><p>(14:13) The types of feedback received and the two camps (supporters and detractors)</p><p>(16:55) How to influence detractors and how detractors actually help </p><p>(18:27) Using insiders as messengers</p><p>(22:48) An overview of Snowflake’s customer advisory board</p><p>(26:10) The importance of meeting in person (learnings from 
Warsaw and Berlin office visits)</p><p>(28:08) Managing up</p><p>(30:07) How planning is done at Snowflake</p><p>(36:25) Setting targets for OKRs, and Snowflake’s philosophy on metrics </p><p>(39:22) The annual plan and how it’s shared </p><p><br></p><p><strong>Referenced:</strong></p><ul><li><a href="https://getdx.com/podcast/developer-productivity-at-snowflake/">CTO buy-in, measuring sentiment, and customer focus</a></li><li><a href="https://www.snowflake.com/">Snowflake</a></li><li><a href="https://www.linkedin.com/in/benoit-dageville-3011845/">Benoit Dageville - Snowflake Computing | LinkedIn</a></li><li><a href="https://www.linkedin.com/in/thierry-cruanes-3927363/">Thierry Cruanes - Snowflake Computing | LinkedIn</a></li></ul>]]>
      </itunes:summary>
      <itunes:keywords>Software Engineering, Developer Experience, Developer Productivity, Engineering Leadership</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.linkedin.com/in/abinoda/" img="https://img.transistorcdn.com/3JLLtbOaHGiInF_ixHI50J6hrbPvsmkptOyXbjqu2gw/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vMDNiMDZmMjkt/ZmQxMy00NmNlLWE5/YzEtOTAzODQ4YTE2/YjgzLzE2NjU0MzA2/ODktaW1hZ2UuanBn.jpg">Abi Noda</podcast:person>
    </item>
    <item>
      <title>The biggest obstacles preventing GenAI adoption — and how to overcome them</title>
      <itunes:episode>82</itunes:episode>
      <podcast:episode>82</podcast:episode>
      <itunes:title>The biggest obstacles preventing GenAI adoption — and how to overcome them</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">ede96dad-7e9c-4f69-a0c5-e04c84c5ece9</guid>
      <link>https://share.transistor.fm/s/8fdf87ba</link>
      <description>
        <![CDATA[<p>In this episode, Abi Noda speaks with DX CTO Laura Tacho about the real obstacles holding back AI adoption in engineering teams. They discuss why technical challenges are rarely the blocker, and how fear, unclear expectations, and inflated hype can stall progress. Laura shares practical strategies for driving adoption, including how to model usage from the top down, build momentum through champions and training programs, and measure impact effectively—starting with establishing a baseline before introducing AI tools.</p><p><br></p><p><strong>Where to find Laura Tacho: </strong></p><p>• LinkedIn: <a href="https://www.linkedin.com/in/lauratacho/">https://www.linkedin.com/in/lauratacho/</a></p><p>• Website: <a href="https://lauratacho.com/">https://lauratacho.com/</a></p><p><br></p><p><strong>Where to find Abi Noda:</strong></p><p>• LinkedIn: <a href="https://www.linkedin.com/in/abinoda">https://www.linkedin.com/in/abinoda</a> </p><p><br></p><p><strong>In this episode, we cover:</strong></p><p>(00:00) Intro: The full spectrum of AI adoption</p><p>(03:02) The hype of AI</p><p>(04:46) Some statistics around the current state of AI coding tool adoption</p><p>(07:27) The real barriers to AI adoption</p><p>(09:31) How to drive AI adoption </p><p>(15:47) Measuring AI’s impact </p><p>(19:49) More strategies for driving AI adoption </p><p>(23:54) The methods companies are actually using to drive impact</p><p>(29:15) Questions from the chat </p><p>(39:48) Wrapping up</p><p><br></p><p><strong>Referenced:</strong></p><ul><li><a href="https://getdx.com/corefour">DX Core 4 Productivity Framework</a></li><li><a href="https://getdx.com/podcast/brian-houck-ai-adoption-playbook/">The AI adoption playbook: Lessons from Microsoft's internal strategy</a></li><li><a href="https://techcrunch.com/2025/04/29/microsoft-ceo-says-up-to-30-of-the-companys-code-was-written-by-ai/">Microsoft CEO says up to 30% of the company's code was written by AI | TechCrunch</a></li><li><a 
href="https://www.forbes.com/sites/douglaslaney/2025/04/09/selling-ai-strategy-to-employees-shopify-ceos-manifesto/">Viral Shopify CEO Manifesto Says AI Now Mandatory For All Employees</a></li><li><a href="https://dora.dev/research/ai/gen-ai-report/">DORA | Impact of Generative AI in Software Development</a></li><li><a href="https://getdx.com/guide/ai-assisted-engineering/">Guide to AI assisted engineering</a></li><li><a href="https://www.linkedin.com/in/justinreock/">Justin Reock - DX | LinkedIn</a></li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode, Abi Noda speaks with DX CTO Laura Tacho about the real obstacles holding back AI adoption in engineering teams. They discuss why technical challenges are rarely the blocker, and how fear, unclear expectations, and inflated hype can stall progress. Laura shares practical strategies for driving adoption, including how to model usage from the top down, build momentum through champions and training programs, and measure impact effectively—starting with establishing a baseline before introducing AI tools.</p><p><br></p><p><strong>Where to find Laura Tacho: </strong></p><p>• LinkedIn: <a href="https://www.linkedin.com/in/lauratacho/">https://www.linkedin.com/in/lauratacho/</a></p><p>• Website: <a href="https://lauratacho.com/">https://lauratacho.com/</a></p><p><br></p><p><strong>Where to find Abi Noda:</strong></p><p>• LinkedIn: <a href="https://www.linkedin.com/in/abinoda">https://www.linkedin.com/in/abinoda</a> </p><p><br></p><p><strong>In this episode, we cover:</strong></p><p>(00:00) Intro: The full spectrum of AI adoption</p><p>(03:02) The hype of AI</p><p>(04:46) Some statistics around the current state of AI coding tool adoption</p><p>(07:27) The real barriers to AI adoption</p><p>(09:31) How to drive AI adoption </p><p>(15:47) Measuring AI’s impact </p><p>(19:49) More strategies for driving AI adoption </p><p>(23:54) The methods companies are actually using to drive impact</p><p>(29:15) Questions from the chat </p><p>(39:48) Wrapping up</p><p><br></p><p><strong>Referenced:</strong></p><ul><li><a href="https://getdx.com/corefour">DX Core 4 Productivity Framework</a></li><li><a href="https://getdx.com/podcast/brian-houck-ai-adoption-playbook/">The AI adoption playbook: Lessons from Microsoft's internal strategy</a></li><li><a href="https://techcrunch.com/2025/04/29/microsoft-ceo-says-up-to-30-of-the-companys-code-was-written-by-ai/">Microsoft CEO says up to 30% of the company's code was written by AI | TechCrunch</a></li><li><a 
href="https://www.forbes.com/sites/douglaslaney/2025/04/09/selling-ai-strategy-to-employees-shopify-ceos-manifesto/">Viral Shopify CEO Manifesto Says AI Now Mandatory For All Employees</a></li><li><a href="https://dora.dev/research/ai/gen-ai-report/">DORA | Impact of Generative AI in Software Development</a></li><li><a href="https://getdx.com/guide/ai-assisted-engineering/">Guide to AI assisted engineering</a></li><li><a href="https://www.linkedin.com/in/justinreock/">Justin Reock - DX | LinkedIn</a></li></ul>]]>
      </content:encoded>
      <pubDate>Fri, 06 Jun 2025 07:30:00 -0600</pubDate>
      <author>DX</author>
      <enclosure url="https://media.transistor.fm/8fdf87ba/e500b3c4.mp3" length="40426800" type="audio/mpeg"/>
      <itunes:author>DX</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/Zv6Jro_ME3UTDQW-hFQjNqLEcBOjZuq6b-MqSpPOI6o/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9iYjQz/NTU3OTVhNDgzMjEz/MTc3YWZhNTAzYWMz/MDlhZi5wbmc.jpg"/>
      <itunes:duration>2522</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode, Abi Noda speaks with DX CTO Laura Tacho about the real obstacles holding back AI adoption in engineering teams. They discuss why technical challenges are rarely the blocker, and how fear, unclear expectations, and inflated hype can stall progress. Laura shares practical strategies for driving adoption, including how to model usage from the top down, build momentum through champions and training programs, and measure impact effectively—starting with establishing a baseline before introducing AI tools.</p><p><br></p><p><strong>Where to find Laura Tacho: </strong></p><p>• LinkedIn: <a href="https://www.linkedin.com/in/lauratacho/">https://www.linkedin.com/in/lauratacho/</a></p><p>• Website: <a href="https://lauratacho.com/">https://lauratacho.com/</a></p><p><br></p><p><strong>Where to find Abi Noda:</strong></p><p>• LinkedIn: <a href="https://www.linkedin.com/in/abinoda">https://www.linkedin.com/in/abinoda</a> </p><p><br></p><p><strong>In this episode, we cover:</strong></p><p>(00:00) Intro: The full spectrum of AI adoption</p><p>(03:02) The hype of AI</p><p>(04:46) Some statistics around the current state of AI coding tool adoption</p><p>(07:27) The real barriers to AI adoption</p><p>(09:31) How to drive AI adoption </p><p>(15:47) Measuring AI’s impact </p><p>(19:49) More strategies for driving AI adoption </p><p>(23:54) The methods companies are actually using to drive impact</p><p>(29:15) Questions from the chat </p><p>(39:48) Wrapping up</p><p><br></p><p><strong>Referenced:</strong></p><ul><li><a href="https://getdx.com/corefour">DX Core 4 Productivity Framework</a></li><li><a href="https://getdx.com/podcast/brian-houck-ai-adoption-playbook/">The AI adoption playbook: Lessons from Microsoft's internal strategy</a></li><li><a href="https://techcrunch.com/2025/04/29/microsoft-ceo-says-up-to-30-of-the-companys-code-was-written-by-ai/">Microsoft CEO says up to 30% of the company's code was written by AI | TechCrunch</a></li><li><a 
href="https://www.forbes.com/sites/douglaslaney/2025/04/09/selling-ai-strategy-to-employees-shopify-ceos-manifesto/">Viral Shopify CEO Manifesto Says AI Now Mandatory For All Employees</a></li><li><a href="https://dora.dev/research/ai/gen-ai-report/">DORA | Impact of Generative AI in Software Development</a></li><li><a href="https://getdx.com/guide/ai-assisted-engineering/">Guide to AI assisted engineering</a></li><li><a href="https://www.linkedin.com/in/justinreock/">Justin Reock - DX | LinkedIn</a></li></ul>]]>
      </itunes:summary>
      <itunes:keywords>Software Engineering, Developer Experience, Developer Productivity, Engineering Leadership</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.linkedin.com/in/abinoda/" img="https://img.transistorcdn.com/3JLLtbOaHGiInF_ixHI50J6hrbPvsmkptOyXbjqu2gw/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vMDNiMDZmMjkt/ZmQxMy00NmNlLWE5/YzEtOTAzODQ4YTE2/YjgzLzE2NjU0MzA2/ODktaW1hZ2UuanBn.jpg">Abi Noda</podcast:person>
    </item>
    <item>
      <title>DORA’s latest research on AI impact</title>
      <itunes:episode>81</itunes:episode>
      <podcast:episode>81</podcast:episode>
      <itunes:title>DORA’s latest research on AI impact</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">6f276206-3ffd-4fad-ab02-a72da56d1497</guid>
      <link>https://share.transistor.fm/s/e6229301</link>
      <description>
        <![CDATA[<p>In this episode, Abi Noda speaks with Derek DeBellis, lead researcher at Google’s DORA team, about their latest report on generative AI’s impact on software productivity.</p><p>They dive into how the survey was built, what it reveals about developer time and “flow,” and the surprising gap between individual and team outcomes. Derek also shares practical advice for leaders on measuring AI impact and aligning metrics with organizational goals.</p><p><br></p><p><strong>Where to find Derek DeBellis: </strong></p><p>• LinkedIn: <a href="https://www.linkedin.com/in/derekdebellis/">https://www.linkedin.com/in/derekdebellis/</a></p><p><br></p><p><strong>Where to find Abi Noda:</strong></p><p>• LinkedIn: <a href="https://www.linkedin.com/in/abinoda">https://www.linkedin.com/in/abinoda</a> </p><p><br></p><p><strong>In this episode, we cover:</strong></p><p>(00:00) Intro: DORA’s new Impact of Gen AI report</p><p>(03:24) The methodology used to put together the surveys DORA used for the report </p><p>(06:44) An example of how a single word can throw off a question </p><p>(07:59) How DORA measures flow </p><p>(10:38) The two ways time was measured in the recent survey</p><p>(14:30) An overview of experiential surveying </p><p>(16:14) Why DORA asks about time </p><p>(19:50) Why Derek calls survey results ‘observational data’ </p><p>(21:49) Interesting findings from the report </p><p>(24:17) DORA’s definition of productivity </p><p>(26:22) Why a 2.1% increase in individual productivity is significant </p><p>(30:00) The report’s findings on decreased team delivery throughput and stability </p><p>(32:40) Tips for measuring AI’s impact on productivity </p><p>(38:20) Wrap up: understanding the data </p><p><br></p><p><strong>Referenced:</strong></p><ul><li><a href="https://dora.dev/research/ai/gen-ai-report/">DORA | Impact of Generative AI in Software Development</a></li><li><a href="https://getdx.com/podcast/dora-research-google/">The science behind 
DORA</a></li><li><a href="https://nihrecord.nih.gov/2020/03/20/yale-professor-divulges-strategies-happy-life">Yale Professor Divulges Strategies for a Happy Life </a></li><li><a href="https://www.cognitionandculture.net/blogs/olivier-morin/incredible-listening-to-when-im-64-makes-you-forget-your-age/index.html">Incredible! Listening to ‘When I’m 64’ makes you forget your age</a></li><li><a href="https://www.amazon.com/Slow-Productivity-Accomplishment-Without-Burnout/dp/0593544854">Slow Productivity: The Lost Art of Accomplishment without Burnout</a></li><li><a href="https://getdx.com/guide/dora-space-devex/">DORA, SPACE, and DevEx: Which framework should you use?</a></li><li><a href="https://getdx.com/podcast/developer-productivity-at-microsoft/">SPACE framework, PRs per engineer, AI research</a></li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode, Abi Noda speaks with Derek DeBellis, lead researcher at Google’s DORA team, about their latest report on generative AI’s impact on software productivity.</p><p>They dive into how the survey was built, what it reveals about developer time and “flow,” and the surprising gap between individual and team outcomes. Derek also shares practical advice for leaders on measuring AI impact and aligning metrics with organizational goals.</p><p><br></p><p><strong>Where to find Derek DeBellis: </strong></p><p>• LinkedIn: <a href="https://www.linkedin.com/in/derekdebellis/">https://www.linkedin.com/in/derekdebellis/</a></p><p><br></p><p><strong>Where to find Abi Noda:</strong></p><p>• LinkedIn: <a href="https://www.linkedin.com/in/abinoda">https://www.linkedin.com/in/abinoda</a> </p><p><br></p><p><strong>In this episode, we cover:</strong></p><p>(00:00) Intro: DORA’s new Impact of Gen AI report</p><p>(03:24) The methodology used to put together the surveys DORA used for the report </p><p>(06:44) An example of how a single word can throw off a question </p><p>(07:59) How DORA measures flow </p><p>(10:38) The two ways time was measured in the recent survey</p><p>(14:30) An overview of experiential surveying </p><p>(16:14) Why DORA asks about time </p><p>(19:50) Why Derek calls survey results ‘observational data’ </p><p>(21:49) Interesting findings from the report </p><p>(24:17) DORA’s definition of productivity </p><p>(26:22) Why a 2.1% increase in individual productivity is significant </p><p>(30:00) The report’s findings on decreased team delivery throughput and stability </p><p>(32:40) Tips for measuring AI’s impact on productivity </p><p>(38:20) Wrap up: understanding the data </p><p><br></p><p><strong>Referenced:</strong></p><ul><li><a href="https://dora.dev/research/ai/gen-ai-report/">DORA | Impact of Generative AI in Software Development</a></li><li><a href="https://getdx.com/podcast/dora-research-google/">The science behind 
DORA</a></li><li><a href="https://nihrecord.nih.gov/2020/03/20/yale-professor-divulges-strategies-happy-life">Yale Professor Divulges Strategies for a Happy Life </a></li><li><a href="https://www.cognitionandculture.net/blogs/olivier-morin/incredible-listening-to-when-im-64-makes-you-forget-your-age/index.html">Incredible! Listening to ‘When I’m 64’ makes you forget your age</a></li><li><a href="https://www.amazon.com/Slow-Productivity-Accomplishment-Without-Burnout/dp/0593544854">Slow Productivity: The Lost Art of Accomplishment without Burnout</a></li><li><a href="https://getdx.com/guide/dora-space-devex/">DORA, SPACE, and DevEx: Which framework should you use?</a></li><li><a href="https://getdx.com/podcast/developer-productivity-at-microsoft/">SPACE framework, PRs per engineer, AI research</a></li></ul>]]>
      </content:encoded>
      <pubDate>Fri, 23 May 2025 07:00:00 -0600</pubDate>
      <author>DX</author>
      <enclosure url="https://media.transistor.fm/e6229301/a1843071.mp3" length="38854720" type="audio/mpeg"/>
      <itunes:author>DX</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/_glDbdVCXjRJhuGix9Cl2H2LjF58sUnpQaE7srxl1qY/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS84Y2Vj/ZTI2MWQ1MTkyZjRk/OTIxNjE5MGI0MTY3/NWNmNy5wbmc.jpg"/>
      <itunes:duration>2424</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode, Abi Noda speaks with Derek DeBellis, lead researcher at Google’s DORA team, about their latest report on generative AI’s impact on software productivity.</p><p>They dive into how the survey was built, what it reveals about developer time and “flow,” and the surprising gap between individual and team outcomes. Derek also shares practical advice for leaders on measuring AI impact and aligning metrics with organizational goals.</p><p><br></p><p><strong>Where to find Derek DeBellis: </strong></p><p>• LinkedIn: <a href="https://www.linkedin.com/in/derekdebellis/">https://www.linkedin.com/in/derekdebellis/</a></p><p><br></p><p><strong>Where to find Abi Noda:</strong></p><p>• LinkedIn: <a href="https://www.linkedin.com/in/abinoda">https://www.linkedin.com/in/abinoda</a> </p><p><br></p><p><strong>In this episode, we cover:</strong></p><p>(00:00) Intro: DORA’s new Impact of Gen AI report</p><p>(03:24) The methodology used to put together the surveys DORA used for the report </p><p>(06:44) An example of how a single word can throw off a question </p><p>(07:59) How DORA measures flow </p><p>(10:38) The two ways time was measured in the recent survey</p><p>(14:30) An overview of experiential surveying </p><p>(16:14) Why DORA asks about time </p><p>(19:50) Why Derek calls survey results ‘observational data’ </p><p>(21:49) Interesting findings from the report </p><p>(24:17) DORA’s definition of productivity </p><p>(26:22) Why a 2.1% increase in individual productivity is significant </p><p>(30:00) The report’s findings on decreased team delivery throughput and stability </p><p>(32:40) Tips for measuring AI’s impact on productivity </p><p>(38:20) Wrap up: understanding the data </p><p><br></p><p><strong>Referenced:</strong></p><ul><li><a href="https://dora.dev/research/ai/gen-ai-report/">DORA | Impact of Generative AI in Software Development</a></li><li><a href="https://getdx.com/podcast/dora-research-google/">The science behind 
DORA</a></li><li><a href="https://nihrecord.nih.gov/2020/03/20/yale-professor-divulges-strategies-happy-life">Yale Professor Divulges Strategies for a Happy Life </a></li><li><a href="https://www.cognitionandculture.net/blogs/olivier-morin/incredible-listening-to-when-im-64-makes-you-forget-your-age/index.html">Incredible! Listening to ‘When I’m 64’ makes you forget your age</a></li><li><a href="https://www.amazon.com/Slow-Productivity-Accomplishment-Without-Burnout/dp/0593544854">Slow Productivity: The Lost Art of Accomplishment without Burnout</a></li><li><a href="https://getdx.com/guide/dora-space-devex/">DORA, SPACE, and DevEx: Which framework should you use?</a></li><li><a href="https://getdx.com/podcast/developer-productivity-at-microsoft/">SPACE framework, PRs per engineer, AI research</a></li></ul>]]>
      </itunes:summary>
      <itunes:keywords>Software Engineering, Developer Experience, Developer Productivity, Engineering Leadership</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.linkedin.com/in/abinoda/" img="https://img.transistorcdn.com/3JLLtbOaHGiInF_ixHI50J6hrbPvsmkptOyXbjqu2gw/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vMDNiMDZmMjkt/ZmQxMy00NmNlLWE5/YzEtOTAzODQ4YTE2/YjgzLzE2NjU0MzA2/ODktaW1hZ2UuanBn.jpg">Abi Noda</podcast:person>
    </item>
    <item>
      <title>Setting targets for developer productivity metrics</title>
      <itunes:episode>79</itunes:episode>
      <podcast:episode>79</podcast:episode>
      <itunes:title>Setting targets for developer productivity metrics</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">d07f55ef-e4de-4936-9622-b8375e2a2c0f</guid>
      <link>https://share.transistor.fm/s/7a3afea2</link>
      <description>
        <![CDATA[<p>In this episode, Abi Noda is joined by Laura Tacho, CTO at DX, engineering leadership coach, and creator of the Core 4 framework. They explore how engineering organizations can avoid common pitfalls when adopting metrics frameworks like SPACE, DORA, and Core 4.</p><p>Laura shares a practical guide to getting started with Core 4—beginning with controllable input metrics that teams can actually influence. The conversation touches on Goodhart’s Law, why focusing too much on output metrics can lead to data distortion, and how leaders can build a culture of continuous improvement rooted in meaningful measurement.</p><p><br></p><p><strong>Where to find Laura Tacho: </strong></p><p>• LinkedIn: <a href="https://www.linkedin.com/in/lauratacho/">https://www.linkedin.com/in/lauratacho/</a></p><p>• Website: <a href="https://lauratacho.com/">https://lauratacho.com/</a></p><p><br></p><p><strong>Where to find Abi Noda:</strong></p><p>• LinkedIn: <a href="https://www.linkedin.com/in/abinoda">https://www.linkedin.com/in/abinoda</a> </p><p><br></p><p><strong>In this episode, we cover:</strong></p><p>(00:00) Intro: Improving systems, not distorting data</p><p>(02:20) Goal setting with the new Core 4 framework</p><p>(08:01) A quick primer on Goodhart’s law</p><p>(10:02) Input vs. output metrics—and why targeting outputs is problematic</p><p>(13:38) A health analogy demonstrating input vs. 
output</p><p>(17:03) A look at how the key input metrics in Core 4 drive output metrics </p><p>(24:08) How to counteract gamification </p><p>(28:24) How to get developer buy-in</p><p>(30:48) The number of metrics to focus on </p><p>(32:44) Helping leadership and teams connect the dots to how input goals drive output</p><p>(35:20) Demonstrating business impact </p><p>(38:10) Best practices for goal setting</p><p><br></p><p><strong>Referenced:</strong></p><ul><li><a href="https://getdx.com/corefour">DX Core 4 Productivity Framework</a></li><li><a href="https://getdx.com/podcast/">Engineering Enablement Podcast</a></li><li><a href="https://dora.dev/guides/dora-metrics-four-keys/">DORA’s software delivery metrics: the four keys</a></li><li><a href="https://getdx.com/research/space-of-developer-productivity/">The SPACE of Developer Productivity: There’s more to it than you think</a></li><li><a href="https://getdx.com/research/devex-what-actually-drives-productivity/">DevEx: What Actually Drives Productivity</a></li><li><a href="https://getdx.com/guide/dora-space-devex/">DORA, SPACE, and DevEx: Which framework should you use?</a></li><li><a href="https://en.wikipedia.org/wiki/Goodhart%27s_law">Goodhart's law </a></li><li><a href="https://www.linkedin.com/in/nicolefv/">Nicole Forsgren - Microsoft | LinkedIn</a></li><li><a href="https://en.wikipedia.org/wiki/Campbell%27s_law">Campbell's law </a></li><li><a href="https://www.lennysnewsletter.com/p/introducing-core-4-the-best-way-to">Introducing Core 4: The best way to measure and improve your product velocity</a></li><li><a href="https://getdx.com/podcast/dx-core-4-framework-overview/">DX Core 4: Framework overview, key design principles, and practical applications</a></li><li><a href="https://newsletter.getdx.com/p/dx-core-4-2024-benchmarks">DX Core 4: 2024 benchmarks - by Abi Noda</a></li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode, Abi Noda is joined by Laura Tacho, CTO at DX, engineering leadership coach, and creator of the Core 4 framework. They explore how engineering organizations can avoid common pitfalls when adopting metrics frameworks like SPACE, DORA, and Core 4.</p><p>Laura shares a practical guide to getting started with Core 4—beginning with controllable input metrics that teams can actually influence. The conversation touches on Goodhart’s Law, why focusing too much on output metrics can lead to data distortion, and how leaders can build a culture of continuous improvement rooted in meaningful measurement.</p><p><br></p><p><strong>Where to find Laura Tacho: </strong></p><p>• LinkedIn: <a href="https://www.linkedin.com/in/lauratacho/">https://www.linkedin.com/in/lauratacho/</a></p><p>• Website: <a href="https://lauratacho.com/">https://lauratacho.com/</a></p><p><br></p><p><strong>Where to find Abi Noda:</strong></p><p>• LinkedIn: <a href="https://www.linkedin.com/in/abinoda">https://www.linkedin.com/in/abinoda</a> </p><p><br></p><p><strong>In this episode, we cover:</strong></p><p>(00:00) Intro: Improving systems, not distorting data</p><p>(02:20) Goal setting with the new Core 4 framework</p><p>(08:01) A quick primer on Goodhart’s law</p><p>(10:02) Input vs. output metrics—and why targeting outputs is problematic</p><p>(13:38) A health analogy demonstrating input vs. 
output</p><p>(17:03) A look at how the key input metrics in Core 4 drive output metrics </p><p>(24:08) How to counteract gamification </p><p>(28:24) How to get developer buy-in</p><p>(30:48) The number of metrics to focus on </p><p>(32:44) Helping leadership and teams connect the dots to how input goals drive output</p><p>(35:20) Demonstrating business impact </p><p>(38:10) Best practices for goal setting</p><p><br></p><p><strong>Referenced:</strong></p><ul><li><a href="https://getdx.com/corefour">DX Core 4 Productivity Framework</a></li><li><a href="https://getdx.com/podcast/">Engineering Enablement Podcast</a></li><li><a href="https://dora.dev/guides/dora-metrics-four-keys/">DORA’s software delivery metrics: the four keys</a></li><li><a href="https://getdx.com/research/space-of-developer-productivity/">The SPACE of Developer Productivity: There’s more to it than you think</a></li><li><a href="https://getdx.com/research/devex-what-actually-drives-productivity/">DevEx: What Actually Drives Productivity</a></li><li><a href="https://getdx.com/guide/dora-space-devex/">DORA, SPACE, and DevEx: Which framework should you use?</a></li><li><a href="https://en.wikipedia.org/wiki/Goodhart%27s_law">Goodhart's law </a></li><li><a href="https://www.linkedin.com/in/nicolefv/">Nicole Forsgren - Microsoft | LinkedIn</a></li><li><a href="https://en.wikipedia.org/wiki/Campbell%27s_law">Campbell's law </a></li><li><a href="https://www.lennysnewsletter.com/p/introducing-core-4-the-best-way-to">Introducing Core 4: The best way to measure and improve your product velocity</a></li><li><a href="https://getdx.com/podcast/dx-core-4-framework-overview/">DX Core 4: Framework overview, key design principles, and practical applications</a></li><li><a href="https://newsletter.getdx.com/p/dx-core-4-2024-benchmarks">DX Core 4: 2024 benchmarks - by Abi Noda</a></li></ul>]]>
      </content:encoded>
      <pubDate>Fri, 09 May 2025 07:00:00 -0600</pubDate>
      <author>DX</author>
      <enclosure url="https://media.transistor.fm/7a3afea2/e94e3d1d.mp3" length="41762459" type="audio/mpeg"/>
      <itunes:author>DX</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/jYOg6nfnCINv6VeEI0H_3liEN7YEzqdC316qTD8YztE/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9kYmNl/MTZlNTg3NmNjYThj/ZGU4OWE3YzZhNThj/ZWEyMi5wbmc.jpg"/>
      <itunes:duration>2606</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode, Abi Noda is joined by Laura Tacho, CTO at DX, engineering leadership coach, and creator of the Core 4 framework. They explore how engineering organizations can avoid common pitfalls when adopting metrics frameworks like SPACE, DORA, and Core 4.</p><p>Laura shares a practical guide to getting started with Core 4—beginning with controllable input metrics that teams can actually influence. The conversation touches on Goodhart’s Law, why focusing too much on output metrics can lead to data distortion, and how leaders can build a culture of continuous improvement rooted in meaningful measurement.</p><p><br></p><p><strong>Where to find Laura Tacho: </strong></p><p>• LinkedIn: <a href="https://www.linkedin.com/in/lauratacho/">https://www.linkedin.com/in/lauratacho/</a></p><p>• Website: <a href="https://lauratacho.com/">https://lauratacho.com/</a></p><p><br></p><p><strong>Where to find Abi Noda:</strong></p><p>• LinkedIn: <a href="https://www.linkedin.com/in/abinoda">https://www.linkedin.com/in/abinoda</a> </p><p><br></p><p><strong>In this episode, we cover:</strong></p><p>(00:00) Intro: Improving systems, not distorting data</p><p>(02:20) Goal setting with the new Core 4 framework</p><p>(08:01) A quick primer on Goodhart’s law</p><p>(10:02) Input vs. output metrics—and why targeting outputs is problematic</p><p>(13:38) A health analogy demonstrating input vs. 
output</p><p>(17:03) A look at how the key input metrics in Core 4 drive output metrics </p><p>(24:08) How to counteract gamification </p><p>(28:24) How to get developer buy-in</p><p>(30:48) The number of metrics to focus on </p><p>(32:44) Helping leadership and teams connect the dots to how input goals drive output</p><p>(35:20) Demonstrating business impact </p><p>(38:10) Best practices for goal setting</p><p><br></p><p><strong>Referenced:</strong></p><ul><li><a href="https://getdx.com/corefour">DX Core 4 Productivity Framework</a></li><li><a href="https://getdx.com/podcast/">Engineering Enablement Podcast</a></li><li><a href="https://dora.dev/guides/dora-metrics-four-keys/">DORA’s software delivery metrics: the four keys</a></li><li><a href="https://getdx.com/research/space-of-developer-productivity/">The SPACE of Developer Productivity: There’s more to it than you think</a></li><li><a href="https://getdx.com/research/devex-what-actually-drives-productivity/">DevEx: What Actually Drives Productivity</a></li><li><a href="https://getdx.com/guide/dora-space-devex/">DORA, SPACE, and DevEx: Which framework should you use?</a></li><li><a href="https://en.wikipedia.org/wiki/Goodhart%27s_law">Goodhart's law </a></li><li><a href="https://www.linkedin.com/in/nicolefv/">Nicole Forsgren - Microsoft | LinkedIn</a></li><li><a href="https://en.wikipedia.org/wiki/Campbell%27s_law">Campbell's law </a></li><li><a href="https://www.lennysnewsletter.com/p/introducing-core-4-the-best-way-to">Introducing Core 4: The best way to measure and improve your product velocity</a></li><li><a href="https://getdx.com/podcast/dx-core-4-framework-overview/">DX Core 4: Framework overview, key design principles, and practical applications</a></li><li><a href="https://newsletter.getdx.com/p/dx-core-4-2024-benchmarks">DX Core 4: 2024 benchmarks - by Abi Noda</a></li></ul>]]>
      </itunes:summary>
      <itunes:keywords>Software Engineering, Developer Experience, Developer Productivity, Engineering Leadership</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.linkedin.com/in/abinoda/" img="https://img.transistorcdn.com/3JLLtbOaHGiInF_ixHI50J6hrbPvsmkptOyXbjqu2gw/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vMDNiMDZmMjkt/ZmQxMy00NmNlLWE5/YzEtOTAzODQ4YTE2/YjgzLzE2NjU0MzA2/ODktaW1hZ2UuanBn.jpg">Abi Noda</podcast:person>
    </item>
    <item>
      <title>The AI adoption playbook: Lessons from Microsoft's internal strategy</title>
      <itunes:episode>78</itunes:episode>
      <podcast:episode>78</podcast:episode>
      <itunes:title>The AI adoption playbook: Lessons from Microsoft's internal strategy</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">15826f0b-ac7c-4324-a638-a88925d25b02</guid>
      <link>https://share.transistor.fm/s/34f87a77</link>
      <description>
        <![CDATA[<p>Brian Houck from Microsoft returns to discuss effective strategies for driving AI adoption among software development teams. Brian shares his insights into why the immense hype around AI often serves as a barrier rather than a facilitator for adoption, citing skepticism and inflated expectations among developers. He highlights the most effective approaches, including leadership advocacy, structured training, and cultivating local champions within teams to demonstrate practical use cases. </p><p>Brian emphasizes the importance of honest communication about AI's capabilities, avoiding over-promises, and ensuring that teams clearly understand what AI tools are best suited for. Additionally, he discusses common pitfalls, such as placing excessive pressure on individuals through leaderboards and unrealistic mandates, and stresses the importance of framing AI as an assistant rather than a replacement for developer skills. Finally, Brian explores the role of data and metrics in adoption efforts, offering practical advice on how to measure usage effectively and sustainably.</p><p><strong>Where to find Brian Houck: </strong></p><p>• LinkedIn: <a href="https://www.linkedin.com/in/brianhouck/">https://www.linkedin.com/in/brianhouck/</a> </p><p>• Website: <a href="https://www.microsoft.com/en-us/research/people/bhouck/">https://www.microsoft.com/en-us/research/people/bhouck/</a> </p><p><br></p><p><strong>Where to find Abi Noda:</strong></p><p>• LinkedIn: <a href="https://www.linkedin.com/in/abinoda">https://www.linkedin.com/in/abinoda</a> </p><p><br></p><p><strong>In this episode, we cover:</strong></p><p>(00:00) Intro: Why AI hype can hinder adoption among teams</p><p>(01:47) Key strategies companies use to successfully implement AI</p><p>(04:47) Understanding why adopting AI tools is uniquely challenging</p><p>(07:09) How clear and consistent leadership communication boosts AI adoption</p><p>(10:46) The value of team leaders ("local champions") 
demonstrating practical AI use</p><p>(14:26) Practical advice for identifying and empowering team champions</p><p>(16:31) Common mistakes companies make when encouraging AI adoption</p><p>(19:21) Simple technical reminders and nudges that encourage AI use</p><p>(20:24) Effective ways to track and measure AI usage through dashboards</p><p>(23:18) Working with team leaders and infrastructure teams to promote AI tools</p><p>(24:20) Understanding when to shift from adoption efforts to sustained use</p><p>(25:59) Insights into the real-world productivity impact of AI</p><p>(27:52) Discussing how AI affects long-term code maintenance</p><p>(29:02) Updates on ongoing research linking sleep quality to productivity</p><p><br></p><p><strong>Referenced:</strong></p><ul><li><a href="https://getdx.com/corefour">DX Core 4 Productivity Framework</a></li><li><a href="https://getdx.com/podcast/">Engineering Enablement Podcast</a></li><li><a href="https://cloud.google.com/devops/dora">DORA Metrics</a></li><li><a href="https://dropbox.tech/">Dropbox Engineering Blog</a></li><li><a href="https://codeascraft.com/">Etsy Engineering Blog</a></li><li><a href="https://www.pfizer.com/science/innovation/digital">Pfizer Digital Innovation</a></li><li><a href="https://www.atlassian.com/team-playbook/plays/brown-bag">Brown Bag Sessions – A Guide</a></li><li><a href="https://visualstudio.microsoft.com/services/github-copilot/">IDE Integration and AI Tools</a></li><li><a href="https://getdx.com/developer-productivity-dashboards/">Developer Productivity Dashboard Examples</a></li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Brian Houck from Microsoft returns to discuss effective strategies for driving AI adoption among software development teams. Brian shares his insights into why the immense hype around AI often serves as a barrier rather than a facilitator for adoption, citing skepticism and inflated expectations among developers. He highlights the most effective approaches, including leadership advocacy, structured training, and cultivating local champions within teams to demonstrate practical use cases. </p><p>Brian emphasizes the importance of honest communication about AI's capabilities, avoiding over-promises, and ensuring that teams clearly understand what AI tools are best suited for. Additionally, he discusses common pitfalls, such as placing excessive pressure on individuals through leaderboards and unrealistic mandates, and stresses the importance of framing AI as an assistant rather than a replacement for developer skills. Finally, Brian explores the role of data and metrics in adoption efforts, offering practical advice on how to measure usage effectively and sustainably.</p><p><strong>Where to find Brian Houck: </strong></p><p>• LinkedIn: <a href="https://www.linkedin.com/in/brianhouck/">https://www.linkedin.com/in/brianhouck/</a> </p><p>• Website: <a href="https://www.microsoft.com/en-us/research/people/bhouck/">https://www.microsoft.com/en-us/research/people/bhouck/</a> </p><p><br></p><p><strong>Where to find Abi Noda:</strong></p><p>• LinkedIn: <a href="https://www.linkedin.com/in/abinoda">https://www.linkedin.com/in/abinoda</a> </p><p><br></p><p><strong>In this episode, we cover:</strong></p><p>(00:00) Intro: Why AI hype can hinder adoption among teams</p><p>(01:47) Key strategies companies use to successfully implement AI</p><p>(04:47) Understanding why adopting AI tools is uniquely challenging</p><p>(07:09) How clear and consistent leadership communication boosts AI adoption</p><p>(10:46) The value of team leaders ("local champions") 
demonstrating practical AI use</p><p>(14:26) Practical advice for identifying and empowering team champions</p><p>(16:31) Common mistakes companies make when encouraging AI adoption</p><p>(19:21) Simple technical reminders and nudges that encourage AI use</p><p>(20:24) Effective ways to track and measure AI usage through dashboards</p><p>(23:18) Working with team leaders and infrastructure teams to promote AI tools</p><p>(24:20) Understanding when to shift from adoption efforts to sustained use</p><p>(25:59) Insights into the real-world productivity impact of AI</p><p>(27:52) Discussing how AI affects long-term code maintenance</p><p>(29:02) Updates on ongoing research linking sleep quality to productivity</p><p><br></p><p><strong>Referenced:</strong></p><ul><li><a href="https://getdx.com/corefour">DX Core 4 Productivity Framework</a></li><li><a href="https://getdx.com/podcast/">Engineering Enablement Podcast</a></li><li><a href="https://cloud.google.com/devops/dora">DORA Metrics</a></li><li><a href="https://dropbox.tech/">Dropbox Engineering Blog</a></li><li><a href="https://codeascraft.com/">Etsy Engineering Blog</a></li><li><a href="https://www.pfizer.com/science/innovation/digital">Pfizer Digital Innovation</a></li><li><a href="https://www.atlassian.com/team-playbook/plays/brown-bag">Brown Bag Sessions – A Guide</a></li><li><a href="https://visualstudio.microsoft.com/services/github-copilot/">IDE Integration and AI Tools</a></li><li><a href="https://getdx.com/developer-productivity-dashboards/">Developer Productivity Dashboard Examples</a></li></ul>]]>
      </content:encoded>
      <pubDate>Fri, 18 Apr 2025 07:30:00 -0600</pubDate>
      <author>DX</author>
      <enclosure url="https://media.transistor.fm/34f87a77/c7b34c49.mp3" length="28071845" type="audio/mpeg"/>
      <itunes:author>DX</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/Gp5ZBmYQvVn7aa_I4NUQc0flr2y6lG8IYLLUrag1RBs/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS81NGFk/NGJhZmM4NjZlZmQ3/OGJhYjM4YTk1OTI3/ZmZiZS5wbmc.jpg"/>
      <itunes:duration>1750</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Brian Houck from Microsoft returns to discuss effective strategies for driving AI adoption among software development teams. Brian shares his insights into why the immense hype around AI often serves as a barrier rather than a facilitator for adoption, citing skepticism and inflated expectations among developers. He highlights the most effective approaches, including leadership advocacy, structured training, and cultivating local champions within teams to demonstrate practical use cases. </p><p>Brian emphasizes the importance of honest communication about AI's capabilities, avoiding over-promises, and ensuring that teams clearly understand what AI tools are best suited for. Additionally, he discusses common pitfalls, such as placing excessive pressure on individuals through leaderboards and unrealistic mandates, and stresses the importance of framing AI as an assistant rather than a replacement for developer skills. Finally, Brian explores the role of data and metrics in adoption efforts, offering practical advice on how to measure usage effectively and sustainably.</p><p><strong>Where to find Brian Houck: </strong></p><p>• LinkedIn: <a href="https://www.linkedin.com/in/brianhouck/">https://www.linkedin.com/in/brianhouck/</a> </p><p>• Website: <a href="https://www.microsoft.com/en-us/research/people/bhouck/">https://www.microsoft.com/en-us/research/people/bhouck/</a> </p><p><br></p><p><strong>Where to find Abi Noda:</strong></p><p>• LinkedIn: <a href="https://www.linkedin.com/in/abinoda">https://www.linkedin.com/in/abinoda</a> </p><p><br></p><p><strong>In this episode, we cover:</strong></p><p>(00:00) Intro: Why AI hype can hinder adoption among teams</p><p>(01:47) Key strategies companies use to successfully implement AI</p><p>(04:47) Understanding why adopting AI tools is uniquely challenging</p><p>(07:09) How clear and consistent leadership communication boosts AI adoption</p><p>(10:46) The value of team leaders ("local champions") 
demonstrating practical AI use</p><p>(14:26) Practical advice for identifying and empowering team champions</p><p>(16:31) Common mistakes companies make when encouraging AI adoption</p><p>(19:21) Simple technical reminders and nudges that encourage AI use</p><p>(20:24) Effective ways to track and measure AI usage through dashboards</p><p>(23:18) Working with team leaders and infrastructure teams to promote AI tools</p><p>(24:20) Understanding when to shift from adoption efforts to sustained use</p><p>(25:59) Insights into the real-world productivity impact of AI</p><p>(27:52) Discussing how AI affects long-term code maintenance</p><p>(29:02) Updates on ongoing research linking sleep quality to productivity</p><p><br></p><p><strong>Referenced:</strong></p><ul><li><a href="https://getdx.com/corefour">DX Core 4 Productivity Framework</a></li><li><a href="https://getdx.com/podcast/">Engineering Enablement Podcast</a></li><li><a href="https://cloud.google.com/devops/dora">DORA Metrics</a></li><li><a href="https://dropbox.tech/">Dropbox Engineering Blog</a></li><li><a href="https://codeascraft.com/">Etsy Engineering Blog</a></li><li><a href="https://www.pfizer.com/science/innovation/digital">Pfizer Digital Innovation</a></li><li><a href="https://www.atlassian.com/team-playbook/plays/brown-bag">Brown Bag Sessions – A Guide</a></li><li><a href="https://visualstudio.microsoft.com/services/github-copilot/">IDE Integration and AI Tools</a></li><li><a href="https://getdx.com/developer-productivity-dashboards/">Developer Productivity Dashboard Examples</a></li></ul>]]>
      </itunes:summary>
      <itunes:keywords>Software Engineering, Developer Experience, Developer Productivity, Engineering Leadership</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.linkedin.com/in/abinoda/" img="https://img.transistorcdn.com/3JLLtbOaHGiInF_ixHI50J6hrbPvsmkptOyXbjqu2gw/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vMDNiMDZmMjkt/ZmQxMy00NmNlLWE5/YzEtOTAzODQ4YTE2/YjgzLzE2NjU0MzA2/ODktaW1hZ2UuanBn.jpg">Abi Noda</podcast:person>
    </item>
    <item>
      <title>Gene Kim on developer experience and AI engineering</title>
      <itunes:episode>77</itunes:episode>
      <podcast:episode>77</podcast:episode>
      <itunes:title>Gene Kim on developer experience and AI engineering</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">631260ec-5df1-4891-883c-14d1ff864741</guid>
      <link>https://share.transistor.fm/s/f54c1801</link>
      <description>
        <![CDATA[<p>In this episode, we’re joined by author and researcher Gene Kim for a wide-ranging conversation on the evolution of DevOps, developer experience, and the systems thinking behind organizational performance. Gene shares insights from his latest work on socio-technical systems, the role of developer platforms, and how AI is reshaping engineering teams. We also explore the coordination challenges facing modern organizations, the limits of tooling, and the deeper principles that unite DevOps, lean, and platform engineering.</p><p><br><strong>Mentions and links:</strong></p><ul><li><a href="https://itrevolution.com/product/the-phoenix-project/">The Phoenix Project</a></li><li><a href="https://hbr.org/1999/09/decoding-the-dna-of-the-toyota-production-system">Decoding the DNA of the Toyota Production System</a></li><li><a href="https://itrevolution.com/product/wiring-the-winning-organization/">Wiring the Winning Organization</a></li><li><a href="https://itrevolution.com/events/">ETLS Vegas</a></li><li>Find <a href="https://www.linkedin.com/in/realgenekim/">Gene</a> on LinkedIn</li></ul><p><strong>Discussion points:</strong></p><ul><li>(0:00) Introduction</li><li>(2:12) The evolving landscape of developer experience</li><li>(10:34) Option Value theory, and how GenAI helps developers</li><li>(13:45) The aim of developer experience work</li><li>(19:59) The significance of layer three changes</li><li>(23:23) Framing developer experience</li><li>(32:12) GenAI’s part in ‘the death of the stubborn developer’</li><li>(36:05) GenAI’s implications for the workforce</li><li>(38:05) Where Gene’s work is heading</li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode, we’re joined by author and researcher Gene Kim for a wide-ranging conversation on the evolution of DevOps, developer experience, and the systems thinking behind organizational performance. Gene shares insights from his latest work on socio-technical systems, the role of developer platforms, and how AI is reshaping engineering teams. We also explore the coordination challenges facing modern organizations, the limits of tooling, and the deeper principles that unite DevOps, lean, and platform engineering.</p><p><br><strong>Mentions and links:</strong></p><ul><li><a href="https://itrevolution.com/product/the-phoenix-project/">The Phoenix Project</a></li><li><a href="https://hbr.org/1999/09/decoding-the-dna-of-the-toyota-production-system">Decoding the DNA of the Toyota Production System</a></li><li><a href="https://itrevolution.com/product/wiring-the-winning-organization/">Wiring the Winning Organization</a></li><li><a href="https://itrevolution.com/events/">ETLS Vegas</a></li><li>Find <a href="https://www.linkedin.com/in/realgenekim/">Gene</a> on LinkedIn</li></ul><p><strong>Discussion points:</strong></p><ul><li>(0:00) Introduction</li><li>(2:12) The evolving landscape of developer experience</li><li>(10:34) Option Value theory, and how GenAI helps developers</li><li>(13:45) The aim of developer experience work</li><li>(19:59) The significance of layer three changes</li><li>(23:23) Framing developer experience</li><li>(32:12) GenAI’s part in ‘the death of the stubborn developer’</li><li>(36:05) GenAI’s implications for the workforce</li><li>(38:05) Where Gene’s work is heading</li></ul>]]>
      </content:encoded>
      <pubDate>Fri, 04 Apr 2025 12:22:24 -0600</pubDate>
      <author>DX</author>
      <enclosure url="https://media.transistor.fm/f54c1801/643d59b2.mp3" length="92968473" type="audio/mpeg"/>
      <itunes:author>DX</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/q4ORokTOownpIxcbivEJ1BzCfP2MvaJqWXuWqVRVme0/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS80MjQw/NGE2NTJkZTJkMTEy/N2EzNjZiNDkzYzA3/NjY1My5wbmc.jpg"/>
      <itunes:duration>2320</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode, we’re joined by author and researcher Gene Kim for a wide-ranging conversation on the evolution of DevOps, developer experience, and the systems thinking behind organizational performance. Gene shares insights from his latest work on socio-technical systems, the role of developer platforms, and how AI is reshaping engineering teams. We also explore the coordination challenges facing modern organizations, the limits of tooling, and the deeper principles that unite DevOps, lean, and platform engineering.</p><p><br><strong>Mentions and links:</strong></p><ul><li><a href="https://itrevolution.com/product/the-phoenix-project/">The Phoenix Project</a></li><li><a href="https://hbr.org/1999/09/decoding-the-dna-of-the-toyota-production-system">Decoding the DNA of the Toyota Production System</a></li><li><a href="https://itrevolution.com/product/wiring-the-winning-organization/">Wiring the Winning Organization</a></li><li><a href="https://itrevolution.com/events/">ETLS Vegas</a></li><li>Find <a href="https://www.linkedin.com/in/realgenekim/">Gene</a> on LinkedIn</li></ul><p><strong>Discussion points:</strong></p><ul><li>(0:00) Introduction</li><li>(2:12) The evolving landscape of developer experience</li><li>(10:34) Option Value theory, and how GenAI helps developers</li><li>(13:45) The aim of developer experience work</li><li>(19:59) The significance of layer three changes</li><li>(23:23) Framing developer experience</li><li>(32:12) GenAI’s part in ‘the death of the stubborn developer’</li><li>(36:05) GenAI’s implications for the workforce</li><li>(38:05) Where Gene’s work is heading</li></ul>]]>
      </itunes:summary>
      <itunes:keywords>Software Engineering, Developer Experience, Developer Productivity, Engineering Leadership</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.linkedin.com/in/abinoda/" img="https://img.transistorcdn.com/3JLLtbOaHGiInF_ixHI50J6hrbPvsmkptOyXbjqu2gw/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vMDNiMDZmMjkt/ZmQxMy00NmNlLWE5/YzEtOTAzODQ4YTE2/YjgzLzE2NjU0MzA2/ODktaW1hZ2UuanBn.jpg">Abi Noda</podcast:person>
      <podcast:chapters url="https://share.transistor.fm/s/f54c1801/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Getting Airbnb’s Platform team to drive more impact: Reorganizing, defining strategy, and metrics</title>
      <itunes:episode>75</itunes:episode>
      <podcast:episode>75</podcast:episode>
      <itunes:title>Getting Airbnb’s Platform team to drive more impact: Reorganizing, defining strategy, and metrics</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">b9e8ca0f-4ddd-46c6-a632-9e960a4f1ab3</guid>
      <link>https://share.transistor.fm/s/d87c74f8</link>
      <description>
        <![CDATA[<p>In this episode, Airbnb Developer Productivity leader Anna Sulkina shares the story of how her team transformed itself and became more impactful within the organization. She starts by describing how the team previously operated, when teams were delivering but lacked clarity and alignment with one another. Then, the conversation digs into the key changes they made, including reorganizing the team, clarifying team roles, defining strategy, and improving their measurement systems.</p><p><strong>Mentions and links:</strong></p><ul><li>Follow <a href="https://www.linkedin.com/in/annasulkina/">Anna</a> on LinkedIn</li><li>For a deeper look into how Airbnb’s engineers and data scientists build a world of belonging, check out <a href="https://medium.com/airbnb-engineering">The Airbnb Tech Blog</a></li></ul><p><strong>Discussion points:</strong></p><ul><li>(0:00) Intro</li><li>(1:40) Skills that make a great developer productivity leader</li><li>(4:36) Challenges in how the team operated previously</li><li>(10:49) Changing the platform org’s focus and structure</li><li>(16:04) Clarifying roles for EMs, PMs, and tech leads</li><li>(20:22) How Airbnb defined its infrastructure org’s strategy</li><li>(28:23) Improvements they’ve seen to developer experience satisfaction</li><li>(32:13) The evolution of Airbnb’s developer experience survey</li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode, Airbnb Developer Productivity leader Anna Sulkina shares the story of how her team transformed itself and became more impactful within the organization. She starts by describing how the team previously operated, when teams were delivering but lacked clarity and alignment with one another. Then, the conversation digs into the key changes they made, including reorganizing the team, clarifying team roles, defining strategy, and improving their measurement systems.</p><p><strong>Mentions and links:</strong></p><ul><li>Follow <a href="https://www.linkedin.com/in/annasulkina/">Anna</a> on LinkedIn</li><li>For a deeper look into how Airbnb’s engineers and data scientists build a world of belonging, check out <a href="https://medium.com/airbnb-engineering">The Airbnb Tech Blog</a></li></ul><p><strong>Discussion points:</strong></p><ul><li>(0:00) Intro</li><li>(1:40) Skills that make a great developer productivity leader</li><li>(4:36) Challenges in how the team operated previously</li><li>(10:49) Changing the platform org’s focus and structure</li><li>(16:04) Clarifying roles for EMs, PMs, and tech leads</li><li>(20:22) How Airbnb defined its infrastructure org’s strategy</li><li>(28:23) Improvements they’ve seen to developer experience satisfaction</li><li>(32:13) The evolution of Airbnb’s developer experience survey</li></ul>]]>
      </content:encoded>
      <pubDate>Fri, 07 Mar 2025 15:02:07 -0700</pubDate>
      <author>DX</author>
      <enclosure url="https://media.transistor.fm/d87c74f8/c6b26009.mp3" length="79268824" type="audio/mpeg"/>
      <itunes:author>DX</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/Kx6Z0xW05Lg7xunWFuP0y7amCzgj36W5AgfKbVyAi-4/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8yODVi/M2I1NmYyZjg0ZDk4/ZTU4NzM3N2ZkNDlh/OTlkZC5wbmc.jpg"/>
      <itunes:duration>1978</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode, Airbnb Developer Productivity leader Anna Sulkina shares the story of how her team transformed itself and became more impactful within the organization. She starts by describing how the team previously operated, when teams were delivering but lacked clarity and alignment with one another. Then, the conversation digs into the key changes they made, including reorganizing the team, clarifying team roles, defining strategy, and improving their measurement systems.</p><p><strong>Mentions and links:</strong></p><ul><li>Follow <a href="https://www.linkedin.com/in/annasulkina/">Anna</a> on LinkedIn</li><li>For a deeper look into how Airbnb’s engineers and data scientists build a world of belonging, check out <a href="https://medium.com/airbnb-engineering">The Airbnb Tech Blog</a></li></ul><p><strong>Discussion points:</strong></p><ul><li>(0:00) Intro</li><li>(1:40) Skills that make a great developer productivity leader</li><li>(4:36) Challenges in how the team operated previously</li><li>(10:49) Changing the platform org’s focus and structure</li><li>(16:04) Clarifying roles for EMs, PMs, and tech leads</li><li>(20:22) How Airbnb defined its infrastructure org’s strategy</li><li>(28:23) Improvements they’ve seen to developer experience satisfaction</li><li>(32:13) The evolution of Airbnb’s developer experience survey</li></ul>]]>
      </itunes:summary>
      <itunes:keywords>Software Engineering, Developer Experience, Developer Productivity, Engineering Leadership</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.linkedin.com/in/abinoda/" img="https://img.transistorcdn.com/3JLLtbOaHGiInF_ixHI50J6hrbPvsmkptOyXbjqu2gw/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vMDNiMDZmMjkt/ZmQxMy00NmNlLWE5/YzEtOTAzODQ4YTE2/YjgzLzE2NjU0MzA2/ODktaW1hZ2UuanBn.jpg">Abi Noda</podcast:person>
      <podcast:chapters url="https://share.transistor.fm/s/d87c74f8/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>You have developer productivity metrics. Now what?</title>
      <itunes:episode>74</itunes:episode>
      <podcast:episode>74</podcast:episode>
      <itunes:title>You have developer productivity metrics. Now what?</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">d4a548db-8cd6-4b16-86a6-e38f9ad73e42</guid>
      <link>https://share.transistor.fm/s/a3463ddf</link>
      <description>
        <![CDATA[<p>Many teams struggle to use developer productivity data effectively because they don’t know how to use it to decide what to do next. We know that data is here to help us improve, but how do you know where to look? And even then, what do you actually do to put the wheels of change in motion? Listen to this conversation with Abi Noda and Laura Tacho (CEO and CTO at DX) about data-driven management and how to take a structured, analytical approach to using data for improvement.</p><p><strong>Mentions and links:</strong></p><ul><li><a href="https://getdx.com/research/measuring-developer-productivity-with-the-dx-core-4/">Measuring developer productivity with the DX Core 4</a></li><li><a href="https://lauratacho.com/developer-productivity-metrics-course">Laura’s developer productivity metrics course</a></li></ul><p><strong>Discussion points:</strong></p><ul><li>(0:00) Intro</li><li>(2:07) The challenge we’re seeing</li><li>(6:53) Overview of using data</li><li>(8:58) Use cases for data: engineering organizations</li><li>(15:57) Use cases for data: engineering systems teams</li><li>(21:38) Two types of metrics: diagnostics and improvement</li><li>(38:09) Summary</li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Many teams struggle to use developer productivity data effectively because they don’t know how to use it to decide what to do next. We know that data is here to help us improve, but how do you know where to look? And even then, what do you actually do to put the wheels of change in motion? Listen to this conversation with Abi Noda and Laura Tacho (CEO and CTO at DX) about data-driven management and how to take a structured, analytical approach to using data for improvement.</p><p><strong>Mentions and links:</strong></p><ul><li><a href="https://getdx.com/research/measuring-developer-productivity-with-the-dx-core-4/">Measuring developer productivity with the DX Core 4</a></li><li><a href="https://lauratacho.com/developer-productivity-metrics-course">Laura’s developer productivity metrics course</a></li></ul><p><strong>Discussion points:</strong></p><ul><li>(0:00) Intro</li><li>(2:07) The challenge we’re seeing</li><li>(6:53) Overview of using data</li><li>(8:58) Use cases for data: engineering organizations</li><li>(15:57) Use cases for data: engineering systems teams</li><li>(21:38) Two types of metrics: diagnostics and improvement</li><li>(38:09) Summary</li></ul>]]>
      </content:encoded>
      <pubDate>Fri, 21 Feb 2025 17:04:23 -0700</pubDate>
      <author>DX</author>
      <enclosure url="https://media.transistor.fm/a3463ddf/b0632670.mp3" length="94084692" type="audio/mpeg"/>
      <itunes:author>DX</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/VeJhLXOf7Pj1vVMgRr7x7C_45C4vWm8i_u8k1hOMpVc/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lM2Vm/NDg3MjhiY2YzOTIz/Y2FkYmQ2Njk4ODUx/YmY1OC5wbmc.jpg"/>
      <itunes:duration>2349</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Many teams struggle to use developer productivity data effectively because they don’t know how to use it to decide what to do next. We know that data is here to help us improve, but how do you know where to look? And even then, what do you actually do to put the wheels of change in motion? Listen to this conversation with Abi Noda and Laura Tacho (CEO and CTO at DX) about data-driven management and how to take a structured, analytical approach to using data for improvement.</p><p><strong>Mentions and links:</strong></p><ul><li><a href="https://getdx.com/research/measuring-developer-productivity-with-the-dx-core-4/">Measuring developer productivity with the DX Core 4</a></li><li><a href="https://lauratacho.com/developer-productivity-metrics-course">Laura’s developer productivity metrics course</a></li></ul><p><strong>Discussion points:</strong></p><ul><li>(0:00) Intro</li><li>(2:07) The challenge we’re seeing</li><li>(6:53) Overview of using data</li><li>(8:58) Use cases for data: engineering organizations</li><li>(15:57) Use cases for data: engineering systems teams</li><li>(21:38) Two types of metrics: diagnostics and improvement</li><li>(38:09) Summary</li></ul>]]>
      </itunes:summary>
      <itunes:keywords>Software Engineering, Developer Experience, Developer Productivity, Engineering Leadership</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.linkedin.com/in/abinoda/" img="https://img.transistorcdn.com/3JLLtbOaHGiInF_ixHI50J6hrbPvsmkptOyXbjqu2gw/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vMDNiMDZmMjkt/ZmQxMy00NmNlLWE5/YzEtOTAzODQ4YTE2/YjgzLzE2NjU0MzA2/ODktaW1hZ2UuanBn.jpg">Abi Noda</podcast:person>
      <podcast:chapters url="https://share.transistor.fm/s/a3463ddf/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Leveraging sentiment data, driving org-wide action, and executive engagement</title>
      <itunes:episode>73</itunes:episode>
      <podcast:episode>73</podcast:episode>
      <itunes:title>Leveraging sentiment data, driving org-wide action, and executive engagement</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">315fabdb-3e58-4adc-9393-57e333824b29</guid>
      <link>https://share.transistor.fm/s/35295ce0</link>
      <description>
        <![CDATA[<p>In this episode, David Betts, leader of Twilio’s developer platform team, shares how Twilio leverages developer sentiment data to drive platform engineering initiatives, optimize Kubernetes adoption, and demonstrate ROI for leadership. David details Twilio’s journey from traditional metrics to sentiment-driven insights, the innovative tools his teams have built to streamline CI/CD workflows, and the strategies they use to align platform investments with organizational goals.</p><p><strong>Mentions and links:</strong></p><ul><li>Find <a href="https://www.linkedin.com/in/wdavidbetts/">David</a> on LinkedIn</li><li><a href="https://getdx.com/report/dx-core-4/">Measuring developer productivity with the DX Core 4</a></li><li><a href="https://www.askyourdeveloper.com/">Ask Your Developer</a> by Jeff Lawson, former CEO of Twilio</li></ul><p><strong>Discussion points:</strong></p><ul><li>(0:00) Introduction</li><li>(0:49) Twilio's developer platform team</li><li>(2:03) Twilio's approach to release engineering and CD</li><li>(4:10) How they use sentiment data and telemetry metrics</li><li>(7:27) Comparing sentiment data and telemetry metrics</li><li>(10:25) How to take action on sentiment data</li><li>(13:16) What resonates with execs</li><li>(15:44) Proving DX value: sentiment, efficiency, and ROI</li><li>(19:15) Balancing quarterly and real-time developer feedback</li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode, David Betts, leader of Twilio’s developer platform team, shares how Twilio leverages developer sentiment data to drive platform engineering initiatives, optimize Kubernetes adoption, and demonstrate ROI for leadership. David details Twilio’s journey from traditional metrics to sentiment-driven insights, the innovative tools his teams have built to streamline CI/CD workflows, and the strategies they use to align platform investments with organizational goals.</p><p><strong>Mentions and links:</strong></p><ul><li>Find <a href="https://www.linkedin.com/in/wdavidbetts/">David</a> on LinkedIn</li><li><a href="https://getdx.com/report/dx-core-4/">Measuring developer productivity with the DX Core 4</a></li><li><a href="https://www.askyourdeveloper.com/">Ask Your Developer</a> by Jeff Lawson, former CEO of Twilio</li></ul><p><strong>Discussion points:</strong></p><ul><li>(0:00) Introduction</li><li>(0:49) Twilio's developer platform team</li><li>(2:03) Twilio's approach to release engineering and CD</li><li>(4:10) How they use sentiment data and telemetry metrics</li><li>(7:27) Comparing sentiment data and telemetry metrics</li><li>(10:25) How to take action on sentiment data</li><li>(13:16) What resonates with execs</li><li>(15:44) Proving DX value: sentiment, efficiency, and ROI</li><li>(19:15) Balancing quarterly and real-time developer feedback</li></ul>]]>
      </content:encoded>
      <pubDate>Fri, 31 Jan 2025 16:03:15 -0700</pubDate>
      <author>DX</author>
      <enclosure url="https://media.transistor.fm/35295ce0/92a8f9a2.mp3" length="59069464" type="audio/mpeg"/>
      <itunes:author>DX</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/LtMxZ5mI3malbUD8BYDn46kPjREH85APBxEenrgMnBY/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS83YTQ0/ODJiMDk5NWYyMDA2/ZmNlYTU1NzVkNjli/MjU2NC5wbmc.jpg"/>
      <itunes:duration>1473</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode, David Betts, leader of Twilio’s developer platform team, shares how Twilio leverages developer sentiment data to drive platform engineering initiatives, optimize Kubernetes adoption, and demonstrate ROI for leadership. David details Twilio’s journey from traditional metrics to sentiment-driven insights, the innovative tools his teams have built to streamline CI/CD workflows, and the strategies they use to align platform investments with organizational goals.</p><p><strong>Mentions and links:</strong></p><ul><li>Find <a href="https://www.linkedin.com/in/wdavidbetts/">David</a> on LinkedIn</li><li><a href="https://getdx.com/report/dx-core-4/">Measuring developer productivity with the DX Core 4</a></li><li><a href="https://www.askyourdeveloper.com/">Ask Your Developer</a> by Jeff Lawson, former CEO of Twilio</li></ul><p><strong>Discussion points:</strong></p><ul><li>(0:00) Introduction</li><li>(0:49) Twilio's developer platform team</li><li>(2:03) Twilio's approach to release engineering and CD</li><li>(4:10) How they use sentiment data and telemetry metrics</li><li>(7:27) Comparing sentiment data and telemetry metrics</li><li>(10:25) How to take action on sentiment data</li><li>(13:16) What resonates with execs</li><li>(15:44) Proving DX value: sentiment, efficiency, and ROI</li><li>(19:15) Balancing quarterly and real-time developer feedback</li></ul>]]>
      </itunes:summary>
      <itunes:keywords>Software Engineering, Developer Experience, Developer Productivity, Engineering Leadership</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.linkedin.com/in/abinoda/" img="https://img.transistorcdn.com/3JLLtbOaHGiInF_ixHI50J6hrbPvsmkptOyXbjqu2gw/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vMDNiMDZmMjkt/ZmQxMy00NmNlLWE5/YzEtOTAzODQ4YTE2/YjgzLzE2NjU0MzA2/ODktaW1hZ2UuanBn.jpg">Abi Noda</podcast:person>
      <podcast:chapters url="https://share.transistor.fm/s/35295ce0/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Rethinking developer experience at T-Mobile: DevEx vs devprod, exec buy-in, and developer self-service</title>
      <itunes:episode>72</itunes:episode>
      <podcast:episode>72</podcast:episode>
      <itunes:title>Rethinking developer experience at T-Mobile: DevEx vs devprod, exec buy-in, and developer self-service</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">ae4224f8-566c-4a45-a397-acf396c84a60</guid>
      <link>https://share.transistor.fm/s/ef6f9ba7</link>
      <description>
        <![CDATA[<p><a href="https://www.linkedin.com/in/cjchand/">Chris Chandler</a> is a Senior Member of the Technical Staff for Developer Productivity at T-Mobile. Chris has led several major initiatives to improve developer experience, including their internal developer portal, Starter Kits (a patented developer platform that predates Backstage), and Workforce Transformation Bootcamps for onboarding developers faster.</p><p><strong>Mentions and links:</strong></p><ul><li>Follow Chris <a href="https://www.linkedin.com/in/cjchand/">on LinkedIn</a></li><li><a href="https://getdx.com/report/dx-core-4/">Measuring developer productivity with the DX Core 4</a></li><li>Listen to <a href="https://podcasts.apple.com/us/podcast/decoder-with-nilay-patel/id1011668648">Decoder</a> with Nilay Patel</li></ul><p><strong>Discussion points:</strong></p><ul><li>(0:47) From developer experience to developer productivity</li><li>(7:03) Getting executive buy-in for developer productivity initiatives</li><li>(13:54) What Chris’s team is responsible for</li><li>(17:02) How they’ve built relationships with other teams</li><li>(20:57) How they built and got funding for Dev Console and Starter Kits</li><li>(27:23) Homegrown solution vs Backstage</li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p><a href="https://www.linkedin.com/in/cjchand/">Chris Chandler</a> is a Senior Member of the Technical Staff for Developer Productivity at T-Mobile. Chris has led several major initiatives to improve developer experience, including their internal developer portal, Starter Kits (a patented developer platform that predates Backstage), and Workforce Transformation Bootcamps for onboarding developers faster.</p><p><strong>Mentions and links:</strong></p><ul><li>Follow Chris <a href="https://www.linkedin.com/in/cjchand/">on LinkedIn</a></li><li><a href="https://getdx.com/report/dx-core-4/">Measuring developer productivity with the DX Core 4</a></li><li>Listen to <a href="https://podcasts.apple.com/us/podcast/decoder-with-nilay-patel/id1011668648">Decoder</a> with Nilay Patel</li></ul><p><strong>Discussion points:</strong></p><ul><li>(0:47) From developer experience to developer productivity</li><li>(7:03) Getting executive buy-in for developer productivity initiatives</li><li>(13:54) What Chris’s team is responsible for</li><li>(17:02) How they’ve built relationships with other teams</li><li>(20:57) How they built and got funding for Dev Console and Starter Kits</li><li>(27:23) Homegrown solution vs Backstage</li></ul>]]>
      </content:encoded>
      <pubDate>Wed, 22 Jan 2025 15:53:41 -0700</pubDate>
      <author>DX</author>
      <enclosure url="https://media.transistor.fm/ef6f9ba7/ebef873f.mp3" length="76713091" type="audio/mpeg"/>
      <itunes:author>DX</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/N3LGhhKcQ9vS_mkUHtg25sp6CReBf1yy_jV9CbBPJvQ/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS85N2Jl/MzE3YzY2MjNhYWZl/YmZlNWNhOGU1ZWIw/NDZiYy5wbmc.jpg"/>
      <itunes:duration>1916</itunes:duration>
      <itunes:summary>
        <![CDATA[<p><a href="https://www.linkedin.com/in/cjchand/">Chris Chandler</a> is a Senior Member of the Technical Staff for Developer Productivity at T-Mobile. Chris has led several major initiatives to improve developer experience, including their internal developer portal, Starter Kits (a patented developer platform that predates Backstage), and Workforce Transformation Bootcamps for onboarding developers faster.</p><p><strong>Mentions and links:</strong></p><ul><li>Follow Chris <a href="https://www.linkedin.com/in/cjchand/">on LinkedIn</a></li><li><a href="https://getdx.com/report/dx-core-4/">Measuring developer productivity with the DX Core 4</a></li><li>Listen to <a href="https://podcasts.apple.com/us/podcast/decoder-with-nilay-patel/id1011668648">Decoder</a> with Nilay Patel</li></ul><p><strong>Discussion points:</strong></p><ul><li>(0:47) From developer experience to developer productivity</li><li>(7:03) Getting executive buy-in for developer productivity initiatives</li><li>(13:54) What Chris’s team is responsible for</li><li>(17:02) How they’ve built relationships with other teams</li><li>(20:57) How they built and got funding for Dev Console and Starter Kits</li><li>(27:23) Homegrown solution vs Backstage</li></ul>]]>
      </itunes:summary>
      <itunes:keywords>Software Engineering, Developer Experience, Developer Productivity, Engineering Leadership</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.linkedin.com/in/abinoda/" img="https://img.transistorcdn.com/3JLLtbOaHGiInF_ixHI50J6hrbPvsmkptOyXbjqu2gw/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vMDNiMDZmMjkt/ZmQxMy00NmNlLWE5/YzEtOTAzODQ4YTE2/YjgzLzE2NjU0MzA2/ODktaW1hZ2UuanBn.jpg">Abi Noda</podcast:person>
      <podcast:chapters url="https://share.transistor.fm/s/ef6f9ba7/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>DX Core 4: 2024 benchmarks</title>
      <itunes:episode>71</itunes:episode>
      <podcast:episode>71</podcast:episode>
      <itunes:title>DX Core 4: 2024 benchmarks</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">b1c74b4d-e003-4849-a2c4-6f77a9357bd5</guid>
      <link>https://share.transistor.fm/s/d64c0ca4</link>
      <description>
        <![CDATA[<p>In this episode, Abi and Laura dive into the 2024 DX Core 4 benchmarks, sharing insights from data across 500+ companies. They discuss what these benchmarks mean for engineering leaders, how to interpret key metrics like the Developer Experience Index, and offer advice on how to best use benchmarking data in your organization.</p><p><strong>Mentions and links:</strong></p><ul><li><a href="https://getdx.com/research/benchmarks/">DX Core 4 benchmarks</a></li><li><a href="https://getdx.com/research/measuring-developer-productivity-with-the-dx-core-4/">Measuring developer productivity with the DX Core 4</a></li><li><a href="https://getdx.com/research/the-one-number-you-need-to-increase-roi-per-engineer/">Developer Experience Index (DXI)</a></li><li>Will Larson’s article on <a href="https://lethain.com/measuring-developer-experience-benchmarks-theory-of-improvement/">the Core 4 and the power of benchmarking data</a></li></ul><p><strong>Discussion points:</strong></p><ul><li>(0:42) What benchmarks are for</li><li>(3:44) Overview of the DX Core 4 benchmarks</li><li>(6:07) PR throughput data</li><li>(11:05) Key insights related to startups and mobile teams</li><li>(14:54) Change fail rate data</li><li>(19:42) How to best use benchmarking data</li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode, Abi and Laura dive into the 2024 DX Core 4 benchmarks, sharing insights across data from 500+ companies. They discuss what these benchmarks mean for engineering leaders, how to interpret key metrics like the Developer Experience Index, and offer advice on how to best use benchmarking data in your organization. </p><p><strong>Mentions and Links:</strong></p><ul><li><a href="https://getdx.com/research/benchmarks/">DX Core 4 benchmarks</a></li><li><a href="https://getdx.com/research/measuring-developer-productivity-with-the-dx-core-4/">Measuring developer productivity with the DX Core 4</a></li><li><a href="https://getdx.com/research/the-one-number-you-need-to-increase-roi-per-engineer/">Developer experience index (DXI)</a></li><li>Will Larson’s article on <a href="https://lethain.com/measuring-developer-experience-benchmarks-theory-of-improvement/">the Core 4 and power of benchmarking data</a></li></ul><p><strong>Discussion points:</strong></p><ul><li>(0:42) What benchmarks are for</li><li>(3:44) Overview of the DX Core 4 benchmarks</li><li>(6:07) PR throughput data </li><li>(11:05) Key insights related to startups and mobile teams </li><li>(14:54) Change fail rate data </li><li>(19:42) How to best use benchmarking data</li></ul>]]>
      </content:encoded>
      <pubDate>Wed, 08 Jan 2025 14:37:00 -0700</pubDate>
      <author>DX</author>
      <enclosure url="https://media.transistor.fm/d64c0ca4/fd010dcb.mp3" length="68123108" type="audio/mpeg"/>
      <itunes:author>DX</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/h2Fl1G1ZnqpwkjtJPDSiB5wE_W7cXesIRsawm6EcjSs/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS81NTUy/MGFkNTQ4ZWZiZjA0/MTQ0OTM3Nzc4ODcy/MTI0ZC5wbmc.jpg"/>
      <itunes:duration>1700</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode, Abi and Laura dive into the 2024 DX Core 4 benchmarks, sharing insights across data from 500+ companies. They discuss what these benchmarks mean for engineering leaders, how to interpret key metrics like the Developer Experience Index, and offer advice on how to best use benchmarking data in your organization. </p><p><strong>Mentions and Links:</strong></p><ul><li><a href="https://getdx.com/research/benchmarks/">DX Core 4 benchmarks</a></li><li><a href="https://getdx.com/research/measuring-developer-productivity-with-the-dx-core-4/">Measuring developer productivity with the DX Core 4</a></li><li><a href="https://getdx.com/research/the-one-number-you-need-to-increase-roi-per-engineer/">Developer experience index (DXI)</a></li><li>Will Larson’s article on <a href="https://lethain.com/measuring-developer-experience-benchmarks-theory-of-improvement/">the Core 4 and power of benchmarking data</a></li></ul><p><strong>Discussion points:</strong></p><ul><li>(0:42) What benchmarks are for</li><li>(3:44) Overview of the DX Core 4 benchmarks</li><li>(6:07) PR throughput data </li><li>(11:05) Key insights related to startups and mobile teams </li><li>(14:54) Change fail rate data </li><li>(19:42) How to best use benchmarking data</li></ul>]]>
      </itunes:summary>
      <itunes:keywords>Software Engineering, Developer Experience, Developer Productivity, Engineering Leadership</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.linkedin.com/in/abinoda/" img="https://img.transistorcdn.com/3JLLtbOaHGiInF_ixHI50J6hrbPvsmkptOyXbjqu2gw/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vMDNiMDZmMjkt/ZmQxMy00NmNlLWE5/YzEtOTAzODQ4YTE2/YjgzLzE2NjU0MzA2/ODktaW1hZ2UuanBn.jpg">Abi Noda</podcast:person>
      <podcast:chapters url="https://share.transistor.fm/s/d64c0ca4/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>DX Core 4: Framework overview, key design principles, and practical applications</title>
      <itunes:episode>70</itunes:episode>
      <podcast:episode>70</podcast:episode>
      <itunes:title>DX Core 4: Framework overview, key design principles, and practical applications</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">811bf1cd-5444-4f71-9355-f7a95dc3bfb8</guid>
      <link>https://share.transistor.fm/s/8698e4c7</link>
      <description>
        <![CDATA[<p>In this episode, Abi and Laura introduce the DX Core 4, a new framework designed to simplify how organizations measure developer productivity. They discuss the evolution of productivity metrics, comparing Core 4 with frameworks like DORA, SPACE, and DevEx, and emphasize its focus on speed, effectiveness, quality, and impact. They explore why each metric was chosen, the importance of balancing productivity measures with developer experience, and how Core 4 can help engineering leaders align productivity goals with broader business objectives.   </p><p><strong>Mentions and Links:</strong></p><ul><li><a href="https://getdx.com/research/measuring-developer-productivity-with-the-dx-core-4/">Measuring developer productivity with the DX Core 4</a></li><li><a href="https://lauratacho.com/developer-productivity-metrics-course">Laura’s developer productivity metrics course</a></li></ul><p><strong>Discussion Points:</strong></p><ul><li>(2:42) Introduction to the DX Core 4</li><li>(3:42) Identifying the Core 4's target audience and key stakeholders</li><li>(4:38) Origins and purpose</li><li>(9:20) Building executive alignment</li><li>(14:15) Tying metrics to business value through output-oriented measures</li><li>(24:45) Defining impact</li><li>(32:42) Choosing between DORA, SPACE, and Core 4 frameworks</li></ul><p><br></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode, Abi and Laura introduce the DX Core 4, a new framework designed to simplify how organizations measure developer productivity. They discuss the evolution of productivity metrics, comparing Core 4 with frameworks like DORA, SPACE, and DevEx, and emphasize its focus on speed, effectiveness, quality, and impact. They explore why each metric was chosen, the importance of balancing productivity measures with developer experience, and how Core 4 can help engineering leaders align productivity goals with broader business objectives.   </p><p><strong>Mentions and Links:</strong></p><ul><li><a href="https://getdx.com/research/measuring-developer-productivity-with-the-dx-core-4/">Measuring developer productivity with the DX Core 4</a></li><li><a href="https://lauratacho.com/developer-productivity-metrics-course">Laura’s developer productivity metrics course</a></li></ul><p><strong>Discussion Points:</strong></p><ul><li>(2:42) Introduction to the DX Core 4</li><li>(3:42) Identifying the Core 4's target audience and key stakeholders</li><li>(4:38) Origins and purpose</li><li>(9:20) Building executive alignment</li><li>(14:15) Tying metrics to business value through output-oriented measures</li><li>(24:45) Defining impact</li><li>(32:42) Choosing between DORA, SPACE, and Core 4 frameworks</li></ul><p><br></p>]]>
      </content:encoded>
      <pubDate>Thu, 12 Dec 2024 12:15:11 -0700</pubDate>
      <author>DX</author>
      <enclosure url="https://media.transistor.fm/8698e4c7/e90cce56.mp3" length="88892508" type="audio/mpeg"/>
      <itunes:author>DX</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/pvJyZrSZeaDA5eaLJf5v_vX0Gt_oMe1Yvq5DlahnEVU/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8yYWY1/ZjIwMmYxMGJhOWEx/MzY4NjE5NzU2ZjYy/MWZkNy5wbmc.jpg"/>
      <itunes:duration>2219</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode, Abi and Laura introduce the DX Core 4, a new framework designed to simplify how organizations measure developer productivity. They discuss the evolution of productivity metrics, comparing Core 4 with frameworks like DORA, SPACE, and DevEx, and emphasize its focus on speed, effectiveness, quality, and impact. They explore why each metric was chosen, the importance of balancing productivity measures with developer experience, and how Core 4 can help engineering leaders align productivity goals with broader business objectives.   </p><p><strong>Mentions and Links:</strong></p><ul><li><a href="https://getdx.com/research/measuring-developer-productivity-with-the-dx-core-4/">Measuring developer productivity with the DX Core 4</a></li><li><a href="https://lauratacho.com/developer-productivity-metrics-course">Laura’s developer productivity metrics course</a></li></ul><p><strong>Discussion Points:</strong></p><ul><li>(2:42) Introduction to the DX Core 4</li><li>(3:42) Identifying the Core 4's target audience and key stakeholders</li><li>(4:38) Origins and purpose</li><li>(9:20) Building executive alignment</li><li>(14:15) Tying metrics to business value through output-oriented measures</li><li>(24:45) Defining impact</li><li>(32:42) Choosing between DORA, SPACE, and Core 4 frameworks</li></ul><p><br></p>]]>
      </itunes:summary>
      <itunes:keywords>Software Engineering, Developer Experience, Developer Productivity, Engineering Leadership</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.linkedin.com/in/abinoda/" img="https://img.transistorcdn.com/3JLLtbOaHGiInF_ixHI50J6hrbPvsmkptOyXbjqu2gw/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vMDNiMDZmMjkt/ZmQxMy00NmNlLWE5/YzEtOTAzODQ4YTE2/YjgzLzE2NjU0MzA2/ODktaW1hZ2UuanBn.jpg">Abi Noda</podcast:person>
      <podcast:chapters url="https://share.transistor.fm/s/8698e4c7/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>SPACE framework, PRs per engineer, AI research</title>
      <itunes:episode>69</itunes:episode>
      <podcast:episode>69</podcast:episode>
      <itunes:title>SPACE framework, PRs per engineer, AI research</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">7577081d-d5d0-438d-8361-96d230006e20</guid>
      <link>https://share.transistor.fm/s/68f9c0a2</link>
      <description>
        <![CDATA[<p>In this episode, Brian Houck, Applied Scientist, Developer Productivity at Microsoft, covers SPACE, DORA, and some specific metrics the developer productivity research team is finding useful. The conversation starts by comparing DORA and SPACE. Brian explains why activity metrics were included in the SPACE framework, then dives into one metric in particular: pull request throughput. Brian also describes another metric Microsoft is finding useful, and gives a preview into where his research is heading. </p><p><br></p><p><strong>Mentions and links</strong></p><ul><li>Connect with <a href="https://www.linkedin.com/in/brianhouck/">Brian</a> on LinkedIn</li><li><a href="https://queue.acm.org/detail.cfm?id=3454124">The SPACE of Developer Productivity: There's More to It Than You Think</a></li><li><a href="https://getdx.com/report/dx-core-4/">Measuring developer productivity with the DX Core 4</a></li><li><a href="https://getdx.com/research/devex-in-action/">DevEx in action</a></li><li><a href="https://getdx.com/guide/dora-space-devex/?utm_source=google&amp;utm_medium=cpc&amp;utm_campaign=21009902311&amp;utm_content=163140442073&amp;utm_term=space%20productivity%20framework&amp;gad_source=1&amp;gclid=Cj0KCQiA6Ou5BhCrARIsAPoTxrADqVkg9M7QVmkFS793TQ3JW8G0FUryev_Evpr2N1ytpJEvgogMcvQaAtSJEALw_wcB">DORA, SPACE, and DevEx: Which framework should you use?</a><p></p></li></ul><p><strong>Discussion points</strong></p><ul><li>(0:48) SPACE framework's growth and adoption</li><li>(3:47) Comparing DORA and SPACE</li><li>(6:30) SPACE misconceptions and common implementation challenges</li><li>(9:34) Whether PR throughput is useful  </li><li>(15:13) Real-world example of using PR throughput </li><li>(21:33) Talking about metrics like PR throughput internally </li><li>(24:39) Where Brian’s research is heading </li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode, Brian Houck, Applied Scientist, Developer Productivity at Microsoft, covers SPACE, DORA, and some specific metrics the developer productivity research team is finding useful. The conversation starts by comparing DORA and SPACE. Brian explains why activity metrics were included in the SPACE framework, then dives into one metric in particular: pull request throughput. Brian also describes another metric Microsoft is finding useful, and gives a preview into where his research is heading. </p><p><br></p><p><strong>Mentions and links</strong></p><ul><li>Connect with <a href="https://www.linkedin.com/in/brianhouck/">Brian</a> on LinkedIn</li><li><a href="https://queue.acm.org/detail.cfm?id=3454124">The SPACE of Developer Productivity: There's More to It Than You Think</a></li><li><a href="https://getdx.com/report/dx-core-4/">Measuring developer productivity with the DX Core 4</a></li><li><a href="https://getdx.com/research/devex-in-action/">DevEx in action</a></li><li><a href="https://getdx.com/guide/dora-space-devex/?utm_source=google&amp;utm_medium=cpc&amp;utm_campaign=21009902311&amp;utm_content=163140442073&amp;utm_term=space%20productivity%20framework&amp;gad_source=1&amp;gclid=Cj0KCQiA6Ou5BhCrARIsAPoTxrADqVkg9M7QVmkFS793TQ3JW8G0FUryev_Evpr2N1ytpJEvgogMcvQaAtSJEALw_wcB">DORA, SPACE, and DevEx: Which framework should you use?</a><p></p></li></ul><p><strong>Discussion points</strong></p><ul><li>(0:48) SPACE framework's growth and adoption</li><li>(3:47) Comparing DORA and SPACE</li><li>(6:30) SPACE misconceptions and common implementation challenges</li><li>(9:34) Whether PR throughput is useful  </li><li>(15:13) Real-world example of using PR throughput </li><li>(21:33) Talking about metrics like PR throughput internally </li><li>(24:39) Where Brian’s research is heading </li></ul>]]>
      </content:encoded>
      <pubDate>Tue, 26 Nov 2024 15:40:34 -0700</pubDate>
      <author>DX</author>
      <enclosure url="https://media.transistor.fm/68f9c0a2/27a7b309.mp3" length="77506598" type="audio/mpeg"/>
      <itunes:author>DX</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/OPoduvSwsxahiNSMFbQ7OGrEM7hN2yoSltCKV5xl7QI/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lMmFk/NDRhZjAyZmM4Nzll/YWQ2N2MyZTFkYmQ3/MzE2Yi5wbmc.jpg"/>
      <itunes:duration>1934</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode, Brian Houck, Applied Scientist, Developer Productivity at Microsoft, covers SPACE, DORA, and some specific metrics the developer productivity research team is finding useful. The conversation starts by comparing DORA and SPACE. Brian explains why activity metrics were included in the SPACE framework, then dives into one metric in particular: pull request throughput. Brian also describes another metric Microsoft is finding useful, and gives a preview into where his research is heading. </p><p><br></p><p><strong>Mentions and links</strong></p><ul><li>Connect with <a href="https://www.linkedin.com/in/brianhouck/">Brian</a> on LinkedIn</li><li><a href="https://queue.acm.org/detail.cfm?id=3454124">The SPACE of Developer Productivity: There's More to It Than You Think</a></li><li><a href="https://getdx.com/report/dx-core-4/">Measuring developer productivity with the DX Core 4</a></li><li><a href="https://getdx.com/research/devex-in-action/">DevEx in action</a></li><li><a href="https://getdx.com/guide/dora-space-devex/?utm_source=google&amp;utm_medium=cpc&amp;utm_campaign=21009902311&amp;utm_content=163140442073&amp;utm_term=space%20productivity%20framework&amp;gad_source=1&amp;gclid=Cj0KCQiA6Ou5BhCrARIsAPoTxrADqVkg9M7QVmkFS793TQ3JW8G0FUryev_Evpr2N1ytpJEvgogMcvQaAtSJEALw_wcB">DORA, SPACE, and DevEx: Which framework should you use?</a><p></p></li></ul><p><strong>Discussion points</strong></p><ul><li>(0:48) SPACE framework's growth and adoption</li><li>(3:47) Comparing DORA and SPACE</li><li>(6:30) SPACE misconceptions and common implementation challenges</li><li>(9:34) Whether PR throughput is useful  </li><li>(15:13) Real-world example of using PR throughput </li><li>(21:33) Talking about metrics like PR throughput internally </li><li>(24:39) Where Brian’s research is heading </li></ul>]]>
      </itunes:summary>
      <itunes:keywords>Software Engineering, Developer Experience, Developer Productivity, Engineering Leadership</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.linkedin.com/in/abinoda/" img="https://img.transistorcdn.com/3JLLtbOaHGiInF_ixHI50J6hrbPvsmkptOyXbjqu2gw/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vMDNiMDZmMjkt/ZmQxMy00NmNlLWE5/YzEtOTAzODQ4YTE2/YjgzLzE2NjU0MzA2/ODktaW1hZ2UuanBn.jpg">Abi Noda</podcast:person>
      <podcast:chapters url="https://share.transistor.fm/s/68f9c0a2/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>CTO buy-in, measuring sentiment, and customer focus</title>
      <itunes:episode>68</itunes:episode>
      <podcast:episode>68</podcast:episode>
      <itunes:title>CTO buy-in, measuring sentiment, and customer focus</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">ae966f54-78e5-488a-9a44-509de346bf46</guid>
      <link>https://share.transistor.fm/s/58fd8fa9</link>
      <description>
        <![CDATA[<p><a href="https://share.transistor.fm/s/58fd8fa9/transcript" title="Click here to view the episode transcript.">Click here to view the episode transcript.</a><br>
<br>In this episode, Snowflake’s Gilad Turbahn, Head of Developer Productivity, and Amy Yuan, Director of Engineering, dive into how they elevated developer productivity to a top company priority. They discuss the pivotal role of Snowflake’s CTO, who personally invested over half his time to guide the initiative, and how leadership's hands-on involvement secured buy-in across teams. The conversation also explores the importance of collaboration between engineering and product management, and how measuring user sentiment helped them deliver meaningful, long-lasting improvements.</p><p><strong>Mentions and links</strong></p><ul><li>Connect with <a href="https://www.linkedin.com/in/giladturbahn/">Gilad</a> and <a href="https://www.linkedin.com/in/amy-yuan-a8ba783/">Amy</a> on LinkedIn</li><li><a href="https://getdx.com/report/dx-core-4/">Measuring developer productivity with the DX Core 4<br></a><br></li></ul><p><strong>Discussion Points</strong></p><ul><li>(0:48) The need for a shift at Snowflake</li><li>(3:59) Leadership involvement and prioritization of developer productivity</li><li>(8:56) The partnership between engineering and product managers</li><li>(20:01) From feature factory to customer outcome-focused development</li><li>(27:36) Shifting measurement focus to user sentiment and customer outcomes</li><li>(39:13) Gaining buy-in for sentiment metrics and tying them to business impact</li><li>(51:11) How Snowflake’s CTO and volunteers accelerated developer productivity improvements.</li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p><a href="https://share.transistor.fm/s/58fd8fa9/transcript" title="Click here to view the episode transcript.">Click here to view the episode transcript.</a><br>
<br>In this episode, Snowflake’s Gilad Turbahn, Head of Developer Productivity, and Amy Yuan, Director of Engineering, dive into how they elevated developer productivity to a top company priority. They discuss the pivotal role of Snowflake’s CTO, who personally invested over half his time to guide the initiative, and how leadership's hands-on involvement secured buy-in across teams. The conversation also explores the importance of collaboration between engineering and product management, and how measuring user sentiment helped them deliver meaningful, long-lasting improvements.</p><p><strong>Mentions and links</strong></p><ul><li>Connect with <a href="https://www.linkedin.com/in/giladturbahn/">Gilad</a> and <a href="https://www.linkedin.com/in/amy-yuan-a8ba783/">Amy</a> on LinkedIn</li><li><a href="https://getdx.com/report/dx-core-4/">Measuring developer productivity with the DX Core 4<br></a><br></li></ul><p><strong>Discussion Points</strong></p><ul><li>(0:48) The need for a shift at Snowflake</li><li>(3:59) Leadership involvement and prioritization of developer productivity</li><li>(8:56) The partnership between engineering and product managers</li><li>(20:01) From feature factory to customer outcome-focused development</li><li>(27:36) Shifting measurement focus to user sentiment and customer outcomes</li><li>(39:13) Gaining buy-in for sentiment metrics and tying them to business impact</li><li>(51:11) How Snowflake’s CTO and volunteers accelerated developer productivity improvements.</li></ul>]]>
      </content:encoded>
      <pubDate>Tue, 12 Nov 2024 13:15:23 -0700</pubDate>
      <author>DX</author>
      <enclosure url="https://media.transistor.fm/58fd8fa9/6fd94721.mp3" length="139501804" type="audio/mpeg"/>
      <itunes:author>DX</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/zmaVDLDeBpbNo40tfwg42spwXCO5Le6DMmr4scKfni0/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lNDY5/YTExYTliMmQ3MjI4/YTJhYTVhYTRhNThk/MDc5Yi5wbmc.jpg"/>
      <itunes:duration>3484</itunes:duration>
      <itunes:summary>
        <![CDATA[<p><a href="https://share.transistor.fm/s/58fd8fa9/transcript" title="Click here to view the episode transcript.">Click here to view the episode transcript.</a><br>
<br>In this episode, Snowflake’s Gilad Turbahn, Head of Developer Productivity, and Amy Yuan, Director of Engineering, dive into how they elevated developer productivity to a top company priority. They discuss the pivotal role of Snowflake’s CTO, who personally invested over half his time to guide the initiative, and how leadership's hands-on involvement secured buy-in across teams. The conversation also explores the importance of collaboration between engineering and product management, and how measuring user sentiment helped them deliver meaningful, long-lasting improvements.</p><p><strong>Mentions and links</strong></p><ul><li>Connect with <a href="https://www.linkedin.com/in/giladturbahn/">Gilad</a> and <a href="https://www.linkedin.com/in/amy-yuan-a8ba783/">Amy</a> on LinkedIn</li><li><a href="https://getdx.com/report/dx-core-4/">Measuring developer productivity with the DX Core 4<br></a><br></li></ul><p><strong>Discussion Points</strong></p><ul><li>(0:48) The need for a shift at Snowflake</li><li>(3:59) Leadership involvement and prioritization of developer productivity</li><li>(8:56) The partnership between engineering and product managers</li><li>(20:01) From feature factory to customer outcome-focused development</li><li>(27:36) Shifting measurement focus to user sentiment and customer outcomes</li><li>(39:13) Gaining buy-in for sentiment metrics and tying them to business impact</li><li>(51:11) How Snowflake’s CTO and volunteers accelerated developer productivity improvements.</li></ul>]]>
      </itunes:summary>
      <itunes:keywords>Software Engineering, Developer Experience, Developer Productivity, Engineering Leadership</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.linkedin.com/in/abinoda/" img="https://img.transistorcdn.com/3JLLtbOaHGiInF_ixHI50J6hrbPvsmkptOyXbjqu2gw/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vMDNiMDZmMjkt/ZmQxMy00NmNlLWE5/YzEtOTAzODQ4YTE2/YjgzLzE2NjU0MzA2/ODktaW1hZ2UuanBn.jpg">Abi Noda</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/58fd8fa9/transcript.txt" type="text/plain"/>
      <podcast:chapters url="https://share.transistor.fm/s/58fd8fa9/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Platform team challenges, realigning on DevEx, and change management</title>
      <itunes:episode>67</itunes:episode>
      <podcast:episode>67</podcast:episode>
      <itunes:title>Platform team challenges, realigning on DevEx, and change management</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">3229bbe3-494e-44e3-9816-12065ad49d0d</guid>
      <link>https://share.transistor.fm/s/8a5580a2</link>
      <description>
        <![CDATA[<p><a href="https://share.transistor.fm/s/8a5580a2/transcript" title="Click here to view the episode transcript.">Click here to view the episode transcript.</a><br>
<br>In this episode, Emanuel Mueller Ramos, Head of Developer Experience at Skyscanner, discusses the evolution of his team as they transitioned from focusing on frameworks and middleware to becoming a customer-centric, impact-driven organization. Emanuel details the strategies he used to gain stakeholder buy-in, why it's crucial to rethink traditional productivity metrics, and how they made a cultural shift to prioritize developer happiness and effectiveness. This conversation highlights the steps necessary to build a developer experience function that delivers meaningful impact.</p><p><br><strong>Mentions and links:</strong></p><ul><li><a href="https://www.linkedin.com/in/emanuel-muller-ramos/">Follow Emanuel on LinkedIn</a></li><li><a href="https://getdx.com/report/dx-core-4/">Measuring developer productivity with the DX Core 4<br></a><br></li></ul><p><strong>Discussion points:<br></strong><br></p><ul><li>(1:14) The beginning of Skyscanner's developer productivity division</li><li>(3:53) Gaining stakeholder buy-in and refocusing the teams</li><li>(5:57) Redefining success metrics for developer productivity</li><li>(8:57) Pitching the developer experience focus to leadership</li><li>(17:26) Moving from frameworks to feedback loops</li><li>(20:45) Fostering a customer-centric culture</li><li>(23:20) Defining the collaboration between platform and developer experience teams</li><li>(26:41) Choosing the right metrics for developer experience success </li><li>(31:31) Risks and challenges ahead<p></p></li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p><a href="https://share.transistor.fm/s/8a5580a2/transcript" title="Click here to view the episode transcript.">Click here to view the episode transcript.</a><br>
<br>In this episode, Emanuel Mueller Ramos, Head of Developer Experience at Skyscanner, discusses the evolution of his team as they transitioned from focusing on frameworks and middleware to becoming a customer-centric, impact-driven organization. Emanuel details the strategies he used to gain stakeholder buy-in, why it's crucial to rethink traditional productivity metrics, and how they made a cultural shift to prioritize developer happiness and effectiveness. This conversation highlights the steps necessary to build a developer experience function that delivers meaningful impact.</p><p><br><strong>Mentions and links:</strong></p><ul><li><a href="https://www.linkedin.com/in/emanuel-muller-ramos/">Follow Emanuel on LinkedIn</a></li><li><a href="https://getdx.com/report/dx-core-4/">Measuring developer productivity with the DX Core 4<br></a><br></li></ul><p><strong>Discussion points:<br></strong><br></p><ul><li>(1:14) The beginning of Skyscanner's developer productivity division</li><li>(3:53) Gaining stakeholder buy-in and refocusing the teams</li><li>(5:57) Redefining success metrics for developer productivity</li><li>(8:57) Pitching the developer experience focus to leadership</li><li>(17:26) Moving from frameworks to feedback loops</li><li>(20:45) Fostering a customer-centric culture</li><li>(23:20) Defining the collaboration between platform and developer experience teams</li><li>(26:41) Choosing the right metrics for developer experience success </li><li>(31:31) Risks and challenges ahead<p></p></li></ul>]]>
      </content:encoded>
      <pubDate>Fri, 25 Oct 2024 14:00:27 -0600</pubDate>
      <author>DX</author>
      <enclosure url="https://media.transistor.fm/8a5580a2/e998240a.mp3" length="82554418" type="audio/mpeg"/>
      <itunes:author>DX</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/8GeBxzkXo3OkMubVd2-bK_mEZxDJi1md3az3KN91S9w/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8wMmVl/N2FiMTllNTY3OWIz/ZGNhYjgyMzYwZDdj/OGQ1Mi5wbmc.jpg"/>
      <itunes:duration>2061</itunes:duration>
      <itunes:summary>
        <![CDATA[<p><a href="https://share.transistor.fm/s/8a5580a2/transcript" title="Click here to view the episode transcript.">Click here to view the episode transcript.</a><br>
<br>In this episode, Emanuel Mueller Ramos, Head of Developer Experience at Skyscanner, discusses the evolution of his team as they transitioned from focusing on frameworks and middleware to becoming a customer-centric, impact-driven organization. Emanuel details the strategies he used to gain stakeholder buy-in, why it's crucial to rethink traditional productivity metrics, and how they made a cultural shift to prioritize developer happiness and effectiveness. This conversation highlights the steps necessary to build a developer experience function that delivers meaningful impact.</p><p><br><strong>Mentions and links:</strong></p><ul><li><a href="https://www.linkedin.com/in/emanuel-muller-ramos/">Follow Emanuel on LinkedIn</a></li><li><a href="https://getdx.com/report/dx-core-4/">Measuring developer productivity with the DX Core 4<br></a><br></li></ul><p><strong>Discussion points:<br></strong><br></p><ul><li>(1:14) The beginning of Skyscanner's developer productivity division</li><li>(3:53) Gaining stakeholder buy-in and refocusing the teams</li><li>(5:57) Redefining success metrics for developer productivity</li><li>(8:57) Pitching the developer experience focus to leadership</li><li>(17:26) Moving from frameworks to feedback loops</li><li>(20:45) Fostering a customer-centric culture</li><li>(23:20) Defining the collaboration between platform and developer experience teams</li><li>(26:41) Choosing the right metrics for developer experience success </li><li>(31:31) Risks and challenges ahead<p></p></li></ul>]]>
      </itunes:summary>
      <itunes:keywords>Software Engineering, Developer Experience, Developer Productivity, Engineering Leadership</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.linkedin.com/in/abinoda/" img="https://img.transistorcdn.com/3JLLtbOaHGiInF_ixHI50J6hrbPvsmkptOyXbjqu2gw/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vMDNiMDZmMjkt/ZmQxMy00NmNlLWE5/YzEtOTAzODQ4YTE2/YjgzLzE2NjU0MzA2/ODktaW1hZ2UuanBn.jpg">Abi Noda</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/8a5580a2/transcript.txt" type="text/plain"/>
      <podcast:chapters url="https://share.transistor.fm/s/8a5580a2/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Exploring Developer Productivity with AI: Insights from Airbnb, GitHub, and Jumio</title>
      <itunes:episode>66</itunes:episode>
      <podcast:episode>66</podcast:episode>
      <itunes:title>Exploring Developer Productivity with AI: Insights from Airbnb, GitHub, and Jumio</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">89fd9f2c-3f08-44f0-9258-fd2fa0459688</guid>
      <link>https://share.transistor.fm/s/267f3dc9</link>
      <description>
        <![CDATA[<p><a href="https://share.transistor.fm/s/267f3dc9/transcript" title="Click here to view the episode transcript.">Click here to view the episode transcript.</a><br>
<br>In this week's episode, Abi is joined by industry leaders Idan Gazit from GitHub, Anna Sulkina from Airbnb, and Alix Melchy from Jumio. Together, they discuss the impact of GenAI tools on developer productivity, exploring challenges in measurement and enhancement. They delve into AI's evolving role in engineering, from overcoming friction points to exploring real-world applications and the future of technology. Gain insights into how AI-driven chat assistants are reshaping workflows and the vision for coding.</p><p><br><strong>Links:</strong> </p><ul><li><a href="https://getdx.com/guide/unlocking-developer-productivity-with-generative-ai/">How to measure GenAI adoption and impact</a></li></ul><p><strong>Timestamps:</strong></p><ul><li>(2:58) Challenges of Measuring AI Productivity</li><li>(6:02) Use cases for GenAI within the Airbnb developer organization</li><li>(10:26) GitHub’s process for developing and testing new GenAI tools for developers</li><li>(12:42) Driving GenAI adoption strategies at Airbnb</li><li>(14:20) Research impact and productivity gains with GenAI tools at Airbnb</li><li>(17:03) Copilot use cases surveyed among Jumio's developers</li><li>(18:46) Challenges measuring impact of AI products at GitHub</li><li>(21:33) Biggest gains of GenAI usage at Airbnb</li><li>(24:19) Future opportunities in GenAI</li><li>(30:31) Challenges in GenAI for developers</li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p><a href="https://share.transistor.fm/s/267f3dc9/transcript" title="Click here to view the episode transcript.">Click here to view the episode transcript.</a><br>
<br>In this week's episode, Abi is joined by industry leaders Idan Gazit from GitHub, Anna Sulkina from Airbnb, and Alix Melchy from Jumio. Together, they discuss the impact of GenAI tools on developer productivity, exploring challenges in measurement and enhancement. They delve into AI's evolving role in engineering, from overcoming friction points to exploring real-world applications and the future of technology. Gain insights into how AI-driven chat assistants are reshaping workflows and the vision for coding.</p><p><br><strong>Links:</strong> </p><ul><li><a href="https://getdx.com/guide/unlocking-developer-productivity-with-generative-ai/">How to measure GenAI adoption and impact</a></li></ul><p><strong>Timestamps:</strong></p><ul><li>(2:58) Challenges of Measuring AI Productivity</li><li>(6:02) Use cases for GenAI within the Airbnb developer organization</li><li>(10:26) GitHub’s process for developing and testing new GenAI tools for developers</li><li>(12:42) Driving GenAI adoption strategies at Airbnb</li><li>(14:20) Research impact and productivity gains with GenAI tools at Airbnb</li><li>(17:03) Copilot use cases surveyed among Jumio's developers</li><li>(18:46) Challenges measuring impact of AI products at GitHub</li><li>(21:33) Biggest gains of GenAI usage at Airbnb</li><li>(24:19) Future opportunities in GenAI</li><li>(30:31) Challenges in GenAI for developers</li></ul>]]>
      </content:encoded>
      <pubDate>Tue, 16 Jul 2024 19:16:49 -0600</pubDate>
      <author>DX</author>
      <enclosure url="https://media.transistor.fm/267f3dc9/96b81bd9.mp3" length="85069423" type="audio/mpeg"/>
      <itunes:author>DX</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/3zaEWR_NQVi78T1Ig-UKsquTHTb2E3BtXli7NdsBzR0/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lMmZl/ZGMwZmFmOTk1MjNj/YTlhYzFjM2U3YzQ0/ZGIwNC5wbmc.jpg"/>
      <itunes:duration>2125</itunes:duration>
      <itunes:summary>
        <![CDATA[<p><a href="https://share.transistor.fm/s/267f3dc9/transcript" title="Click here to view the episode transcript.">Click here to view the episode transcript.</a><br>
<br>In this week's episode, Abi is joined by industry leaders Idan Gazit from GitHub, Anna Sulkina from Airbnb, and Alix Melchy from Jumio. Together, they discuss the impact of GenAI tools on developer productivity, exploring challenges in measurement and enhancement. They delve into AI's evolving role in engineering, from overcoming friction points to exploring real-world applications and the future of technology. Gain insights into how AI-driven chat assistants are reshaping workflows and the vision for coding.</p><p><br><strong>Links:</strong> </p><ul><li><a href="https://getdx.com/guide/unlocking-developer-productivity-with-generative-ai/">How to measure GenAI adoption and impact</a></li></ul><p><strong>Timestamps:</strong></p><ul><li>(2:58) Challenges of Measuring AI Productivity</li><li>(6:02) Use cases for GenAI within the Airbnb developer organization</li><li>(10:26) GitHub’s process for developing and testing new GenAI tools for developers</li><li>(12:42) Driving GenAI adoption strategies at Airbnb</li><li>(14:20) Research impact and productivity gains with GenAI tools at Airbnb</li><li>(17:03) Copilot use cases surveyed among Jumio's developers</li><li>(18:46) Challenges measuring impact of AI products at GitHub</li><li>(21:33) Biggest gains of GenAI usage at Airbnb</li><li>(24:19) Future opportunities in GenAI</li><li>(30:31) Challenges in GenAI for developers</li></ul>]]>
      </itunes:summary>
      <itunes:keywords>Software Engineering, Developer Experience, Developer Productivity, Engineering Leadership</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.linkedin.com/in/abinoda/" img="https://img.transistorcdn.com/3JLLtbOaHGiInF_ixHI50J6hrbPvsmkptOyXbjqu2gw/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vMDNiMDZmMjkt/ZmQxMy00NmNlLWE5/YzEtOTAzODQ4YTE2/YjgzLzE2NjU0MzA2/ODktaW1hZ2UuanBn.jpg">Abi Noda</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/267f3dc9/transcript.txt" type="text/plain"/>
      <podcast:chapters url="https://share.transistor.fm/s/267f3dc9/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>How SiriusXM revamped their platform and developer experience | Jared Wolinsky</title>
      <itunes:episode>65</itunes:episode>
      <podcast:episode>65</podcast:episode>
      <itunes:title>How SiriusXM revamped their platform and developer experience | Jared Wolinsky</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">7a789b19-a5a3-4265-b66d-355f3c2b529f</guid>
      <link>https://share.transistor.fm/s/d83c82bd</link>
      <description>
        <![CDATA[<p><a href="https://share.transistor.fm/s/d83c82bd/transcript" title="Click here to view the episode transcript.">Click here to view the episode transcript.</a><br>
<br>In this week's episode, Abi welcomes Jared Wolinsky, Vice President of Platform Engineering at SiriusXM, to delve into the inner workings of platform engineering at SiriusXM. Jared sheds light on their innovative approach to prioritizing projects, emphasizing alignment with overarching business goals. They explore how these strategies boost developer speed and drive technological advancement within the organization.</p><p><br><strong>Links: </strong></p><ul><li><a href="https://getdx.com/report/when-to-hire-developer-productivity-team/">When is the right time to establish a DevProd team report</a></li></ul><p><strong>Timestamps:</strong></p><ul><li>(1:39) SiriusXM's major rebuild</li><li>(4:46) Challenges of building a platform during a major product revamp</li><li>(7:22) Navigating trade-offs</li><li>(10:06) Defining the ideal developer journey at SiriusXM</li><li>(17:28) Navigating the path to a user-centric developer experience</li><li>(23:05) Collaborating with leadership to iterate and gain approval</li><li>(25:05) Balancing enablement and platform</li><li>(28:28) Implementing a data-driven prioritization framework</li><li>(34:29) Aligning projects with business goals</li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p><a href="https://share.transistor.fm/s/d83c82bd/transcript" title="Click here to view the episode transcript.">Click here to view the episode transcript.</a><br>
<br>In this week's episode, Abi welcomes Jared Wolinsky, Vice President of Platform Engineering at SiriusXM, to delve into the inner workings of platform engineering at SiriusXM. Jared sheds light on their innovative approach to prioritizing projects, emphasizing alignment with overarching business goals. They explore how these strategies boost developer speed and drive technological advancement within the organization.</p><p><br><strong>Links: </strong></p><ul><li><a href="https://getdx.com/report/when-to-hire-developer-productivity-team/">When is the right time to establish a DevProd team report</a></li></ul><p><strong>Timestamps:</strong></p><ul><li>(1:39) SiriusXM's major rebuild</li><li>(4:46) Challenges of building a platform during a major product revamp</li><li>(7:22) Navigating trade-offs</li><li>(10:06) Defining the ideal developer journey at SiriusXM</li><li>(17:28) Navigating the path to a user-centric developer experience</li><li>(23:05) Collaborating with leadership to iterate and gain approval</li><li>(25:05) Balancing enablement and platform</li><li>(28:28) Implementing a data-driven prioritization framework</li><li>(34:29) Aligning projects with business goals</li></ul>]]>
      </content:encoded>
      <pubDate>Tue, 18 Jun 2024 21:18:22 -0600</pubDate>
      <author>DX</author>
      <enclosure url="https://media.transistor.fm/d83c82bd/88363cb9.mp3" length="98409711" type="audio/mpeg"/>
      <itunes:author>DX</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/Kjz5zklkPPaLdfKfjGJgKa-sJGVii5Ul8A0rFbyirds/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS83NmQ0/NDM0NzE1ZDkwYmY5/MDM1OTRiNjIxYTk1/ODJiNy5wbmc.jpg"/>
      <itunes:duration>2458</itunes:duration>
      <itunes:summary>
        <![CDATA[<p><a href="https://share.transistor.fm/s/d83c82bd/transcript" title="Click here to view the episode transcript.">Click here to view the episode transcript.</a><br>
<br>In this week's episode, Abi welcomes Jared Wolinsky, Vice President of Platform Engineering at SiriusXM, to delve into the inner workings of platform engineering at SiriusXM. Jared sheds light on their innovative approach to prioritizing projects, emphasizing alignment with overarching business goals. They explore how these strategies boost developer speed and drive technological advancement within the organization.</p><p><br><strong>Links: </strong></p><ul><li><a href="https://getdx.com/report/when-to-hire-developer-productivity-team/">When is the right time to establish a DevProd team report</a></li></ul><p><strong>Timestamps:</strong></p><ul><li>(1:39) SiriusXM's major rebuild</li><li>(4:46) Challenges of building a platform during a major product revamp</li><li>(7:22) Navigating trade-offs</li><li>(10:06) Defining the ideal developer journey at SiriusXM</li><li>(17:28) Navigating the path to a user-centric developer experience</li><li>(23:05) Collaborating with leadership to iterate and gain approval</li><li>(25:05) Balancing enablement and platform</li><li>(28:28) Implementing a data-driven prioritization framework</li><li>(34:29) Aligning projects with business goals</li></ul>]]>
      </itunes:summary>
      <itunes:keywords>Software Engineering, Developer Experience, Developer Productivity, Engineering Leadership</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.linkedin.com/in/abinoda/" img="https://img.transistorcdn.com/3JLLtbOaHGiInF_ixHI50J6hrbPvsmkptOyXbjqu2gw/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vMDNiMDZmMjkt/ZmQxMy00NmNlLWE5/YzEtOTAzODQ4YTE2/YjgzLzE2NjU0MzA2/ODktaW1hZ2UuanBn.jpg">Abi Noda</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/d83c82bd/transcript.txt" type="text/plain"/>
      <podcast:chapters url="https://share.transistor.fm/s/d83c82bd/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Developer Experience at American Express | Michelle Swartz (American Express)</title>
      <itunes:episode>64</itunes:episode>
      <podcast:episode>64</podcast:episode>
      <itunes:title>Developer Experience at American Express | Michelle Swartz (American Express)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">9191e7d3-872a-4c72-b431-e5bcdc63a9ba</guid>
      <link>https://share.transistor.fm/s/b73f15bf</link>
      <description>
<![CDATA[<p><a href="https://share.transistor.fm/s/b73f15bf/transcript" title="Click here to view the episode transcript">Click here to view the episode transcript.</a><br>
<br>In this episode, Michelle Swartz, Vice President of Developer Enablement at American Express, shares insights on improving developer experience. She discusses the creation of an onboarding bootcamp and the development of the AmEx Way Library for better knowledge management. Michelle explains how AmEx balances standardization and flexibility with the concept of Paved Roads. She also highlights the importance of measuring success, fostering community, and elevating the company's tech credibility.</p><p><strong>Mentions and links</strong></p><ul><li><a href="https://getdx.com/guide/unlocking-developer-productivity-with-generative-ai/?utm_source=podcast">GenAI Guide</a></li></ul><p><strong>Timestamps</strong></p><ul><li>(5:45) Challenges of advocating for DevEx in non-tech companies</li><li>(7:43) Importance of senior leadership buy-in for DevEx</li><li>(9:58) Genesis of the DevEx organization and Jedi Council</li><li>(12:12) Transition to a dedicated DevEx function</li><li>(13:17) Formalizing investment in DevEx</li><li>(18:02) Initial efforts and learning in improving DevEx</li><li>(19:25) Using sentiment surveys to prioritize DevEx areas</li><li>(27:26) Addressing knowledge management challenges</li><li>(29:49) Balancing standardization and freedom in DevEx</li><li>(36:21) Implementing Paved Roads: evolution vs. revolution</li></ul>]]>
      </description>
      <content:encoded>
<![CDATA[<p><a href="https://share.transistor.fm/s/b73f15bf/transcript" title="Click here to view the episode transcript">Click here to view the episode transcript.</a><br>
<br>In this episode, Michelle Swartz, Vice President of Developer Enablement at American Express, shares insights on improving developer experience. She discusses the creation of an onboarding bootcamp and the development of the AmEx Way Library for better knowledge management. Michelle explains how AmEx balances standardization and flexibility with the concept of Paved Roads. She also highlights the importance of measuring success, fostering community, and elevating the company's tech credibility.</p><p><strong>Mentions and links</strong></p><ul><li><a href="https://getdx.com/guide/unlocking-developer-productivity-with-generative-ai/?utm_source=podcast">GenAI Guide</a></li></ul><p><strong>Timestamps</strong></p><ul><li>(5:45) Challenges of advocating for DevEx in non-tech companies</li><li>(7:43) Importance of senior leadership buy-in for DevEx</li><li>(9:58) Genesis of the DevEx organization and Jedi Council</li><li>(12:12) Transition to a dedicated DevEx function</li><li>(13:17) Formalizing investment in DevEx</li><li>(18:02) Initial efforts and learning in improving DevEx</li><li>(19:25) Using sentiment surveys to prioritize DevEx areas</li><li>(27:26) Addressing knowledge management challenges</li><li>(29:49) Balancing standardization and freedom in DevEx</li><li>(36:21) Implementing Paved Roads: evolution vs. revolution</li></ul>]]>
      </content:encoded>
      <pubDate>Tue, 04 Jun 2024 13:21:26 -0600</pubDate>
      <author>DX</author>
      <enclosure url="https://media.transistor.fm/b73f15bf/c69bc270.mp3" length="108068267" type="audio/mpeg"/>
      <itunes:author>DX</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/x5uPB9m9yiamrxf8MnRfxOMEaALwbc0wFHj2PLvegsg/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9hMGE4/ODI4Yjg1ZDgzYTA0/MDgzYTRiNzMxNTBm/YjVjOS5wbmc.jpg"/>
      <itunes:duration>2700</itunes:duration>
      <itunes:summary>
<![CDATA[<p><a href="https://share.transistor.fm/s/b73f15bf/transcript" title="Click here to view the episode transcript">Click here to view the episode transcript.</a><br>
<br>In this episode, Michelle Swartz, Vice President of Developer Enablement at American Express, shares insights on improving developer experience. She discusses the creation of an onboarding bootcamp and the development of the AmEx Way Library for better knowledge management. Michelle explains how AmEx balances standardization and flexibility with the concept of Paved Roads. She also highlights the importance of measuring success, fostering community, and elevating the company's tech credibility.</p><p><strong>Mentions and links</strong></p><ul><li><a href="https://getdx.com/guide/unlocking-developer-productivity-with-generative-ai/?utm_source=podcast">GenAI Guide</a></li></ul><p><strong>Timestamps</strong></p><ul><li>(5:45) Challenges of advocating for DevEx in non-tech companies</li><li>(7:43) Importance of senior leadership buy-in for DevEx</li><li>(9:58) Genesis of the DevEx organization and Jedi Council</li><li>(12:12) Transition to a dedicated DevEx function</li><li>(13:17) Formalizing investment in DevEx</li><li>(18:02) Initial efforts and learning in improving DevEx</li><li>(19:25) Using sentiment surveys to prioritize DevEx areas</li><li>(27:26) Addressing knowledge management challenges</li><li>(29:49) Balancing standardization and freedom in DevEx</li><li>(36:21) Implementing Paved Roads: evolution vs. revolution</li></ul>]]>
      </itunes:summary>
      <itunes:keywords>Software Engineering, Developer Experience, Developer Productivity, Engineering Leadership</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.linkedin.com/in/abinoda/" img="https://img.transistorcdn.com/3JLLtbOaHGiInF_ixHI50J6hrbPvsmkptOyXbjqu2gw/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vMDNiMDZmMjkt/ZmQxMy00NmNlLWE5/YzEtOTAzODQ4YTE2/YjgzLzE2NjU0MzA2/ODktaW1hZ2UuanBn.jpg">Abi Noda</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/b73f15bf/transcript.txt" type="text/plain"/>
      <podcast:chapters url="https://share.transistor.fm/s/b73f15bf/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>DORA, SPACE, and DevEx: Choosing the right framework | Laura Tacho + Abi Noda</title>
      <itunes:episode>63</itunes:episode>
      <podcast:episode>63</podcast:episode>
      <itunes:title>DORA, SPACE, and DevEx: Choosing the right framework | Laura Tacho + Abi Noda</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">23c93b36-81c0-4229-9eb1-969b34af4d77</guid>
      <link>https://share.transistor.fm/s/df881d86</link>
      <description>
<![CDATA[<p><a href="https://share.transistor.fm/s/df881d86/transcript" title="Click here to view the episode transcript">Click here to view the episode transcript.</a><br>
<br>This week’s episode is a recording from a recent event hosted by Abi Noda (CEO of DX) and Laura Tacho (CTO at DX). The episode begins with an overview of the DORA, SPACE, and DevEx frameworks, including where they overlap and common misconceptions about each. Laura and Abi discuss the advantages and drawbacks of each framework, then cover how to choose which framework to use.</p><p><strong>Mentions and links:</strong></p><ul><li><a href="https://dora.dev/">Dora.dev</a></li></ul><p><strong>Discussion points:</strong></p><ul><li>(2:50) DORA, SPACE, DevEx overview</li><li>(10:35) Choosing which framework to use</li><li>(13:15) Using DORA</li><li>(22:42) Using SPACE</li></ul>]]>
      </description>
      <content:encoded>
<![CDATA[<p><a href="https://share.transistor.fm/s/df881d86/transcript" title="Click here to view the episode transcript">Click here to view the episode transcript.</a><br>
<br>This week’s episode is a recording from a recent event hosted by Abi Noda (CEO of DX) and Laura Tacho (CTO at DX). The episode begins with an overview of the DORA, SPACE, and DevEx frameworks, including where they overlap and common misconceptions about each. Laura and Abi discuss the advantages and drawbacks of each framework, then cover how to choose which framework to use.</p><p><strong>Mentions and links:</strong></p><ul><li><a href="https://dora.dev/">Dora.dev</a></li></ul><p><strong>Discussion points:</strong></p><ul><li>(2:50) DORA, SPACE, DevEx overview</li><li>(10:35) Choosing which framework to use</li><li>(13:15) Using DORA</li><li>(22:42) Using SPACE</li></ul>]]>
      </content:encoded>
      <pubDate>Tue, 21 May 2024 13:58:10 -0600</pubDate>
      <author>DX</author>
      <enclosure url="https://media.transistor.fm/df881d86/a680f89b.mp3" length="38414105" type="audio/mpeg"/>
      <itunes:author>DX</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/sYQF0uxiEwWlLFutgrhvLc7MN_5AKVGFZgTz9BmcjY0/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS85ODY4/MGJhOWRkYTQ4Njhi/NGVmMjczMDFjYmRk/ZDc2Zi5wbmc.jpg"/>
      <itunes:duration>2375</itunes:duration>
      <itunes:summary>
<![CDATA[<p><a href="https://share.transistor.fm/s/df881d86/transcript" title="Click here to view the episode transcript">Click here to view the episode transcript.</a><br>
<br>This week’s episode is a recording from a recent event hosted by Abi Noda (CEO of DX) and Laura Tacho (CTO at DX). The episode begins with an overview of the DORA, SPACE, and DevEx frameworks, including where they overlap and common misconceptions about each. Laura and Abi discuss the advantages and drawbacks of each framework, then cover how to choose which framework to use.</p><p><strong>Mentions and links:</strong></p><ul><li><a href="https://dora.dev/">Dora.dev</a></li></ul><p><strong>Discussion points:</strong></p><ul><li>(2:50) DORA, SPACE, DevEx overview</li><li>(10:35) Choosing which framework to use</li><li>(13:15) Using DORA</li><li>(22:42) Using SPACE</li></ul>]]>
      </itunes:summary>
      <itunes:keywords>Software Engineering, Developer Experience, Developer Productivity, Engineering Leadership</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.linkedin.com/in/abinoda/" img="https://img.transistorcdn.com/3JLLtbOaHGiInF_ixHI50J6hrbPvsmkptOyXbjqu2gw/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vMDNiMDZmMjkt/ZmQxMy00NmNlLWE5/YzEtOTAzODQ4YTE2/YjgzLzE2NjU0MzA2/ODktaW1hZ2UuanBn.jpg">Abi Noda</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/df881d86/transcript.txt" type="text/plain"/>
      <podcast:chapters url="https://share.transistor.fm/s/df881d86/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>The science behind DORA | Derek DeBellis (Google)</title>
      <itunes:episode>62</itunes:episode>
      <podcast:episode>62</podcast:episode>
      <itunes:title>The science behind DORA | Derek DeBellis (Google)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">73a8d506-2b73-4f70-a844-116ac436fae6</guid>
      <link>https://share.transistor.fm/s/1bff223d</link>
      <description>
        <![CDATA[<p><a href="https://share.transistor.fm/s/1bff223d/transcript" title="Click here to view the episode transcript.">Click here to view the episode transcript.</a><br>
<br>In this week's episode, we welcome Derek DeBellis, lead researcher on Google's DORA team, for a deep dive into the science and methodology behind DORA's research. We explore Derek's background, his role at Google, and how DORA intersects with other research disciplines. Derek takes us through DORA's research process step by step, from defining outcomes and factors to survey design, analysis, and structural equation modeling.</p><p><strong>Mentions and links:</strong></p><ul><li>Derek DeBellis on <a href="https://www.linkedin.com/in/derekdebellis/">LinkedIn</a></li><li>DX’s guide to <a href="https://getdx.com/guide/unlocking-developer-productivity-with-generative-ai/?utm_source=podcast">measuring GenAI adoption and impact</a></li><li><a href="https://cloud.google.com/blog/products/devops-sre/announcing-the-2023-state-of-devops-report">2023 Accelerate State of DevOps Report</a></li></ul><p><strong>Discussion points:</strong></p><ul><li>(3:00) Derek’s transition from Microsoft to the DORA team at Google</li><li>(4:28) Derek talks about his connection to surveys</li><li>(6:16) Derek’s journey to becoming a quantitative user experience researcher</li><li>(7:48) Derek simplifies DORA</li><li>(8:19) DORA: philosophy vs. practice</li><li>(11:09) Understanding desired outcomes</li><li>(12:45) Self-reported vs. objective outcomes</li><li>(16:16) Derek and Abi discuss the nuances of literature review</li><li>(19:57) Derek details survey development</li><li>(27:55) Pretesting issues</li><li>(29:30) Designing surveys for other companies</li><li>(35:02) Derek simplifies model analysis and validation techniques</li><li>(38:48) Benchmarks: balancing data limitations with method sensitivity</li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p><a href="https://share.transistor.fm/s/1bff223d/transcript" title="Click here to view the episode transcript.">Click here to view the episode transcript.</a><br>
<br>In this week's episode, we welcome Derek DeBellis, lead researcher on Google's DORA team, for a deep dive into the science and methodology behind DORA's research. We explore Derek's background, his role at Google, and how DORA intersects with other research disciplines. Derek takes us through DORA's research process step by step, from defining outcomes and factors to survey design, analysis, and structural equation modeling.</p><p><strong>Mentions and links:</strong></p><ul><li>Derek DeBellis on <a href="https://www.linkedin.com/in/derekdebellis/">LinkedIn</a></li><li>DX’s guide to <a href="https://getdx.com/guide/unlocking-developer-productivity-with-generative-ai/?utm_source=podcast">measuring GenAI adoption and impact</a></li><li><a href="https://cloud.google.com/blog/products/devops-sre/announcing-the-2023-state-of-devops-report">2023 Accelerate State of DevOps Report</a></li></ul><p><strong>Discussion points:</strong></p><ul><li>(3:00) Derek’s transition from Microsoft to the DORA team at Google</li><li>(4:28) Derek talks about his connection to surveys</li><li>(6:16) Derek’s journey to becoming a quantitative user experience researcher</li><li>(7:48) Derek simplifies DORA</li><li>(8:19) DORA: philosophy vs. practice</li><li>(11:09) Understanding desired outcomes</li><li>(12:45) Self-reported vs. objective outcomes</li><li>(16:16) Derek and Abi discuss the nuances of literature review</li><li>(19:57) Derek details survey development</li><li>(27:55) Pretesting issues</li><li>(29:30) Designing surveys for other companies</li><li>(35:02) Derek simplifies model analysis and validation techniques</li><li>(38:48) Benchmarks: balancing data limitations with method sensitivity</li></ul>]]>
      </content:encoded>
      <pubDate>Tue, 07 May 2024 13:59:55 -0600</pubDate>
      <author>DX</author>
      <enclosure url="https://media.transistor.fm/1bff223d/6fef6453.mp3" length="114813937" type="audio/mpeg"/>
      <itunes:author>DX</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/1KCnSvt8Tpky7ZET8LRHPuTBkXfQlvOzjJH2a_xbHlQ/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8wNGZh/MDVlNWEyYjc3YjZk/MzBkYTdhOWU4NGQz/OGM3Yy5wbmc.jpg"/>
      <itunes:duration>2870</itunes:duration>
      <itunes:summary>
        <![CDATA[<p><a href="https://share.transistor.fm/s/1bff223d/transcript" title="Click here to view the episode transcript.">Click here to view the episode transcript.</a><br>
<br>In this week's episode, we welcome Derek DeBellis, lead researcher on Google's DORA team, for a deep dive into the science and methodology behind DORA's research. We explore Derek's background, his role at Google, and how DORA intersects with other research disciplines. Derek takes us through DORA's research process step by step, from defining outcomes and factors to survey design, analysis, and structural equation modeling.</p><p><strong>Mentions and links:</strong></p><ul><li>Derek DeBellis on <a href="https://www.linkedin.com/in/derekdebellis/">LinkedIn</a></li><li>DX’s guide to <a href="https://getdx.com/guide/unlocking-developer-productivity-with-generative-ai/?utm_source=podcast">measuring GenAI adoption and impact</a></li><li><a href="https://cloud.google.com/blog/products/devops-sre/announcing-the-2023-state-of-devops-report">2023 Accelerate State of DevOps Report</a></li></ul><p><strong>Discussion points:</strong></p><ul><li>(3:00) Derek’s transition from Microsoft to the DORA team at Google</li><li>(4:28) Derek talks about his connection to surveys</li><li>(6:16) Derek’s journey to becoming a quantitative user experience researcher</li><li>(7:48) Derek simplifies DORA</li><li>(8:19) DORA: philosophy vs. practice</li><li>(11:09) Understanding desired outcomes</li><li>(12:45) Self-reported vs. objective outcomes</li><li>(16:16) Derek and Abi discuss the nuances of literature review</li><li>(19:57) Derek details survey development</li><li>(27:55) Pretesting issues</li><li>(29:30) Designing surveys for other companies</li><li>(35:02) Derek simplifies model analysis and validation techniques</li><li>(38:48) Benchmarks: balancing data limitations with method sensitivity</li></ul>]]>
      </itunes:summary>
      <itunes:keywords>Software Engineering, Developer Experience, Developer Productivity, Engineering Leadership</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.linkedin.com/in/abinoda/" img="https://img.transistorcdn.com/3JLLtbOaHGiInF_ixHI50J6hrbPvsmkptOyXbjqu2gw/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vMDNiMDZmMjkt/ZmQxMy00NmNlLWE5/YzEtOTAzODQ4YTE2/YjgzLzE2NjU0MzA2/ODktaW1hZ2UuanBn.jpg">Abi Noda</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/1bff223d/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>How Slack fully automates deploys and anomaly detection with Z-scores | Sean Mcllroy (Slack)</title>
      <itunes:episode>61</itunes:episode>
      <podcast:episode>61</podcast:episode>
      <itunes:title>How Slack fully automates deploys and anomaly detection with Z-scores | Sean Mcllroy (Slack)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">063308d7-c517-4f05-b776-a966cd8ff663</guid>
      <link>https://share.transistor.fm/s/07930123</link>
      <description>
        <![CDATA[<p><a href="https://share.transistor.fm/s/07930123/transcript" title="Click here to view the episode transcript.">Click here to view the episode transcript.</a><br>
<br>This week we’re joined by Sean Mcllroy from Slack’s Release Engineering team to learn how they’ve fully automated their deployment process. This conversation covers Slack’s original release process, key changes Sean’s team has made, and the latest challenges they’re working on today.</p><p><strong>Mentions and links:</strong></p><ul><li>Read Sean’s blog post, <a href="https://slack.engineering/the-scary-thing-about-automating-deploys/">The Scary Thing About Automating Deploys</a></li><li>Follow Sean on <a href="https://www.linkedin.com/in/abinoda/">LinkedIn</a></li></ul><p><strong>Timestamps:</strong></p><ul><li>(1:34) The Release Engineering team</li><li>(2:13) How the monolith has served Slack</li><li>(3:24) How the deployment process used to work</li><li>(6:23) The complexity of the deploy itself</li><li>(7:39) Early ideas for improving the deployment process</li><li>(9:07) Why anomaly detection is challenging</li><li>(10:32) What a Z-score is</li><li>(13:23) Managing noise with Z-scores</li><li>(16:49) Presenting this information to people who need it</li><li>(19:54) Taking humans out of the process</li><li>(23:13) Handling rollbacks</li><li>(25:27) Not overloading developers with information</li><li>(28:26) Handling large deployments</li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p><a href="https://share.transistor.fm/s/07930123/transcript" title="Click here to view the episode transcript.">Click here to view the episode transcript.</a><br>
<br>This week we’re joined by Sean McIlroy from Slack’s Release Engineering team to learn about how they’ve fully automated their deployment process. This conversation covers Slack’s original release process, key changes Sean’s team has made, and the latest challenges they’re working on today.</p><p><strong>Mentions and links:</strong></p><ul><li>Read Sean’s blog post, <a href="https://slack.engineering/the-scary-thing-about-automating-deploys/">The Scary Thing About Automating Deploys</a></li><li>Follow Sean on <a href="https://www.linkedin.com/in/abinoda/">LinkedIn</a></li></ul><p><strong>Time Stamps:</strong></p><ul><li>(1:34): The Release Engineering team</li><li>(2:13): How the monolith has served Slack</li><li>(3:24): How the deployment process used to work</li><li>(6:23): The complexity of the deploy itself</li><li>(7:39): Early ideas for improving the deployment process</li><li>(9:07): Why anomaly detection is challenging</li><li>(10:32): What a Z-score is</li><li>(13:23): Managing noise with Z-scores</li><li>(16:49): Presenting this information to people who need it</li><li>(19:54): Taking humans out of the process</li><li>(23:13): Handling rollbacks</li><li>(25:27): Not overloading developers with information</li><li>(28:26): Handling large deployments</li></ul>]]>
      </content:encoded>
      <pubDate>Tue, 23 Apr 2024 13:13:22 -0600</pubDate>
      <author>DX</author>
      <enclosure url="https://media.transistor.fm/07930123/1d7efa6a.mp3" length="81263275" type="audio/mpeg"/>
      <itunes:author>DX</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/KrLNvuF4inAl3zs3Axf5wc_Bp988s9owbjmDwxb6i6E/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9hMWM2/NWE1OTRlMWM0YWVj/YzQzNDI3ZGZiNzcx/NjE4Yi5wbmc.jpg"/>
      <itunes:duration>2029</itunes:duration>
      <itunes:summary>
        <![CDATA[<p><a href="https://share.transistor.fm/s/07930123/transcript" title="Click here to view the episode transcript.">Click here to view the episode transcript.</a><br>
<br>This week we’re joined by Sean McIlroy from Slack’s Release Engineering team to learn about how they’ve fully automated their deployment process. This conversation covers Slack’s original release process, key changes Sean’s team has made, and the latest challenges they’re working on today.</p><p><strong>Mentions and links:</strong></p><ul><li>Read Sean’s blog post, <a href="https://slack.engineering/the-scary-thing-about-automating-deploys/">The Scary Thing About Automating Deploys</a></li><li>Follow Sean on <a href="https://www.linkedin.com/in/abinoda/">LinkedIn</a></li></ul><p><strong>Time Stamps:</strong></p><ul><li>(1:34): The Release Engineering team</li><li>(2:13): How the monolith has served Slack</li><li>(3:24): How the deployment process used to work</li><li>(6:23): The complexity of the deploy itself</li><li>(7:39): Early ideas for improving the deployment process</li><li>(9:07): Why anomaly detection is challenging</li><li>(10:32): What a Z-score is</li><li>(13:23): Managing noise with Z-scores</li><li>(16:49): Presenting this information to people who need it</li><li>(19:54): Taking humans out of the process</li><li>(23:13): Handling rollbacks</li><li>(25:27): Not overloading developers with information</li><li>(28:26): Handling large deployments</li></ul>]]>
      </itunes:summary>
      <itunes:keywords>Software Engineering, Developer Experience, Developer Productivity, Engineering Leadership</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.linkedin.com/in/abinoda/" img="https://img.transistorcdn.com/3JLLtbOaHGiInF_ixHI50J6hrbPvsmkptOyXbjqu2gw/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vMDNiMDZmMjkt/ZmQxMy00NmNlLWE5/YzEtOTAzODQ4YTE2/YjgzLzE2NjU0MzA2/ODktaW1hZ2UuanBn.jpg">Abi Noda</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/07930123/transcript.txt" type="text/plain"/>
      <podcast:chapters url="https://share.transistor.fm/s/07930123/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>What’s up with internal developer portals? | Chris Westerhold (Thoughtworks)</title>
      <itunes:episode>60</itunes:episode>
      <podcast:episode>60</podcast:episode>
      <itunes:title>What’s up with internal developer portals? | Chris Westerhold (Thoughtworks)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">4108e3bd-76c8-4a95-86b9-a8a2f2e9ae6a</guid>
      <link>https://share.transistor.fm/s/e0a70cac</link>
      <description>
        <![CDATA[<p><a href="https://share.transistor.fm/s/e0a70cac/transcript" title="Click here to view the episode transcript.">Click here to view the episode transcript.</a><br>
<br>This week’s episode is the recording of a live conversation between Abi and Chris Westerhold (Thoughtworks Head of Developer Experience). This conversation is useful for anyone early in their journey with developer portals or platforms: Abi and Chris discuss common approaches to solving these problems, pitfalls to avoid, building vs. buying, and more.</p><p><strong>Mentions and Links</strong></p><ul><li>Follow Chris on <a href="https://www.linkedin.com/in/chriswesterhold">LinkedIn</a></li><li><a href="https://youtu.be/8_B3R_ZvKrI?si=LBBfHZG8hCZiu73T">Watch the recording</a> of this conversation</li><li><a href="https://getdx.com/webinar/internal-developer-portals-landscape/">Watch part 2</a> of this conversation on the market landscape</li><li>Learn about <a href="https://getdx.com/blog/announcing-general-availability-of-platformx/">PlatformX</a>, DX’s product mentioned in the conversation</li></ul><p><strong>Time Stamps:</strong></p><ul><li>(3:09) Why there’s an increased interest in developer portals</li><li>(5:33) Chris’ background with dev portals</li><li>(6:37) Homegrown solutions for developer portals</li><li>(9:22) How developer portal initiatives begin</li><li>(11:24) Internal developer portals vs. service catalogs and IDPs</li><li>(16:18) Mistakes companies make with developer portals</li><li>(21:05) Approaches to solving this problem</li><li>(24:28) How developer portals can drive value</li><li>(32:07) Common traps to avoid</li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p><a href="https://share.transistor.fm/s/e0a70cac/transcript" title="Click here to view the episode transcript.">Click here to view the episode transcript.</a><br>
<br>This week’s episode is the recording of a live conversation between Abi and Chris Westerhold (Thoughtworks Head of Developer Experience). This conversation is useful for anyone early in their journey with developer portals or platforms: Abi and Chris discuss common approaches to solving these problems, pitfalls to avoid, building vs. buying, and more.</p><p><strong>Mentions and Links</strong></p><ul><li>Follow Chris on <a href="https://www.linkedin.com/in/chriswesterhold">LinkedIn</a></li><li><a href="https://youtu.be/8_B3R_ZvKrI?si=LBBfHZG8hCZiu73T">Watch the recording</a> of this conversation</li><li><a href="https://getdx.com/webinar/internal-developer-portals-landscape/">Watch part 2</a> of this conversation on the market landscape</li><li>Learn about <a href="https://getdx.com/blog/announcing-general-availability-of-platformx/">PlatformX</a>, DX’s product mentioned in the conversation</li></ul><p><strong>Time Stamps:</strong></p><ul><li>(3:09) Why there’s an increased interest in developer portals</li><li>(5:33) Chris’ background with dev portals</li><li>(6:37) Homegrown solutions for developer portals</li><li>(9:22) How developer portal initiatives begin</li><li>(11:24) Internal developer portals vs. service catalogs and IDPs</li><li>(16:18) Mistakes companies make with developer portals</li><li>(21:05) Approaches to solving this problem</li><li>(24:28) How developer portals can drive value</li><li>(32:07) Common traps to avoid</li></ul>]]>
      </content:encoded>
      <pubDate>Tue, 09 Apr 2024 16:11:48 -0600</pubDate>
      <author>DX</author>
      <enclosure url="https://media.transistor.fm/e0a70cac/cf851292.mp3" length="87009472" type="audio/mpeg"/>
      <itunes:author>DX</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/lWzNZzytLTrJXeyiAX6krUUy_HbXHZNj1Qq8VBRtq94/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8xZGQ4/ZTgzOGQ3MGZmMDc3/MzM5NDc0MGUxZjlk/NzcwYy5qcGc.jpg"/>
      <itunes:duration>2173</itunes:duration>
      <itunes:summary>
        <![CDATA[<p><a href="https://share.transistor.fm/s/e0a70cac/transcript" title="Click here to view the episode transcript.">Click here to view the episode transcript.</a><br>
<br>This week’s episode is the recording of a live conversation between Abi and Chris Westerhold (Thoughtworks Head of Developer Experience). This conversation is useful for anyone early in their journey with developer portals or platforms: Abi and Chris discuss common approaches to solving these problems, pitfalls to avoid, building vs. buying, and more.</p><p><strong>Mentions and Links</strong></p><ul><li>Follow Chris on <a href="https://www.linkedin.com/in/chriswesterhold">LinkedIn</a></li><li><a href="https://youtu.be/8_B3R_ZvKrI?si=LBBfHZG8hCZiu73T">Watch the recording</a> of this conversation</li><li><a href="https://getdx.com/webinar/internal-developer-portals-landscape/">Watch part 2</a> of this conversation on the market landscape</li><li>Learn about <a href="https://getdx.com/blog/announcing-general-availability-of-platformx/">PlatformX</a>, DX’s product mentioned in the conversation</li></ul><p><strong>Time Stamps:</strong></p><ul><li>(3:09) Why there’s an increased interest in developer portals</li><li>(5:33) Chris’ background with dev portals</li><li>(6:37) Homegrown solutions for developer portals</li><li>(9:22) How developer portal initiatives begin</li><li>(11:24) Internal developer portals vs. service catalogs and IDPs</li><li>(16:18) Mistakes companies make with developer portals</li><li>(21:05) Approaches to solving this problem</li><li>(24:28) How developer portals can drive value</li><li>(32:07) Common traps to avoid</li></ul>]]>
      </itunes:summary>
      <itunes:keywords>Software Engineering, Developer Experience, Developer Productivity, Engineering Leadership</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.linkedin.com/in/abinoda/" img="https://img.transistorcdn.com/3JLLtbOaHGiInF_ixHI50J6hrbPvsmkptOyXbjqu2gw/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vMDNiMDZmMjkt/ZmQxMy00NmNlLWE5/YzEtOTAzODQ4YTE2/YjgzLzE2NjU0MzA2/ODktaW1hZ2UuanBn.jpg">Abi Noda</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/e0a70cac/transcript.txt" type="text/plain"/>
      <podcast:chapters url="https://share.transistor.fm/s/e0a70cac/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>10 years of driving developer productivity at Yelp | Kent Wills (Yelp)</title>
      <itunes:episode>59</itunes:episode>
      <podcast:episode>59</podcast:episode>
      <itunes:title>10 years of driving developer productivity at Yelp | Kent Wills (Yelp)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">219166e5-9325-4683-ab0b-0f46bc9e0fbe</guid>
      <link>https://share.transistor.fm/s/cb4f339c</link>
      <description>
        <![CDATA[<p><a href="https://share.transistor.fm/s/cb4f339c/transcript" title="Click here to view the episode transcript.">Click here to view the episode transcript.</a><br>
<br>On this week's episode, Abi interviews Kent Wills, Director of Engineering Effectiveness at Yelp. He shares insights into the evolution of Yelp’s developer productivity efforts over the past decade, from tackling challenges with their monolithic architecture to scaling productivity initiatives for over 1,300 developers. Kent also touches on his experience building a business case for developer productivity.</p><p><br><strong>Discussion points:</strong></p><ul><li>(1:42) Forming the developer productivity team</li><li>(3:25) Naming the team engineering effectiveness</li><li>(4:30) Getting leadership buy-in for focusing on this work</li><li>(7:54) Managing code ownership in Yelp’s monolith</li><li>(12:23) Supporting the design system</li><li>(16:00) The business case for forming a dedicated team</li><li>(19:45) How to standardize</li><li>(23:50) How their approach to standardization might be different in another company</li><li>(27:08) Demonstrating the value of their work</li><li>(32:21) Building an insights platform</li><li>(38:47) How Yelp is using LLMs</li></ul><p><strong>Mentions and Links</strong></p><ul><li>Connect with Kent Wills on <a href="https://www.linkedin.com/in/rkentwills/">LinkedIn</a></li><li>Watch <a href="https://www.youtube.com/watch?v=ukv7tEjXO0I">Kent’s 2023 talk</a> at Elevate</li><li>Listen to the interview with <a href="https://getdx.com/podcast/when-to-hire-engineering-effectiveness/">Peter Seibel</a> (“Let 1,000 flowers bloom”)</li><li>Download the recently published benchmarks on <a href="http://getdx.com/allocation">developer productivity team headcount</a></li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p><a href="https://share.transistor.fm/s/cb4f339c/transcript" title="Click here to view the episode transcript.">Click here to view the episode transcript.</a><br>
<br>On this week's episode, Abi interviews Kent Wills, Director of Engineering Effectiveness at Yelp. He shares insights into the evolution of Yelp’s developer productivity efforts over the past decade, from tackling challenges with their monolithic architecture to scaling productivity initiatives for over 1,300 developers. Kent also touches on his experience building a business case for developer productivity.</p><p><br><strong>Discussion points:</strong></p><ul><li>(1:42) Forming the developer productivity team</li><li>(3:25) Naming the team engineering effectiveness</li><li>(4:30) Getting leadership buy-in for focusing on this work</li><li>(7:54) Managing code ownership in Yelp’s monolith</li><li>(12:23) Supporting the design system</li><li>(16:00) The business case for forming a dedicated team</li><li>(19:45) How to standardize</li><li>(23:50) How their approach to standardization might be different in another company</li><li>(27:08) Demonstrating the value of their work</li><li>(32:21) Building an insights platform</li><li>(38:47) How Yelp is using LLMs</li></ul><p><strong>Mentions and Links</strong></p><ul><li>Connect with Kent Wills on <a href="https://www.linkedin.com/in/rkentwills/">LinkedIn</a></li><li>Watch <a href="https://www.youtube.com/watch?v=ukv7tEjXO0I">Kent’s 2023 talk</a> at Elevate</li><li>Listen to the interview with <a href="https://getdx.com/podcast/when-to-hire-engineering-effectiveness/">Peter Seibel</a> (“Let 1,000 flowers bloom”)</li><li>Download the recently published benchmarks on <a href="http://getdx.com/allocation">developer productivity team headcount</a></li></ul>]]>
      </content:encoded>
      <pubDate>Tue, 26 Mar 2024 09:04:59 -0600</pubDate>
      <author>DX</author>
      <enclosure url="https://media.transistor.fm/cb4f339c/179c8eb6.mp3" length="108628203" type="audio/mpeg"/>
      <itunes:author>DX</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/VKrU2wNJaVvruCQ2nXbkMpkOsNG9n_Xc4651YI7YAAs/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lcGlz/b2RlLzE4MTI1MTUv/MTcxMTQ2NTQ5OS1h/cnR3b3JrLmpwZw.jpg"/>
      <itunes:duration>2714</itunes:duration>
      <itunes:summary>
        <![CDATA[<p><a href="https://share.transistor.fm/s/cb4f339c/transcript" title="Click here to view the episode transcript.">Click here to view the episode transcript.</a><br>
<br>On this week's episode, Abi interviews Kent Wills, Director of Engineering Effectiveness at Yelp. He shares insights into the evolution of Yelp’s developer productivity efforts over the past decade, from tackling challenges with their monolithic architecture to scaling productivity initiatives for over 1,300 developers. Kent also touches on his experience building a business case for developer productivity.</p><p><br><strong>Discussion points:</strong></p><ul><li>(1:42) Forming the developer productivity team</li><li>(3:25) Naming the team engineering effectiveness</li><li>(4:30) Getting leadership buy-in for focusing on this work</li><li>(7:54) Managing code ownership in Yelp’s monolith</li><li>(12:23) Supporting the design system</li><li>(16:00) The business case for forming a dedicated team</li><li>(19:45) How to standardize</li><li>(23:50) How their approach to standardization might be different in another company</li><li>(27:08) Demonstrating the value of their work</li><li>(32:21) Building an insights platform</li><li>(38:47) How Yelp is using LLMs</li></ul><p><strong>Mentions and Links</strong></p><ul><li>Connect with Kent Wills on <a href="https://www.linkedin.com/in/rkentwills/">LinkedIn</a></li><li>Watch <a href="https://www.youtube.com/watch?v=ukv7tEjXO0I">Kent’s 2023 talk</a> at Elevate</li><li>Listen to the interview with <a href="https://getdx.com/podcast/when-to-hire-engineering-effectiveness/">Peter Seibel</a> (“Let 1,000 flowers bloom”)</li><li>Download the recently published benchmarks on <a href="http://getdx.com/allocation">developer productivity team headcount</a></li></ul>]]>
      </itunes:summary>
      <itunes:keywords>Software Engineering, Developer Experience, Developer Productivity, Engineering Leadership</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.linkedin.com/in/abinoda/" img="https://img.transistorcdn.com/3JLLtbOaHGiInF_ixHI50J6hrbPvsmkptOyXbjqu2gw/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vMDNiMDZmMjkt/ZmQxMy00NmNlLWE5/YzEtOTAzODQ4YTE2/YjgzLzE2NjU0MzA2/ODktaW1hZ2UuanBn.jpg">Abi Noda</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/cb4f339c/transcript.txt" type="text/plain"/>
      <podcast:chapters url="https://share.transistor.fm/s/cb4f339c/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>How “instructional engineers” improve developer onboarding at Splunk | Gail Carmichael (Splunk)</title>
      <itunes:episode>58</itunes:episode>
      <podcast:episode>58</podcast:episode>
      <itunes:title>How “instructional engineers” improve developer onboarding at Splunk | Gail Carmichael (Splunk)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">d7abf68e-5634-49fe-96ca-38e16d61f702</guid>
      <link>https://share.transistor.fm/s/e7c03b9e</link>
      <description>
        <![CDATA[<p>This week we’re joined by Gail Carmichael, Principal Instructional Engineer at Splunk. At Splunk, Gail’s team is responsible for improving developer onboarding, which they do through a multi-day learning program. Here, Gail shares how this program works and how they measure developer onboarding. The conversation also covers what instructional engineers are generally, and how Gail demonstrates the impact of her team’s work. </p><p><strong><br>Discussion points:</strong></p><ul><li>(1:16) The Engineering Enablement &amp; Engagement Team at Splunk</li><li>(8:01) What an Instructional Engineer is</li><li>(14:36) The developer onboarding program at Splunk</li><li>(16:05) Components of a good onboarding program</li><li>(21:11) Why having an onboarding program matters</li><li>(28:17) Measuring onboarding at Shopify (Gail’s previous company)</li><li>(31:39) Measuring developer onboarding at Splunk</li></ul><p><strong>Mentions and Links</strong></p><ul><li>Connect with Gail on <a href="https://www.linkedin.com/in/gailcarmichael/">LinkedIn</a></li><li>Download the report on <a href="https://getdx.com/guide/how-top-companies-measure-productivity/">Developer productivity metrics at top tech companies</a></li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This week we’re joined by Gail Carmichael, Principal Instructional Engineer at Splunk. At Splunk, Gail’s team is responsible for improving developer onboarding, which they do through a multi-day learning program. Here, Gail shares how this program works and how they measure developer onboarding. The conversation also covers what instructional engineers are generally, and how Gail demonstrates the impact of her team’s work. </p><p><strong><br>Discussion points:</strong></p><ul><li>(1:16) The Engineering Enablement &amp; Engagement Team at Splunk</li><li>(8:01) What an Instructional Engineer is</li><li>(14:36) The developer onboarding program at Splunk</li><li>(16:05) Components of a good onboarding program</li><li>(21:11) Why having an onboarding program matters</li><li>(28:17) Measuring onboarding at Shopify (Gail’s previous company)</li><li>(31:39) Measuring developer onboarding at Splunk</li></ul><p><strong>Mentions and Links</strong></p><ul><li>Connect with Gail on <a href="https://www.linkedin.com/in/gailcarmichael/">LinkedIn</a></li><li>Download the report on <a href="https://getdx.com/guide/how-top-companies-measure-productivity/">Developer productivity metrics at top tech companies</a></li></ul>]]>
      </content:encoded>
      <pubDate>Tue, 12 Mar 2024 14:37:06 -0600</pubDate>
      <author>DX</author>
      <enclosure url="https://media.transistor.fm/e7c03b9e/b3d9fb2a.mp3" length="94923271" type="audio/mpeg"/>
      <itunes:author>DX</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/l91i1pd522G-xKpVeK1u5F8DS499xKjwyOCckkikggM/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lcGlz/b2RlLzE3ODcwNzAv/MTcxMDI3NTgyNi1h/cnR3b3JrLmpwZw.jpg"/>
      <itunes:duration>2371</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This week we’re joined by Gail Carmichael, Principal Instructional Engineer at Splunk. At Splunk, Gail’s team is responsible for improving developer onboarding, which they do through a multi-day learning program. Here, Gail shares how this program works and how they measure developer onboarding. The conversation also covers what instructional engineers are generally, and how Gail demonstrates the impact of her team’s work. </p><p><strong><br>Discussion points:</strong></p><ul><li>(1:16) The Engineering Enablement &amp; Engagement Team at Splunk</li><li>(8:01) What an Instructional Engineer is</li><li>(14:36) The developer onboarding program at Splunk</li><li>(16:05) Components of a good onboarding program</li><li>(21:11) Why having an onboarding program matters</li><li>(28:17) Measuring onboarding at Shopify (Gail’s previous company)</li><li>(31:39) Measuring developer onboarding at Splunk</li></ul><p><strong>Mentions and Links</strong></p><ul><li>Connect with Gail on <a href="https://www.linkedin.com/in/gailcarmichael/">LinkedIn</a></li><li>Download the report on <a href="https://getdx.com/guide/how-top-companies-measure-productivity/">Developer productivity metrics at top tech companies</a></li></ul>]]>
      </itunes:summary>
      <itunes:keywords>Software Engineering, Developer Experience, Developer Productivity, Engineering Leadership</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.linkedin.com/in/abinoda/" img="https://img.transistorcdn.com/3JLLtbOaHGiInF_ixHI50J6hrbPvsmkptOyXbjqu2gw/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vMDNiMDZmMjkt/ZmQxMy00NmNlLWE5/YzEtOTAzODQ4YTE2/YjgzLzE2NjU0MzA2/ODktaW1hZ2UuanBn.jpg">Abi Noda</podcast:person>
      <podcast:chapters url="https://share.transistor.fm/s/e7c03b9e/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Bootstrapping a developer portal | Adam Rogal (DoorDash)</title>
      <itunes:episode>57</itunes:episode>
      <podcast:episode>57</podcast:episode>
      <itunes:title>Bootstrapping a developer portal | Adam Rogal (DoorDash)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">0370a934-00d2-422f-bcfe-c14f2f009d16</guid>
      <link>https://share.transistor.fm/s/c1336ddf</link>
      <description>
        <![CDATA[<p>In this episode we’re joined by Adam Rogal, who leads Developer Productivity and Platform at DoorDash. Adam describes DoorDash’s journey with their internal developer portal and gives advice for other teams looking to follow a similar path. He also explains how his team delivered value quickly and drove adoption for their developer platform.</p><p><strong>Discussion points:</strong></p><ul><li>(1:47) Why DoorDash explored implementing a developer portal</li><li>(6:59) The initial vision for the developer portal</li><li>(12:19) Funding ongoing development</li><li>(16:01) Deciding what to include in the portal</li><li>(19:15) Coming up with a name for the portal</li><li>(20:01) Advice for interested beginners</li><li>(23:55) Putting together a business case</li><li>(32:32) Getting adoption for the portal</li><li>(37:27) Driving initial awareness</li><li>(41:29) Getting feedback from developers</li><li>(48:33) What Adam would have done differently</li></ul><p><strong>Mentions and links:</strong></p><ul><li>Adam Rogal on <a href="https://www.linkedin.com/in/adamrogal/">LinkedIn</a></li><li><a href="https://developer.doordash.com/en-US/docs/drive/tutorials/get_started/">Get started (API)</a></li><li><a href="https://developer.doordash.com/en-US/blog/">New testing and monitoring tools</a></li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode we’re joined by Adam Rogal, who leads Developer Productivity and Platform at DoorDash. Adam describes DoorDash’s journey with their internal developer portal and gives advice for other teams looking to follow a similar path. He also explains how his team delivered value quickly and drove adoption for their developer platform.</p><p><strong>Discussion points:</strong></p><ul><li>(1:47) Why DoorDash explored implementing a developer portal</li><li>(6:59) The initial vision for the developer portal</li><li>(12:19) Funding ongoing development</li><li>(16:01) Deciding what to include in the portal</li><li>(19:15) Coming up with a name for the portal</li><li>(20:01) Advice for interested beginners</li><li>(23:55) Putting together a business case</li><li>(32:32) Getting adoption for the portal</li><li>(37:27) Driving initial awareness</li><li>(41:29) Getting feedback from developers</li><li>(48:33) What Adam would have done differently</li></ul><p><strong>Mentions and links:</strong></p><ul><li>Adam Rogal on <a href="https://www.linkedin.com/in/adamrogal/">LinkedIn</a></li><li><a href="https://developer.doordash.com/en-US/docs/drive/tutorials/get_started/">Get started (API)</a></li><li><a href="https://developer.doordash.com/en-US/blog/">New testing and monitoring tools</a></li></ul>]]>
      </content:encoded>
      <pubDate>Tue, 27 Feb 2024 16:00:58 -0700</pubDate>
      <author>DX</author>
      <enclosure url="https://media.transistor.fm/c1336ddf/9510e912.mp3" length="131404125" type="audio/mpeg"/>
      <itunes:author>DX</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/qfJGHRxyGFpemyYugBhXiDjuT6Dkd0N9JPsbtY3AsuY/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lcGlz/b2RlLzE3NjIyODkv/MTcwOTA3NDg1OC1h/cnR3b3JrLmpwZw.jpg"/>
      <itunes:duration>3283</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode we’re joined by Adam Rogal, who leads Developer Productivity and Platform at DoorDash. Adam describes DoorDash’s journey with their internal developer portal and gives advice for other teams looking to follow a similar path. He also explains how his team delivered value quickly and drove adoption for their developer platform.</p><p><strong>Discussion points:</strong></p><ul><li>(1:47) Why DoorDash explored implementing a developer portal</li><li>(6:59) The initial vision for the developer portal</li><li>(12:19) Funding ongoing development</li><li>(16:01) Deciding what to include in the portal</li><li>(19:15) Coming up with a name for the portal</li><li>(20:01) Advice for interested beginners</li><li>(23:55) Putting together a business case</li><li>(32:32) Getting adoption for the portal</li><li>(37:27) Driving initial awareness</li><li>(41:29) Getting feedback from developers</li><li>(48:33) What Adam would have done differently</li></ul><p><strong>Mentions and links:</strong></p><ul><li>Adam Rogal on <a href="https://www.linkedin.com/in/adamrogal/">LinkedIn</a></li><li><a href="https://developer.doordash.com/en-US/docs/drive/tutorials/get_started/">Get started (API)</a></li><li><a href="https://developer.doordash.com/en-US/blog/">New testing and monitoring tools</a></li></ul>]]>
      </itunes:summary>
      <itunes:keywords>Software Engineering, Developer Experience, Developer Productivity, Engineering Leadership</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.linkedin.com/in/abinoda/" img="https://img.transistorcdn.com/3JLLtbOaHGiInF_ixHI50J6hrbPvsmkptOyXbjqu2gw/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vMDNiMDZmMjkt/ZmQxMy00NmNlLWE5/YzEtOTAzODQ4YTE2/YjgzLzE2NjU0MzA2/ODktaW1hZ2UuanBn.jpg">Abi Noda</podcast:person>
      <podcast:chapters url="https://share.transistor.fm/s/c1336ddf/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>A deep-dive on the Thoughtworks Tech Radar | Rebecca Parsons, Camilla Crispim, Erik Dörnenburg (Thoughtworks)</title>
      <itunes:episode>56</itunes:episode>
      <podcast:episode>56</podcast:episode>
      <itunes:title>A deep-dive on the Thoughtworks Tech Radar | Rebecca Parsons, Camilla Crispim, Erik Dörnenburg (Thoughtworks)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">c8be2e91-9cb1-4353-b006-32adb1dcdec2</guid>
      <link>https://share.transistor.fm/s/ba1116ea</link>
      <description>
        <![CDATA[<p>In this episode, Abi has a fascinating conversation with Thoughtworks CTO Rebecca Parsons, Camilla Crispim, and Erik Dörnenburg about the <a href="https://www.thoughtworks.com/en-us/radar">Thoughtworks Tech Radar</a>. The trio begins with an overview of the Tech Radar and its history before delving into the intricate process of creating each report, which involves multiple teams and stakeholders. The conversation concludes with the evolution of the Tech Radar’s design and process and potential future changes. This episode offers Tech Radar fans an exclusive behind-the-scenes look at its history and production.</p><p><strong>Discussion points:</strong></p><ul><li>(1:20) An introduction to the Tech Radar</li><li>(6:06) Common terms used in this episode</li><li>(6:27) The origin of the Tech Radar</li><li>(8:50) Problems the Tech Radar was aiming to solve</li><li>(12:23) The impact on internal decision-making: a tool for driving change</li><li>(14:30) The team’s philosophy behind the Tech Radar</li><li>(18:33) What sets the Tech Radar apart</li><li>(21:11) Why maintaining independence is crucial for their audience</li><li>(25:08) How the Tech Radar reports are published</li><li>(29:36) A look into Thoughtworks live meeting sessions</li><li>(34:51) The Tech Radar’s Git repository</li><li>(42:20) Recent changes and upcoming shifts</li></ul><p><strong>Mentions and links:</strong></p><ul><li><a href="https://www.thoughtworks.com/en-us/radar">Thoughtworks Tech Radar</a></li><li><a href="https://www.linkedin.com/in/dr-rebecca-parsons">Rebecca Parsons on LinkedIn</a></li><li><a href="https://br.linkedin.com/in/camillafalconi">Camilla Crispim on LinkedIn</a></li><li><a href="https://de.linkedin.com/in/edoernenburg">Erik Dörnenburg on LinkedIn</a></li><li><a href="https://github.com/thoughtworks">Thoughtworks Git repository</a></li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode, Abi has a fascinating conversation with Thoughtworks CTO Rebecca Parsons, Camilla Crispim, and Erik Dörnenburg about the <a href="https://www.thoughtworks.com/en-us/radar">Thoughtworks Tech Radar</a>. The trio begins with an overview of the Tech Radar and its history before delving into the intricate process behind each report, which involves multiple teams and stakeholders. The conversation concludes with the evolution of the Tech Radar's design and process, along with potential future changes. This episode offers Tech Radar fans an exclusive behind-the-scenes look at its history and production.</p><p><br></p><p>Discussion points:</p><ul><li>(1:20) An introduction to the Tech Radar</li><li>(6:06) Common terms used in this episode</li><li>(6:27) The origin of the Tech Radar</li><li>(8:50) Problems the Tech Radar was aiming to solve</li><li>(12:23) The impact on internal decision-making: a tool for driving change</li><li>(14:30) The team's philosophy behind the Tech Radar</li><li>(18:33) What sets the Tech Radar apart</li><li>(21:11) Why maintaining independence is crucial for their audience</li><li>(25:08) How the Tech Radar reports are published</li><li>(29:36) A look into Thoughtworks' live meeting sessions</li><li>(34:51) The Tech Radar's Git repository</li><li>(42:20) Recent changes and upcoming shifts</li></ul><p><br></p><p>Mentions and links:</p><ul><li><a href="https://www.thoughtworks.com/en-us/radar">Thoughtworks Tech Radar</a></li><li><a href="https://www.linkedin.com/in/dr-rebecca-parsons">Rebecca Parsons on LinkedIn</a></li><li><a href="https://br.linkedin.com/in/camillafalconi">Camilla Crispim on LinkedIn</a></li><li><a href="https://de.linkedin.com/in/edoernenburg">Erik Dörnenburg on LinkedIn</a></li><li><a href="https://github.com/thoughtworks">Thoughtworks Git repository</a></li></ul>]]>
      </content:encoded>
      <pubDate>Tue, 13 Feb 2024 14:03:04 -0700</pubDate>
      <author>DX</author>
      <enclosure url="https://media.transistor.fm/ba1116ea/358a8159.mp3" length="109518173" type="audio/mpeg"/>
      <itunes:author>DX</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/Ti8ePlxmFN3_Y-crA65mVAlGPW3vt4tYlMqBN_hZVTU/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lcGlz/b2RlLzE3MzA5Nzkv/MTcwNzg1ODE4NC1h/cnR3b3JrLmpwZw.jpg"/>
      <itunes:duration>2736</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode, Abi has a fascinating conversation with Thoughtworks CTO Rebecca Parsons, Camilla Crispim, and Erik Dörnenburg about the <a href="https://www.thoughtworks.com/en-us/radar">Thoughtworks Tech Radar</a>. The trio begins with an overview of the Tech Radar and its history before delving into the intricate process behind each report, which involves multiple teams and stakeholders. The conversation concludes with the evolution of the Tech Radar's design and process, along with potential future changes. This episode offers Tech Radar fans an exclusive behind-the-scenes look at its history and production.</p><p><br></p><p>Discussion points:</p><ul><li>(1:20) An introduction to the Tech Radar</li><li>(6:06) Common terms used in this episode</li><li>(6:27) The origin of the Tech Radar</li><li>(8:50) Problems the Tech Radar was aiming to solve</li><li>(12:23) The impact on internal decision-making: a tool for driving change</li><li>(14:30) The team's philosophy behind the Tech Radar</li><li>(18:33) What sets the Tech Radar apart</li><li>(21:11) Why maintaining independence is crucial for their audience</li><li>(25:08) How the Tech Radar reports are published</li><li>(29:36) A look into Thoughtworks' live meeting sessions</li><li>(34:51) The Tech Radar's Git repository</li><li>(42:20) Recent changes and upcoming shifts</li></ul><p><br></p><p>Mentions and links:</p><ul><li><a href="https://www.thoughtworks.com/en-us/radar">Thoughtworks Tech Radar</a></li><li><a href="https://www.linkedin.com/in/dr-rebecca-parsons">Rebecca Parsons on LinkedIn</a></li><li><a href="https://br.linkedin.com/in/camillafalconi">Camilla Crispim on LinkedIn</a></li><li><a href="https://de.linkedin.com/in/edoernenburg">Erik Dörnenburg on LinkedIn</a></li><li><a href="https://github.com/thoughtworks">Thoughtworks Git repository</a></li></ul>]]>
      </itunes:summary>
      <itunes:keywords>Software Engineering, Developer Experience, Developer Productivity, Engineering Leadership</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.linkedin.com/in/abinoda/" img="https://img.transistorcdn.com/3JLLtbOaHGiInF_ixHI50J6hrbPvsmkptOyXbjqu2gw/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vMDNiMDZmMjkt/ZmQxMy00NmNlLWE5/YzEtOTAzODQ4YTE2/YjgzLzE2NjU0MzA2/ODktaW1hZ2UuanBn.jpg">Abi Noda</podcast:person>
      <podcast:chapters url="https://share.transistor.fm/s/ba1116ea/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Measuring and rolling out AI coding assistants | Eirini Kalliamvakou (GitHub)</title>
      <itunes:episode>55</itunes:episode>
      <podcast:episode>55</podcast:episode>
      <itunes:title>Measuring and rolling out AI coding assistants | Eirini Kalliamvakou (GitHub)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">e5576e8c-6c72-484b-ac6a-9373dd8566e4</guid>
      <link>https://share.transistor.fm/s/9b9b1591</link>
      <description>
        <![CDATA[<p>This week's guest is Eirini Kalliamvakou, a staff researcher at GitHub focused on AI and developer experience. Eirini sits at the forefront of research into GitHub Copilot. Abi and Eirini discuss recent research on how AI coding assistants impact developer productivity. They talk about how leaders should build business cases for AI tools. They also preview what's to come with AI tools and the implications for how developer productivity is measured.</p><p><br>Discussion points:</p><ul><li>(1:49) Overview of GitHub’s research on AI</li><li>(2:59) The research study on Copilot</li><li>(4:48) Defining and measuring productivity for this study</li><li>(7:44) Exact measures and factors studied</li><li>(8:16) Key findings from the study</li><li>(9:45) How the study was conducted</li><li>(11:17) Most surprising findings for the researchers</li><li>(14:01) The motivation for conducting a follow-up study</li><li>(15:34) How the follow-up study was conducted</li><li>(18:42) Findings from the follow-up study</li><li>(21:13) Is AI just hype?</li><li>(26:34) How to begin advocating for AI tools</li><li>(34:44) How to translate data into dollars</li><li>(37:06) How to roll out AI tools to an organization</li><li>(38:47) The impact of AI on developer experience</li><li>(43:24) Implications of AI on how we measure productivity</li></ul><p><br></p><p>Mentions and links:</p><ul><li><a href="https://www.linkedin.com/in/eirini-kalliamvakou-1016865/">Eirini Kalliamvakou on LinkedIn</a></li><li><a href="https://github.blog/2022-09-07-research-quantifying-github-copilots-impact-on-developer-productivity-and-happiness/">Research on the impact of Copilot</a></li><li><a href="https://www.amazon.com/Crossing-Chasm-3rd-Disruptive-Mainstream/dp/0062292986">Crossing the Chasm by Geoffrey Moore</a></li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This week's guest is Eirini Kalliamvakou, a staff researcher at GitHub focused on AI and developer experience. Eirini sits at the forefront of research into GitHub Copilot. Abi and Eirini discuss recent research on how AI coding assistants impact developer productivity. They talk about how leaders should build business cases for AI tools. They also preview what's to come with AI tools and the implications for how developer productivity is measured.</p><p><br>Discussion points:</p><ul><li>(1:49) Overview of GitHub’s research on AI</li><li>(2:59) The research study on Copilot</li><li>(4:48) Defining and measuring productivity for this study</li><li>(7:44) Exact measures and factors studied</li><li>(8:16) Key findings from the study</li><li>(9:45) How the study was conducted</li><li>(11:17) Most surprising findings for the researchers</li><li>(14:01) The motivation for conducting a follow-up study</li><li>(15:34) How the follow-up study was conducted</li><li>(18:42) Findings from the follow-up study</li><li>(21:13) Is AI just hype?</li><li>(26:34) How to begin advocating for AI tools</li><li>(34:44) How to translate data into dollars</li><li>(37:06) How to roll out AI tools to an organization</li><li>(38:47) The impact of AI on developer experience</li><li>(43:24) Implications of AI on how we measure productivity</li></ul><p><br></p><p>Mentions and links:</p><ul><li><a href="https://www.linkedin.com/in/eirini-kalliamvakou-1016865/">Eirini Kalliamvakou on LinkedIn</a></li><li><a href="https://github.blog/2022-09-07-research-quantifying-github-copilots-impact-on-developer-productivity-and-happiness/">Research on the impact of Copilot</a></li><li><a href="https://www.amazon.com/Crossing-Chasm-3rd-Disruptive-Mainstream/dp/0062292986">Crossing the Chasm by Geoffrey Moore</a></li></ul>]]>
      </content:encoded>
      <pubDate>Wed, 31 Jan 2024 01:00:00 -0700</pubDate>
      <author>DX</author>
      <enclosure url="https://media.transistor.fm/9b9b1591/709fde20.mp3" length="114599880" type="audio/mpeg"/>
      <itunes:author>DX</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/YjAFl2hLQIUfXPZOrT3X-Ingnn_0aQoRo3haf4sk008/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lcGlz/b2RlLzE3MDk1MDEv/MTcwNjYzNzgyMC1h/cnR3b3JrLmpwZw.jpg"/>
      <itunes:duration>2863</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This week's guest is Eirini Kalliamvakou, a staff researcher at GitHub focused on AI and developer experience. Eirini sits at the forefront of research into GitHub Copilot. Abi and Eirini discuss recent research on how AI coding assistants impact developer productivity. They talk about how leaders should build business cases for AI tools. They also preview what's to come with AI tools and the implications for how developer productivity is measured.</p><p><br>Discussion points:</p><ul><li>(1:49) Overview of GitHub’s research on AI</li><li>(2:59) The research study on Copilot</li><li>(4:48) Defining and measuring productivity for this study</li><li>(7:44) Exact measures and factors studied</li><li>(8:16) Key findings from the study</li><li>(9:45) How the study was conducted</li><li>(11:17) Most surprising findings for the researchers</li><li>(14:01) The motivation for conducting a follow-up study</li><li>(15:34) How the follow-up study was conducted</li><li>(18:42) Findings from the follow-up study</li><li>(21:13) Is AI just hype?</li><li>(26:34) How to begin advocating for AI tools</li><li>(34:44) How to translate data into dollars</li><li>(37:06) How to roll out AI tools to an organization</li><li>(38:47) The impact of AI on developer experience</li><li>(43:24) Implications of AI on how we measure productivity</li></ul><p><br></p><p>Mentions and links:</p><ul><li><a href="https://www.linkedin.com/in/eirini-kalliamvakou-1016865/">Eirini Kalliamvakou on LinkedIn</a></li><li><a href="https://github.blog/2022-09-07-research-quantifying-github-copilots-impact-on-developer-productivity-and-happiness/">Research on the impact of Copilot</a></li><li><a href="https://www.amazon.com/Crossing-Chasm-3rd-Disruptive-Mainstream/dp/0062292986">Crossing the Chasm by Geoffrey Moore</a></li></ul>]]>
      </itunes:summary>
      <itunes:keywords>Software Engineering, Developer Experience, Developer Productivity, Engineering Leadership</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.linkedin.com/in/abinoda/" img="https://img.transistorcdn.com/3JLLtbOaHGiInF_ixHI50J6hrbPvsmkptOyXbjqu2gw/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vMDNiMDZmMjkt/ZmQxMy00NmNlLWE5/YzEtOTAzODQ4YTE2/YjgzLzE2NjU0MzA2/ODktaW1hZ2UuanBn.jpg">Abi Noda</podcast:person>
      <podcast:chapters url="https://share.transistor.fm/s/9b9b1591/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Measuring developer productivity at Airbnb | Christopher Sanson (Airbnb)</title>
      <itunes:episode>54</itunes:episode>
      <podcast:episode>54</podcast:episode>
      <itunes:title>Measuring developer productivity at Airbnb | Christopher Sanson (Airbnb)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">156cba5e-70cc-4be3-846a-079b52155881</guid>
      <link>https://share.transistor.fm/s/dcdcea0a</link>
      <description>
        <![CDATA[<p>Christopher Sanson is a product manager at Airbnb who is dedicated to enhancing developer productivity and tooling. Today, we learn more about Airbnb's developer productivity team and how various teams use metrics, both within and outside the organization. From there, we dive even deeper into their measurement journey, highlighting their implementation of DORA metrics and the challenges they overcame throughout the process.</p><p><br></p><p>Discussion points:</p><ul><li>(2:43) Who is the developer productivity customer</li><li>(4:49) The evolution of developer productivity at Airbnb</li><li>(9:26) Approach before DORA metrics</li><li>(14:29) Getting buy-in for DORA metrics</li><li>(17:49) Planning how to deliver new metrics to the organization</li><li>(21:12) How Airbnb calculates deployment frequency</li><li>(23:29) Implementing a proof of concept</li><li>(27:20) Statistical measurement strategies and tactics</li><li>(31:11) Operationalizing developer productivity metrics</li><li>(34:26) How Airbnb reviews data</li><li>(35:41) How Airbnb uses DORA metrics</li></ul><p><br></p><p>Mentions and links:</p><ul><li>Christopher Sanson on <a href="https://www.linkedin.com/in/christophersanson/">LinkedIn</a></li><li><a href="https://dpesummit.com/sessions/christopher-sanson/embracing-dora-metrics-the-airbnb-journey-towards-enhanced-developer-productivity/">Christopher’s talk at DPE Summit</a></li><li><a href="https://getdx.com/guide/how-top-companies-measure-productivity/">How Top Companies Measure Developer Productivity</a></li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Christopher Sanson is a product manager at Airbnb who is dedicated to enhancing developer productivity and tooling. Today, we learn more about Airbnb's developer productivity team and how various teams use metrics, both within and outside the organization. From there, we dive even deeper into their measurement journey, highlighting their implementation of DORA metrics and the challenges they overcame throughout the process.</p><p><br></p><p>Discussion points:</p><ul><li>(2:43) Who is the developer productivity customer</li><li>(4:49) The evolution of developer productivity at Airbnb</li><li>(9:26) Approach before DORA metrics</li><li>(14:29) Getting buy-in for DORA metrics</li><li>(17:49) Planning how to deliver new metrics to the organization</li><li>(21:12) How Airbnb calculates deployment frequency</li><li>(23:29) Implementing a proof of concept</li><li>(27:20) Statistical measurement strategies and tactics</li><li>(31:11) Operationalizing developer productivity metrics</li><li>(34:26) How Airbnb reviews data</li><li>(35:41) How Airbnb uses DORA metrics</li></ul><p><br></p><p>Mentions and links:</p><ul><li>Christopher Sanson on <a href="https://www.linkedin.com/in/christophersanson/">LinkedIn</a></li><li><a href="https://dpesummit.com/sessions/christopher-sanson/embracing-dora-metrics-the-airbnb-journey-towards-enhanced-developer-productivity/">Christopher’s talk at DPE Summit</a></li><li><a href="https://getdx.com/guide/how-top-companies-measure-productivity/">How Top Companies Measure Developer Productivity</a></li></ul>]]>
      </content:encoded>
      <pubDate>Wed, 03 Jan 2024 01:00:00 -0700</pubDate>
      <author>DX</author>
      <enclosure url="https://media.transistor.fm/dcdcea0a/de3e354a.mp3" length="98336430" type="audio/mpeg"/>
      <itunes:author>DX</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/LPcQ7gd88FzzmplD-TXNez6GnKusWpz0GdIke_vlx4o/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lcGlz/b2RlLzE2Njk1OTgv/MTcwNDIxMDY1My1h/cnR3b3JrLmpwZw.jpg"/>
      <itunes:duration>2457</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Christopher Sanson is a product manager at Airbnb who is dedicated to enhancing developer productivity and tooling. Today, we learn more about Airbnb's developer productivity team and how various teams use metrics, both within and outside the organization. From there, we dive even deeper into their measurement journey, highlighting their implementation of DORA metrics and the challenges they overcame throughout the process.</p><p><br></p><p>Discussion points:</p><ul><li>(2:43) Who is the developer productivity customer</li><li>(4:49) The evolution of developer productivity at Airbnb</li><li>(9:26) Approach before DORA metrics</li><li>(14:29) Getting buy-in for DORA metrics</li><li>(17:49) Planning how to deliver new metrics to the organization</li><li>(21:12) How Airbnb calculates deployment frequency</li><li>(23:29) Implementing a proof of concept</li><li>(27:20) Statistical measurement strategies and tactics</li><li>(31:11) Operationalizing developer productivity metrics</li><li>(34:26) How Airbnb reviews data</li><li>(35:41) How Airbnb uses DORA metrics</li></ul><p><br></p><p>Mentions and links:</p><ul><li>Christopher Sanson on <a href="https://www.linkedin.com/in/christophersanson/">LinkedIn</a></li><li><a href="https://dpesummit.com/sessions/christopher-sanson/embracing-dora-metrics-the-airbnb-journey-towards-enhanced-developer-productivity/">Christopher’s talk at DPE Summit</a></li><li><a href="https://getdx.com/guide/how-top-companies-measure-productivity/">How Top Companies Measure Developer Productivity</a></li></ul>]]>
      </itunes:summary>
      <itunes:keywords>Software Engineering, Developer Experience, Developer Productivity, Engineering Leadership</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.linkedin.com/in/abinoda/" img="https://img.transistorcdn.com/3JLLtbOaHGiInF_ixHI50J6hrbPvsmkptOyXbjqu2gw/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vMDNiMDZmMjkt/ZmQxMy00NmNlLWE5/YzEtOTAzODQ4YTE2/YjgzLzE2NjU0MzA2/ODktaW1hZ2UuanBn.jpg">Abi Noda</podcast:person>
      <podcast:chapters url="https://share.transistor.fm/s/dcdcea0a/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Leading a DevEx team through transformation | Ana Petkovska (Nexthink)</title>
      <itunes:episode>53</itunes:episode>
      <podcast:episode>53</podcast:episode>
      <itunes:title>Leading a DevEx team through transformation | Ana Petkovska (Nexthink)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">4ac27660-292b-485a-a046-94fd8263c703</guid>
      <link>https://share.transistor.fm/s/e2376085</link>
      <description>
        <![CDATA[<p>In this episode, Abi speaks with Ana Petkovska, who currently leads the developer experience team at Nexthink. Ana takes us through her journey of leading a team that underwent multiple transformations, evolving from DevOps to EngProd and eventually to DevEx. Ana elaborates on her team's challenges and the reasons behind each shift in focus. She also shares how she discovered EngProd and used data from companies like Google to convince her company to invest in it. Finally, Ana explains how DevEx came into the picture and changed how her team approaches and measures their work.</p><p>Discussion points:</p><ul><li>(00:28) Creating and leading a DevOps team</li><li>(05:04) Shifting from DevOps to EngProd</li><li>(07:28) Inspiration from Google</li><li>(10:05) Building the case for EngProd</li><li>(13:42) Ratio of engineers to DevEx engineers</li><li>(15:10) Team mission and charter</li><li>(16:53) Learning about DevEx</li><li>(20:05) The difference between EngProd and DevEx</li><li>(22:32) Nexthink's focus today</li></ul><p>Mentions and links:</p><ul><li>Ana Petkovska on <a href="https://www.linkedin.com/in/apetkovska/">LinkedIn</a></li><li><a href="https://www.youtube.com/watch?v=ETtRxDEYeF4&amp;t=1519s">Engineering Productivity @Google (Michael Bachman)</a></li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode, Abi speaks with Ana Petkovska, who currently leads the developer experience team at Nexthink. Ana takes us through her journey of leading a team that underwent multiple transformations, evolving from DevOps to EngProd and eventually to DevEx. Ana elaborates on her team's challenges and the reasons behind each shift in focus. She also shares how she discovered EngProd and used data from companies like Google to convince her company to invest in it. Finally, Ana explains how DevEx came into the picture and changed how her team approaches and measures their work.</p><p>Discussion points:</p><ul><li>(00:28) Creating and leading a DevOps team</li><li>(05:04) Shifting from DevOps to EngProd</li><li>(07:28) Inspiration from Google</li><li>(10:05) Building the case for EngProd</li><li>(13:42) Ratio of engineers to DevEx engineers</li><li>(15:10) Team mission and charter</li><li>(16:53) Learning about DevEx</li><li>(20:05) The difference between EngProd and DevEx</li><li>(22:32) Nexthink's focus today</li></ul><p>Mentions and links:</p><ul><li>Ana Petkovska on <a href="https://www.linkedin.com/in/apetkovska/">LinkedIn</a></li><li><a href="https://www.youtube.com/watch?v=ETtRxDEYeF4&amp;t=1519s">Engineering Productivity @Google (Michael Bachman)</a></li></ul>]]>
      </content:encoded>
      <pubDate>Wed, 13 Dec 2023 01:00:00 -0700</pubDate>
      <author>DX</author>
      <enclosure url="https://media.transistor.fm/e2376085/0ecc80f7.mp3" length="63859219" type="audio/mpeg"/>
      <itunes:author>DX</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/GsYns6JVC9g1Hz88PlYMZNnkxARTnamx0lmyuqYrdKs/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lcGlz/b2RlLzE2MzU4MzQv/MTcwMTk4ODY1NS1h/cnR3b3JrLmpwZw.jpg"/>
      <itunes:duration>1595</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode, Abi speaks with Ana Petkovska, who currently leads the developer experience team at Nexthink. Ana takes us through her journey of leading a team that underwent multiple transformations, evolving from DevOps to EngProd and eventually to DevEx. Ana elaborates on her team's challenges and the reasons behind each shift in focus. She also shares how she discovered EngProd and used data from companies like Google to convince her company to invest in it. Finally, Ana explains how DevEx came into the picture and changed how her team approaches and measures their work.</p><p>Discussion points:</p><ul><li>(00:28) Creating and leading a DevOps team</li><li>(05:04) Shifting from DevOps to EngProd</li><li>(07:28) Inspiration from Google</li><li>(10:05) Building the case for EngProd</li><li>(13:42) Ratio of engineers to DevEx engineers</li><li>(15:10) Team mission and charter</li><li>(16:53) Learning about DevEx</li><li>(20:05) The difference between EngProd and DevEx</li><li>(22:32) Nexthink's focus today</li></ul><p>Mentions and links:</p><ul><li>Ana Petkovska on <a href="https://www.linkedin.com/in/apetkovska/">LinkedIn</a></li><li><a href="https://www.youtube.com/watch?v=ETtRxDEYeF4&amp;t=1519s">Engineering Productivity @Google (Michael Bachman)</a></li></ul>]]>
      </itunes:summary>
      <itunes:keywords>Software Engineering, Developer Experience, Developer Productivity, Engineering Leadership</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.linkedin.com/in/abinoda/" img="https://img.transistorcdn.com/3JLLtbOaHGiInF_ixHI50J6hrbPvsmkptOyXbjqu2gw/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vMDNiMDZmMjkt/ZmQxMy00NmNlLWE5/YzEtOTAzODQ4YTE2/YjgzLzE2NjU0MzA2/ODktaW1hZ2UuanBn.jpg">Abi Noda</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/e2376085/transcript.txt" type="text/plain"/>
      <podcast:chapters url="https://share.transistor.fm/s/e2376085/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>How LinkedIn defines and tracks key developer productivity metrics | Grant Jenks (LinkedIn)</title>
      <itunes:episode>52</itunes:episode>
      <podcast:episode>52</podcast:episode>
      <itunes:title>How LinkedIn defines and tracks key developer productivity metrics | Grant Jenks (LinkedIn)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">7e25b5cd-9a14-48f9-8e87-1442f8e3c641</guid>
      <link>https://share.transistor.fm/s/68fb0c6c</link>
      <description>
        <![CDATA[<p>In this episode, Abi chats with Grant Jenks, a Senior Staff Software Engineer on the Engineering Insights team at LinkedIn. They dive into LinkedIn's developer insights platform, iHub, and its backstory. The conversation covers qualitative versus quantitative metrics, including concerns about these terms and how the two correlate. The episode wraps up with technical topics like winsorized means, thoughts on composite scores, and ways AI can benefit developer productivity teams.</p><p>Discussion points:</p><ul><li>(1:10) Insights in the productivity space</li><li>(7:13) LinkedIn's metrics platform, iHub</li><li>(12:52) Making metrics actionable</li><li>(15:35) Choosing the right and wrong metrics</li><li>(19:39) The difficulty of answering simple questions</li><li>(26:23) Top-down vs. bottom-up approaches to metrics</li><li>(32:12) Winsorized means and selecting measurements</li><li>(39:25) Using composite metrics</li><li>(46:57) Using AI in developer productivity</li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode, Abi chats with Grant Jenks, a Senior Staff Software Engineer on the Engineering Insights team at LinkedIn. They dive into LinkedIn's developer insights platform, iHub, and its backstory. The conversation covers qualitative versus quantitative metrics, including concerns about these terms and how the two correlate. The episode wraps up with technical topics like winsorized means, thoughts on composite scores, and ways AI can benefit developer productivity teams.</p><p>Discussion points:</p><ul><li>(1:10) Insights in the productivity space</li><li>(7:13) LinkedIn's metrics platform, iHub</li><li>(12:52) Making metrics actionable</li><li>(15:35) Choosing the right and wrong metrics</li><li>(19:39) The difficulty of answering simple questions</li><li>(26:23) Top-down vs. bottom-up approaches to metrics</li><li>(32:12) Winsorized means and selecting measurements</li><li>(39:25) Using composite metrics</li><li>(46:57) Using AI in developer productivity</li></ul>]]>
      </content:encoded>
      <pubDate>Wed, 06 Dec 2023 01:00:00 -0700</pubDate>
      <author>DX</author>
      <enclosure url="https://media.transistor.fm/68fb0c6c/c642cc07.mp3" length="125809045" type="audio/mpeg"/>
      <itunes:author>DX</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/AjkWpes19xMR3dUYSTSIZm1j2pOCafRg_Fe42u4Y-Es/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lcGlz/b2RlLzE2MzEyNTAv/MTcwMTc4NzM3MC1h/cnR3b3JrLmpwZw.jpg"/>
      <itunes:duration>3143</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode, Abi chats with Grant Jenks, a Senior Staff Software Engineer on the Engineering Insights team at LinkedIn. They dive into LinkedIn's developer insights platform, iHub, and its backstory. The conversation covers qualitative versus quantitative metrics, including concerns about these terms and how the two correlate. The episode wraps up with technical topics like winsorized means, thoughts on composite scores, and ways AI can benefit developer productivity teams.</p><p>Discussion points:</p><ul><li>(1:10) Insights in the productivity space</li><li>(7:13) LinkedIn's metrics platform, iHub</li><li>(12:52) Making metrics actionable</li><li>(15:35) Choosing the right and wrong metrics</li><li>(19:39) The difficulty of answering simple questions</li><li>(26:23) Top-down vs. bottom-up approaches to metrics</li><li>(32:12) Winsorized means and selecting measurements</li><li>(39:25) Using composite metrics</li><li>(46:57) Using AI in developer productivity</li></ul>]]>
      </itunes:summary>
      <itunes:keywords>Software Engineering, Developer Experience, Developer Productivity, Engineering Leadership</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.linkedin.com/in/abinoda/" img="https://img.transistorcdn.com/3JLLtbOaHGiInF_ixHI50J6hrbPvsmkptOyXbjqu2gw/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vMDNiMDZmMjkt/ZmQxMy00NmNlLWE5/YzEtOTAzODQ4YTE2/YjgzLzE2NjU0MzA2/ODktaW1hZ2UuanBn.jpg">Abi Noda</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/68fb0c6c/transcript.txt" type="text/plain"/>
      <podcast:chapters url="https://share.transistor.fm/s/68fb0c6c/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Building an internal developer platform at CVS Health | Jim Beyers (CVS Health)</title>
      <itunes:episode>51</itunes:episode>
      <podcast:episode>51</podcast:episode>
      <itunes:title>Building an internal developer platform at CVS Health | Jim Beyers (CVS Health)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">73c21795-24a4-48c5-9973-2280acaf53d6</guid>
      <link>https://share.transistor.fm/s/9693c6cd</link>
      <description>
        <![CDATA[<p>This week's episode is with Jim Beyers, VP of Engineering Enablement at CVS Health, who joined CVS a year ago to lead an effort to build an internal developer platform. Abi and Jim discuss what brought him to the job and how developer experience fits into the broader transformation goals of CVS. Additionally, this episode covers building the team, defining a strategy, and how he's thinking about winning hearts and minds across his organization.</p><p><br></p><p>Discussion points:</p><ul><li>(1:15) How Jim was brought into CVS</li><li>(2:39) How DevEx aligns with CVS’s transformation initiatives</li><li>(6:06) Jim’s vision for developer experience</li><li>(8:26) Building a DevEx team and working with product managers</li><li>(15:06) Defining and communicating a DevEx strategy</li><li>(19:37) Assessing Backstage and developing a platform</li><li>(24:40) Working with developers and leaders</li><li>(27:55) Working alongside colleagues tackling similar problems</li><li>(29:26) Reporting on progress</li></ul><p><br></p><p>Mentions and links:</p><ul><li>Jim Beyers on <a href="https://www.linkedin.com/in/jamesbeyers/">LinkedIn</a></li><li>Jim’s talk on the <a href="https://www.youtube.com/watch?v=cnHfK4MZA2Y">Target Application Platform</a></li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This week's episode is with Jim Beyers, VP of Engineering Enablement at CVS Health, who joined CVS a year ago to lead an effort to build an internal developer platform. Abi and Jim discuss what brought him to the job and how developer experience fits into the broader transformation goals of CVS. Additionally, this episode covers building the team, defining a strategy, and how he's thinking about winning hearts and minds across his organization.</p><p><br></p><p>Discussion points:</p><ul><li>(1:15) How Jim was brought into CVS</li><li>(2:39) How DevEx aligns with CVS’s transformation initiatives</li><li>(6:06) Jim’s vision for developer experience</li><li>(8:26) Building a DevEx team and working with product managers</li><li>(15:06) Defining and communicating a DevEx strategy</li><li>(19:37) Assessing Backstage and developing a platform</li><li>(24:40) Working with developers and leaders</li><li>(27:55) Working alongside colleagues tackling similar problems</li><li>(29:26) Reporting on progress</li></ul><p><br></p><p>Mentions and links:</p><ul><li>Jim Beyers on <a href="https://www.linkedin.com/in/jamesbeyers/">LinkedIn</a></li><li>Jim’s talk on the <a href="https://www.youtube.com/watch?v=cnHfK4MZA2Y">Target Application Platform</a></li></ul>]]>
      </content:encoded>
      <pubDate>Wed, 22 Nov 2023 01:00:00 -0700</pubDate>
      <author>DX</author>
      <enclosure url="https://media.transistor.fm/9693c6cd/97dec74a.mp3" length="74798852" type="audio/mpeg"/>
      <itunes:author>DX</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/2GxoqbzppGHUAXrakozBEq3gYmJenr2_N-t_2jQvm2w/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lcGlz/b2RlLzE2MDc4NTEv/MTcwMDU5MjY4My1h/cnR3b3JrLmpwZw.jpg"/>
      <itunes:duration>1868</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This week's episode is with Jim Beyers, VP of Engineering Enablement at CVS Health. Jim joined CVS a year ago to lead the effort to build an internal developer platform. Abi and Jim discuss what brought Jim to the job and how developer experience fits into CVS's broader transformation goals. They also cover building the team, defining a strategy, and how Jim is thinking about winning hearts and minds across his organization.</p><p><br></p><p>Discussion points:</p><ul><li>(1:15) How Jim was brought into CVS</li><li>(2:39) How DevEx aligns with CVS’s transformation initiatives</li><li>(6:06) Jim’s vision for developer experience</li><li>(8:26) Building a DevEx team and working with product managers</li><li>(15:06) Defining and communicating a DevEx strategy</li><li>(19:37) Assessing Backstage and developing a platform</li><li>(24:40) Working with developers and leaders</li><li>(27:55) Working alongside colleagues tackling similar problems</li><li>(29:26) Reporting on progress</li></ul><p><br></p><p>Mentions and links:</p><ul><li>Jim Beyers on <a href="https://www.linkedin.com/in/jamesbeyers/">LinkedIn</a></li><li>Jim’s talk on the <a href="https://www.youtube.com/watch?v=cnHfK4MZA2Y">Target Application Platform</a></li></ul>]]>
      </itunes:summary>
      <itunes:keywords>Software Engineering, Developer Experience, Developer Productivity, Engineering Leadership</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.linkedin.com/in/abinoda/" img="https://img.transistorcdn.com/3JLLtbOaHGiInF_ixHI50J6hrbPvsmkptOyXbjqu2gw/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vMDNiMDZmMjkt/ZmQxMy00NmNlLWE5/YzEtOTAzODQ4YTE2/YjgzLzE2NjU0MzA2/ODktaW1hZ2UuanBn.jpg">Abi Noda</podcast:person>
      <podcast:chapters url="https://share.transistor.fm/s/9693c6cd/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>The Platform PM role at Spotify | Nils Loodin (Spotify)</title>
      <itunes:episode>50</itunes:episode>
      <podcast:episode>50</podcast:episode>
      <itunes:title>The Platform PM role at Spotify | Nils Loodin (Spotify)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">cb2f16b0-d9eb-4d12-a0b7-9bef79cb787e</guid>
      <link>https://share.transistor.fm/s/ffa8c191</link>
      <description>
        <![CDATA[<p>This week we spoke with Nils Loodin, Platform Product Manager at Spotify. Nils describes how his role in platform product management works, including unique challenges, approaches, and career considerations. Nils also discusses some of the recent changes within Spotify's platform organization, including shifting teams from tech-centric to journey-centric.</p><p><br></p><p>Discussion points:</p><ul><li>(1:30) How Nils came into his role</li><li>(3:59) How “developer experience” came into the picture at Spotify</li><li>(5:30) How the Platform team is structured</li><li>(8:52) Unique challenges of the Platform PM role</li><li>(12:51) Defining the Platform PM’s focus</li><li>(16:39) Staying close to their customers</li><li>(21:09) Optimal background for someone in this role</li><li>(24:43) Attracting PMs into Platform roles</li><li>(29:40) Why Spotify’s leadership invests in developer experience</li><li>(31:19) How a recent reorg shifted Platform’s focus</li><li>(41:29) Improving onboarding for mobile engineers</li><li>(47:33) Measuring onboarding</li></ul><p>Mentions and links:</p><ul><li>Connect with Nils on <a href="https://www.linkedin.com/in/nilsloodin/">LinkedIn</a></li><li><a href="https://getdx.com/podcast/product-management-platform-teams">The product management discipline in platform teams | Russ Nealis (Plaid)</a></li><li>Spotify’s <a href="https://engineering.atspotify.com/">Engineering blog</a></li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This week we spoke with Nils Loodin, Platform Product Manager at Spotify. Nils describes how his role in platform product management works, including unique challenges, approaches, and career considerations. Nils also discusses some of the recent changes within Spotify's platform organization, including shifting teams from tech-centric to journey-centric.</p><p><br></p><p>Discussion points:</p><ul><li>(1:30) How Nils came into his role</li><li>(3:59) How “developer experience” came into the picture at Spotify</li><li>(5:30) How the Platform team is structured</li><li>(8:52) Unique challenges of the Platform PM role</li><li>(12:51) Defining the Platform PM’s focus</li><li>(16:39) Staying close to their customers</li><li>(21:09) Optimal background for someone in this role</li><li>(24:43) Attracting PMs into Platform roles</li><li>(29:40) Why Spotify’s leadership invests in developer experience</li><li>(31:19) How a recent reorg shifted Platform’s focus</li><li>(41:29) Improving onboarding for mobile engineers</li><li>(47:33) Measuring onboarding</li></ul><p>Mentions and links:</p><ul><li>Connect with Nils on <a href="https://www.linkedin.com/in/nilsloodin/">LinkedIn</a></li><li><a href="https://getdx.com/podcast/product-management-platform-teams">The product management discipline in platform teams | Russ Nealis (Plaid)</a></li><li>Spotify’s <a href="https://engineering.atspotify.com/">Engineering blog</a></li></ul>]]>
      </content:encoded>
      <pubDate>Wed, 15 Nov 2023 13:07:43 -0700</pubDate>
      <author>DX</author>
      <enclosure url="https://media.transistor.fm/ffa8c191/6ebeb217.mp3" length="125260371" type="audio/mpeg"/>
      <itunes:author>DX</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/dDul5pqfSAqu_KZeIp_8gjQu3CzU9xKZpIzMPOrsYKU/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lcGlz/b2RlLzE2MDEzMjkv/MTcwMDA3ODQ2MS1h/cnR3b3JrLmpwZw.jpg"/>
      <itunes:duration>3130</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This week we spoke with Nils Loodin, Platform Product Manager at Spotify. Nils describes how his role in platform product management works, including unique challenges, approaches, and career considerations. Nils also discusses some of the recent changes within Spotify's platform organization, including shifting teams from tech-centric to journey-centric.</p><p><br></p><p>Discussion points:</p><ul><li>(1:30) How Nils came into his role</li><li>(3:59) How “developer experience” came into the picture at Spotify</li><li>(5:30) How the Platform team is structured</li><li>(8:52) Unique challenges of the Platform PM role</li><li>(12:51) Defining the Platform PM’s focus</li><li>(16:39) Staying close to their customers</li><li>(21:09) Optimal background for someone in this role</li><li>(24:43) Attracting PMs into Platform roles</li><li>(29:40) Why Spotify’s leadership invests in developer experience</li><li>(31:19) How a recent reorg shifted Platform’s focus</li><li>(41:29) Improving onboarding for mobile engineers</li><li>(47:33) Measuring onboarding</li></ul><p>Mentions and links:</p><ul><li>Connect with Nils on <a href="https://www.linkedin.com/in/nilsloodin/">LinkedIn</a></li><li><a href="https://getdx.com/podcast/product-management-platform-teams">The product management discipline in platform teams | Russ Nealis (Plaid)</a></li><li>Spotify’s <a href="https://engineering.atspotify.com/">Engineering blog</a></li></ul>]]>
      </itunes:summary>
      <itunes:keywords>Software Engineering, Developer Experience, Developer Productivity, Engineering Leadership</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.linkedin.com/in/abinoda/" img="https://img.transistorcdn.com/3JLLtbOaHGiInF_ixHI50J6hrbPvsmkptOyXbjqu2gw/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vMDNiMDZmMjkt/ZmQxMy00NmNlLWE5/YzEtOTAzODQ4YTE2/YjgzLzE2NjU0MzA2/ODktaW1hZ2UuanBn.jpg">Abi Noda</podcast:person>
      <podcast:chapters url="https://share.transistor.fm/s/ffa8c191/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Evolving platform and enablement at Thomson Reuters | Justin Wright, Matthew Dimich (Thomson Reuters) </title>
      <itunes:episode>49</itunes:episode>
      <podcast:episode>49</podcast:episode>
      <itunes:title>Evolving platform and enablement at Thomson Reuters | Justin Wright, Matthew Dimich (Thomson Reuters) </itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">2221789a-8b7c-47ae-a405-79be9e5773ee</guid>
      <link>https://share.transistor.fm/s/1680c384</link>
      <description>
        <![CDATA[<p>This week we’re joined by Justin Wright and Matthew Dimich, who lead Platform Engineering and Engineering Enablement at Thomson Reuters. Justin and Matt give an inside look at how they’ve evolved their organization’s structure and approach over the past 8 years.</p><p><strong>Discussion points: </strong></p><ul><li>(1:03) Founding the platform team</li><li>(5:49) The current organizational structure</li><li>(9:00) Key initiatives the platform organization is focused on</li><li>(12:55) The enablement function within platform</li><li>(16:44) What drove the enablement function’s growth</li><li>(19:42) The value of having an enablement function</li><li>(24:05) Marketing the enablement team’s work</li><li>(29:47) How enablement interfaces with other platform teams</li><li>(33:22) Managing the work enablement focuses on</li><li>(36:55) The balance of requests vs proactive work</li></ul><p><br></p><p><strong>Mentions and links:</strong></p><ul><li>Connect with <a href="https://www.linkedin.com/in/jwright006/">Justin</a> and <a href="https://www.linkedin.com/in/mattdimich">Matt</a> on LinkedIn</li><li>Manuel Pais’ <a href="https://www.youtube.com/watch?v=b8YHCDMxqfg">Platform as a Product</a> talk</li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This week we’re joined by Justin Wright and Matthew Dimich, who lead Platform Engineering and Engineering Enablement at Thomson Reuters. Justin and Matt give an inside look at how they’ve evolved their organization’s structure and approach over the past 8 years.</p><p><strong>Discussion points: </strong></p><ul><li>(1:03) Founding the platform team</li><li>(5:49) The current organizational structure</li><li>(9:00) Key initiatives the platform organization is focused on</li><li>(12:55) The enablement function within platform</li><li>(16:44) What drove the enablement function’s growth</li><li>(19:42) The value of having an enablement function</li><li>(24:05) Marketing the enablement team’s work</li><li>(29:47) How enablement interfaces with other platform teams</li><li>(33:22) Managing the work enablement focuses on</li><li>(36:55) The balance of requests vs proactive work</li></ul><p><br></p><p><strong>Mentions and links:</strong></p><ul><li>Connect with <a href="https://www.linkedin.com/in/jwright006/">Justin</a> and <a href="https://www.linkedin.com/in/mattdimich">Matt</a> on LinkedIn</li><li>Manuel Pais’ <a href="https://www.youtube.com/watch?v=b8YHCDMxqfg">Platform as a Product</a> talk</li></ul>]]>
      </content:encoded>
      <pubDate>Wed, 08 Nov 2023 01:00:00 -0700</pubDate>
      <author>DX</author>
      <enclosure url="https://media.transistor.fm/1680c384/2c2067ae.mp3" length="39741879" type="audio/mpeg"/>
      <itunes:author>DX</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/W2WTWSM1VcGd2HVm1JSZEPKKcVuXbHu4c62TPdsd90I/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lcGlz/b2RlLzE1ODkwMjYv/MTY5OTM3OTU5NS1h/cnR3b3JrLmpwZw.jpg"/>
      <itunes:duration>2478</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This week we’re joined by Justin Wright and Matthew Dimich, who lead Platform Engineering and Engineering Enablement at Thomson Reuters. Justin and Matt give an inside look at how they’ve evolved their organization’s structure and approach over the past 8 years.</p><p><strong>Discussion points: </strong></p><ul><li>(1:03) Founding the platform team</li><li>(5:49) The current organizational structure</li><li>(9:00) Key initiatives the platform organization is focused on</li><li>(12:55) The enablement function within platform</li><li>(16:44) What drove the enablement function’s growth</li><li>(19:42) The value of having an enablement function</li><li>(24:05) Marketing the enablement team’s work</li><li>(29:47) How enablement interfaces with other platform teams</li><li>(33:22) Managing the work enablement focuses on</li><li>(36:55) The balance of requests vs proactive work</li></ul><p><br></p><p><strong>Mentions and links:</strong></p><ul><li>Connect with <a href="https://www.linkedin.com/in/jwright006/">Justin</a> and <a href="https://www.linkedin.com/in/mattdimich">Matt</a> on LinkedIn</li><li>Manuel Pais’ <a href="https://www.youtube.com/watch?v=b8YHCDMxqfg">Platform as a Product</a> talk</li></ul>]]>
      </itunes:summary>
      <itunes:keywords>Software Engineering, Developer Experience, Developer Productivity, Engineering Leadership</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.linkedin.com/in/abinoda/" img="https://img.transistorcdn.com/3JLLtbOaHGiInF_ixHI50J6hrbPvsmkptOyXbjqu2gw/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vMDNiMDZmMjkt/ZmQxMy00NmNlLWE5/YzEtOTAzODQ4YTE2/YjgzLzE2NjU0MzA2/ODktaW1hZ2UuanBn.jpg">Abi Noda</podcast:person>
      <podcast:chapters url="https://share.transistor.fm/s/1680c384/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Key findings from the 2023 State of DevOps Report | Nathen Harvey (DORA at Google)</title>
      <itunes:episode>48</itunes:episode>
      <podcast:episode>48</podcast:episode>
      <itunes:title>Key findings from the 2023 State of DevOps Report | Nathen Harvey (DORA at Google)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">f344faca-3fd7-417e-809e-63a1de38e1e5</guid>
      <link>https://share.transistor.fm/s/66f8dc2f</link>
      <description>
        <![CDATA[<p>This week’s episode dives into the DORA research program and this year’s State of DevOps Report. Nathen Harvey, who leads DORA at Google, shares the key findings from the research and what’s changed since previous reports.</p><p><strong>Discussion points:</strong></p><ul><li>(1:10) What DORA focuses on</li><li>(2:17) Where the DORA metrics fit</li><li>(4:35) Introduction to user-centric software development</li><li>(8:05) Impact of user-centricity on software delivery</li><li>(9:40) Team performance vs. organizational performance</li><li>(13:50) Importance of internal documentation</li><li>(15:19) Methodology for designing surveys</li><li>(19:52) Impact of documentation on software delivery</li><li>(23:11) Reemergence of the Elite cluster</li><li>(25:55) Advice for leaders leveraging benchmarks</li><li>(28:30) Redefining MTTR</li><li>(33:45) Changing how Change Failure Rate is measured</li><li>(36:45) Impact of AI on software delivery</li><li>(41:25) Impact of code review speed</li></ul><p><strong>Mentions and links:</strong></p><ul><li>Connect with <a href="https://www.linkedin.com/in/nathen/">Nathen on LinkedIn</a></li><li>Listen to the <a href="https://getdx.com/podcast/masterclass-on-dora-metrics">previous episode</a> with Nathen</li><li>Read the <a href="https://cloud.google.com/devops/state-of-devops">2023 State of DevOps Report</a></li><li>The <a href="https://dora.dev/quickcheck/">DORA Quick Check</a></li><li>Blog post: <a href="https://cloud.google.com/blog/products/devops-sre/deep-dive-into-2022-state-of-devops-report-on-documentation">Documentation is like sunshine</a></li><li>Join the <a href="https://dora.community/">DORA community</a></li><li><a href="https://queue.acm.org/detail.cfm?id=3595878">DevEx: What Actually Drives Productivity</a></li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This week’s episode dives into the DORA research program and this year’s State of DevOps Report. Nathen Harvey, who leads DORA at Google, shares the key findings from the research and what’s changed since previous reports.</p><p><strong>Discussion points:</strong></p><ul><li>(1:10) What DORA focuses on</li><li>(2:17) Where the DORA metrics fit</li><li>(4:35) Introduction to user-centric software development</li><li>(8:05) Impact of user-centricity on software delivery</li><li>(9:40) Team performance vs. organizational performance</li><li>(13:50) Importance of internal documentation</li><li>(15:19) Methodology for designing surveys</li><li>(19:52) Impact of documentation on software delivery</li><li>(23:11) Reemergence of the Elite cluster</li><li>(25:55) Advice for leaders leveraging benchmarks</li><li>(28:30) Redefining MTTR</li><li>(33:45) Changing how Change Failure Rate is measured</li><li>(36:45) Impact of AI on software delivery</li><li>(41:25) Impact of code review speed</li></ul><p><strong>Mentions and links:</strong></p><ul><li>Connect with <a href="https://www.linkedin.com/in/nathen/">Nathen on LinkedIn</a></li><li>Listen to the <a href="https://getdx.com/podcast/masterclass-on-dora-metrics">previous episode</a> with Nathen</li><li>Read the <a href="https://cloud.google.com/devops/state-of-devops">2023 State of DevOps Report</a></li><li>The <a href="https://dora.dev/quickcheck/">DORA Quick Check</a></li><li>Blog post: <a href="https://cloud.google.com/blog/products/devops-sre/deep-dive-into-2022-state-of-devops-report-on-documentation">Documentation is like sunshine</a></li><li>Join the <a href="https://dora.community/">DORA community</a></li><li><a href="https://queue.acm.org/detail.cfm?id=3595878">DevEx: What Actually Drives Productivity</a></li></ul>]]>
      </content:encoded>
      <pubDate>Wed, 25 Oct 2023 01:00:00 -0600</pubDate>
      <author>DX</author>
      <enclosure url="https://media.transistor.fm/66f8dc2f/b4a54ca6.mp3" length="44224554" type="audio/mpeg"/>
      <itunes:author>DX</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/Qipl1I9DmyKHo6aOFkSJjHX7jZHVA_JNEbKXEiIBSII/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lcGlz/b2RlLzE1NjM1Mjkv/MTY5ODE5Nzc1NS1h/cnR3b3JrLmpwZw.jpg"/>
      <itunes:duration>2759</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This week’s episode dives into the DORA research program and this year’s State of DevOps Report. Nathen Harvey, who leads DORA at Google, shares the key findings from the research and what’s changed since previous reports.</p><p><strong>Discussion points:</strong></p><ul><li>(1:10) What DORA focuses on</li><li>(2:17) Where the DORA metrics fit</li><li>(4:35) Introduction to user-centric software development</li><li>(8:05) Impact of user-centricity on software delivery</li><li>(9:40) Team performance vs. organizational performance</li><li>(13:50) Importance of internal documentation</li><li>(15:19) Methodology for designing surveys</li><li>(19:52) Impact of documentation on software delivery</li><li>(23:11) Reemergence of the Elite cluster</li><li>(25:55) Advice for leaders leveraging benchmarks</li><li>(28:30) Redefining MTTR</li><li>(33:45) Changing how Change Failure Rate is measured</li><li>(36:45) Impact of AI on software delivery</li><li>(41:25) Impact of code review speed</li></ul><p><strong>Mentions and links:</strong></p><ul><li>Connect with <a href="https://www.linkedin.com/in/nathen/">Nathen on LinkedIn</a></li><li>Listen to the <a href="https://getdx.com/podcast/masterclass-on-dora-metrics">previous episode</a> with Nathen</li><li>Read the <a href="https://cloud.google.com/devops/state-of-devops">2023 State of DevOps Report</a></li><li>The <a href="https://dora.dev/quickcheck/">DORA Quick Check</a></li><li>Blog post: <a href="https://cloud.google.com/blog/products/devops-sre/deep-dive-into-2022-state-of-devops-report-on-documentation">Documentation is like sunshine</a></li><li>Join the <a href="https://dora.community/">DORA community</a></li><li><a href="https://queue.acm.org/detail.cfm?id=3595878">DevEx: What Actually Drives Productivity</a></li></ul>]]>
      </itunes:summary>
      <itunes:keywords>Software Engineering, Developer Experience, Developer Productivity, Engineering Leadership</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.linkedin.com/in/abinoda/" img="https://img.transistorcdn.com/3JLLtbOaHGiInF_ixHI50J6hrbPvsmkptOyXbjqu2gw/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vMDNiMDZmMjkt/ZmQxMy00NmNlLWE5/YzEtOTAzODQ4YTE2/YjgzLzE2NjU0MzA2/ODktaW1hZ2UuanBn.jpg">Abi Noda</podcast:person>
      <podcast:chapters url="https://share.transistor.fm/s/66f8dc2f/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Atlassian’s journey with developer experience | Preeti Kota (Atlassian)</title>
      <itunes:episode>47</itunes:episode>
      <podcast:episode>47</podcast:episode>
      <itunes:title>Atlassian’s journey with developer experience | Preeti Kota (Atlassian)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">0cb61445-3309-492f-86e2-9ce16b942939</guid>
      <link>https://share.transistor.fm/s/f0009138</link>
      <description>
        <![CDATA[<p>This week we’re joined by Preeti Kota, the Head of Engineering for Compass at Atlassian. Preeti walks us through Atlassian’s journey with developer experience, including how they measure DevEx and how they drive improvements through efforts at both the organization and team levels. Preeti also talks about how this journey has led to the development of Atlassian’s newly released internal developer portal, Compass.</p><p><strong>Mentions and links:</strong></p><ul><li><a href="https://www.atlassian.com/blog/announcements/compass-GA">Learn about Compass</a>, Atlassian’s newly released internal developer portal</li><li>Connect with Preeti on <a href="https://www.linkedin.com/in/preetikota/">LinkedIn</a></li><li>Atlassian’s CTO, Rajeev Rajan, on the <a href="https://www.atlassian.com/engineering/the-key-to-unlocking-developer-productivity">key to unlocking developer productivity</a></li></ul><p><strong>Discussion points:</strong></p><ul><li>(1:43) Where Atlassian’s journey with developer experience began</li><li>(5:36) Who is championing the focus on DevEx at Atlassian</li><li>(9:30) How the company arrived at their level of investment in DevEx</li><li>(13:47) Defining developer experience</li><li>(18:19) How the program for improving developer productivity is structured</li><li>(21:19) The Developer Productivity Champions group</li><li>(23:53) Two metrics in focus: Self-serve documentation and self-serve dependency maintenance</li><li>(25:56) How Atlassian surveys developers</li><li>(29:59) Types of projects the centralized teams tackle</li><li>(31:19) Getting buy-in for investing 10% time toward DevEx projects</li><li>(33:13) How leaders get teams to feel they have permission to invest 10% of their time toward DevEx projects</li><li>(36:19) The backstory behind Compass, Atlassian’s new product</li><li>(38:10) What Compass is, who it’s for, and how it is unique</li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This week we’re joined by Preeti Kota, the Head of Engineering for Compass at Atlassian. Preeti walks us through Atlassian’s journey with developer experience, including how they measure DevEx and how they drive improvements through efforts at both the organization and team levels. Preeti also talks about how this journey has led to the development of Atlassian’s newly released internal developer portal, Compass.</p><p><strong>Mentions and links:</strong></p><ul><li><a href="https://www.atlassian.com/blog/announcements/compass-GA">Learn about Compass</a>, Atlassian’s newly released internal developer portal</li><li>Connect with Preeti on <a href="https://www.linkedin.com/in/preetikota/">LinkedIn</a></li><li>Atlassian’s CTO, Rajeev Rajan, on the <a href="https://www.atlassian.com/engineering/the-key-to-unlocking-developer-productivity">key to unlocking developer productivity</a></li></ul><p><strong>Discussion points:</strong></p><ul><li>(1:43) Where Atlassian’s journey with developer experience began</li><li>(5:36) Who is championing the focus on DevEx at Atlassian</li><li>(9:30) How the company arrived at their level of investment in DevEx</li><li>(13:47) Defining developer experience</li><li>(18:19) How the program for improving developer productivity is structured</li><li>(21:19) The Developer Productivity Champions group</li><li>(23:53) Two metrics in focus: Self-serve documentation and self-serve dependency maintenance</li><li>(25:56) How Atlassian surveys developers</li><li>(29:59) Types of projects the centralized teams tackle</li><li>(31:19) Getting buy-in for investing 10% time toward DevEx projects</li><li>(33:13) How leaders get teams to feel they have permission to invest 10% of their time toward DevEx projects</li><li>(36:19) The backstory behind Compass, Atlassian’s new product</li><li>(38:10) What Compass is, who it’s for, and how it is unique</li></ul>]]>
      </content:encoded>
      <pubDate>Wed, 18 Oct 2023 01:04:00 -0600</pubDate>
      <author>DX</author>
      <enclosure url="https://media.transistor.fm/f0009138/80a17a1d.mp3" length="43205542" type="audio/mpeg"/>
      <itunes:author>DX</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/J80HzfTVF7IOn2qMmCIY8CM_rcBh3-iuC34YBH14tWo/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lcGlz/b2RlLzE1NTA1MTYv/MTY5NzU3ODcyOS1h/cnR3b3JrLmpwZw.jpg"/>
      <itunes:duration>2695</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This week we’re joined by Preeti Kota, the Head of Engineering for Compass at Atlassian. Preeti walks us through Atlassian’s journey with developer experience, including how they measure DevEx and how they drive improvements through efforts at both the organization and team levels. Preeti also talks about how this journey has led to the development of Atlassian’s newly released internal developer portal, Compass.</p><p><strong>Mentions and links:</strong></p><ul><li><a href="https://www.atlassian.com/blog/announcements/compass-GA">Learn about Compass</a>, Atlassian’s newly released internal developer portal</li><li>Connect with Preeti on <a href="https://www.linkedin.com/in/preetikota/">LinkedIn</a></li><li>Atlassian’s CTO, Rajeev Rajan, on the <a href="https://www.atlassian.com/engineering/the-key-to-unlocking-developer-productivity">key to unlocking developer productivity</a></li></ul><p><strong>Discussion points:</strong></p><ul><li>(1:43) Where Atlassian’s journey with developer experience began</li><li>(5:36) Who is championing the focus on DevEx at Atlassian</li><li>(9:30) How the company arrived at their level of investment in DevEx</li><li>(13:47) Defining developer experience</li><li>(18:19) How the program for improving developer productivity is structured</li><li>(21:19) The Developer Productivity Champions group</li><li>(23:53) Two metrics in focus: Self-serve documentation and self-serve dependency maintenance</li><li>(25:56) How Atlassian surveys developers</li><li>(29:59) Types of projects the centralized teams tackle</li><li>(31:19) Getting buy-in for investing 10% time toward DevEx projects</li><li>(33:13) How leaders get teams to feel they have permission to invest 10% of their time toward DevEx projects</li><li>(36:19) The backstory behind Compass, Atlassian’s new product</li><li>(38:10) What Compass is, who it’s for, and how it is unique</li></ul>]]>
      </itunes:summary>
      <itunes:keywords>Software Engineering, Developer Experience, Developer Productivity, Engineering Leadership</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.linkedin.com/in/abinoda/" img="https://img.transistorcdn.com/3JLLtbOaHGiInF_ixHI50J6hrbPvsmkptOyXbjqu2gw/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vMDNiMDZmMjkt/ZmQxMy00NmNlLWE5/YzEtOTAzODQ4YTE2/YjgzLzE2NjU0MzA2/ODktaW1hZ2UuanBn.jpg">Abi Noda</podcast:person>
      <podcast:chapters url="https://share.transistor.fm/s/f0009138/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Shopify’s developer happiness survey | Mark Côté (Shopify)</title>
      <itunes:episode>46</itunes:episode>
      <podcast:episode>46</podcast:episode>
      <itunes:title>Shopify’s developer happiness survey | Mark Côté (Shopify)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">f403001d-df81-4d2f-a450-434c63614738</guid>
      <link>https://share.transistor.fm/s/386f044d</link>
      <description>
        <![CDATA[<p>This week we’re joined by Mark Côté, who leads the Developer Infrastructure organization at Shopify, to learn about their developer survey program. Mark shares what went into designing and running the survey, what they’ve done to drive participation rates higher, and how they interpret their data.</p><p><strong>Mentions and links:</strong></p><ul><li>Follow Mark on <a href="https://www.linkedin.com/in/mrcote/">LinkedIn</a></li><li>Read Shopify’s <a href="https://shopify.engineering/">engineering blog</a></li></ul><p><strong>Discussion points:</strong></p><ul><li>(1:32) Starting the survey</li><li>(3:20) How the survey has evolved</li><li>(4:22) Three types of information gleaned from the survey</li><li>(7:37) Designing and running the survey</li><li>(12:28) Participation rates</li><li>(15:12) Why interest in the survey results has grown at Shopify</li><li>(17:42) What's affecting participation rates</li><li>(23:03) Selecting survey questions</li><li>(27:01) Refining survey questions</li><li>(28:54) Survey length</li><li>(30:56) Analyzing the results</li><li>(33:31) How the data is stored and shared</li><li>(35:56) Sending targeted surveys to the right developers</li><li>(37:40) Using the results as a Developer Acceleration organization</li><li>(39:29) Confidence in the data</li><li>(41:27) The value of a developer survey</li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This week we’re joined by Mark Côté, who leads the Developer Infrastructure organization at Shopify, to learn about their developer survey program. Mark shares what went into designing and running the survey, what they’ve done to drive participation rates higher, and how they interpret their data.</p><p><strong>Mentions and links:</strong></p><ul><li>Follow Mark on <a href="https://www.linkedin.com/in/mrcote/">LinkedIn</a></li><li>Read Shopify’s <a href="https://shopify.engineering/">engineering blog</a></li></ul><p><strong>Discussion points:</strong></p><ul><li>(1:32) Starting the survey</li><li>(3:20) How the survey has evolved</li><li>(4:22) Three types of information gleaned from the survey</li><li>(7:37) Designing and running the survey</li><li>(12:28) Participation rates</li><li>(15:12) Why interest in the survey results has grown at Shopify</li><li>(17:42) What's affecting participation rates</li><li>(23:03) Selecting survey questions</li><li>(27:01) Refining survey questions</li><li>(28:54) Survey length</li><li>(30:56) Analyzing the results</li><li>(33:31) How the data is stored and shared</li><li>(35:56) Sending targeted surveys to the right developers</li><li>(37:40) Using the results as a Developer Acceleration organization</li><li>(39:29) Confidence in the data</li><li>(41:27) The value of a developer survey</li></ul>]]>
      </content:encoded>
      <pubDate>Wed, 27 Sep 2023 02:00:00 -0600</pubDate>
      <author>DX</author>
      <enclosure url="https://media.transistor.fm/386f044d/967e2743.mp3" length="42674863" type="audio/mpeg"/>
      <itunes:author>DX</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/L7VjrC0cfnrGRKsPHJ3K1muK8PWAWpgAHCG1ueVyqO8/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lcGlz/b2RlLzE1MTk3NDkv/MTY5NTY5ODcxMC1h/cnR3b3JrLmpwZw.jpg"/>
      <itunes:duration>2661</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This week we’re joined by Mark Côté, who leads the Developer Infrastructure organization at Shopify, to learn about their developer survey program. Mark shares what went into designing and running the survey, what they’ve done to drive participation rates higher, and how they interpret their data. </p><p><strong>Mentions and links:</strong></p><ul><li>Follow Mark on <a href="https://www.linkedin.com/in/mrcote/">LinkedIn</a></li><li>Read Shopify’s <a href="https://shopify.engineering/">engineering blog</a></li></ul><p><strong>Discussion points:</strong></p><ul><li>(1:32) Starting the survey</li><li>(3:20) How the survey has evolved</li><li>(4:22) Three types of information gleaned from the survey</li><li>(7:37) Designing and running the survey</li><li>(12:28) Participation rates</li><li>(15:12) Why there’s increased interest in the results at Shopify</li><li>(17:42) What’s affecting participation rates</li><li>(23:03) Selecting survey questions</li><li>(27:01) Refining survey questions</li><li>(28:54) Survey length</li><li>(30:56) Analyzing the results</li><li>(33:31) How the data is stored and shared</li><li>(35:56) Sending targeted surveys to the right developers</li><li>(37:40) Using the results as a Developer Acceleration organization</li><li>(39:29) Confidence in the data</li><li>(41:27) The value of a developer survey</li></ul>]]>
      </itunes:summary>
      <itunes:keywords>Software Engineering, Developer Experience, Developer Productivity, Engineering Leadership</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.linkedin.com/in/abinoda/" img="https://img.transistorcdn.com/3JLLtbOaHGiInF_ixHI50J6hrbPvsmkptOyXbjqu2gw/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vMDNiMDZmMjkt/ZmQxMy00NmNlLWE5/YzEtOTAzODQ4YTE2/YjgzLzE2NjU0MzA2/ODktaW1hZ2UuanBn.jpg">Abi Noda</podcast:person>
      <podcast:chapters url="https://share.transistor.fm/s/386f044d/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Leading platform engineering at Trivago | Thomas Khalil (Trivago)</title>
      <itunes:episode>42</itunes:episode>
      <podcast:episode>42</podcast:episode>
      <itunes:title>Leading platform engineering at Trivago | Thomas Khalil (Trivago)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">abee38f2-ceb9-4d70-b7c7-4245dbcb2ee1</guid>
      <link>https://share.transistor.fm/s/fb73def6</link>
      <description>
        <![CDATA[<p>Thomas Khalil, Head of Platform and SRE at Trivago, describes how the teams reporting into him are structured, the tactics they’re using to increase awareness of their work, and how they demonstrate their impact. </p><p><br></p><p>Discussion points: </p><ul><li>(1:17) The pillars of the Central Platform organization</li><li>(2:18) The organization’s focus on time to market and efficiency</li><li>(3:09) The differences in developer experience between teams</li><li>(4:37) Deciding whether to consolidate services</li><li>(5:57) How platform, developer experience, observability, and SRE teams interact</li><li>(8:40) How these problems were being tackled previously </li><li>(10:09) A failed attempt at rolling out Backstage </li><li>(13:48) How SRE squads are organized</li><li>(15:39) How to motivate platform teams </li><li>(17:23) Demonstrating the impact of the organization</li><li>(18:42) How the data is collected</li><li>(22:32) How they’re increasing awareness for their work </li><li>(23:42) The DevEx pillar</li><li>(25:46) How the DevEx roadshow will work </li><li>(27:56) How DORA metrics fit into their measurement program </li></ul><p><br></p><p>Mentions and links: <br>Connect with <a href="https://www.linkedin.com/in/thomas-khalil/">Thomas </a>on LinkedIn</p><p><br></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Thomas Khalil, Head of Platform and SRE at Trivago, describes how the teams reporting into him are structured, the tactics they’re using to increase awareness of their work, and how they demonstrate their impact. </p><p><br></p><p>Discussion points: </p><ul><li>(1:17) The pillars of the Central Platform organization</li><li>(2:18) The organization’s focus on time to market and efficiency</li><li>(3:09) The differences in developer experience between teams</li><li>(4:37) Deciding whether to consolidate services</li><li>(5:57) How platform, developer experience, observability, and SRE teams interact</li><li>(8:40) How these problems were being tackled previously </li><li>(10:09) A failed attempt at rolling out Backstage </li><li>(13:48) How SRE squads are organized</li><li>(15:39) How to motivate platform teams </li><li>(17:23) Demonstrating the impact of the organization</li><li>(18:42) How the data is collected</li><li>(22:32) How they’re increasing awareness for their work </li><li>(23:42) The DevEx pillar</li><li>(25:46) How the DevEx roadshow will work </li><li>(27:56) How DORA metrics fit into their measurement program </li></ul><p><br></p><p>Mentions and links: <br>Connect with <a href="https://www.linkedin.com/in/thomas-khalil/">Thomas </a>on LinkedIn</p><p><br></p>]]>
      </content:encoded>
      <pubDate>Tue, 22 Aug 2023 15:46:05 -0600</pubDate>
      <author>DX</author>
      <enclosure url="https://media.transistor.fm/fb73def6/6929b464.mp3" length="29725263" type="audio/mpeg"/>
      <itunes:author>DX</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/hcNj5NVQe-p-aVniMtinYr7EfZFGmrQN9ED2yPaiRvk/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lcGlz/b2RlLzEzOTkzNDcv/MTY4NzgxOTc1NC1h/cnR3b3JrLmpwZw.jpg"/>
      <itunes:duration>1852</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Thomas Khalil, Head of Platform and SRE at Trivago, describes how the teams reporting into him are structured, the tactics they’re using to increase awareness of their work, and how they demonstrate their impact. </p><p><br></p><p>Discussion points: </p><ul><li>(1:17) The pillars of the Central Platform organization</li><li>(2:18) The organization’s focus on time to market and efficiency</li><li>(3:09) The differences in developer experience between teams</li><li>(4:37) Deciding whether to consolidate services</li><li>(5:57) How platform, developer experience, observability, and SRE teams interact</li><li>(8:40) How these problems were being tackled previously </li><li>(10:09) A failed attempt at rolling out Backstage </li><li>(13:48) How SRE squads are organized</li><li>(15:39) How to motivate platform teams </li><li>(17:23) Demonstrating the impact of the organization</li><li>(18:42) How the data is collected</li><li>(22:32) How they’re increasing awareness for their work </li><li>(23:42) The DevEx pillar</li><li>(25:46) How the DevEx roadshow will work </li><li>(27:56) How DORA metrics fit into their measurement program </li></ul><p><br></p><p>Mentions and links: <br>Connect with <a href="https://www.linkedin.com/in/thomas-khalil/">Thomas </a>on LinkedIn</p><p><br></p>]]>
      </itunes:summary>
      <itunes:keywords>Software Engineering, Developer Experience, Developer Productivity, Engineering Leadership</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.linkedin.com/in/abinoda/" img="https://img.transistorcdn.com/3JLLtbOaHGiInF_ixHI50J6hrbPvsmkptOyXbjqu2gw/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vMDNiMDZmMjkt/ZmQxMy00NmNlLWE5/YzEtOTAzODQ4YTE2/YjgzLzE2NjU0MzA2/ODktaW1hZ2UuanBn.jpg">Abi Noda</podcast:person>
      <podcast:chapters url="https://share.transistor.fm/s/fb73def6/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Enabling teams to drive their own productivity improvements | Jenny McClain (Toast)</title>
      <itunes:episode>45</itunes:episode>
      <podcast:episode>45</podcast:episode>
      <itunes:title>Enabling teams to drive their own productivity improvements | Jenny McClain (Toast)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">1b8f0976-925f-4f15-8eb7-07ee20c1732b</guid>
      <link>https://share.transistor.fm/s/db18239a</link>
      <description>
        <![CDATA[<p>This week’s guest is Jenny McClain, who leads R&amp;D Team Enablement at Toast. Jenny’s team focuses on enabling individual teams at Toast to drive their own productivity improvements, and this conversation dives into how they tackle this problem. </p><p><strong>Discussion points: </strong></p><ul><li>(1:19) How the R&amp;D Enablement team works</li><li>(2:50) Why the team was formed</li><li>(4:31) The types of work the team focuses on</li><li>(7:31) Identifying the problems this team would solve</li><li>(11:23) How team embeds work</li><li>(17:19) The learning resources the team develops and maintains</li><li>(20:55) Who creates and maintains the learning resources</li><li>(23:10) How enablement stays connected with teams at scale</li><li>(25:51) How the team plans work with qualitative and quantitative measures </li><li>(29:37) Formats for sharing knowledge between teams</li><li>(33:05) How other companies can think about the enablement function</li><li>(37:40) Enablement as a career path</li></ul><p><strong>Mentions and links:<br></strong>Follow <a href="https://www.linkedin.com/in/jennymariemcclain/">Jenny</a> on LinkedIn<br>Tuckman’s <a href="https://www.wcupa.edu/coral/tuckmanStagesGroupDelvelopment.aspx">stages of group development</a></p><p><a href="https://github.com/honestbleeps/engineering-manager-toolbox/blob/master/meeting-templates/working_agreements_template.md">Working Agreements template</a> from Steve Sobel, Director of Engineering at Toast - one of the resources featured in Toast’s Team Health Toolkit </p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This week’s guest is Jenny McClain, who leads R&amp;D Team Enablement at Toast. Jenny’s team focuses on enabling individual teams at Toast to drive their own productivity improvements, and this conversation dives into how they tackle this problem. </p><p><strong>Discussion points: </strong></p><ul><li>(1:19) How the R&amp;D Enablement team works</li><li>(2:50) Why the team was formed</li><li>(4:31) The types of work the team focuses on</li><li>(7:31) Identifying the problems this team would solve</li><li>(11:23) How team embeds work</li><li>(17:19) The learning resources the team develops and maintains</li><li>(20:55) Who creates and maintains the learning resources</li><li>(23:10) How enablement stays connected with teams at scale</li><li>(25:51) How the team plans work with qualitative and quantitative measures </li><li>(29:37) Formats for sharing knowledge between teams</li><li>(33:05) How other companies can think about the enablement function</li><li>(37:40) Enablement as a career path</li></ul><p><strong>Mentions and links:<br></strong>Follow <a href="https://www.linkedin.com/in/jennymariemcclain/">Jenny</a> on LinkedIn<br>Tuckman’s <a href="https://www.wcupa.edu/coral/tuckmanStagesGroupDelvelopment.aspx">stages of group development</a></p><p><a href="https://github.com/honestbleeps/engineering-manager-toolbox/blob/master/meeting-templates/working_agreements_template.md">Working Agreements template</a> from Steve Sobel, Director of Engineering at Toast - one of the resources featured in Toast’s Team Health Toolkit </p>]]>
      </content:encoded>
      <pubDate>Wed, 16 Aug 2023 01:00:00 -0600</pubDate>
      <author>DX</author>
      <enclosure url="https://media.transistor.fm/db18239a/a16a9098.mp3" length="41946703" type="audio/mpeg"/>
      <itunes:author>DX</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/RYGH8nYb6JAoDf-KggWh2DdNIBDMHYGARCKG3WgAUEY/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lcGlz/b2RlLzE0NjIyMTcv/MTY5MjE0MjA3OS1h/cnR3b3JrLmpwZw.jpg"/>
      <itunes:duration>2616</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This week’s guest is Jenny McClain, who leads R&amp;D Team Enablement at Toast. Jenny’s team focuses on enabling individual teams at Toast to drive their own productivity improvements, and this conversation dives into how they tackle this problem. </p><p><strong>Discussion points: </strong></p><ul><li>(1:19) How the R&amp;D Enablement team works</li><li>(2:50) Why the team was formed</li><li>(4:31) The types of work the team focuses on</li><li>(7:31) Identifying the problems this team would solve</li><li>(11:23) How team embeds work</li><li>(17:19) The learning resources the team develops and maintains</li><li>(20:55) Who creates and maintains the learning resources</li><li>(23:10) How enablement stays connected with teams at scale</li><li>(25:51) How the team plans work with qualitative and quantitative measures </li><li>(29:37) Formats for sharing knowledge between teams</li><li>(33:05) How other companies can think about the enablement function</li><li>(37:40) Enablement as a career path</li></ul><p><strong>Mentions and links:<br></strong>Follow <a href="https://www.linkedin.com/in/jennymariemcclain/">Jenny</a> on LinkedIn<br>Tuckman’s <a href="https://www.wcupa.edu/coral/tuckmanStagesGroupDelvelopment.aspx">stages of group development</a></p><p><a href="https://github.com/honestbleeps/engineering-manager-toolbox/blob/master/meeting-templates/working_agreements_template.md">Working Agreements template</a> from Steve Sobel, Director of Engineering at Toast - one of the resources featured in Toast’s Team Health Toolkit </p>]]>
      </itunes:summary>
      <itunes:keywords>Software Engineering, Developer Experience, Developer Productivity, Engineering Leadership</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.linkedin.com/in/abinoda/" img="https://img.transistorcdn.com/3JLLtbOaHGiInF_ixHI50J6hrbPvsmkptOyXbjqu2gw/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vMDNiMDZmMjkt/ZmQxMy00NmNlLWE5/YzEtOTAzODQ4YTE2/YjgzLzE2NjU0MzA2/ODktaW1hZ2UuanBn.jpg">Abi Noda</podcast:person>
      <podcast:chapters url="https://share.transistor.fm/s/db18239a/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>How Google measures developer productivity | Ciera Jaspan, Collin Green (Google)</title>
      <itunes:episode>44</itunes:episode>
      <podcast:episode>44</podcast:episode>
      <itunes:title>How Google measures developer productivity | Ciera Jaspan, Collin Green (Google)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">65095854-f260-4efe-927d-7e8669cb5519</guid>
      <link>https://share.transistor.fm/s/b36e515c</link>
      <description>
        <![CDATA[<p>This week we’re joined by Ciera Jaspan and Collin Green, who lead the Engineering Productivity Research team at Google. Ciera and Collin have written several papers from studies they’ve conducted, and this discussion covers the insights from their research as well as their work more broadly at Google. </p><p>Discussion points:</p><ul><li>(1:19) About the Engineering Productivity Research team</li><li>(3:57) How the team interacts with the rest of the organization</li><li>(5:58) The different backgrounds included on the team</li><li>(13:11) How Google measures developer productivity</li><li>(18:54) Evaluating discrepancies between qualitative and quantitative data</li><li>(28:40) Google’s quarterly developer survey</li><li>(32:02) Distributing survey results back to the organization</li><li>(40:25) Misunderstandings about surveys</li><li>(43:51) Ciera and Collin’s paper on why measuring productivity is difficult</li><li>(50:35) Reductionist metrics for measuring productivity</li><li>(55:26) Examples of other fields that have struggled with measurement</li><li>(59:00) Google’s study on measuring technical debt</li><li>(1:08:05) Human judgment in measurement</li></ul><p><br></p><p>Mentions and links: <br>Follow <a href="https://www.linkedin.com/in/ciera/">Ciera</a> and <a href="https://www.linkedin.com/in/collin-green-97720378/">Collin</a> on LinkedIn<br>A Human-Centered Approach to Measuring Developer Productivity - <a href="https://ieeexplore.ieee.org/document/9994260">Paper</a>, Abi’s <a href="https://newsletter.abinoda.com/p/difficult-to-measure">summary</a><br>Enabling the Study of Software Development with Cross-Tool Logs - <a href="https://research.google/pubs/pub49446/">Paper</a><br>Defining, Measuring, and Managing Tech Debt - <a href="https://ieeexplore.ieee.org/document/10109339">Paper</a>, Abi’s <a href="https://newsletter.abinoda.com/p/measuring-and-managing-tech-debt">summary</a><br>Google’s Goals, Signals, Metrics framework - <a href="https://abseil.io/resources/swe-book/html/ch07.html">Paper</a>, Abi’s <a href="https://newsletter.abinoda.com/p/engineering-productivity">summary</a></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This week we’re joined by Ciera Jaspan and Collin Green, who lead the Engineering Productivity Research team at Google. Ciera and Collin have written several papers from studies they’ve conducted, and this discussion covers the insights from their research as well as their work more broadly at Google. </p><p>Discussion points:</p><ul><li>(1:19) About the Engineering Productivity Research team</li><li>(3:57) How the team interacts with the rest of the organization</li><li>(5:58) The different backgrounds included on the team</li><li>(13:11) How Google measures developer productivity</li><li>(18:54) Evaluating discrepancies between qualitative and quantitative data</li><li>(28:40) Google’s quarterly developer survey</li><li>(32:02) Distributing survey results back to the organization</li><li>(40:25) Misunderstandings about surveys</li><li>(43:51) Ciera and Collin’s paper on why measuring productivity is difficult</li><li>(50:35) Reductionist metrics for measuring productivity</li><li>(55:26) Examples of other fields that have struggled with measurement</li><li>(59:00) Google’s study on measuring technical debt</li><li>(1:08:05) Human judgment in measurement</li></ul><p><br></p><p>Mentions and links: <br>Follow <a href="https://www.linkedin.com/in/ciera/">Ciera</a> and <a href="https://www.linkedin.com/in/collin-green-97720378/">Collin</a> on LinkedIn<br>A Human-Centered Approach to Measuring Developer Productivity - <a href="https://ieeexplore.ieee.org/document/9994260">Paper</a>, Abi’s <a href="https://newsletter.abinoda.com/p/difficult-to-measure">summary</a><br>Enabling the Study of Software Development with Cross-Tool Logs - <a href="https://research.google/pubs/pub49446/">Paper</a><br>Defining, Measuring, and Managing Tech Debt - <a href="https://ieeexplore.ieee.org/document/10109339">Paper</a>, Abi’s <a href="https://newsletter.abinoda.com/p/measuring-and-managing-tech-debt">summary</a><br>Google’s Goals, Signals, Metrics framework - <a href="https://abseil.io/resources/swe-book/html/ch07.html">Paper</a>, Abi’s <a href="https://newsletter.abinoda.com/p/engineering-productivity">summary</a></p>]]>
      </content:encoded>
      <pubDate>Wed, 02 Aug 2023 03:00:00 -0600</pubDate>
      <author>DX</author>
      <enclosure url="https://media.transistor.fm/b36e515c/7527b3e0.mp3" length="71395570" type="audio/mpeg"/>
      <itunes:author>DX</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/AHSFjf5hcoXKQCWapBWmMmp1KrvhZDvdJxgP20KzmbA/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lcGlz/b2RlLzE0NDMyODYv/MTY5MDk0NTEzNi1h/cnR3b3JrLmpwZw.jpg"/>
      <itunes:duration>4452</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This week we’re joined by Ciera Jaspan and Collin Green, who lead the Engineering Productivity Research team at Google. Ciera and Collin have written several papers from studies they’ve conducted, and this discussion covers the insights from their research as well as their work more broadly at Google. </p><p>Discussion points:</p><ul><li>(1:19) About the Engineering Productivity Research team</li><li>(3:57) How the team interacts with the rest of the organization</li><li>(5:58) The different backgrounds included on the team</li><li>(13:11) How Google measures developer productivity</li><li>(18:54) Evaluating discrepancies between qualitative and quantitative data</li><li>(28:40) Google’s quarterly developer survey</li><li>(32:02) Distributing survey results back to the organization</li><li>(40:25) Misunderstandings about surveys</li><li>(43:51) Ciera and Collin’s paper on why measuring productivity is difficult</li><li>(50:35) Reductionist metrics for measuring productivity</li><li>(55:26) Examples of other fields that have struggled with measurement</li><li>(59:00) Google’s study on measuring technical debt</li><li>(1:08:05) Human judgment in measurement</li></ul><p><br></p><p>Mentions and links: <br>Follow <a href="https://www.linkedin.com/in/ciera/">Ciera</a> and <a href="https://www.linkedin.com/in/collin-green-97720378/">Collin</a> on LinkedIn<br>A Human-Centered Approach to Measuring Developer Productivity - <a href="https://ieeexplore.ieee.org/document/9994260">Paper</a>, Abi’s <a href="https://newsletter.abinoda.com/p/difficult-to-measure">summary</a><br>Enabling the Study of Software Development with Cross-Tool Logs - <a href="https://research.google/pubs/pub49446/">Paper</a><br>Defining, Measuring, and Managing Tech Debt - <a href="https://ieeexplore.ieee.org/document/10109339">Paper</a>, Abi’s <a href="https://newsletter.abinoda.com/p/measuring-and-managing-tech-debt">summary</a><br>Google’s Goals, Signals, Metrics framework - <a href="https://abseil.io/resources/swe-book/html/ch07.html">Paper</a>, Abi’s <a href="https://newsletter.abinoda.com/p/engineering-productivity">summary</a></p>]]>
      </itunes:summary>
      <itunes:keywords>Software Engineering, Developer Experience, Developer Productivity, Engineering Leadership</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.linkedin.com/in/abinoda/" img="https://img.transistorcdn.com/3JLLtbOaHGiInF_ixHI50J6hrbPvsmkptOyXbjqu2gw/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vMDNiMDZmMjkt/ZmQxMy00NmNlLWE5/YzEtOTAzODQ4YTE2/YjgzLzE2NjU0MzA2/ODktaW1hZ2UuanBn.jpg">Abi Noda</podcast:person>
      <podcast:chapters url="https://share.transistor.fm/s/b36e515c/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>A customer service approach to improving DevEx | Jason Kennedy (One Medical)</title>
      <itunes:episode>43</itunes:episode>
      <podcast:episode>43</podcast:episode>
      <itunes:title>A customer service approach to improving DevEx | Jason Kennedy (One Medical)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">7c81800e-e17e-455c-8e20-f72841563ab1</guid>
      <link>https://share.transistor.fm/s/40284d95</link>
      <description>
        <![CDATA[<p>This week we’re joined by Jason Kennedy, Senior Engineering Manager of Developer Experience at One Medical. Jason’s team takes a uniquely customer-driven approach to improving the developer experience, and in this episode he describes their philosophy and how it works in practice. Jason explains how they shadow developers, how they run surveys, and more. </p><p><strong>Discussion points:</strong></p><ul><li>(1:02) Renaming from Engineering Efficiency to Engineering Experience</li><li>(4:17) How Platform and DevEx teams differ </li><li>(5:38) How One Medical’s approach to customer experience inspires this team’s work</li><li>(7:01) Mapping out the developer journey</li><li>(11:14) Jason’s career transition from VPE to a line manager role</li><li>(14:14) Challenges some companies face with getting buy-in for a DevEx team</li><li>(16:22) Taking a customer service approach to DevEx</li><li>(19:12) Jason’s experience with DORA metrics</li><li>(22:19) Lessons learned about ownership</li><li>(24:18) The “Gemba” practice used at One Medical </li><li>(28:02) How information from the Gemba practice is stored</li><li>(30:59) Using weekly polls to surface pain points</li><li>(34:03) Tracking trends in the poll</li><li>(35:00) Using a quarterly NPS survey for overall sentiment</li><li>(37:08) How sentiment is measured and evaluated</li><li>(41:44) The biggest challenges with surveys </li></ul><p><br></p><p><strong>Mentions and links:<br></strong>Follow Jason on <a href="https://www.linkedin.com/in/jasonhkennedy/">LinkedIn</a><br>Listen to the <a href="https://getdx.com/podcast/developer-experience-twitter">podcast episode with Jasmine James</a></p><p>Book about Disney: <a href="https://www.amazon.com/Be-Our-Guest-Perfecting-Institute/dp/1423145844">Be Our Guest</a></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This week we’re joined by Jason Kennedy, Senior Engineering Manager of Developer Experience at One Medical. Jason’s team takes a uniquely customer-driven approach to improving the developer experience, and in this episode he describes their philosophy and how it works in practice. Jason explains how they shadow developers, how they run surveys, and more. </p><p><strong>Discussion points:</strong></p><ul><li>(1:02) Renaming from Engineering Efficiency to Engineering Experience</li><li>(4:17) How Platform and DevEx teams differ </li><li>(5:38) How One Medical’s approach to customer experience inspires this team’s work</li><li>(7:01) Mapping out the developer journey</li><li>(11:14) Jason’s career transition from VPE to a line manager role</li><li>(14:14) Challenges some companies face with getting buy-in for a DevEx team</li><li>(16:22) Taking a customer service approach to DevEx</li><li>(19:12) Jason’s experience with DORA metrics</li><li>(22:19) Lessons learned about ownership</li><li>(24:18) The “Gemba” practice used at One Medical </li><li>(28:02) How information from the Gemba practice is stored</li><li>(30:59) Using weekly polls to surface pain points</li><li>(34:03) Tracking trends in the poll</li><li>(35:00) Using a quarterly NPS survey for overall sentiment</li><li>(37:08) How sentiment is measured and evaluated</li><li>(41:44) The biggest challenges with surveys </li></ul><p><br></p><p><strong>Mentions and links:<br></strong>Follow Jason on <a href="https://www.linkedin.com/in/jasonhkennedy/">LinkedIn</a><br>Listen to the <a href="https://getdx.com/podcast/developer-experience-twitter">podcast episode with Jasmine James</a></p><p>Book about Disney: <a href="https://www.amazon.com/Be-Our-Guest-Perfecting-Institute/dp/1423145844">Be Our Guest</a></p>]]>
      </content:encoded>
      <pubDate>Wed, 12 Jul 2023 01:00:00 -0600</pubDate>
      <author>DX</author>
      <enclosure url="https://media.transistor.fm/40284d95/d7fd384b.mp3" length="42758521" type="audio/mpeg"/>
      <itunes:author>DX</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/b_H2sUgVzrdvqMr20SJ8O9WuFxwrHssAE3EriIx3JPA/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lcGlz/b2RlLzE0MTY4OTQv/MTY5NTg0MjkxMi1h/cnR3b3JrLmpwZw.jpg"/>
      <itunes:duration>2670</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This week we’re joined by Jason Kennedy, Senior Engineering Manager of Developer Experience at One Medical. Jason’s team takes a uniquely customer-driven approach to improving the developer experience, and in this episode he describes their philosophy and how it works in practice. Jason explains how they shadow developers, how they run surveys, and more. </p><p><strong>Discussion points:</strong></p><ul><li>(1:02) Renaming from Engineering Efficiency to Engineering Experience</li><li>(4:17) How Platform and DevEx teams differ </li><li>(5:38) How One Medical’s approach to customer experience inspires this team’s work</li><li>(7:01) Mapping out the developer journey</li><li>(11:14) Jason’s career transition from VPE to a line manager role</li><li>(14:14) Challenges some companies face with getting buy-in for a DevEx team</li><li>(16:22) Taking a customer service approach to DevEx</li><li>(19:12) Jason’s experience with DORA metrics</li><li>(22:19) Lessons learned about ownership</li><li>(24:18) The “Gemba” practice used at One Medical </li><li>(28:02) How information from the Gemba practice is stored</li><li>(30:59) Using weekly polls to surface pain points</li><li>(34:03) Tracking trends in the poll</li><li>(35:00) Using a quarterly NPS survey for overall sentiment</li><li>(37:08) How sentiment is measured and evaluated</li><li>(41:44) The biggest challenges with surveys </li></ul><p><br></p><p><strong>Mentions and links:<br></strong>Follow Jason on <a href="https://www.linkedin.com/in/jasonhkennedy/">LinkedIn</a><br>Listen to the <a href="https://getdx.com/podcast/developer-experience-twitter">podcast episode with Jasmine James</a></p><p>Book about Disney: <a href="https://www.amazon.com/Be-Our-Guest-Perfecting-Institute/dp/1423145844">Be Our Guest</a></p>]]>
      </itunes:summary>
      <itunes:keywords>Software Engineering, Developer Experience, Developer Productivity, Engineering Leadership</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.linkedin.com/in/abinoda/" img="https://img.transistorcdn.com/3JLLtbOaHGiInF_ixHI50J6hrbPvsmkptOyXbjqu2gw/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vMDNiMDZmMjkt/ZmQxMy00NmNlLWE5/YzEtOTAzODQ4YTE2/YjgzLzE2NjU0MzA2/ODktaW1hZ2UuanBn.jpg">Abi Noda</podcast:person>
    </item>
    <item>
      <title>Behind the scenes with Extend’s developer experience team | Matthew Schrepel and Luke Patterson (Extend)</title>
      <itunes:episode>40</itunes:episode>
      <podcast:episode>40</podcast:episode>
      <itunes:title>Behind the scenes with Extend’s developer experience team | Matthew Schrepel and Luke Patterson (Extend)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">d985d02d-68c5-4352-bc27-b598970ca532</guid>
      <link>https://share.transistor.fm/s/31bad702</link>
      <description>
        <![CDATA[<p>Matthew and Luke lead Extend’s Developer Experience team, a team with a more forward-thinking approach to their work than most. In this episode, they cover how they deliver impact at multiple levels of the organization, their journey with productivity metrics, and how they’ve made DevEx a C-level concern. </p><p><br></p><p><strong>Discussion points:</strong></p><ul><li>(1:40) How the DevEx team started and where it fits at Extend</li><li>(5:08) Tradeoffs of DevEx reporting into Platform</li><li>(6:40) The mandate and tasks they focus on</li><li>(12:07) The impact of learning and development efforts</li><li>(16:33) How to drive team-level improvements</li><li>(18:44) Why developer experience is becoming more prevalent</li><li>(26:17) How they made DevEx a C-level concern</li><li>(30:27) Their journey with productivity metrics</li><li>(33:10) Advice for presenting DevEx data to executives</li><li>(34:52) The team’s experience using git metrics tools</li><li>(48:30) Being rigorous in leveraging metrics</li></ul><p><br></p><p><strong>Mentions and links:</strong> <br>Connect with <a href="https://www.linkedin.com/in/mschrepel/">Matthew</a> and <a href="https://www.linkedin.com/in/luke-patterson-0b3778157/">Luke</a> on LinkedIn<br>Other podcasts mentioned: <a href="https://getdx.com/podcast/team-topologies-platform-work">Manuel Pais</a>; <a href="https://getdx.com/podcast/developer-experience-survey-at-peloton">Peloton’s DevEx survey</a></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Matthew and Luke lead Extend’s Developer Experience team, a team that has approached their work in a way that is more forward-thinking than most. In this episode, they cover how they deliver impact at multiple levels of the organization, their journey with productivity metrics, and how they’ve made DevEx a C-level concern. </p><p><br></p><p><strong>Discussion points:</strong></p><ul><li>(1:40) How the DevEx team started and where it fits at Extend</li><li>(5:08) Tradeoffs of DevEx reporting into Platform </li><li>(6:40) The mandate and tasks they focus on</li><li>(12:07) The impact of learning and development efforts</li><li>(16:33) How to drive team-level improvements </li><li>(18:44) Why developer experience is becoming more prevalent</li><li>(26:17) How they made DevEx a C-level concern</li><li>(30:27) Their journey with productivity metrics </li><li>(33:10) Advice for presenting DevEx data to executives </li><li>(34:52) The team’s experience using git metrics tools</li><li>(48:30) Being rigorous in leveraging metrics </li></ul><p><br></p><p><strong>Mentions and links:</strong> <br>Connect with <a href="https://www.linkedin.com/in/mschrepel/">Matthew</a> and <a href="https://www.linkedin.com/in/luke-patterson-0b3778157/">Luke</a> on LinkedIn<br>Other podcasts mentioned: <a href="https://getdx.com/podcast/team-topologies-platform-work">Manuel Pais</a>; <a href="https://getdx.com/podcast/developer-experience-survey-at-peloton">Peloton’s DevEx survey</a></p>]]>
      </content:encoded>
      <pubDate>Tue, 13 Jun 2023 01:00:00 -0600</pubDate>
      <author>DX</author>
      <enclosure url="https://media.transistor.fm/31bad702/abcb3fa8.mp3" length="57372490" type="audio/mpeg"/>
      <itunes:author>DX</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/DEy1SyCn-JDn_7N3ATYSF1i8QIF79lueblHllD_P3X8/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lcGlz/b2RlLzEzODA5NDYv/MTY4NjYyMDcyOC1h/cnR3b3JrLmpwZw.jpg"/>
      <itunes:duration>3580</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Matthew and Luke lead Extend’s Developer Experience team, a team that has approached their work in a way that is more forward-thinking than most. In this episode, they cover how they deliver impact at multiple levels of the organization, their journey with productivity metrics, and how they’ve made DevEx a C-level concern. </p><p><br></p><p><strong>Discussion points:</strong></p><ul><li>(1:40) How the DevEx team started and where it fits at Extend</li><li>(5:08) Tradeoffs of DevEx reporting into Platform </li><li>(6:40) The mandate and tasks they focus on</li><li>(12:07) The impact of learning and development efforts</li><li>(16:33) How to drive team-level improvements </li><li>(18:44) Why developer experience is becoming more prevalent</li><li>(26:17) How they made DevEx a C-level concern</li><li>(30:27) Their journey with productivity metrics </li><li>(33:10) Advice for presenting DevEx data to executives </li><li>(34:52) The team’s experience using git metrics tools</li><li>(48:30) Being rigorous in leveraging metrics </li></ul><p><br></p><p><strong>Mentions and links:</strong> <br>Connect with <a href="https://www.linkedin.com/in/mschrepel/">Matthew</a> and <a href="https://www.linkedin.com/in/luke-patterson-0b3778157/">Luke</a> on LinkedIn<br>Other podcasts mentioned: <a href="https://getdx.com/podcast/team-topologies-platform-work">Manuel Pais</a>; <a href="https://getdx.com/podcast/developer-experience-survey-at-peloton">Peloton’s DevEx survey</a></p>]]>
      </itunes:summary>
      <itunes:keywords>Software Engineering, Developer Experience, Developer Productivity, Engineering Leadership</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.linkedin.com/in/abinoda/" img="https://img.transistorcdn.com/3JLLtbOaHGiInF_ixHI50J6hrbPvsmkptOyXbjqu2gw/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vMDNiMDZmMjkt/ZmQxMy00NmNlLWE5/YzEtOTAzODQ4YTE2/YjgzLzE2NjU0MzA2/ODktaW1hZ2UuanBn.jpg">Abi Noda</podcast:person>
    </item>
    <item>
      <title>Platform teams vs enabling teams | Manuel Pais (Team Topologies)</title>
      <itunes:episode>39</itunes:episode>
      <podcast:episode>39</podcast:episode>
      <itunes:title>Platform teams vs enabling teams | Manuel Pais (Team Topologies)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">fccbf1da-bbed-402a-ae59-e29d51f7e60c</guid>
      <link>https://share.transistor.fm/s/b289e215</link>
      <description>
        <![CDATA[<p>Manuel Pais delves into one of the concepts covered in his book “Team Topologies”: platform and enabling work. Manuel shares how he views the strategy behind when and how to invest in platform or enabling work. This conversation also goes into each type of work in more detail, covering topics such as measuring cognitive load and where platform engineering may be heading in the future. </p><ul><li>(2:13) How enabling teams and platform teams are different </li><li>(10:28) What it looks like for a team to own both platform and enabling work </li><li>(17:04) How to deliver enabling work in an organization</li><li>(22:28) Whether enabling teams should be temporary</li><li>(30:10) Platform team anti-patterns</li><li>(47:10) Measuring cognitive load</li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Manuel Pais delves into one of the concepts covered in his book “Team Topologies”: platform and enabling work. Manuel shares how he views the strategy behind when and how to invest in platform or enabling work. This conversation also goes into each type of work in more detail, covering topics such as measuring cognitive load and where platform engineering may be heading in the future. </p><ul><li>(2:13) How enabling teams and platform teams are different </li><li>(10:28) What it looks like for a team to own both platform and enabling work </li><li>(17:04) How to deliver enabling work in an organization</li><li>(22:28) Whether enabling teams should be temporary</li><li>(30:10) Platform team anti-patterns</li><li>(47:10) Measuring cognitive load</li></ul>]]>
      </content:encoded>
      <pubDate>Wed, 07 Jun 2023 07:44:34 -0600</pubDate>
      <author>DX</author>
      <enclosure url="https://media.transistor.fm/b289e215/82ac1a4d.mp3" length="51331574" type="audio/mpeg"/>
      <itunes:author>DX</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/yZhsqFWlVRZJWgvr8LD9ChWkkp98PPD1E9elb4OcGIM/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lcGlz/b2RlLzEzNzMwNjUv/MTY4NjE1MDk4Ny1h/cnR3b3JrLmpwZw.jpg"/>
      <itunes:duration>3206</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Manuel Pais delves into one of the concepts covered in his book “Team Topologies”: platform and enabling work. Manuel shares how he views the strategy behind when and how to invest in platform or enabling work. This conversation also goes into each type of work in more detail, covering topics such as measuring cognitive load and where platform engineering may be heading in the future. </p><ul><li>(2:13) How enabling teams and platform teams are different </li><li>(10:28) What it looks like for a team to own both platform and enabling work </li><li>(17:04) How to deliver enabling work in an organization</li><li>(22:28) Whether enabling teams should be temporary</li><li>(30:10) Platform team anti-patterns</li><li>(47:10) Measuring cognitive load</li></ul>]]>
      </itunes:summary>
      <itunes:keywords>Software Engineering, Developer Experience, Developer Productivity, Engineering Leadership</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.linkedin.com/in/abinoda/" img="https://img.transistorcdn.com/3JLLtbOaHGiInF_ixHI50J6hrbPvsmkptOyXbjqu2gw/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vMDNiMDZmMjkt/ZmQxMy00NmNlLWE5/YzEtOTAzODQ4YTE2/YjgzLzE2NjU0MzA2/ODktaW1hZ2UuanBn.jpg">Abi Noda</podcast:person>
    </item>
    <item>
      <title>A close look at Peloton’s developer experience survey | Thansha Sadacharam (Peloton)</title>
      <itunes:episode>38</itunes:episode>
      <podcast:episode>38</podcast:episode>
      <itunes:title>A close look at Peloton’s developer experience survey | Thansha Sadacharam (Peloton)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">4a7edafd-2da6-4c2c-b598-5c95c172a5e3</guid>
      <link>https://share.transistor.fm/s/e46ee296</link>
      <description>
        <![CDATA[<p>Thansha Sadacharam, who leads Tech Learning and Insights at Peloton, walks us through the journey of building the company’s developer experience survey. She shares what went into the survey’s design, rollout, and maintenance, as well as the different teams involved.</p><p><br><strong>Discussion points: </strong></p><ul><li>(1:19) Where the idea for running a developer survey originated</li><li>(6:36) Advice for other leaders getting buy-in for these initiatives</li><li>(11:27) The first steps in designing the survey</li><li>(18:21) How the survey incorporated benchmarking</li><li>(20:30) Measuring developer satisfaction</li><li>(22:37) Refining the question items </li><li>(25:50) How long the survey was</li><li>(26:50) What was involved in trimming the questions </li><li>(29:28) Writing survey questions </li><li>(33:12) How much time was spent developing the survey</li><li>(35:19) The communication plan for launching the survey</li><li>(42:05) Driving participation rates  </li><li>(45:21) Sampling and how often surveys are being sent </li><li>(49:21) How the information was presented </li><li>(54:10) Feeling nervous about sending out surveys </li></ul><p><br></p><p><strong>Mentions and links</strong></p><p>Follow Thansha on <a href="https://www.linkedin.com/in/thansha-sadacharam-3a850288/">LinkedIn</a></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Thansha Sadacharam, who leads Tech Learning and Insights at Peloton, walks us through the journey of building the company’s developer experience survey. She shares what went into the survey’s design, rollout, and maintenance, as well as the different teams involved.</p><p><br><strong>Discussion points: </strong></p><ul><li>(1:19) Where the idea for running a developer survey originated</li><li>(6:36) Advice for other leaders getting buy-in for these initiatives</li><li>(11:27) The first steps in designing the survey</li><li>(18:21) How the survey incorporated benchmarking</li><li>(20:30) Measuring developer satisfaction</li><li>(22:37) Refining the question items </li><li>(25:50) How long the survey was</li><li>(26:50) What was involved in trimming the questions </li><li>(29:28) Writing survey questions </li><li>(33:12) How much time was spent developing the survey</li><li>(35:19) The communication plan for launching the survey</li><li>(42:05) Driving participation rates  </li><li>(45:21) Sampling and how often surveys are being sent </li><li>(49:21) How the information was presented </li><li>(54:10) Feeling nervous about sending out surveys </li></ul><p><br></p><p><strong>Mentions and links</strong></p><p>Follow Thansha on <a href="https://www.linkedin.com/in/thansha-sadacharam-3a850288/">LinkedIn</a></p>]]>
      </content:encoded>
      <pubDate>Wed, 24 May 2023 07:20:22 -0600</pubDate>
      <author>DX</author>
      <enclosure url="https://media.transistor.fm/e46ee296/c82bd8d4.mp3" length="56318815" type="audio/mpeg"/>
      <itunes:author>DX</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/xbEgr3S-xA7WOwxRbT66f9PGYgrckvUtR_gdIFDMLXc/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lcGlz/b2RlLzEzNTEyODIv/MTY5NTg0MjkzMi1h/cnR3b3JrLmpwZw.jpg"/>
      <itunes:duration>3514</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Thansha Sadacharam, who leads Tech Learning and Insights at Peloton, walks us through the journey of building the company’s developer experience survey. She shares what went into the survey’s design, rollout, and maintenance, as well as the different teams involved.</p><p><br><strong>Discussion points: </strong></p><ul><li>(1:19) Where the idea for running a developer survey originated</li><li>(6:36) Advice for other leaders getting buy-in for these initiatives</li><li>(11:27) The first steps in designing the survey</li><li>(18:21) How the survey incorporated benchmarking</li><li>(20:30) Measuring developer satisfaction</li><li>(22:37) Refining the question items </li><li>(25:50) How long the survey was</li><li>(26:50) What was involved in trimming the questions </li><li>(29:28) Writing survey questions </li><li>(33:12) How much time was spent developing the survey</li><li>(35:19) The communication plan for launching the survey</li><li>(42:05) Driving participation rates  </li><li>(45:21) Sampling and how often surveys are being sent </li><li>(49:21) How the information was presented </li><li>(54:10) Feeling nervous about sending out surveys </li></ul><p><br></p><p><strong>Mentions and links</strong></p><p>Follow Thansha on <a href="https://www.linkedin.com/in/thansha-sadacharam-3a850288/">LinkedIn</a></p>]]>
      </itunes:summary>
      <itunes:keywords>Software Engineering, Developer Experience, Developer Productivity, Engineering Leadership</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.linkedin.com/in/abinoda/" img="https://img.transistorcdn.com/3JLLtbOaHGiInF_ixHI50J6hrbPvsmkptOyXbjqu2gw/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vMDNiMDZmMjkt/ZmQxMy00NmNlLWE5/YzEtOTAzODQ4YTE2/YjgzLzE2NjU0MzA2/ODktaW1hZ2UuanBn.jpg">Abi Noda</podcast:person>
    </item>
    <item>
      <title>A better way to measure developer productivity | A special episode with Laura Tacho and Abi Noda</title>
      <itunes:episode>37</itunes:episode>
      <podcast:episode>37</podcast:episode>
      <itunes:title>A better way to measure developer productivity | A special episode with Laura Tacho and Abi Noda</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">d0fc1d17-780e-4994-84e2-be6d5bd4958b</guid>
      <link>https://share.transistor.fm/s/24fed126</link>
      <description>
        <![CDATA[<p>In this episode, Abi is interviewed by Laura Tacho about the new paper he co-authored with Dr. Nicole Forsgren, Dr. Margaret-Anne Storey, and Dr. Michaela Greiler. Abi and Laura discuss the pitfalls of some of the common metrics organizations use, and how the new paper builds on prior frameworks such as DORA and SPACE to offer a new approach to measuring and improving developer productivity. </p><p><strong>Discussion topics:</strong></p><ul><li>(2:20) Laura’s background</li><li>(3:59) Laura’s view on git metrics</li><li>(11:05) What developer experience (DevEx) is </li><li>(14:37) How the authors came together for this paper </li><li>(18:55) How DORA and SPACE are different</li><li>(22:38) Limitations of DORA metrics </li><li>(24:43) Employing the DORA metrics at GitHub</li><li>(27:47) What the SPACE framework is</li><li>(30:44) Whether to use DORA or SPACE or both</li><li>(33:54) Limitations of the SPACE framework</li><li>(37:29) The need for a new approach </li><li>(38:46) What the new DevEx paper solves </li><li>(40:13) The three dimensions of developer experience </li><li>(40:54) Flow state </li><li>(43:10) Feedback loops</li><li>(43:52) Cognitive load </li><li>(44:51) Why developer sentiment matters</li><li>(47:58) Using both perceptual and workflow measures</li><li>(50:59) Examples of perceptual and workflow measures </li><li>(54:05) How to collect metrics </li><li>(59:47) How other companies are measuring and improving developer experience</li><li>(01:02:56) Advice for earlier-stage or growing organizations</li></ul><p><br><strong>Resources for learning more about the DevEx framework:</strong></p><p>Read the <a href="https://queue.acm.org/detail.cfm?id=3595878">new paper</a> on ACM Queue</p><p>Read <a href="https://getdx.com/news/measuring-developer-productivity">Abi’s announcement</a> about the new paper </p><p>Read how <a href="https://getdx.com/developer-experience-management-paper">top companies</a> measure developer productivity 
</p><p><br></p><p><strong>Connect with Abi and Laura </strong></p><p>Sign up for Laura’s course, <a href="https://maven.com/high-performing-software-teams/measuring-development-team-performance">Measuring Development Team Performance</a></p><p>Connect with Laura on <a href="https://www.linkedin.com/in/lauratacho/">LinkedIn</a> or <a href="https://twitter.com/rhein_wein">Twitter</a></p><p>Connect with Abi on <a href="https://www.linkedin.com/in/abinoda/">LinkedIn</a> or <a href="https://twitter.com/abinoda">Twitter</a></p><p><br></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode, Abi is interviewed by Laura Tacho about the new paper he co-authored with Dr. Nicole Forsgren, Dr. Margaret-Anne Storey, and Dr. Michaela Greiler. Abi and Laura discuss the pitfalls of some of the common metrics organizations use, and how the new paper builds on prior frameworks such as DORA and SPACE to offer a new approach to measuring and improving developer productivity. </p><p><strong>Discussion topics:</strong></p><ul><li>(2:20) Laura’s background</li><li>(3:59) Laura’s view on git metrics</li><li>(11:05) What developer experience (DevEx) is </li><li>(14:37) How the authors came together for this paper </li><li>(18:55) How DORA and SPACE are different</li><li>(22:38) Limitations of DORA metrics </li><li>(24:43) Employing the DORA metrics at GitHub</li><li>(27:47) What the SPACE framework is</li><li>(30:44) Whether to use DORA or SPACE or both</li><li>(33:54) Limitations of the SPACE framework</li><li>(37:29) The need for a new approach </li><li>(38:46) What the new DevEx paper solves </li><li>(40:13) The three dimensions of developer experience </li><li>(40:54) Flow state </li><li>(43:10) Feedback loops</li><li>(43:52) Cognitive load </li><li>(44:51) Why developer sentiment matters</li><li>(47:58) Using both perceptual and workflow measures</li><li>(50:59) Examples of perceptual and workflow measures </li><li>(54:05) How to collect metrics </li><li>(59:47) How other companies are measuring and improving developer experience</li><li>(01:02:56) Advice for earlier-stage or growing organizations</li></ul><p><br><strong>Resources for learning more about the DevEx framework:</strong></p><p>Read the <a href="https://queue.acm.org/detail.cfm?id=3595878">new paper</a> on ACM Queue</p><p>Read <a href="https://getdx.com/news/measuring-developer-productivity">Abi’s announcement</a> about the new paper </p><p>Read how <a href="https://getdx.com/developer-experience-management-paper">top companies</a> measure developer productivity 
</p><p><br></p><p><strong>Connect with Abi and Laura </strong></p><p>Sign up for Laura’s course, <a href="https://maven.com/high-performing-software-teams/measuring-development-team-performance">Measuring Development Team Performance</a></p><p>Connect with Laura on <a href="https://www.linkedin.com/in/lauratacho/">LinkedIn</a> or <a href="https://twitter.com/rhein_wein">Twitter</a></p><p>Connect with Abi on <a href="https://www.linkedin.com/in/abinoda/">LinkedIn</a> or <a href="https://twitter.com/abinoda">Twitter</a></p><p><br></p>]]>
      </content:encoded>
      <pubDate>Tue, 16 May 2023 14:16:44 -0600</pubDate>
      <author>DX</author>
      <enclosure url="https://media.transistor.fm/24fed126/fced0f9e.mp3" length="65847050" type="audio/mpeg"/>
      <itunes:author>DX</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/weL5hkOiVV--Cm-CbseSf3OORc8zKjOGhF9_Apw5o4w/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lcGlz/b2RlLzEzMzgyNjMv/MTY4NTA1MjEzOS1h/cnR3b3JrLmpwZw.jpg"/>
      <itunes:duration>4109</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode, Abi is interviewed by Laura Tacho about the new paper he co-authored with Dr. Nicole Forsgren, Dr. Margaret-Anne Storey, and Dr. Michaela Greiler. Abi and Laura discuss the pitfalls of some of the common metrics organizations use, and how the new paper builds on prior frameworks such as DORA and SPACE to offer a new approach to measuring and improving developer productivity. </p><p><strong>Discussion topics:</strong></p><ul><li>(2:20) Laura’s background</li><li>(3:59) Laura’s view on git metrics</li><li>(11:05) What developer experience (DevEx) is </li><li>(14:37) How the authors came together for this paper </li><li>(18:55) How DORA and SPACE are different</li><li>(22:38) Limitations of DORA metrics </li><li>(24:43) Employing the DORA metrics at GitHub</li><li>(27:47) What the SPACE framework is</li><li>(30:44) Whether to use DORA or SPACE or both</li><li>(33:54) Limitations of the SPACE framework</li><li>(37:29) The need for a new approach </li><li>(38:46) What the new DevEx paper solves </li><li>(40:13) The three dimensions of developer experience </li><li>(40:54) Flow state </li><li>(43:10) Feedback loops</li><li>(43:52) Cognitive load </li><li>(44:51) Why developer sentiment matters</li><li>(47:58) Using both perceptual and workflow measures</li><li>(50:59) Examples of perceptual and workflow measures </li><li>(54:05) How to collect metrics </li><li>(59:47) How other companies are measuring and improving developer experience</li><li>(01:02:56) Advice for earlier-stage or growing organizations</li></ul><p><br><strong>Resources for learning more about the DevEx framework:</strong></p><p>Read the <a href="https://queue.acm.org/detail.cfm?id=3595878">new paper</a> on ACM Queue</p><p>Read <a href="https://getdx.com/news/measuring-developer-productivity">Abi’s announcement</a> about the new paper </p><p>Read how <a href="https://getdx.com/developer-experience-management-paper">top companies</a> measure developer productivity 
</p><p><br></p><p><strong>Connect with Abi and Laura </strong></p><p>Sign up for Laura’s course, <a href="https://maven.com/high-performing-software-teams/measuring-development-team-performance">Measuring Development Team Performance</a></p><p>Connect with Laura on <a href="https://www.linkedin.com/in/lauratacho/">LinkedIn</a> or <a href="https://twitter.com/rhein_wein">Twitter</a></p><p>Connect with Abi on <a href="https://www.linkedin.com/in/abinoda/">LinkedIn</a> or <a href="https://twitter.com/abinoda">Twitter</a></p><p><br></p>]]>
      </itunes:summary>
      <itunes:keywords>Software Engineering, Developer Experience, Developer Productivity, Engineering Leadership</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
    </item>
    <item>
      <title>The developer experience of building a database | Tara Hernandez (MongoDB, Google)</title>
      <itunes:episode>36</itunes:episode>
      <podcast:episode>36</podcast:episode>
      <itunes:title>The developer experience of building a database | Tara Hernandez (MongoDB, Google)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">7a392dff-a400-48ca-aa67-0ea9320559e1</guid>
      <link>https://share.transistor.fm/s/f33620da</link>
      <description>
        <![CDATA[<p>Tara Hernandez, the VP of Developer Productivity at MongoDB, joins the podcast to give an inside look at what the developer experience looks like at an organization that develops a database. Here, Tara shares what it looks like to develop, test, and release changes at MongoDB, while also providing insight into how the company invests in developer productivity more broadly. </p><p><strong>Discussion points: </strong></p><ul><li>(0:57) What was going on at the time Tara joined </li><li>(4:37) Tara’s perspective on the buzz of platform engineering</li><li>(7:38) What’s involved in building and testing a database</li><li>(10:11) The development environment at MongoDB</li><li>(13:14) How testing works</li><li>(16:50) What the release process looks like</li><li>(19:27) What goes into performance testing a release</li><li>(21:31) MongoDB’s investment in engineering enablement </li><li>(22:39) Takeaways from working on databases</li><li>(24:24) Effecting cultural change</li><li>(26:40) Opportunities Tara’s team identified to change culture</li><li>(29:12) Managing technical debt</li><li>(33:06) MongoDB’s culture around developer experience </li><li>(34:59) Why Evergreen CI is open source</li></ul><p><br></p><p><strong>Mentions and links: <br></strong>Follow Tara on <a href="https://www.linkedin.com/in/tara-hernandez/">LinkedIn</a> or <a href="https://twitter.com/tequilarista">Twitter</a><br>Read more about MongoDB’s <a href="https://www.mongodb.com/blog/post/evergreen-continuous-integration-why-we-reinvented-wheel">“Evergreen” Continuous Integration</a> <br>Visit MongoDB’s <a href="https://www.mongodb.com/blog/channel/engineering-blog">engineering blog <br></a><br></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Tara Hernandez, the VP of Developer Productivity at MongoDB, joins the podcast to give an inside look at what the developer experience looks like at an organization that develops a database. Here, Tara shares what it looks like to develop, test, and release changes at MongoDB, while also providing insight into how the company invests in developer productivity more broadly. </p><p><strong>Discussion points: </strong></p><ul><li>(0:57) What was going on at the time Tara joined </li><li>(4:37) Tara’s perspective on the buzz of platform engineering</li><li>(7:38) What’s involved in building and testing a database</li><li>(10:11) The development environment at MongoDB</li><li>(13:14) How testing works</li><li>(16:50) What the release process looks like</li><li>(19:27) What goes into performance testing a release</li><li>(21:31) MongoDB’s investment in engineering enablement </li><li>(22:39) Takeaways from working on databases</li><li>(24:24) Effecting cultural change</li><li>(26:40) Opportunities Tara’s team identified to change culture</li><li>(29:12) Managing technical debt</li><li>(33:06) MongoDB’s culture around developer experience </li><li>(34:59) Why Evergreen CI is open source</li></ul><p><br></p><p><strong>Mentions and links: <br></strong>Follow Tara on <a href="https://www.linkedin.com/in/tara-hernandez/">LinkedIn</a> or <a href="https://twitter.com/tequilarista">Twitter</a><br>Read more about MongoDB’s <a href="https://www.mongodb.com/blog/post/evergreen-continuous-integration-why-we-reinvented-wheel">“Evergreen” Continuous Integration</a> <br>Visit MongoDB’s <a href="https://www.mongodb.com/blog/channel/engineering-blog">engineering blog <br></a><br></p>]]>
      </content:encoded>
      <pubDate>Tue, 02 May 2023 07:32:56 -0600</pubDate>
      <author>DX</author>
      <enclosure url="https://media.transistor.fm/f33620da/ad4454fa.mp3" length="95109246" type="audio/mpeg"/>
      <itunes:author>DX</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/Aggrx8A1Hl-OdZ6alWFg-md1wdN1RIaTN9-qlUWxvVU/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lcGlz/b2RlLzEzMTcwMjcv/MTY5NTg0MzEzMC1h/cnR3b3JrLmpwZw.jpg"/>
      <itunes:duration>2376</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Tara Hernandez, the VP of Developer Productivity at MongoDB, joins the podcast to give an inside look at what the developer experience looks like at an organization that develops a database. Here, Tara shares what it looks like to develop, test, and release changes at MongoDB, while also providing insight into how the company invests in developer productivity more broadly. </p><p><strong>Discussion points: </strong></p><ul><li>(0:57) What was going on at the time Tara joined </li><li>(4:37) Tara’s perspective on the buzz of platform engineering</li><li>(7:38) What’s involved in building and testing a database</li><li>(10:11) The development environment at MongoDB</li><li>(13:14) How testing works</li><li>(16:50) What the release process looks like</li><li>(19:27) What goes into performance testing a release</li><li>(21:31) MongoDB’s investment in engineering enablement </li><li>(22:39) Takeaways from working on databases</li><li>(24:24) Effecting cultural change</li><li>(26:40) Opportunities Tara’s team identified to change culture</li><li>(29:12) Managing technical debt</li><li>(33:06) MongoDB’s culture around developer experience </li><li>(34:59) Why Evergreen CI is open source</li></ul><p><br></p><p><strong>Mentions and links: <br></strong>Follow Tara on <a href="https://www.linkedin.com/in/tara-hernandez/">LinkedIn</a> or <a href="https://twitter.com/tequilarista">Twitter</a><br>Read more about MongoDB’s <a href="https://www.mongodb.com/blog/post/evergreen-continuous-integration-why-we-reinvented-wheel">“Evergreen” Continuous Integration</a> <br>Visit MongoDB’s <a href="https://www.mongodb.com/blog/channel/engineering-blog">engineering blog <br></a><br></p>]]>
      </itunes:summary>
      <itunes:keywords>Software Engineering, Developer Experience, Developer Productivity, Engineering Leadership</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.linkedin.com/in/abinoda/" img="https://img.transistorcdn.com/3JLLtbOaHGiInF_ixHI50J6hrbPvsmkptOyXbjqu2gw/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vMDNiMDZmMjkt/ZmQxMy00NmNlLWE5/YzEtOTAzODQ4YTE2/YjgzLzE2NjU0MzA2/ODktaW1hZ2UuanBn.jpg">Abi Noda</podcast:person>
    </item>
    <item>
      <title>How teams use productivity metrics at LinkedIn | Max Kanat-Alexander (LinkedIn, Google)</title>
      <itunes:episode>35</itunes:episode>
      <podcast:episode>35</podcast:episode>
      <itunes:title>How teams use productivity metrics at LinkedIn | Max Kanat-Alexander (LinkedIn, Google)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">58bca0e9-1043-4631-83a9-f674a2ca2249</guid>
      <link>https://share.transistor.fm/s/c97e65eb</link>
      <description>
        <![CDATA[<p>Max Kanat-Alexander, the Tech Lead for the Developer Productivity and Insights Team at LinkedIn, shares an inside look at LinkedIn’s metrics platform and how teams across the organization use it. </p><p><strong>Discussion points: </strong></p><ul><li>(1:31) Why Max shares how his team is measuring productivity</li><li>(3:20) Why some teams use metrics and some don’t </li><li>(6:03) The types of metrics Max’s team focuses on</li><li>(12:59) The role of TPMs</li><li>(17:05) How Max would measure productivity if he weren’t at LinkedIn</li><li>(25:04) Surprises in how teams are using metrics at LinkedIn</li><li>(31:27) The tooling required to enable metrics for teams to use</li><li>(36:41) Qualitative versus quantitative metrics</li><li>(40:39) Measuring code quality at Google </li><li>(46:16) Whether a centralized team should own measurement</li></ul><p><br></p><p><strong>Mentions and links:</strong><br>Connect with Max on <a href="https://www.linkedin.com/in/mkanat/">LinkedIn</a> or <a href="https://twitter.com/mkanat?">Twitter</a><br>Read the article, <a href="https://engineering.linkedin.com/blog/2023/inside-look--measuring-developer-productivity-and-happiness-at-l">Measuring Developer Productivity and Happiness at LinkedIn</a><br>Listen to the first interview with Max and his colleague Or Michael Berlowitz: <a href="https://getdx.com/podcast/23">Episode 23<br></a>Abi’s blog post on the <a href="https://newsletter.abinoda.com/p/choosing-engineering-metrics">Three-Bucket Framework for Engineering Metrics</a></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Max Kanat-Alexander, the Tech Lead for the Developer Productivity and Insights Team at LinkedIn, shares an inside look at LinkedIn’s metrics platform and how teams across the organization use it. </p><p><strong>Discussion points: </strong></p><ul><li>(1:31) Why Max shares how his team is measuring productivity</li><li>(3:20) Why some teams use metrics and some don’t </li><li>(6:03) The types of metrics Max’s team focuses on</li><li>(12:59) The role of TPMs</li><li>(17:05) How Max would measure productivity if he weren’t at LinkedIn</li><li>(25:04) Surprises in how teams are using metrics at LinkedIn</li><li>(31:27) The tooling required to enable metrics for teams to use</li><li>(36:41) Qualitative versus quantitative metrics</li><li>(40:39) Measuring code quality at Google </li><li>(46:16) Whether a centralized team should own measurement</li></ul><p><br></p><p><strong>Mentions and links:</strong><br>Connect with Max on <a href="https://www.linkedin.com/in/mkanat/">LinkedIn</a> or <a href="https://twitter.com/mkanat?">Twitter</a><br>Read the article, <a href="https://engineering.linkedin.com/blog/2023/inside-look--measuring-developer-productivity-and-happiness-at-l">Measuring Developer Productivity and Happiness at LinkedIn</a><br>Listen to the first interview with Max and his colleague Or Michael Berlowitz: <a href="https://getdx.com/podcast/23">Episode 23<br></a>Abi’s blog post on the <a href="https://newsletter.abinoda.com/p/choosing-engineering-metrics">Three-Bucket Framework for Engineering Metrics</a></p>]]>
      </content:encoded>
      <pubDate>Tue, 25 Apr 2023 20:59:02 -0600</pubDate>
      <author>DX</author>
      <enclosure url="https://media.transistor.fm/c97e65eb/6973c3d7.mp3" length="51287432" type="audio/mpeg"/>
      <itunes:author>DX</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/U0E2C2mhDDKGpLXgTGX_6qjt2V7jwHP1PM6GGVY5Eow/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lcGlz/b2RlLzEzMDcyOTYv/MTY5NTg0MzEwOC1h/cnR3b3JrLmpwZw.jpg"/>
      <itunes:duration>3199</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Max Kanat-Alexander, the Tech Lead for the Developer Productivity and Insights Team at LinkedIn, shares an inside look at LinkedIn’s metrics platform and how teams across the organization use it. </p><p><strong>Discussion points: </strong></p><ul><li>(1:31) Why Max shares how his team is measuring productivity</li><li>(3:20) Why some teams use metrics and some don’t </li><li>(6:03) The types of metrics Max’s team focuses on</li><li>(12:59) The role of TPMs</li><li>(17:05) How Max would measure productivity if he weren’t at LinkedIn</li><li>(25:04) Surprises in how teams are using metrics at LinkedIn</li><li>(31:27) The tooling required to enable metrics for teams to use</li><li>(36:41) Qualitative versus quantitative metrics</li><li>(40:39) Measuring code quality at Google </li><li>(46:16) Whether a centralized team should own measurement</li></ul><p><br></p><p><strong>Mentions and links:</strong><br>Connect with Max on <a href="https://www.linkedin.com/in/mkanat/">LinkedIn</a> or <a href="https://twitter.com/mkanat?">Twitter</a><br>Read the article, <a href="https://engineering.linkedin.com/blog/2023/inside-look--measuring-developer-productivity-and-happiness-at-l">Measuring Developer Productivity and Happiness at LinkedIn</a><br>Listen to the first interview with Max and his colleague Or Michael Berlowitz: <a href="https://getdx.com/podcast/23">Episode 23<br></a>Abi’s blog post on the <a href="https://newsletter.abinoda.com/p/choosing-engineering-metrics">Three-Bucket Framework for Engineering Metrics</a></p>]]>
      </itunes:summary>
      <itunes:keywords>Software Engineering, Developer Experience, Developer Productivity, Engineering Leadership</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.linkedin.com/in/abinoda/" img="https://img.transistorcdn.com/3JLLtbOaHGiInF_ixHI50J6hrbPvsmkptOyXbjqu2gw/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vMDNiMDZmMjkt/ZmQxMy00NmNlLWE5/YzEtOTAzODQ4YTE2/YjgzLzE2NjU0MzA2/ODktaW1hZ2UuanBn.jpg">Abi Noda</podcast:person>
    </item>
    <item>
      <title>Inside Etsy’s multi-year DevEx initiative | Mike Fisher (Etsy, PayPal)</title>
      <itunes:episode>34</itunes:episode>
      <podcast:episode>34</podcast:episode>
      <itunes:title>Inside Etsy’s multi-year DevEx initiative | Mike Fisher (Etsy, PayPal)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">edd9f5c9-6b16-492a-a436-9553566138f9</guid>
      <link>https://share.transistor.fm/s/0caa28a9</link>
      <description>
        <![CDATA[<p>Mike Fisher, the former CTO at Etsy, spearheaded a multi-year developer experience initiative aimed at improving developer happiness and efficiency. Here, he shares the story of that initiative, including the pillars of the program and the investment that went into it. Towards the end of the conversation, Mike also shares his perspective on measuring developer productivity. </p><p><strong>Discussion points:</strong></p><ul><li>(1:31) What was happening at Etsy when Mike joined </li><li>(4:08) The scaling challenges Etsy faced</li><li>(6:08) Deciding on the term “developer experience” </li><li>(9:35) Whether developer experience is a new approach</li><li>(11:24) The pillars of Etsy’s DevEx initiative </li><li>(15:49) Conveying the length of time required for this initiative</li><li>(18:11) The investment allocated to the initiative </li><li>(20:04) Talking about the ROI of DevEx initiatives </li><li>(22:50) Who was actually leading this work</li><li>(24:37) Etsy’s experience with platform teams </li><li>(30:42) Advice for leaders championing DevEx initiatives</li><li>(34:45) Framing the conversation about getting budget for a DevEx initiative</li><li>(37:45) How leaders can address the efficiency conversation</li><li>(42:00) Measuring productivity </li><li>(45:49) The “experiment velocity” metric </li></ul><p>‍</p><p><strong>Mentions and links:<br></strong>Follow Mike on <a href="https://www.linkedin.com/in/mike-fisher-3317a8/">LinkedIn</a> or <a href="https://twitter.com/MikeFisher_Fish">Twitter</a></p><p>Subscribe to Mike’s newsletter, <a href="https://mikefisher.substack.com/">Fish Food for Thought</a></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Mike Fisher, the former CTO at Etsy, spearheaded a multi-year developer experience initiative aimed at improving developer happiness and efficiency. Here, he shares the story of that initiative, including the pillars of the program and the investment that went into it. Towards the end of the conversation, Mike also shares his perspective on measuring developer productivity. </p><p><strong>Discussion points:</strong></p><ul><li>(1:31) What was happening at Etsy when Mike joined </li><li>(4:08) The scaling challenges Etsy faced</li><li>(6:08) Deciding on the term “developer experience” </li><li>(9:35) Whether developer experience is a new approach</li><li>(11:24) The pillars of Etsy’s DevEx initiative </li><li>(15:49) Conveying the length of time required for this initiative</li><li>(18:11) The investment allocated to the initiative </li><li>(20:04) Talking about the ROI of DevEx initiatives </li><li>(22:50) Who was actually leading this work</li><li>(24:37) Etsy’s experience with platform teams </li><li>(30:42) Advice for leaders championing DevEx initiatives</li><li>(34:45) Framing the conversation about getting budget for a DevEx initiative</li><li>(37:45) How leaders can address the efficiency conversation</li><li>(42:00) Measuring productivity </li><li>(45:49) The “experiment velocity” metric </li></ul><p>‍</p><p><strong>Mentions and links:<br></strong>Follow Mike on <a href="https://www.linkedin.com/in/mike-fisher-3317a8/">LinkedIn</a> or <a href="https://twitter.com/MikeFisher_Fish">Twitter</a></p><p>Subscribe to Mike’s newsletter, <a href="https://mikefisher.substack.com/">Fish Food for Thought</a></p>]]>
      </content:encoded>
      <pubDate>Tue, 18 Apr 2023 21:03:26 -0600</pubDate>
      <author>DX</author>
      <enclosure url="https://media.transistor.fm/0caa28a9/8b36730d.mp3" length="126494829" type="audio/mpeg"/>
      <itunes:author>DX</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/ND3LXDeurLmNC6a9t7fp7sBeh6HsrwPM-pTEg-tjeu0/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lcGlz/b2RlLzEyOTU3NjEv/MTY5NTg0MzIzMC1h/cnR3b3JrLmpwZw.jpg"/>
      <itunes:duration>3160</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Mike Fisher, the former CTO at Etsy, spearheaded a multi-year developer experience initiative aimed at improving developer happiness and efficiency. Here, he shares the story of that initiative, including the pillars of the program and the investment that went into it. Towards the end of the conversation, Mike also shares his perspective on measuring developer productivity. </p><p><strong>Discussion points:</strong></p><ul><li>(1:31) What was happening at Etsy when Mike joined </li><li>(4:08) The scaling challenges Etsy faced</li><li>(6:08) Deciding on the term “developer experience” </li><li>(9:35) Whether developer experience is a new approach</li><li>(11:24) The pillars of Etsy’s DevEx initiative </li><li>(15:49) Conveying the length of time required for this initiative</li><li>(18:11) The investment allocated to the initiative </li><li>(20:04) Talking about the ROI of DevEx initiatives </li><li>(22:50) Who was actually leading this work</li><li>(24:37) Etsy’s experience with platform teams </li><li>(30:42) Advice for leaders championing DevEx initiatives</li><li>(34:45) Framing the conversation about getting budget for a DevEx initiative</li><li>(37:45) How leaders can address the efficiency conversation</li><li>(42:00) Measuring productivity </li><li>(45:49) The “experiment velocity” metric </li></ul><p>‍</p><p><strong>Mentions and links:<br></strong>Follow Mike on <a href="https://www.linkedin.com/in/mike-fisher-3317a8/">LinkedIn</a> or <a href="https://twitter.com/MikeFisher_Fish">Twitter</a></p><p>Subscribe to Mike’s newsletter, <a href="https://mikefisher.substack.com/">Fish Food for Thought</a></p>]]>
      </itunes:summary>
      <itunes:keywords>Software Engineering, Developer Experience, Developer Productivity, Engineering Leadership</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.linkedin.com/in/abinoda/" img="https://img.transistorcdn.com/3JLLtbOaHGiInF_ixHI50J6hrbPvsmkptOyXbjqu2gw/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vMDNiMDZmMjkt/ZmQxMy00NmNlLWE5/YzEtOTAzODQ4YTE2/YjgzLzE2NjU0MzA2/ODktaW1hZ2UuanBn.jpg">Abi Noda</podcast:person>
    </item>
    <item>
      <title>Implementing a developer portal | Karl Haworth (American Airlines)</title>
      <itunes:episode>33</itunes:episode>
      <podcast:episode>33</podcast:episode>
      <itunes:title>Implementing a developer portal | Karl Haworth (American Airlines)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">0015d760-3481-4528-a688-901d97050e45</guid>
      <link>https://share.transistor.fm/s/aad47b34</link>
      <description>
        <![CDATA[<p>Karl’s team at American Airlines was an early adopter of Backstage, and in this episode he shares their journey of implementing and rolling out a developer portal. He also describes two of the extensions his team has built for their portal. </p><p><strong>Discussion points:</strong></p><ul><li>(1:24) Where the idea of building a developer portal came from</li><li>(7:24) What the developer experience looked like before the portal </li><li>(10:41) Initiating the project</li><li>(14:16) The decision to choose Backstage </li><li>(16:28) The V1 scope for the portal </li><li>(19:14) Getting adoption for the portal</li><li>(23:35) Defining success for the portal’s adoption </li><li>(28:04) The ideal state for how developers will use the portal</li><li>(30:56) Who should or shouldn’t invest in building a developer portal </li><li>(33:14) Custom extensions Karl’s team has developed for their portal</li><li>(37:46) What’s difficult about developing a new plugin for the Backstage platform</li></ul><p>‍</p><p><strong>Mentions and links:<br></strong>Follow Karl on <a href="https://www.linkedin.com/in/karl-haworth/">LinkedIn</a><br>The <a href="https://tech.aa.com/2021-12-21-runway-pt1/">Runway platform</a> at American Airlines <br>Read more on the <a href="https://tech.aa.com/">engineering blog</a> from American Airlines </p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Karl’s team at American Airlines was an early adopter of Backstage, and in this episode he shares their journey of implementing and rolling out a developer portal. He also describes two of the extensions his team has built for their portal. </p><p><strong>Discussion points:</strong></p><ul><li>(1:24) Where the idea of building a developer portal came from</li><li>(7:24) What the developer experience looked like before the portal </li><li>(10:41) Initiating the project</li><li>(14:16) The decision to choose Backstage </li><li>(16:28) The V1 scope for the portal </li><li>(19:14) Getting adoption for the portal</li><li>(23:35) Defining success for the portal’s adoption </li><li>(28:04) The ideal state for how developers will use the portal</li><li>(30:56) Who should or shouldn’t invest in building a developer portal </li><li>(33:14) Custom extensions Karl’s team has developed for their portal</li><li>(37:46) What’s difficult about developing a new plugin for the Backstage platform</li></ul><p>‍</p><p><strong>Mentions and links:<br></strong>Follow Karl on <a href="https://www.linkedin.com/in/karl-haworth/">LinkedIn</a><br>The <a href="https://tech.aa.com/2021-12-21-runway-pt1/">Runway platform</a> at American Airlines <br>Read more on the <a href="https://tech.aa.com/">engineering blog</a> from American Airlines </p>]]>
      </content:encoded>
      <pubDate>Tue, 04 Apr 2023 21:43:58 -0600</pubDate>
      <author>DX</author>
      <enclosure url="https://media.transistor.fm/aad47b34/ffcdf7c5.mp3" length="130034553" type="audio/mpeg"/>
      <itunes:author>DX</itunes:author>
      <itunes:duration>3249</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Karl’s team at American Airlines was an early adopter of Backstage, and in this episode he shares their journey of implementing and rolling out a developer portal. He also describes two of the extensions his team has built for their portal. </p><p><strong>Discussion points:</strong></p><ul><li>(1:24) Where the idea of building a developer portal came from</li><li>(7:24) What the developer experience looked like before the portal </li><li>(10:41) Initiating the project</li><li>(14:16) The decision to choose Backstage </li><li>(16:28) The V1 scope for the portal </li><li>(19:14) Getting adoption for the portal</li><li>(23:35) Defining success for the portal’s adoption </li><li>(28:04) The ideal state for how developers will use the portal</li><li>(30:56) Who should or shouldn’t invest in building a developer portal </li><li>(33:14) Custom extensions Karl’s team has developed for their portal</li><li>(37:46) What’s difficult about developing a new plugin for the Backstage platform</li></ul><p>‍</p><p><strong>Mentions and links:<br></strong>Follow Karl on <a href="https://www.linkedin.com/in/karl-haworth/">LinkedIn</a><br>The <a href="https://tech.aa.com/2021-12-21-runway-pt1/">Runway platform</a> at American Airlines <br>Read more on the <a href="https://tech.aa.com/">engineering blog</a> from American Airlines </p>]]>
      </itunes:summary>
      <itunes:keywords>Software Engineering, Developer Experience, Developer Productivity, Engineering Leadership</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.linkedin.com/in/abinoda/" img="https://img.transistorcdn.com/3JLLtbOaHGiInF_ixHI50J6hrbPvsmkptOyXbjqu2gw/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vMDNiMDZmMjkt/ZmQxMy00NmNlLWE5/YzEtOTAzODQ4YTE2/YjgzLzE2NjU0MzA2/ODktaW1hZ2UuanBn.jpg">Abi Noda</podcast:person>
    </item>
    <item>
      <title>Bringing the product management discipline to platform teams | Russ Nealis (Plaid)</title>
      <itunes:episode>32</itunes:episode>
      <podcast:episode>32</podcast:episode>
      <itunes:title>Bringing the product management discipline to platform teams | Russ Nealis (Plaid)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">04595930-2407-4b07-b0ae-07f04cb6018c</guid>
      <link>https://share.transistor.fm/s/73156c00</link>
      <description>
        <![CDATA[<p>As product lead at Plaid, Russ Nealis has been focused on introducing the discipline of product management to the Developer Foundations organization. This episode discusses the reasons why PMs are currently uncommon in platform organizations, examples of when having a PM has been helpful, and more. </p><p><strong>Discussion points:</strong></p><ul><li>(1:23) Russ’s role at Plaid </li><li>(2:49) Why platform product managers are uncommon</li><li>(3:28) Backgrounds to look for when hiring a platform PM</li><li>(4:58) Deciding whether to hire a platform PM</li><li>(6:20) Signs that bringing in a Product Manager would be beneficial</li><li>(9:16) How Russ personally became a platform PM</li><li>(12:15) Whether a platform PM is a career path </li><li>(14:55) Articulating the business impact a platform PM has</li><li>(18:56) Challenges Plaid’s platform team has faced without a PM  </li><li>(19:19) Symptoms of a need for product management in an internal-facing team</li><li>(30:15) Whether Twilio had platform PMs  </li><li>(31:22) Example projects where PMs have been crucial</li><li>(34:12) How the book “Ask Your Developer” influenced Twilio’s engineering culture </li><li>(36:13) Getting started with introducing a product management discipline to an organization </li><li>(38:33) Org structure and where platform PMs may report </li><li>(40:00) Career ladder for a platform PM when reporting to engineering leadership</li><li>(41:20) Being product-led or technology-led</li><li>(43:14) How technical skills may help when in a platform PM role</li></ul><p>‍</p><p><strong>Mentions and links: <br></strong>Follow Russ on <a href="https://www.linkedin.com/in/russnealis/">LinkedIn</a> <br>Episode 7 with <a href="https://getdx.com/podcast/7">Will Larson</a> - related to why it’s difficult to find Platform PMs<br>Episode 27 with <a href="https://getdx.com/podcast/27">Jean-Michel Lemieux</a> - related to the percentage of investment that should be put towards platform investments <br><a href="https://www.amazon.com/Escaping-Build-Trap-Effective-Management/dp/B08B46C8R1/ref=sr_1_1?hvadid=472002083496&amp;hvdev=c&amp;hvlocphy=9029757&amp;hvnetw=g&amp;hvqmt=e&amp;hvrand=7758966804803449943&amp;hvtargid=kwd-977893935279&amp;hydadcr=7441_9611280&amp;keywords=the+build+trap+by+melissa+perri&amp;qid=1679346663&amp;sr=8-1">The Build Trap</a> by Melissa Perri<br><a href="https://www.amazon.com/Ask-Your-Developer-Software-Developers/dp/0063018292">Ask Your Developer</a> by Jeff Lawson</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>As product lead at Plaid, Russ Nealis has been focused on introducing the discipline of product management to the Developer Foundations organization. This episode discusses the reasons why PMs are currently uncommon in platform organizations, examples of when having a PM has been helpful, and more. </p><p><strong>Discussion points:</strong></p><ul><li>(1:23) Russ’s role at Plaid </li><li>(2:49) Why platform product managers are uncommon</li><li>(3:28) Backgrounds to look for when hiring a platform PM</li><li>(4:58) Deciding whether to hire a platform PM</li><li>(6:20) Signs that bringing in a Product Manager would be beneficial</li><li>(9:16) How Russ personally became a platform PM</li><li>(12:15) Whether a platform PM is a career path </li><li>(14:55) Articulating the business impact a platform PM has</li><li>(18:56) Challenges Plaid’s platform team has faced without a PM  </li><li>(19:19) Symptoms of a need for product management in an internal-facing team</li><li>(30:15) Whether Twilio had platform PMs  </li><li>(31:22) Example projects where PMs have been crucial</li><li>(34:12) How the book “Ask Your Developer” influenced Twilio’s engineering culture </li><li>(36:13) Getting started with introducing a product management discipline to an organization </li><li>(38:33) Org structure and where platform PMs may report </li><li>(40:00) Career ladder for a platform PM when reporting to engineering leadership</li><li>(41:20) Being product-led or technology-led</li><li>(43:14) How technical skills may help when in a platform PM role</li></ul><p>‍</p><p><strong>Mentions and links: <br></strong>Follow Russ on <a href="https://www.linkedin.com/in/russnealis/">LinkedIn</a> <br>Episode 7 with <a href="https://getdx.com/podcast/7">Will Larson</a> - related to why it’s difficult to find Platform PMs<br>Episode 27 with <a href="https://getdx.com/podcast/27">Jean-Michel Lemieux</a> - related to the percentage of investment that should be put towards platform investments <br><a href="https://www.amazon.com/Escaping-Build-Trap-Effective-Management/dp/B08B46C8R1/ref=sr_1_1?hvadid=472002083496&amp;hvdev=c&amp;hvlocphy=9029757&amp;hvnetw=g&amp;hvqmt=e&amp;hvrand=7758966804803449943&amp;hvtargid=kwd-977893935279&amp;hydadcr=7441_9611280&amp;keywords=the+build+trap+by+melissa+perri&amp;qid=1679346663&amp;sr=8-1">The Build Trap</a> by Melissa Perri<br><a href="https://www.amazon.com/Ask-Your-Developer-Software-Developers/dp/0063018292">Ask Your Developer</a> by Jeff Lawson</p>]]>
      </content:encoded>
      <pubDate>Tue, 28 Mar 2023 20:21:58 -0600</pubDate>
      <author>DX</author>
      <enclosure url="https://media.transistor.fm/73156c00/8449b59d.mp3" length="43475982" type="audio/mpeg"/>
      <itunes:author>DX</itunes:author>
      <itunes:duration>2711</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>As product lead at Plaid, Russ Nealis has been focused on introducing the discipline of product management to the Developer Foundations organization. This episode discusses the reasons why PMs are currently uncommon in platform organizations, examples of when having a PM has been helpful, and more. </p><p><strong>Discussion points:</strong></p><ul><li>(1:23) Russ’s role at Plaid </li><li>(2:49) Why platform product managers are uncommon</li><li>(3:28) Backgrounds to look for when hiring a platform PM</li><li>(4:58) Deciding whether to hire a platform PM</li><li>(6:20) Signs that bringing in a Product Manager would be beneficial</li><li>(9:16) How Russ personally became a platform PM</li><li>(12:15) Whether a platform PM is a career path </li><li>(14:55) Articulating the business impact a platform PM has</li><li>(18:56) Challenges Plaid’s platform team has faced without a PM  </li><li>(19:19) Symptoms of a need for product management in an internal-facing team</li><li>(30:15) Whether Twilio had platform PMs  </li><li>(31:22) Example projects where PMs have been crucial</li><li>(34:12) How the book “Ask Your Developer” influenced Twilio’s engineering culture </li><li>(36:13) Getting started with introducing a product management discipline to an organization </li><li>(38:33) Org structure and where platform PMs may report </li><li>(40:00) Career ladder for a platform PM when reporting to engineering leadership</li><li>(41:20) Being product-led or technology-led</li><li>(43:14) How technical skills may help when in a platform PM role</li></ul><p>‍</p><p><strong>Mentions and links: <br></strong>Follow Russ on <a href="https://www.linkedin.com/in/russnealis/">LinkedIn</a> <br>Episode 7 with <a href="https://getdx.com/podcast/7">Will Larson</a> - related to why it’s difficult to find Platform PMs<br>Episode 27 with <a href="https://getdx.com/podcast/27">Jean-Michel Lemieux</a> - related to the percentage of investment that should be put towards platform investments <br><a href="https://www.amazon.com/Escaping-Build-Trap-Effective-Management/dp/B08B46C8R1/ref=sr_1_1?hvadid=472002083496&amp;hvdev=c&amp;hvlocphy=9029757&amp;hvnetw=g&amp;hvqmt=e&amp;hvrand=7758966804803449943&amp;hvtargid=kwd-977893935279&amp;hydadcr=7441_9611280&amp;keywords=the+build+trap+by+melissa+perri&amp;qid=1679346663&amp;sr=8-1">The Build Trap</a> by Melissa Perri<br><a href="https://www.amazon.com/Ask-Your-Developer-Software-Developers/dp/0063018292">Ask Your Developer</a> by Jeff Lawson</p>]]>
      </itunes:summary>
      <itunes:keywords>Software Engineering, Developer Experience, Developer Productivity, Engineering Leadership</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.linkedin.com/in/abinoda/" img="https://img.transistorcdn.com/3JLLtbOaHGiInF_ixHI50J6hrbPvsmkptOyXbjqu2gw/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vMDNiMDZmMjkt/ZmQxMy00NmNlLWE5/YzEtOTAzODQ4YTE2/YjgzLzE2NjU0MzA2/ODktaW1hZ2UuanBn.jpg">Abi Noda</podcast:person>
    </item>
    <item>
      <title>Intercom’s approach to a great on-call experience | Brian Scanlan (Intercom)</title>
      <itunes:episode>31</itunes:episode>
      <podcast:episode>31</podcast:episode>
      <itunes:title>Intercom’s approach to a great on-call experience | Brian Scanlan (Intercom)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">47cb896c-db6c-46c8-9c1b-3239f7005706</guid>
      <link>https://share.transistor.fm/s/966c47eb</link>
      <description>
        <![CDATA[<p>In this deep-dive episode, Brian Scanlan, Principal Systems Engineer at Intercom, describes how the company’s on-call process works. He explains how the process started and key changes they’ve made over the years, including a new volunteer model, changes to compensation, and more.</p><p><strong>Discussion points:</strong></p><ul><li>(1:28) How on-call started at Intercom</li><li>(10:11) Brian’s background and interest in being on-call</li><li>(14:06) Getting engineers motivated to be on-call </li><li>(16:37) Challenges Intercom saw with on-call as it grew</li><li>(19:53) Having too many people on-call</li><li>(23:20) Having alarms that aren’t useful </li><li>(26:03) Recognizing uneven workload with compensation</li><li>(27:22) Initiating changes to the on-call process </li><li>(30:08) Creating a volunteer model</li><li>(33:02) Addressing concerns that volunteers wouldn’t take action on alarms </li><li>(34:40) Equitability in a volunteer model</li><li>(36:36) Expectations of expertise for being on-call</li><li>(40:56) How volunteers sign up </li><li>(44:15) The Incident Commander role </li><li>(46:19) Using code review for changes to alarms</li><li>(50:02) On-call compensation </li><li>(52:50) Other approaches to compensating on-call</li><li>(55:08) Whether other companies should compensate on-call</li><li>(57:32) How Intercom’s on-call process compares to other companies </li><li>(1:00:46) Recent changes to the on-call process</li><li>(1:04:13) Balancing responsiveness and burnout </li><li>(1:07:12) Signals for evaluating the on-call process </li></ul><p>‍</p><p><strong>Mentions and links: </strong></p><ul><li>Follow Brian on <a href="https://www.linkedin.com/in/scanlanb/">LinkedIn</a> or <a href="https://twitter.com/brian_scanlan">Twitter </a></li><li>Brian’s article: <a href="https://www.intercom.com/blog/rapid-response-how-we-developed-an-on-call-team-at-intercom/">How we fixed our on call process to avoid engineer burnout</a></li><li>Gergely Orosz’s <a href="https://newsletter.pragmaticengineer.com/p/oncall-compensation">On-Call Compensation </a></li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this deep-dive episode, Brian Scanlan, Principal Systems Engineer at Intercom, describes how the company’s on-call process works. He explains how the process started and key changes they’ve made over the years, including a new volunteer model, changes to compensation, and more.</p><p><strong>Discussion points:</strong></p><ul><li>(1:28) How on-call started at Intercom</li><li>(10:11) Brian’s background and interest in being on-call</li><li>(14:06) Getting engineers motivated to be on-call </li><li>(16:37) Challenges Intercom saw with on-call as it grew</li><li>(19:53) Having too many people on-call</li><li>(23:20) Having alarms that aren’t useful </li><li>(26:03) Recognizing uneven workload with compensation</li><li>(27:22) Initiating changes to the on-call process </li><li>(30:08) Creating a volunteer model</li><li>(33:02) Addressing concerns that volunteers wouldn’t take action on alarms </li><li>(34:40) Equitability in a volunteer model</li><li>(36:36) Expectations of expertise for being on-call</li><li>(40:56) How volunteers sign up </li><li>(44:15) The Incident Commander role </li><li>(46:19) Using code review for changes to alarms</li><li>(50:02) On-call compensation </li><li>(52:50) Other approaches to compensating on-call</li><li>(55:08) Whether other companies should compensate on-call</li><li>(57:32) How Intercom’s on-call process compares to other companies </li><li>(1:00:46) Recent changes to the on-call process</li><li>(1:04:13) Balancing responsiveness and burnout </li><li>(1:07:12) Signals for evaluating the on-call process </li></ul><p>‍</p><p><strong>Mentions and links: </strong></p><ul><li>Follow Brian on <a href="https://www.linkedin.com/in/scanlanb/">LinkedIn</a> or <a href="https://twitter.com/brian_scanlan">Twitter </a></li><li>Brian’s article: <a href="https://www.intercom.com/blog/rapid-response-how-we-developed-an-on-call-team-at-intercom/">How we fixed our on call process to avoid engineer burnout</a></li><li>Gergely Orosz’s <a href="https://newsletter.pragmaticengineer.com/p/oncall-compensation">On-Call Compensation </a></li></ul>]]>
      </content:encoded>
      <pubDate>Tue, 07 Mar 2023 18:00:00 -0700</pubDate>
      <author>DX</author>
      <enclosure url="https://media.transistor.fm/966c47eb/c3048b25.mp3" length="169056782" type="audio/mpeg"/>
      <itunes:author>DX</itunes:author>
      <itunes:duration>4224</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this deep-dive episode, Brian Scanlan, Principal Systems Engineer at Intercom, describes how the company’s on-call process works. He explains how the process started and key changes they’ve made over the years, including a new volunteer model, changes to compensation, and more.</p><p><strong>Discussion points:</strong></p><ul><li>(1:28) How on-call started at Intercom</li><li>(10:11) Brian’s background and interest in being on-call</li><li>(14:06) Getting engineers motivated to be on-call </li><li>(16:37) Challenges Intercom saw with on-call as it grew</li><li>(19:53) Having too many people on-call</li><li>(23:20) Having alarms that aren’t useful </li><li>(26:03) Recognizing uneven workload with compensation</li><li>(27:22) Initiating changes to the on-call process </li><li>(30:08) Creating a volunteer model</li><li>(33:02) Addressing concerns that volunteers wouldn’t take action on alarms </li><li>(34:40) Equitability in a volunteer model</li><li>(36:36) Expectations of expertise for being on-call</li><li>(40:56) How volunteers sign up </li><li>(44:15) The Incident Commander role </li><li>(46:19) Using code review for changes to alarms</li><li>(50:02) On-call compensation </li><li>(52:50) Other approaches to compensating on-call</li><li>(55:08) Whether other companies should compensate on-call</li><li>(57:32) How Intercom’s on-call process compares to other companies </li><li>(1:00:46) Recent changes to the on-call process</li><li>(1:04:13) Balancing responsiveness and burnout </li><li>(1:07:12) Signals for evaluating the on-call process </li></ul><p>‍</p><p><strong>Mentions and links: </strong></p><ul><li>Follow Brian on <a href="https://www.linkedin.com/in/scanlanb/">LinkedIn</a> or <a href="https://twitter.com/brian_scanlan">Twitter </a></li><li>Brian’s article: <a href="https://www.intercom.com/blog/rapid-response-how-we-developed-an-on-call-team-at-intercom/">How we fixed our on call process to avoid engineer burnout</a></li><li>Gergely Orosz’s <a href="https://newsletter.pragmaticengineer.com/p/oncall-compensation">On-Call Compensation </a></li></ul>]]>
      </itunes:summary>
      <itunes:keywords>Software Engineering, Developer Experience, Developer Productivity, Engineering Leadership</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.linkedin.com/in/abinoda/" img="https://img.transistorcdn.com/3JLLtbOaHGiInF_ixHI50J6hrbPvsmkptOyXbjqu2gw/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vMDNiMDZmMjkt/ZmQxMy00NmNlLWE5/YzEtOTAzODQ4YTE2/YjgzLzE2NjU0MzA2/ODktaW1hZ2UuanBn.jpg">Abi Noda</podcast:person>
    </item>
    <item>
      <title>How Instagram Reels manages reliability | Jack Li (Instagram, Shopify)</title>
      <itunes:episode>30</itunes:episode>
      <podcast:episode>30</podcast:episode>
      <itunes:title>How Instagram Reels manages reliability | Jack Li (Instagram, Shopify)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">a8bc310a-8df2-4265-99d2-9a8d044edee2</guid>
      <link>https://share.transistor.fm/s/374710e7</link>
      <description>
        <![CDATA[<p>Jack Li explains how his production engineering team rolled out a new incident review process, how they’ve made the case for investing in reliability, and specific tools his team has built to improve reliability.</p><p>—</p><p><br><strong>Discussion points:</strong></p><ul><li>(1:25) How Jack became interested in reliability </li><li>(3:24) Where the Instagram Reels team fits into the broader organization</li><li>(4:05) What Jack’s team focuses on</li><li>(4:55) The role of production engineering at Instagram versus Shopify </li><li>(8:32) The essence of DevOps</li><li>(10:44) Pros and cons of having product-focused teams</li><li>(13:35) How Jack’s team defines and tracks quality</li><li>(15:46) Signals the team monitors outside of systems </li><li>(18:10) Revamping Instagram Reels’ incident management process</li><li>(19:46) Making the case for improving the incident review process</li><li>(28:10) How their incident review process works</li><li>(31:55) The roles involved in an incident review </li><li>(33:40) The value of having incident reviews</li><li>(35:55) Why leaders should be part of incident reviews </li><li>(38:34) Why Jack’s team builds tools for driving reliability goals</li><li>(40:06) The types of tools Jack’s team focuses on </li><li>(43:09) What a merge queue is and why it was built at Shopify</li><li>(51:20) Using a Slack bot for ‘failed build’ alerts</li><li>(52:32) When a company should consider implementing a merge queue</li></ul><p><br>—</p><p><br><strong>Mentions and links: <br></strong>Follow Jack on <a href="https://www.linkedin.com/in/jacktengli/">LinkedIn</a> <br>Jack’s article from his time at Shopify about their <a href="https://shopify.engineering/successfully-merging-work-1000-developers">Merge Queue</a><br>Jack’s talk on Shopify’s Merge Queue at <a href="https://www.youtube.com/watch?v=04TTRJArpVw">GitHub Universe 2019</a></p><p><br></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Jack Li explains how his production engineering team rolled out a new incident review process, how they’ve made the case for investing in reliability, and specific tools his team has built to improve reliability.</p><p>—</p><p><br><strong>Discussion points:</strong></p><ul><li>(1:25) How Jack became interested in reliability </li><li>(3:24) Where the Instagram Reels team fits into the broader organization</li><li>(4:05) What Jack’s team focuses on</li><li>(4:55) The role of production engineering at Instagram versus Shopify </li><li>(8:32) The essence of DevOps</li><li>(10:44) Pros and cons of having product-focused teams</li><li>(13:35) How Jack’s team defines and tracks quality</li><li>(15:46) Signals the team monitors outside of systems </li><li>(18:10) Revamping Instagram Reels’ incident management process</li><li>(19:46) Making the case for improving the incident review process</li><li>(28:10) How their incident review process works</li><li>(31:55) The roles involved in an incident review </li><li>(33:40) The value of having incident reviews</li><li>(35:55) Why leaders should be part of incident reviews </li><li>(38:34) Why Jack’s team builds tools for driving reliability goals</li><li>(40:06) The types of tools Jack’s team focuses on </li><li>(43:09) What a merge queue is and why it was built at Shopify</li><li>(51:20) Using a Slack bot for ‘failed build’ alerts</li><li>(52:32) When a company should consider implementing a merge queue</li></ul><p><br>—</p><p><br><strong>Mentions and links: <br></strong>Follow Jack on <a href="https://www.linkedin.com/in/jacktengli/">LinkedIn</a> <br>Jack’s article from his time at Shopify about their <a href="https://shopify.engineering/successfully-merging-work-1000-developers">Merge Queue</a><br>Jack’s talk on Shopify’s Merge Queue at <a href="https://www.youtube.com/watch?v=04TTRJArpVw">GitHub Universe 2019</a></p><p><br></p>]]>
      </content:encoded>
      <pubDate>Wed, 15 Feb 2023 21:20:35 -0700</pubDate>
      <author>DX</author>
      <enclosure url="https://media.transistor.fm/374710e7/d9e4178f.mp3" length="54310757" type="audio/mpeg"/>
      <itunes:author>DX</itunes:author>
      <itunes:duration>3388</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Jack Li explains how his production engineering team rolled out a new incident review process, how they’ve made the case for investing in reliability, and specific tools his team has built to improve reliability.</p><p>—</p><p><br><strong>Discussion points:</strong></p><ul><li>(1:25) How Jack became interested in reliability </li><li>(3:24) Where the Instagram Reels team fits into the broader organization</li><li>(4:05) What Jack’s team focuses on</li><li>(4:55) The role of production engineering at Instagram versus Shopify </li><li>(8:32) The essence of DevOps</li><li>(10:44) Pros and cons of having product-focused teams</li><li>(13:35) How Jack’s team defines and tracks quality</li><li>(15:46) Signals the team monitors outside of systems </li><li>(18:10) Revamping Instagram Reels’ incident management process</li><li>(19:46) Making the case for improving the incident review process</li><li>(28:10) How their incident review process works</li><li>(31:55) The roles involved in an incident review </li><li>(33:40) The value of having incident reviews</li><li>(35:55) Why leaders should be part of incident reviews </li><li>(38:34) Why Jack’s team builds tools for driving reliability goals</li><li>(40:06) The types of tools Jack’s team focuses on </li><li>(43:09) What a merge queue is and why it was built at Shopify</li><li>(51:20) Using a Slack bot for ‘failed build’ alerts</li><li>(52:32) When a company should consider implementing a merge queue</li></ul><p><br>—</p><p><br><strong>Mentions and links: <br></strong>Follow Jack on <a href="https://www.linkedin.com/in/jacktengli/">LinkedIn</a> <br>Jack’s article from his time at Shopify about their <a href="https://shopify.engineering/successfully-merging-work-1000-developers">Merge Queue</a><br>Jack’s talk on Shopify’s Merge Queue at <a href="https://www.youtube.com/watch?v=04TTRJArpVw">GitHub Universe 2019</a></p><p><br></p>]]>
      </itunes:summary>
      <itunes:keywords>Software Engineering, Developer Experience, Developer Productivity, Engineering Leadership</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.linkedin.com/in/abinoda/" img="https://img.transistorcdn.com/3JLLtbOaHGiInF_ixHI50J6hrbPvsmkptOyXbjqu2gw/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vMDNiMDZmMjkt/ZmQxMy00NmNlLWE5/YzEtOTAzODQ4YTE2/YjgzLzE2NjU0MzA2/ODktaW1hZ2UuanBn.jpg">Abi Noda</podcast:person>
    </item>
    <item>
      <title>A masterclass on DORA – research program, common pitfalls, and future direction | Nathen Harvey (Google)</title>
      <itunes:episode>29</itunes:episode>
      <podcast:episode>29</podcast:episode>
      <itunes:title>A masterclass on DORA – research program, common pitfalls, and future direction | Nathen Harvey (Google)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">65dd4cf4-4903-4397-aafa-f28ef03d008f</guid>
      <link>https://share.transistor.fm/s/05271afb</link>
      <description>
        <![CDATA[<p>Nathen Harvey, who leads DORA at Google, explains what DORA is, how it has evolved in recent years, the common challenges companies face as they adopt DORA metrics, and where the program may be heading in the future.</p><p>—</p><p><br><strong>Discussion points:<br></strong><br></p><p>(1:48) What DORA is today and how it exists within Google</p><p>(3:37) The vision for Google and DORA coming together</p><p>(5:20) How the DORA research program works</p><p>(7:53) Who participates in the DORA survey</p><p>(9:28) How the industry benchmarks are identified </p><p>(11:05) How the reports have evolved over recent years</p><p>(13:55) How reliability is measured </p><p>(15:19) Why the 2022 report didn’t have an Elite category</p><p>(17:11) The new Slowing, Flowing, and Retiring clusters</p><p>(19:25) How to think about applying the benchmarks</p><p>(20:45) Challenges with how DORA metrics are used</p><p>(24:02) Why comparing teams’ DORA metrics is an antipattern </p><p>(26:18) Why ‘industry’ doesn’t matter when comparing organizations to benchmarks </p><p>(29:32) Moving beyond DORA metrics to optimize organizational performance </p><p>(30:56) Defining different DORA metrics</p><p>(36:27) Measuring deployment frequency at the team level, not the organizational level</p><p>(38:29) The capabilities: there’s more to DORA than the four metrics </p><p>(43:09) How DORA and SPACE are related</p><p>(47:58) DORA’s capabilities assessment tool </p><p>(49:26) Where DORA is heading</p><p><br>—</p><p><br><strong>Mentions and links:</strong></p><p>Follow Nathen on <a href="https://www.linkedin.com/in/nathen/">LinkedIn</a> or <a href="https://twitter.com/nathenharvey">Twitter</a></p><p><a href="https://getdx.com/podcast/dora-metrics-space-framework">Engineering Enablement episode</a> with Dr. 
Nicole Forsgren</p><p><a href="https://cloud.google.com/devops/state-of-devops/">2022 State of DevOps report</a>  </p><p>Bryan Finster’s <a href="https://itrevolution.com/product/devops-enterprise-journal-spring-2022/">How to Use &amp; Abuse DORA Metrics</a> (and <a href="https://abinoda.substack.com/p/misuse-dora">Abi’s summary</a> of the paper) </p><p>Engineering Enablement episode with <a href="https://getdx.com/podcast/29">Dr. Margaret-Anne Storey<br></a><br></p><p>Join the DORA community for discussion and events:<a href="https://dora.community/"> dora.community</a> </p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Nathen Harvey, who leads DORA at Google, explains what DORA is, how it has evolved in recent years, the common challenges companies face as they adopt DORA metrics, and where the program may be heading in the future.</p><p>—</p><p><br><strong>Discussion points:<br></strong><br></p><p>(1:48) What DORA is today and how it exists within Google</p><p>(3:37) The vision for Google and DORA coming together</p><p>(5:20) How the DORA research program works</p><p>(7:53) Who participates in the DORA survey</p><p>(9:28) How the industry benchmarks are identified </p><p>(11:05) How the reports have evolved over recent years</p><p>(13:55) How reliability is measured </p><p>(15:19) Why the 2022 report didn’t have an Elite category</p><p>(17:11) The new Slowing, Flowing, and Retiring clusters</p><p>(19:25) How to think about applying the benchmarks</p><p>(20:45) Challenges with how DORA metrics are used</p><p>(24:02) Why comparing teams’ DORA metrics is an antipattern </p><p>(26:18) Why ‘industry’ doesn’t matter when comparing organizations to benchmarks </p><p>(29:32) Moving beyond DORA metrics to optimize organizational performance </p><p>(30:56) Defining different DORA metrics</p><p>(36:27) Measuring deployment frequency at the team level, not the organizational level</p><p>(38:29) The capabilities: there’s more to DORA than the four metrics </p><p>(43:09) How DORA and SPACE are related</p><p>(47:58) DORA’s capabilities assessment tool </p><p>(49:26) Where DORA is heading</p><p><br>—</p><p><br><strong>Mentions and links:</strong></p><p>Follow Nathen on <a href="https://www.linkedin.com/in/nathen/">LinkedIn</a> or <a href="https://twitter.com/nathenharvey">Twitter</a></p><p><a href="https://getdx.com/podcast/dora-metrics-space-framework">Engineering Enablement episode</a> with Dr. 
Nicole Forsgren</p><p><a href="https://cloud.google.com/devops/state-of-devops/">2022 State of DevOps report</a>  </p><p>Bryan Finster’s <a href="https://itrevolution.com/product/devops-enterprise-journal-spring-2022/">How to Use &amp; Abuse DORA Metrics</a> (and <a href="https://abinoda.substack.com/p/misuse-dora">Abi’s summary</a> of the paper) </p><p>Engineering Enablement episode with <a href="https://getdx.com/podcast/29">Dr. Margaret-Anne Storey<br></a><br></p><p>Join the DORA community for discussion and events:<a href="https://dora.community/"> dora.community</a> </p>]]>
      </content:encoded>
      <pubDate>Tue, 24 Jan 2023 20:42:54 -0700</pubDate>
      <author>DX</author>
      <enclosure url="https://media.transistor.fm/05271afb/670ada5e.mp3" length="131502619" type="audio/mpeg"/>
      <itunes:author>DX</itunes:author>
      <itunes:duration>3285</itunes:duration>
      <itunes:summary>Nathen Harvey explains how DORA has evolved in recent years, the common challenges companies face as they adopt DORA metrics, and where the program may be heading in the future. </itunes:summary>
      <itunes:subtitle>Nathen Harvey explains how DORA has evolved in recent years, the common challenges companies face as they adopt DORA metrics, and where the program may be heading in the future. </itunes:subtitle>
      <itunes:keywords>Software Engineering, Developer Experience, Developer Productivity, Engineering Leadership</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.linkedin.com/in/abinoda/" img="https://img.transistorcdn.com/3JLLtbOaHGiInF_ixHI50J6hrbPvsmkptOyXbjqu2gw/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vMDNiMDZmMjkt/ZmQxMy00NmNlLWE5/YzEtOTAzODQ4YTE2/YjgzLzE2NjU0MzA2/ODktaW1hZ2UuanBn.jpg">Abi Noda</podcast:person>
    </item>
    <item>
      <title>An inside look at the SPACE framework | Dr. Margaret-Anne Storey (co-author, SPACE)</title>
      <itunes:episode>28</itunes:episode>
      <podcast:episode>28</podcast:episode>
      <itunes:title>An inside look at the SPACE framework | Dr. Margaret-Anne Storey (co-author, SPACE)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">40752c91-e01b-40dc-b6fe-e5d565a39204</guid>
      <link>https://share.transistor.fm/s/8171cfe3</link>
      <description>
        <![CDATA[<p>This week's guest is Dr. Margaret-Anne Storey, who goes by the name Peggy. Peggy is a professor of Computer Science at the University of Victoria, the Chief Scientist at DX, and co-author of the SPACE Framework, which is the topic of focus in this episode. Today’s conversation discusses what the SPACE framework is and what went into developing the metrics and categories. Peggy also shares where she sees this line of research heading next.  </p><p>—</p><p><br><strong>Discussion points:<br> <br></strong>(1:29) Peggy’s background <br>(4:01) What the SPACE framework is <br>(5:55) Why the researchers came together for this paper<br>(7:27) The process of writing this paper<br>(9:52) How the SPACE categories and acronym emerged <br>(11:50) The authors’ intention for how this framework would be received<br>(13:26) Finding a definition for what developer productivity is<br>(17:08) The metrics included in the SPACE framework <br>(24:48) How SPACE is different from DORA<br>(26:17) Why lines of code and number of pull requests were included as example metrics<br>(27:14) What Peggy is thinking about next</p><p>—</p><p><br><strong>Mentions and links: <br></strong>Where to find Peggy: <a href="https://twitter.com/margaretstorey">Twitter</a>, <a href="https://www.margaretstorey.com/">Website</a><br><a href="https://queue.acm.org/detail.cfm?id=3454124">The SPACE of Developer Productivity: There’s more to it than you think</a> by Nicole Forsgren, Margaret-Anne Storey, Chandra Maddila, Thomas Zimmermann, Brian Houck, and Jenna Butler<br>Abi’s <a href="https://abinoda.substack.com/p/space">summary</a> of the SPACE paper <br>Peggy’s talk, <a href="https://www.youtube.com/watch?v=ZdUAxUBrYLA&amp;feature=youtu.be">What Does Productivity Actually Mean for Developers?</a> </p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This week's guest is Dr. Margaret-Anne Storey, who goes by the name Peggy. Peggy is a professor of Computer Science at the University of Victoria, the Chief Scientist at DX, and co-author of the SPACE Framework, which is the topic of focus in this episode. Today’s conversation discusses what the SPACE framework is and what went into developing the metrics and categories. Peggy also shares where she sees this line of research heading next.  </p><p>—</p><p><br><strong>Discussion points:<br> <br></strong>(1:29) Peggy’s background <br>(4:01) What the SPACE framework is <br>(5:55) Why the researchers came together for this paper<br>(7:27) The process of writing this paper<br>(9:52) How the SPACE categories and acronym emerged <br>(11:50) The authors’ intention for how this framework would be received<br>(13:26) Finding a definition for what developer productivity is<br>(17:08) The metrics included in the SPACE framework <br>(24:48) How SPACE is different from DORA<br>(26:17) Why lines of code and number of pull requests were included as example metrics<br>(27:14) What Peggy is thinking about next</p><p>—</p><p><br><strong>Mentions and links: <br></strong>Where to find Peggy: <a href="https://twitter.com/margaretstorey">Twitter</a>, <a href="https://www.margaretstorey.com/">Website</a><br><a href="https://queue.acm.org/detail.cfm?id=3454124">The SPACE of Developer Productivity: There’s more to it than you think</a> by Nicole Forsgren, Margaret-Anne Storey, Chandra Maddila, Thomas Zimmermann, Brian Houck, and Jenna Butler<br>Abi’s <a href="https://abinoda.substack.com/p/space">summary</a> of the SPACE paper <br>Peggy’s talk, <a href="https://www.youtube.com/watch?v=ZdUAxUBrYLA&amp;feature=youtu.be">What Does Productivity Actually Mean for Developers?</a> </p>]]>
      </content:encoded>
      <pubDate>Tue, 17 Jan 2023 21:49:45 -0700</pubDate>
      <author>DX</author>
      <enclosure url="https://media.transistor.fm/8171cfe3/7a79de4d.mp3" length="36375915" type="audio/mpeg"/>
      <itunes:author>DX</itunes:author>
      <itunes:duration>2268</itunes:duration>
      <itunes:summary>Dr. Margaret-Anne Storey discusses what the SPACE framework is and how the metrics and categories were developed.  </itunes:summary>
      <itunes:subtitle>Dr. Margaret-Anne Storey discusses what the SPACE framework is and how the metrics and categories were developed.  </itunes:subtitle>
      <itunes:keywords>Software Engineering, Developer Experience, Developer Productivity, Engineering Leadership</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.linkedin.com/in/abinoda/" img="https://img.transistorcdn.com/3JLLtbOaHGiInF_ixHI50J6hrbPvsmkptOyXbjqu2gw/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vMDNiMDZmMjkt/ZmQxMy00NmNlLWE5/YzEtOTAzODQ4YTE2/YjgzLzE2NjU0MzA2/ODktaW1hZ2UuanBn.jpg">Abi Noda</podcast:person>
    </item>
    <item>
      <title>Spotify’s failed #SquadGoals | Jeremiah Lee (Spotify, Stripe)</title>
      <itunes:episode>27</itunes:episode>
      <podcast:episode>27</podcast:episode>
      <itunes:title>Spotify’s failed #SquadGoals | Jeremiah Lee (Spotify, Stripe)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">2a66749a-d1c8-4dc8-893d-10bd7fe5ed28</guid>
      <link>https://share.transistor.fm/s/574836d3</link>
      <description>
        <![CDATA[<p>This week’s guest is Jeremiah Lee, who was previously a manager at Stripe and product manager at Spotify. This conversation focuses on org structure, and specifically Jeremiah’s experience with the popular squad model from Spotify. Jeremiah provides the backstory on where the model came from, what parts of the model were a challenge, and advice for leaders either already adopting the model or considering doing so. </p><p><strong>Discussion points:</strong></p><p>(1:40) What the Spotify model is</p><p>(4:39) Jeremiah’s impression of the Spotify model as he joined the company</p><p>(7:29) Spotify’s progress in adopting the model as Jeremiah joined</p><p>(9:55) Challenges with matrix management</p><p>(12:02) The role of engineering managers </p><p>(14:40) What the model was designed to solve </p><p>(15:54) Good autonomy versus toxic autonomy </p><p>(18:51) How Agile coaches were used at Spotify </p><p>(21:39) Advice for teams who are struggling to implement the Spotify model</p><p>(24:50) Advice for leaders who are starting to think about org design</p><p>(27:30) How Stripe approached org structure </p><p>(30:26) How org structure affects a platform team’s work </p><p>(33:32) Tracking engineering org structures </p><p>(36:02) Why the squad model became so popular</p><p>(39:37) What the original authors may have felt about the popularity of the model</p><p><br></p><p><strong>Mentions and links: </strong></p><p>Follow Jeremiah on <a href="https://www.linkedin.com/in/jeremiah-x-lee/">LinkedIn</a></p><p>Jeremiah’s <a href="https://www.jeremiahlee.com/posts/failed-squad-goals/">Spotify’s Failed #SquadGoals</a></p><p>The original whitepaper on the Spotify model: <a href="https://blog.crisp.se/wp-content/uploads/2012/11/SpotifyScaling.pdf">Scaling Agile at Spotify</a></p><p><a href="https://teamtopologies.com/">Team Topologies</a> by Matthew Skelton and Manuel Pais</p><p><a 
href="https://www.amazon.com/Essential-Scrum-Practical-Addison-Wesley-Signature/dp/0137043295">Essential Scrum</a> by Kenneth S. Rubin</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This week’s guest is Jeremiah Lee, who was previously a manager at Stripe and product manager at Spotify. This conversation focuses on org structure, and specifically Jeremiah’s experience with the popular squad model from Spotify. Jeremiah provides the backstory on where the model came from, what parts of the model were a challenge, and advice for leaders either already adopting the model or considering doing so. </p><p><strong>Discussion points:</strong></p><p>(1:40) What the Spotify model is</p><p>(4:39) Jeremiah’s impression of the Spotify model as he joined the company</p><p>(7:29) Spotify’s progress in adopting the model as Jeremiah joined</p><p>(9:55) Challenges with matrix management</p><p>(12:02) The role of engineering managers </p><p>(14:40) What the model was designed to solve </p><p>(15:54) Good autonomy versus toxic autonomy </p><p>(18:51) How Agile coaches were used at Spotify </p><p>(21:39) Advice for teams who are struggling to implement the Spotify model</p><p>(24:50) Advice for leaders who are starting to think about org design</p><p>(27:30) How Stripe approached org structure </p><p>(30:26) How org structure affects a platform team’s work </p><p>(33:32) Tracking engineering org structures </p><p>(36:02) Why the squad model became so popular</p><p>(39:37) What the original authors may have felt about the popularity of the model</p><p><br></p><p><strong>Mentions and links: </strong></p><p>Follow Jeremiah on <a href="https://www.linkedin.com/in/jeremiah-x-lee/">LinkedIn</a></p><p>Jeremiah’s <a href="https://www.jeremiahlee.com/posts/failed-squad-goals/">Spotify’s Failed #SquadGoals</a></p><p>The original whitepaper on the Spotify model: <a href="https://blog.crisp.se/wp-content/uploads/2012/11/SpotifyScaling.pdf">Scaling Agile at Spotify</a></p><p><a href="https://teamtopologies.com/">Team Topologies</a> by Matthew Skelton and Manuel Pais</p><p><a 
href="https://www.amazon.com/Essential-Scrum-Practical-Addison-Wesley-Signature/dp/0137043295">Essential Scrum</a> by Kenneth S. Rubin</p>]]>
      </content:encoded>
      <pubDate>Tue, 10 Jan 2023 19:53:03 -0700</pubDate>
      <author>DX</author>
      <enclosure url="https://media.transistor.fm/574836d3/7e99dbc5.mp3" length="41902793" type="audio/mpeg"/>
      <itunes:author>DX</itunes:author>
      <itunes:duration>2613</itunes:duration>
      <itunes:summary>What parts of the Spotify Squad Model were challenging, and advice for leaders considering adopting the model. </itunes:summary>
      <itunes:subtitle>What parts of the Spotify Squad Model were challenging, and advice for leaders considering adopting the model. </itunes:subtitle>
      <itunes:keywords>Software Engineering, Developer Experience, Developer Productivity, Engineering Leadership</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.linkedin.com/in/abinoda/" img="https://img.transistorcdn.com/3JLLtbOaHGiInF_ixHI50J6hrbPvsmkptOyXbjqu2gw/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vMDNiMDZmMjkt/ZmQxMy00NmNlLWE5/YzEtOTAzODQ4YTE2/YjgzLzE2NjU0MzA2/ODktaW1hZ2UuanBn.jpg">Abi Noda</podcast:person>
    </item>
    <item>
      <title>How much to invest in platform work | Jean-Michel Lemieux (Shopify, Atlassian)</title>
      <itunes:episode>26</itunes:episode>
      <podcast:episode>26</podcast:episode>
      <itunes:title>How much to invest in platform work | Jean-Michel Lemieux (Shopify, Atlassian)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">08ceaee7-4de7-44ea-a723-970af3bf1b0b</guid>
      <link>https://share.transistor.fm/s/846d4d56</link>
      <description>
        <![CDATA[<p>Jean-Michel Lemieux, former CTO of Shopify and VP of Engineering at Atlassian, explains how to advocate for investing in platform work, which projects to fund, and what distinguishes a great platform leader. </p><p>—</p><p><strong>Discussion points:</strong></p><p><br></p><p>(1:38) Jean-Michel’s definition of platform work </p><p>(6:44) Why reliability, performance, and stability do fall within platform work </p><p>(7:24) The consequences of lacking a product mindset in platform</p><p>(9:20) Why and how to advocate for investing 50% of R&amp;D spend in platform work </p><p>(12:31) How Jean-Michel arrived at 50% as the percentage of R&amp;D spend that should be allocated to platform </p><p>(16:09) Jean-Michel’s experiences with different levels of investment in platform work </p><p>(21:59) What percentage of platform investment should go towards keep the lights on work</p><p>(24:01) Whether the allocation changes at different company stages</p><p>(27:05) Why platform work is consistently underinvested in</p><p>(29:00) Why having a platform team could be an anti-pattern</p><p>(32:32) How to advocate for this work to leaders</p><p>(35:35) What it looks like to over-invest in platform work </p><p>(40:03) How to decide which initiatives to invest in</p><p>(47:41) Making build vs buy decisions in platform work </p><p>(49:58) What distinguishes a great platform leader </p><p><br></p><p>—</p><p><br></p><p><strong>Mentions and links: <br></strong>Follow Jean-Michel Lemieux on <a href="https://www.linkedin.com/in/jmlemieux-613/">LinkedIn </a>and <a href="https://twitter.com/jmwind">Twitter </a><br><a href="https://www.linkedin.com/posts/abinoda_platformengineering-podcast-activity-7008445078666964992-RgMT?utm_source=share&amp;utm_medium=member_desktop">Abi’s post </a>that sourced many of the questions discussed in this conversation<br>Jean-Michel’s book chapter on <a href="https://buildrightside.com/book/chapter2-platform-investments.html">platform 
investments</a><br>Jean-Michel’s <a href="https://twitter.com/jmwind/status/1470894717839704067?s=20&amp;t=Qqh1IO0geQBPXb9HWdtvcQ">definition of what platform work is </a><br>The podcast episode on <a href="https://fellow.app/supermanagers/jean-michel-lemieux-shopify-building-a-connected-network-of-brains/">what Shopify expects of managers </a></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Jean-Michel Lemieux, former CTO of Shopify and VP of Engineering at Atlassian, explains how to advocate for investing in platform work, which projects to fund, and what distinguishes a great platform leader. </p><p>—</p><p><strong>Discussion points:</strong></p><p><br></p><p>(1:38) Jean-Michel’s definition of platform work </p><p>(6:44) Why reliability, performance, and stability do fall within platform work </p><p>(7:24) The consequences of lacking a product mindset in platform</p><p>(9:20) Why and how to advocate for investing 50% of R&amp;D spend in platform work </p><p>(12:31) How Jean-Michel arrived at 50% as the percentage of R&amp;D spend that should be allocated to platform </p><p>(16:09) Jean-Michel’s experiences with different levels of investment in platform work </p><p>(21:59) What percentage of platform investment should go towards keep the lights on work</p><p>(24:01) Whether the allocation changes at different company stages</p><p>(27:05) Why platform work is consistently underinvested in</p><p>(29:00) Why having a platform team could be an anti-pattern</p><p>(32:32) How to advocate for this work to leaders</p><p>(35:35) What it looks like to over-invest in platform work </p><p>(40:03) How to decide which initiatives to invest in</p><p>(47:41) Making build vs buy decisions in platform work </p><p>(49:58) What distinguishes a great platform leader </p><p><br></p><p>—</p><p><br></p><p><strong>Mentions and links: <br></strong>Follow Jean-Michel Lemieux on <a href="https://www.linkedin.com/in/jmlemieux-613/">LinkedIn </a>and <a href="https://twitter.com/jmwind">Twitter </a><br><a href="https://www.linkedin.com/posts/abinoda_platformengineering-podcast-activity-7008445078666964992-RgMT?utm_source=share&amp;utm_medium=member_desktop">Abi’s post </a>that sourced many of the questions discussed in this conversation<br>Jean-Michel’s book chapter on <a href="https://buildrightside.com/book/chapter2-platform-investments.html">platform 
investments</a><br>Jean-Michel’s <a href="https://twitter.com/jmwind/status/1470894717839704067?s=20&amp;t=Qqh1IO0geQBPXb9HWdtvcQ">definition of what platform work is </a><br>The podcast episode on <a href="https://fellow.app/supermanagers/jean-michel-lemieux-shopify-building-a-connected-network-of-brains/">what Shopify expects of managers </a></p>]]>
      </content:encoded>
      <pubDate>Wed, 04 Jan 2023 07:30:09 -0700</pubDate>
      <author>DX</author>
      <enclosure url="https://media.transistor.fm/846d4d56/48743eff.mp3" length="50741702" type="audio/mpeg"/>
      <itunes:author>DX</itunes:author>
      <itunes:duration>3165</itunes:duration>
      <itunes:summary>Jean-Michel Lemieux shares his perspective on how much to invest in platform work, which projects to fund, and how to advocate for more resources. </itunes:summary>
      <itunes:subtitle>Jean-Michel Lemieux shares his perspective on how much to invest in platform work, which projects to fund, and how to advocate for more resources. </itunes:subtitle>
      <itunes:keywords>Software Engineering, Developer Experience, Developer Productivity, Engineering Leadership</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.linkedin.com/in/abinoda/" img="https://img.transistorcdn.com/3JLLtbOaHGiInF_ixHI50J6hrbPvsmkptOyXbjqu2gw/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vMDNiMDZmMjkt/ZmQxMy00NmNlLWE5/YzEtOTAzODQ4YTE2/YjgzLzE2NjU0MzA2/ODktaW1hZ2UuanBn.jpg">Abi Noda</podcast:person>
    </item>
    <item>
      <title>Principles for driving adoption and platform team growth | Jonathan Biddle (Wayfair) </title>
      <itunes:episode>25</itunes:episode>
      <podcast:episode>25</podcast:episode>
      <itunes:title>Principles for driving adoption and platform team growth | Jonathan Biddle (Wayfair) </itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">376fcfc8-3459-4594-8533-cd1454d5fdfb</guid>
      <link>https://share.transistor.fm/s/a3429aca</link>
      <description>
        <![CDATA[<p>Jonathan Biddle, Director of Engineering Effectiveness at Wayfair, shares the story of how his team found repeat success and subsequently grew in size and scope. He shares lessons they’ve borrowed from startups, including understanding the adoption curve and knowing your core users, and offers advice for other platform teams looking to move to the next stage. </p><p>—<br><strong>Discussion points:<br></strong><br>(01:15) How Jonathan moved into his role</p><p>(05:30) Why Platform teams are in a position of leverage, but also ambiguity</p><p>(07:18) The initial work Jonathan’s team focused on</p><p>(10:07) Creating transactional versus recurring value</p><p>(11:36) The difference between startups and platform teams </p><p>(14:12) Expanding the team’s scope and rebranding to Developer Acceleration</p><p>(18:20) What drove the platform team’s success</p><p>(21:05) Three adoption concepts to understand</p><p>(24:41) Knowing your core customers</p><p>(27:36) Adoption metrics and feedback gathering mechanisms</p><p>(33:37) When to mandate adoption or rely on organic adoption</p><p>(38:38) A story of when adoption fell short <br>(45:35) Advice for how other teams can go from zero to one</p><p>—</p><p><strong>Mentions and links: <br></strong>Follow Jonathan on <a href="https://www.linkedin.com/in/jonbiddle/">LinkedIn</a><br><a href="https://www.amazon.com/Diffusion-Innovations-5th-Everett-Rogers/dp/0743222091/ref=asc_df_0743222091/?tag=hyprod-20&amp;linkCode=df0&amp;hvadid=312425492373&amp;hvpos=&amp;hvnetw=g&amp;hvrand=10213469156681608784&amp;hvpone=&amp;hvptwo=&amp;hvqmt=&amp;hvdev=c&amp;hvdvcmdl=&amp;hvlocint=&amp;hvlocphy=9029719&amp;hvtargid=pla-458892480208&amp;psc=1">Diffusion of Innovations</a> by Everett M. 
Rogers (and the <a href="https://en.wikipedia.org/wiki/Diffusion_of_innovations#:~:text=Diffusion%20of%20innovations%20is%20a,its%20fifth%20edition%20(2003).">Wikipedia page</a> for the book)<br><a href="https://www.amazon.com/Crossing-Chasm-3rd-Disruptive-Mainstream/dp/0062292986/ref=sr_1_1?keywords=crossing+the+chasm&amp;qid=1670992463&amp;sr=8-1">Crossing the Chasm</a> by Geoffrey A. Moore<br><a href="https://www.patagonia.com/stories/let-my-people-go-surfing/story-30910.html">Let My People Go Surfing</a> by Yvon Chouinard of Patagonia</p><p><br></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Jonathan Biddle, Director of Engineering Effectiveness at Wayfair, shares the story of how his team found repeat success and subsequently grew in size and scope. He shares lessons they’ve borrowed from startups, including understanding the adoption curve and knowing your core users, and offers advice for other platform teams looking to move to the next stage. </p><p>—<br><strong>Discussion points:<br></strong><br>(01:15) How Jonathan moved into his role</p><p>(05:30) Why Platform teams are in a position of leverage, but also ambiguity</p><p>(07:18) The initial work Jonathan’s team focused on</p><p>(10:07) Creating transactional versus recurring value</p><p>(11:36) The difference between startups and platform teams </p><p>(14:12) Expanding the team’s scope and rebranding to Developer Acceleration</p><p>(18:20) What drove the platform team’s success</p><p>(21:05) Three adoption concepts to understand</p><p>(24:41) Knowing your core customers</p><p>(27:36) Adoption metrics and feedback gathering mechanisms</p><p>(33:37) When to mandate adoption or rely on organic adoption</p><p>(38:38) A story of when adoption fell short <br>(45:35) Advice for how other teams can go from zero to one</p><p>—</p><p><strong>Mentions and links: <br></strong>Follow Jonathan on <a href="https://www.linkedin.com/in/jonbiddle/">LinkedIn</a><br><a href="https://www.amazon.com/Diffusion-Innovations-5th-Everett-Rogers/dp/0743222091/ref=asc_df_0743222091/?tag=hyprod-20&amp;linkCode=df0&amp;hvadid=312425492373&amp;hvpos=&amp;hvnetw=g&amp;hvrand=10213469156681608784&amp;hvpone=&amp;hvptwo=&amp;hvqmt=&amp;hvdev=c&amp;hvdvcmdl=&amp;hvlocint=&amp;hvlocphy=9029719&amp;hvtargid=pla-458892480208&amp;psc=1">Diffusion of Innovations</a> by Everett M. 
Rogers (and the <a href="https://en.wikipedia.org/wiki/Diffusion_of_innovations#:~:text=Diffusion%20of%20innovations%20is%20a,its%20fifth%20edition%20(2003).">Wikipedia page</a> for the book)<br><a href="https://www.amazon.com/Crossing-Chasm-3rd-Disruptive-Mainstream/dp/0062292986/ref=sr_1_1?keywords=crossing+the+chasm&amp;qid=1670992463&amp;sr=8-1">Crossing the Chasm</a> by Geoffrey A. Moore<br><a href="https://www.patagonia.com/stories/let-my-people-go-surfing/story-30910.html">Let My People Go Surfing</a> by Yvon Chouinard of Patagonia</p><p><br></p>]]>
      </content:encoded>
      <pubDate>Mon, 19 Dec 2022 19:00:00 -0700</pubDate>
      <author>DX</author>
      <enclosure url="https://media.transistor.fm/a3429aca/ecbc696a.mp3" length="121660698" type="audio/mpeg"/>
      <itunes:author>DX</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/8baF_7QCKHfLv_idQcbZXxRpyQZxRg8j_dnsyFvknU8/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lcGlz/b2RlLzExMzUzNDUv/MTY3MDk5MzQ2OS1h/cnR3b3JrLmpwZw.jpg"/>
      <itunes:duration>3005</itunes:duration>
      <itunes:summary>Jonathan Biddle, Director of Engineering Effectiveness at Wayfair, shares the story of how his team found repeat success and subsequently grew in size and scope. He shares lessons they’ve borrowed from startups, including understanding the adoption curve and knowing your core users, and offers advice for other platform teams looking to move to the next stage. </itunes:summary>
      <itunes:subtitle>Jonathan Biddle, Director of Engineering Effectiveness at Wayfair, shares the story of how his team found repeat success and subsequently grew in size and scope. He shares lessons they’ve borrowed from startups, including understanding the adoption curve </itunes:subtitle>
      <itunes:keywords>Software Engineering, Developer Experience, Developer Productivity, Engineering Leadership</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.linkedin.com/in/abinoda/" img="https://img.transistorcdn.com/3JLLtbOaHGiInF_ixHI50J6hrbPvsmkptOyXbjqu2gw/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vMDNiMDZmMjkt/ZmQxMy00NmNlLWE5/YzEtOTAzODQ4YTE2/YjgzLzE2NjU0MzA2/ODktaW1hZ2UuanBn.jpg">Abi Noda</podcast:person>
      <podcast:chapters url="https://share.transistor.fm/s/a3429aca/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Leading infrastructure change at scale | Ian White (DAT)</title>
      <itunes:episode>24</itunes:episode>
      <podcast:episode>24</podcast:episode>
      <itunes:title>Leading infrastructure change at scale | Ian White (DAT)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">fe7763af-54f6-49c2-a47b-c73ae9686ad2</guid>
      <link>https://share.transistor.fm/s/975f0228</link>
      <description>
        <![CDATA[<p>Ian White, Director of Platform Engineering at DAT, joined the company to scale their Kubernetes-based cloud infrastructure, which has come under stress as their business has grown over the past couple of years. Here he shares how he partnered with developers to learn about their challenges, how he conveyed a vision for how the company needed to evolve, and how he’s been working with development teams and business stakeholders to successfully drive change.</p><p>—</p><p>(01:00) - The challenges DAT was facing as Ian joined </p><p>(05:13) - How Ian used customer interviews to understand problems</p><p>(10:48) - The typical journey companies take as they scale their infrastructure </p><p>(16:20) - How early changes were positioned and received </p><p>(20:00) - The four personas Ian identified </p><p>(25:14) - How Ian evangelized the vision</p><p>(28:48) - Areas of pushback Ian foresees as they introduce new changes</p><p>(33:00) - Handling teams that want to stay on self-managed infrastructure instead of moving to a managed infrastructure </p><p>(41:55) - Managing business stakeholders</p><p>(45:00) - Partnering with finance </p><p><br>—</p><p><strong>Where to find Ian:</strong></p><ul><li>Follow Ian on <a href="https://www.linkedin.com/in/ian-white-co/">LinkedIn</a></li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Ian White, Director of Platform Engineering at DAT, joined the company to scale their Kubernetes-based cloud infrastructure, which has come under stress as their business has grown over the past couple of years. Here he shares how he partnered with developers to learn about their challenges, how he conveyed a vision for how the company needed to evolve, and how he’s been working with development teams and business stakeholders to successfully drive change.</p><p>—</p><p>(01:00) - The challenges DAT was facing as Ian joined </p><p>(05:13) - How Ian used customer interviews to understand problems</p><p>(10:48) - The typical journey companies take as they scale their infrastructure </p><p>(16:20) - How early changes were positioned and received </p><p>(20:00) - The four personas Ian identified </p><p>(25:14) - How Ian evangelized the vision</p><p>(28:48) - Areas of pushback Ian foresees as they introduce new changes</p><p>(33:00) - Handling teams that want to stay on self-managed infrastructure instead of moving to a managed infrastructure </p><p>(41:55) - Managing business stakeholders</p><p>(45:00) - Partnering with finance </p><p><br>—</p><p><strong>Where to find Ian:</strong></p><ul><li>Follow Ian on <a href="https://www.linkedin.com/in/ian-white-co/">LinkedIn</a></li></ul>]]>
      </content:encoded>
      <pubDate>Tue, 13 Dec 2022 16:10:20 -0700</pubDate>
      <author>DX</author>
      <enclosure url="https://media.transistor.fm/975f0228/710d2450.mp3" length="48254172" type="audio/mpeg"/>
      <itunes:author>DX</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/1TJYLh1mTADK-h2XKrQONEbRdIPYwJDtggYAT50lNBg/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lcGlz/b2RlLzExMzUwODQv/MTY3MDk4NjIyNy1h/cnR3b3JrLmpwZw.jpg"/>
      <itunes:duration>3012</itunes:duration>
      <itunes:summary>Ian White, Director of Platform Engineering at DAT, joined the company to scale their Kubernetes-based cloud infrastructure, which has come under stress as their business has grown over the past couple of years. Here he shares how he partnered with developers to learn about their challenges, how he conveyed a vision for how the company needed to evolve, and how he’s been working with development teams and business stakeholders to successfully drive change. </itunes:summary>
      <itunes:subtitle>Ian White, Director of Platform Engineering at DAT, joined the company to scale their Kubernetes-based cloud infrastructure, which has come under stress as their business has grown over the past couple years. Here he shares how he partnered with developer</itunes:subtitle>
      <itunes:keywords>Software Engineering, Developer Experience, Developer Productivity, Engineering Leadership</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.linkedin.com/in/abinoda/" img="https://img.transistorcdn.com/3JLLtbOaHGiInF_ixHI50J6hrbPvsmkptOyXbjqu2gw/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vMDNiMDZmMjkt/ZmQxMy00NmNlLWE5/YzEtOTAzODQ4YTE2/YjgzLzE2NjU0MzA2/ODktaW1hZ2UuanBn.jpg">Abi Noda</podcast:person>
    </item>
    <item>
      <title>Positioning platform work in a down market | Brian Guthrie (Orgspace, Meetup)</title>
      <itunes:episode>23</itunes:episode>
      <podcast:episode>23</podcast:episode>
      <itunes:title>Positioning platform work in a down market | Brian Guthrie (Orgspace, Meetup)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">367d1fbf-22c0-46f7-9248-7e3a077c21ce</guid>
      <link>https://share.transistor.fm/s/7d9a0b48</link>
      <description>
        <![CDATA[<p>Brian Guthrie, co-founder and CTO at Orgspace and former VP of Engineering at Meetup, has the unique experience of having previously decommissioned his Platform team. In this episode, Brian talks about that story openly, and shares advice for Platform teams to make sure they’re well positioned within their organizations. <br><strong><br>Discussion points:</strong></p><ul><li>Brian’s background and story at Meetup - [00:02:20]</li><li>Brian’s perspective on Platform work, generally - [00:06:40]</li><li>The conversation around dissolving the Platform group - [00:12:05]</li><li>Advice for Platform groups positioning their teams - [00:16:55]</li><li>Making sure Platform groups are focused on the right problems [00:21:21]</li><li>How Platform groups can think about communicating with the business [00:23:50]</li><li>Bringing engineering teams into the planning process - [00:25:43]</li><li>Deciding to build vs buy in a down market - [00:28:40]</li><li>How developer happiness is part of positioning platform work [00:32:30]</li></ul><p><strong>Follow Brian: </strong></p><ul><li>Brian's LinkedIn: <a href="https://www.linkedin.com/in/bguthrie/">https://www.linkedin.com/in/bguthrie/</a></li></ul><p><strong>Mentions and links: </strong></p><ul><li>Brian's talk, <a href="https://platformcon.com/talk/is-the-optimal-size-of-a-platform-team-zero">Is the optimal size of a platform team... zero?</a></li><li><a href="https://www.honeycomb.io/blog/future-ops-platform-engineering">The Future of Ops is Platform Engineering</a> by Charity Majors</li><li>Former Shopify CTO's take on the <a href="https://buildrightside.com/book/chapter2-platform-investments.html">optimal spend on platform work</a></li><li>Research on how <a href="https://abinoda.substack.com/p/impact-of-happiness">developer happiness impacts productivity </a></li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Brian Guthrie, co-founder and CTO at Orgspace and former VP of Engineering at Meetup, has the unique experience of having previously decommissioned his Platform team. In this episode, Brian talks about that story openly, and shares advice for Platform teams to make sure they’re well positioned within their organizations. <br><strong><br>Discussion points:</strong></p><ul><li>Brian’s background and story at Meetup - [00:02:20]</li><li>Brian’s perspective on Platform work, generally - [00:06:40]</li><li>The conversation around dissolving the Platform group - [00:12:05]</li><li>Advice for Platform groups positioning their teams - [00:16:55]</li><li>Making sure Platform groups are focused on the right problems [00:21:21]</li><li>How Platform groups can think about communicating with the business [00:23:50]</li><li>Bringing engineering teams into the planning process - [00:25:43]</li><li>Deciding to build vs buy in a down market - [00:28:40]</li><li>How developer happiness is part of positioning platform work [00:32:30]</li></ul><p><strong>Follow Brian: </strong></p><ul><li>Brian's LinkedIn: <a href="https://www.linkedin.com/in/bguthrie/">https://www.linkedin.com/in/bguthrie/</a></li></ul><p><strong>Mentions and links: </strong></p><ul><li>Brian's talk, <a href="https://platformcon.com/talk/is-the-optimal-size-of-a-platform-team-zero">Is the optimal size of a platform team... zero?</a></li><li><a href="https://www.honeycomb.io/blog/future-ops-platform-engineering">The Future of Ops is Platform Engineering</a> by Charity Majors</li><li>Former Shopify CTO's take on the <a href="https://buildrightside.com/book/chapter2-platform-investments.html">optimal spend on platform work</a></li><li>Research on how <a href="https://abinoda.substack.com/p/impact-of-happiness">developer happiness impacts productivity </a></li></ul>]]>
      </content:encoded>
      <pubDate>Wed, 07 Dec 2022 08:52:16 -0700</pubDate>
      <author>DX</author>
      <enclosure url="https://media.transistor.fm/7d9a0b48/b3e8b449.mp3" length="33085107" type="audio/mpeg"/>
      <itunes:author>DX</itunes:author>
      <itunes:duration>2062</itunes:duration>
      <itunes:summary>Brian Guthrie, co-founder and CTO at Orgspace and former VP of Engineering at Meetup, shares advice for Platform teams to make sure they’re well positioned within their organizations. </itunes:summary>
      <itunes:subtitle>Brian Guthrie, co-founder and CTO at Orgspace and former VP of Engineering at Meetup, shares advice for Platform teams to make sure they’re well positioned within their organizations. </itunes:subtitle>
      <itunes:keywords>Software Engineering, Developer Experience, Developer Productivity, Engineering Leadership</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.linkedin.com/in/abinoda/" img="https://img.transistorcdn.com/3JLLtbOaHGiInF_ixHI50J6hrbPvsmkptOyXbjqu2gw/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vMDNiMDZmMjkt/ZmQxMy00NmNlLWE5/YzEtOTAzODQ4YTE2/YjgzLzE2NjU0MzA2/ODktaW1hZ2UuanBn.jpg">Abi Noda</podcast:person>
    </item>
    <item>
      <title>A deep-dive on real-time feedback and personalized surveys | Max Kanat-Alexander, Or Michael Berlowitz (LinkedIn)</title>
      <itunes:episode>22</itunes:episode>
      <podcast:episode>22</podcast:episode>
      <itunes:title>A deep-dive on real-time feedback and personalized surveys | Max Kanat-Alexander, Or Michael Berlowitz (LinkedIn)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">6e8a6c08-fdd9-49e7-a76c-80affb791ecf</guid>
      <link>https://share.transistor.fm/s/0f383fa4</link>
      <description>
        <![CDATA[<p>Max Kanat-Alexander and Or Michael Berlowitz (Berlo) share how they gather both periodic and real-time feedback from developers.</p><p><strong>Discussion points:</strong></p><ul><li>Overview of the listening channels used by Max and Berlo’s team - [00:00:58]</li><li>Origin story of the Developer Engagement and Insights team - [00:02:49]</li><li>Perspectives on volume metrics - [00:05:00]</li><li>How the periodic surveys work - [00:08:51]</li><li>Investment required to build the periodic surveys and real-time feedback - [00:14:20]</li><li>How results are handled - [00:15:28]</li><li>How the real-time feedback tool works - [00:21:40]</li><li>Where the idea for the real-time feedback tool came from - [00:25:15]</li><li>Building an MVP for the real-time feedback tool - [00:28:58]</li><li>Other stakeholders involved in triaging feedback - [00:35:40]</li><li>The experience developers have when encountering the real-time feedback tool - [00:37:34]</li><li>How feedback collected via surveys differs from that of the real-time feedback tool - [00:40:44]</li><li>Advice for other teams considering implementing this approach - [00:41:46]</li></ul><p><br></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Max Kanat-Alexander and Or Michael Berlowitz (Berlo) share how they gather both periodic and real-time feedback from developers.</p><p><strong>Discussion points:</strong></p><ul><li>Overview of the listening channels used by Max and Berlo’s team - [00:00:58]</li><li>Origin story of the Developer Engagement and Insights team - [00:02:49]</li><li>Perspectives on volume metrics - [00:05:00]</li><li>How the periodic surveys work - [00:08:51]</li><li>Investment required to build the periodic surveys and real-time feedback - [00:14:20]</li><li>How results are handled - [00:15:28]</li><li>How the real-time feedback tool works - [00:21:40]</li><li>Where the idea for the real-time feedback tool came from - [00:25:15]</li><li>Building an MVP for the real-time feedback tool - [00:28:58]</li><li>Other stakeholders involved in triaging feedback - [00:35:40]</li><li>The experience developers have when encountering the real-time feedback tool - [00:37:34]</li><li>How feedback collected via surveys differs from that of the real-time feedback tool - [00:40:44]</li><li>Advice for other teams considering implementing this approach - [00:41:46]</li></ul><p><br></p>]]>
      </content:encoded>
      <pubDate>Wed, 30 Nov 2022 01:08:48 -0700</pubDate>
      <author>DX</author>
      <enclosure url="https://media.transistor.fm/0f383fa4/f276844e.mp3" length="44902854" type="audio/mpeg"/>
      <itunes:author>DX</itunes:author>
      <itunes:duration>2800</itunes:duration>
      <itunes:summary>Max Kanat-Alexander and Or Michael Berlowitz share how they gather both periodic and real-time feedback from developers.</itunes:summary>
      <itunes:subtitle>Max Kanat-Alexander and Or Michael Berlowitz share how they gather both periodic and real-time feedback from developers.</itunes:subtitle>
      <itunes:keywords>Software Engineering, Developer Experience, Developer Productivity, Engineering Leadership</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.linkedin.com/in/abinoda/" img="https://img.transistorcdn.com/3JLLtbOaHGiInF_ixHI50J6hrbPvsmkptOyXbjqu2gw/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vMDNiMDZmMjkt/ZmQxMy00NmNlLWE5/YzEtOTAzODQ4YTE2/YjgzLzE2NjU0MzA2/ODktaW1hZ2UuanBn.jpg">Abi Noda</podcast:person>
    </item>
    <item>
      <title>How to define your team's scope and charter | Mark Côté (Shopify)</title>
      <itunes:episode>21</itunes:episode>
      <podcast:episode>21</podcast:episode>
      <itunes:title>How to define your team's scope and charter | Mark Côté (Shopify)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">d83097a5-aaef-4098-a296-59139b9fcc38</guid>
      <link>https://share.transistor.fm/s/60c805e5</link>
      <description>
        <![CDATA[<p><a href="https://www.linkedin.com/in/mrcote/">Mark Côté</a>, Director of Engineering of Developer Infrastructure at Shopify, explains an exercise the Infrastructure group went through to define their boundaries of work. He shares their areas of focus, the team’s guiding principles, how they use their developer happiness survey to decide what to prioritize, and more. </p><p>—</p><p><strong>Discussion points: </strong></p><p>(0:48) Mark's background</p><p>(1:43) How the Developer Acceleration org is structured</p><p>(4:43) The Infrastructure team's charter</p><p>(5:35) Three opportunities for impact</p><p>(7:49) Identifying the opportunities for impact</p><p>(10:51) Why they created a charter</p><p>(17:34) Infrastructure's guiding principles</p><p>(19:32) How they decide what to focus on</p><p>(21:44) Why they don't have product managers</p><p>(24:17) Ideas for reducing cognitive load</p><p>(29:05) Balancing customer requests with strategic roadmap items</p><p>(32:08) How Shopify's Developer Happiness survey works</p><p>(35:32) Who is involved in the Dev Happiness survey</p><p>(36:51) The survey's sampling strategy</p><p>(37:30) How the survey's results are used</p><p>(38:32) The survey's participation rate </p><p>(39:31) Steps they take after the survey</p><p>(42:52) Advice for others starting a developer acceleration team</p><p>—</p><p><strong>Mentions and links:</strong></p><ul><li>Follow Mark on <a href="https://www.linkedin.com/in/mrcote/">LinkedIn</a></li><li><a href="https://shopify.engineering/">Read blog posts</a> written by members of Shopify's Developer Acceleration team</li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p><a href="https://www.linkedin.com/in/mrcote/">Mark Côté</a>, Director of Engineering of Developer Infrastructure at Shopify, explains an exercise the Infrastructure group went through to define their boundaries of work. He shares their areas of focus, the team’s guiding principles, how they use their developer happiness survey to decide what to prioritize, and more. </p><p>—</p><p><strong>Discussion points: </strong></p><p>(0:48) Mark's background</p><p>(1:43) How the Developer Acceleration org is structured</p><p>(4:43) The Infrastructure team's charter</p><p>(5:35) Three opportunities for impact</p><p>(7:49) Identifying the opportunities for impact</p><p>(10:51) Why they created a charter</p><p>(17:34) Infrastructure's guiding principles</p><p>(19:32) How they decide what to focus on</p><p>(21:44) Why they don't have product managers</p><p>(24:17) Ideas for reducing cognitive load</p><p>(29:05) Balancing customer requests with strategic roadmap items</p><p>(32:08) How Shopify's Developer Happiness survey works</p><p>(35:32) Who is involved in the Dev Happiness survey</p><p>(36:51) The survey's sampling strategy</p><p>(37:30) How the survey's results are used</p><p>(38:32) The survey's participation rate </p><p>(39:31) Steps they take after the survey</p><p>(42:52) Advice for others starting a developer acceleration team</p><p>—</p><p><strong>Mentions and links:</strong></p><ul><li>Follow Mark on <a href="https://www.linkedin.com/in/mrcote/">LinkedIn</a></li><li><a href="https://shopify.engineering/">Read blog posts</a> written by members of Shopify's Developer Acceleration team</li></ul>]]>
      </content:encoded>
      <pubDate>Wed, 26 Oct 2022 01:00:00 -0600</pubDate>
      <author>DX</author>
      <enclosure url="https://media.transistor.fm/60c805e5/dab97fca.mp3" length="111099490" type="audio/mpeg"/>
      <itunes:author>DX</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/AvKcy98J6kQ9VZFAOSF5QbvKPHI4LT8VlX8DK7tPtxY/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lcGlz/b2RlLzEwNzcxNDIv/MTY3MTA3ODYzNi1h/cnR3b3JrLmpwZw.jpg"/>
      <itunes:duration>2741</itunes:duration>
      <itunes:summary>Mark Côté explains how Shopify's Infrastructure team defined their charter and guiding principles, as well as how they decide what to focus on. </itunes:summary>
      <itunes:subtitle>Mark Côté explains how Shopify's Infrastructure team defined their charter and guiding principles, as well as how they decide what to focus on. </itunes:subtitle>
      <itunes:keywords>Software Engineering, Developer Experience, Developer Productivity, Engineering Leadership</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.linkedin.com/in/abinoda/" img="https://img.transistorcdn.com/3JLLtbOaHGiInF_ixHI50J6hrbPvsmkptOyXbjqu2gw/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vMDNiMDZmMjkt/ZmQxMy00NmNlLWE5/YzEtOTAzODQ4YTE2/YjgzLzE2NjU0MzA2/ODktaW1hZ2UuanBn.jpg">Abi Noda</podcast:person>
      <podcast:chapters url="https://share.transistor.fm/s/60c805e5/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Dropbox's journey with developer productivity metrics | Utsav Shah (Vanta, Dropbox)</title>
      <itunes:episode>20</itunes:episode>
      <podcast:episode>20</podcast:episode>
      <itunes:title>Dropbox's journey with developer productivity metrics | Utsav Shah (Vanta, Dropbox)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">391f6f87-d66f-4cc6-bb3b-91a3f5b76c00</guid>
      <link>https://share.transistor.fm/s/4b59c20f</link>
      <description>
        <![CDATA[<p><a href="https://www.linkedin.com/in/utsav2/">Utsav Shah</a>, who leads Platform at Vanta and previously led Developer Effectiveness at Dropbox, shares the story of Dropbox’s journey with measuring developer productivity. Utsav discusses what he learned about both system and survey-based measures, his opinion on the usefulness of common Git metrics, and more. </p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p><a href="https://www.linkedin.com/in/utsav2/">Utsav Shah</a>, who leads Platform at Vanta and previously led Developer Effectiveness at Dropbox, shares the story of Dropbox’s journey with measuring developer productivity. Utsav discusses what he learned about both system and survey-based measures, his opinion on the usefulness of common Git metrics, and more. </p>]]>
      </content:encoded>
      <pubDate>Wed, 12 Oct 2022 01:00:00 -0600</pubDate>
      <author>DX</author>
      <enclosure url="https://media.transistor.fm/4b59c20f/340e2a47.mp3" length="107169021" type="audio/mpeg"/>
      <itunes:author>DX</itunes:author>
      <itunes:duration>2677</itunes:duration>
      <itunes:summary>Utsav Shah shares the story of Dropbox’s journey with measuring developer productivity. Utsav discusses what he learned about both system and survey-based measures, his opinion on the usefulness of common Git metrics, and more. </itunes:summary>
      <itunes:subtitle>Utsav Shah shares the story of Dropbox’s journey with measuring developer productivity. Utsav discusses what he learned about both system and survey-based measures, his opinion on the usefulness of common Git metrics, and more. </itunes:subtitle>
      <itunes:keywords>Software Engineering, Developer Experience, Developer Productivity, Engineering Leadership</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.linkedin.com/in/abinoda/" img="https://img.transistorcdn.com/3JLLtbOaHGiInF_ixHI50J6hrbPvsmkptOyXbjqu2gw/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vMDNiMDZmMjkt/ZmQxMy00NmNlLWE5/YzEtOTAzODQ4YTE2/YjgzLzE2NjU0MzA2/ODktaW1hZ2UuanBn.jpg">Abi Noda</podcast:person>
    </item>
    <item>
      <title>Using customer interviews to inform your roadmap | Michael Galloway (Doma, Netflix) </title>
      <itunes:episode>19</itunes:episode>
      <podcast:episode>19</podcast:episode>
      <itunes:title>Using customer interviews to inform your roadmap | Michael Galloway (Doma, Netflix) </itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">ce4eed31-ada3-4d4d-94b2-1508e6e3f0ce</guid>
      <link>https://share.transistor.fm/s/c296b60a</link>
      <description>
        <![CDATA[<p>Michael Galloway (Doma and ex-Netflix) describes his process for interviewing developers to understand where his team should focus. He also explains how he thinks about the strategic value of a Platform team. <br><strong><br>Resources mentioned: </strong></p><ul><li><a href="https://bit.ly/3zuQfCm">Customer Interview Guide</a></li><li><a href="https://bit.ly/3On3GZt">Customer Interview Questions</a></li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Michael Galloway (Doma and ex-Netflix) describes his process for interviewing developers to understand where his team should focus. He also explains how he thinks about the strategic value of a Platform team. <br><strong><br>Resources mentioned: </strong></p><ul><li><a href="https://bit.ly/3zuQfCm">Customer Interview Guide</a></li><li><a href="https://bit.ly/3On3GZt">Customer Interview Questions</a></li></ul>]]>
      </content:encoded>
      <pubDate>Wed, 05 Oct 2022 01:00:00 -0600</pubDate>
      <author>DX</author>
      <enclosure url="https://media.transistor.fm/c296b60a/bc7e1193.mp3" length="103852595" type="audio/mpeg"/>
      <itunes:author>DX</itunes:author>
      <itunes:duration>2594</itunes:duration>
      <itunes:summary>Michael Galloway, who previously led delivery tools at Netflix and now leads platform at Doma, describes his process for interviewing developers to understand where to focus and explains how he thinks about the strategic value of a Platform team. </itunes:summary>
      <itunes:subtitle>Michael Galloway, who previously led delivery tools at Netflix and now leads platform at Doma, describes his process for interviewing developers to understand where to focus and explains how he thinks about the strategic value of a Platform team. </itunes:subtitle>
      <itunes:keywords>Software Engineering, Developer Experience, Developer Productivity, Engineering Leadership</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.linkedin.com/in/abinoda/" img="https://img.transistorcdn.com/3JLLtbOaHGiInF_ixHI50J6hrbPvsmkptOyXbjqu2gw/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vMDNiMDZmMjkt/ZmQxMy00NmNlLWE5/YzEtOTAzODQ4YTE2/YjgzLzE2NjU0MzA2/ODktaW1hZ2UuanBn.jpg">Abi Noda</podcast:person>
    </item>
    <item>
      <title>Establishing a DevEx team in a high-growth company | Willie Yao (Notion, Airbnb)</title>
      <itunes:episode>18</itunes:episode>
      <podcast:episode>18</podcast:episode>
      <itunes:title>Establishing a DevEx team in a high-growth company | Willie Yao (Notion, Airbnb)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">aedb0bae-ee35-49ed-8e4f-d1d74ab5f429</guid>
      <link>https://share.transistor.fm/s/b60a17dd</link>
      <description>
        <![CDATA[<p>In this episode, <a href="https://www.linkedin.com/in/willieyao/">Willie Yao</a>, Head of Infrastructure at Notion and former Head of Developer Infrastructure at Airbnb, provides a unique perspective on how Developer Experience teams work in hypergrowth companies. He shares how Airbnb developed a customer-first mindset internally, what it took to get Airbnb’s leadership invested in that effort, and how he’s approaching DevEx at Notion today. </p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode, <a href="https://www.linkedin.com/in/willieyao/">Willie Yao</a>, Head of Infrastructure at Notion and former Head of Developer Infrastructure at Airbnb, provides a unique perspective on how Developer Experience teams work in hypergrowth companies. He shares how Airbnb developed a customer-first mindset internally, what it took to get Airbnb’s leadership invested in that effort, and how he’s approaching DevEx at Notion today. </p>]]>
      </content:encoded>
      <pubDate>Wed, 28 Sep 2022 06:19:46 -0600</pubDate>
      <author>DX</author>
      <enclosure url="https://media.transistor.fm/b60a17dd/df8e787f.mp3" length="112323831" type="audio/mpeg"/>
      <itunes:author>DX</itunes:author>
      <itunes:duration>2806</itunes:duration>
      <itunes:summary>Willie Yao, Head of Infrastructure at Notion and former Head of Developer Infrastructure at Airbnb, provides a unique perspective on how Developer Experience teams work in hypergrowth companies. He shares how Airbnb developed a customer-first mindset internally, what it took to get Airbnb’s leadership invested in that effort, and how he’s approaching DevEx at Notion today. </itunes:summary>
      <itunes:subtitle>Willie Yao, Head of Infrastructure at Notion and former Head of Developer Infrastructure at Airbnb, provides a unique perspective on how Developer Experience teams work in hypergrowth companies. He shares how Airbnb developed a customer-first mindset inte</itunes:subtitle>
      <itunes:keywords>Software Engineering, Developer Experience, Developer Productivity, Engineering Leadership</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.linkedin.com/in/abinoda/" img="https://img.transistorcdn.com/3JLLtbOaHGiInF_ixHI50J6hrbPvsmkptOyXbjqu2gw/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vMDNiMDZmMjkt/ZmQxMy00NmNlLWE5/YzEtOTAzODQ4YTE2/YjgzLzE2NjU0MzA2/ODktaW1hZ2UuanBn.jpg">Abi Noda</podcast:person>
    </item>
    <item>
      <title>A model for managing requests and complaints from developers | Jasmine James (Twitter)</title>
      <itunes:episode>17</itunes:episode>
      <podcast:episode>17</podcast:episode>
      <itunes:title>A model for managing requests and complaints from developers | Jasmine James (Twitter)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">01bb8193-a978-4267-968c-f71fcbe13e64</guid>
      <link>https://share.transistor.fm/s/4d326bfe</link>
      <description>
        <![CDATA[<p>Twitter’s Developer Experience team is more mature than most. Here, <a href="https://www.linkedin.com/in/jasmine-james/">Jasmine James</a>, a Senior Engineering Manager - Developer Experience, explains how her team manages support requests, why they consider personas as part of their prioritization, and how they present the ROI of the team’s work. </p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Twitter’s Developer Experience team is more mature than most. Here, <a href="https://www.linkedin.com/in/jasmine-james/">Jasmine James</a>, a Senior Engineering Manager - Developer Experience, explains how her team manages support requests, why they consider personas as part of their prioritization, and how they present the ROI of the team’s work. </p>]]>
      </content:encoded>
      <pubDate>Tue, 13 Sep 2022 14:49:45 -0600</pubDate>
      <author>DX</author>
      <enclosure url="https://media.transistor.fm/4d326bfe/74592735.mp3" length="27421931" type="audio/mpeg"/>
      <itunes:author>DX</itunes:author>
      <itunes:duration>1708</itunes:duration>
      <itunes:summary>Twitter’s Developer Experience team is more mature than most. Here, Jasmine James, a Senior Engineering Manager - Developer Experience, explains how her team manages support requests, why they consider personas as part of their prioritization, and how they present the ROI of the team’s work. </itunes:summary>
      <itunes:subtitle>Twitter’s Developer Experience team is more mature than most. Here, Jasmine James, a Senior Engineering Manager - Developer Experience, explains how her team manages support requests, why they consider personas as part of their prioritization, and how the</itunes:subtitle>
      <itunes:keywords>Software Engineering, Developer Experience, Developer Productivity, Engineering Leadership</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.linkedin.com/in/abinoda/" img="https://img.transistorcdn.com/3JLLtbOaHGiInF_ixHI50J6hrbPvsmkptOyXbjqu2gw/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vMDNiMDZmMjkt/ZmQxMy00NmNlLWE5/YzEtOTAzODQ4YTE2/YjgzLzE2NjU0MzA2/ODktaW1hZ2UuanBn.jpg">Abi Noda</podcast:person>
    </item>
    <item>
      <title>From DORA to SPACE to DX - A Fireside Chat with Nicole Forsgren</title>
      <itunes:episode>16</itunes:episode>
      <podcast:episode>16</podcast:episode>
      <itunes:title>From DORA to SPACE to DX - A Fireside Chat with Nicole Forsgren</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">060f2c77-69ee-4756-b3dc-4cfe71c760dd</guid>
      <link>https://share.transistor.fm/s/41772806</link>
      <description>
        <![CDATA[<p>In this special episode, Dr. Nicole Forsgren, author of the award-winning book <a href="https://itrevolution.com/accelerate-book/"><em>Accelerate</em></a> and co-author of "<a href="https://www.thoughtworks.com/radar/techniques/four-key-metrics">The SPACE of Developer Productivity</a>", talks about her work with DORA, the inspiration behind the SPACE framework, and how she's thinking about developer experience.<br>Watch the <a href="https://www.youtube.com/watch?v=V4n0XYHvGr8">on-demand fireside chat</a> or <a href="https://getdx.com/news/nicole-forsgren">read the announcement</a> of Nicole joining DX as a strategic advisor.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this special episode, Dr. Nicole Forsgren, author of the award-winning book <a href="https://itrevolution.com/accelerate-book/"><em>Accelerate</em></a> and co-author of "<a href="https://www.thoughtworks.com/radar/techniques/four-key-metrics">The SPACE of Developer Productivity</a>", talks about her work with DORA, the inspiration behind the SPACE framework, and how she's thinking about developer experience.<br>Watch the <a href="https://www.youtube.com/watch?v=V4n0XYHvGr8">on-demand fireside chat</a> or <a href="https://getdx.com/news/nicole-forsgren">read the announcement</a> of Nicole joining DX as a strategic advisor.</p>]]>
      </content:encoded>
      <pubDate>Thu, 01 Sep 2022 10:46:44 -0600</pubDate>
      <author>DX</author>
      <enclosure url="https://media.transistor.fm/41772806/969b4057.mp3" length="75344684" type="audio/mpeg"/>
      <itunes:author>DX</itunes:author>
      <itunes:duration>1881</itunes:duration>
      <itunes:summary>In this special episode, Dr. Nicole Forsgren, author of the award-winning book Accelerate and co-author of "The SPACE of Developer Productivity", talks about her work with DORA, the inspiration behind the SPACE framework, and how she's thinking about developer experience.</itunes:summary>
      <itunes:subtitle>In this special episode, Dr. Nicole Forsgren, author of award-winning book Accelerate and co-author of "The SPACE of Developer Productivity", talks about her work with DORA, the inspiration behind the SPACE framework, and how she's thinking about develope</itunes:subtitle>
      <itunes:keywords>Software Engineering, Developer Experience, Developer Productivity, Engineering Leadership</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.linkedin.com/in/abinoda/" img="https://img.transistorcdn.com/3JLLtbOaHGiInF_ixHI50J6hrbPvsmkptOyXbjqu2gw/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vMDNiMDZmMjkt/ZmQxMy00NmNlLWE5/YzEtOTAzODQ4YTE2/YjgzLzE2NjU0MzA2/ODktaW1hZ2UuanBn.jpg">Abi Noda</podcast:person>
    </item>
    <item>
      <title>The people side of engineering and an open conversation about Agile | Brent Strange (GoDaddy)</title>
      <itunes:episode>15</itunes:episode>
      <podcast:episode>15</podcast:episode>
      <itunes:title>The people side of engineering and an open conversation about Agile | Brent Strange (GoDaddy)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">d9e42284-203d-4115-b8f4-2529b0425090</guid>
      <link>https://share.transistor.fm/s/eb1f9628</link>
      <description>
        <![CDATA[<p><a href="https://www.linkedin.com/in/brentstrange/">Brent Strange</a>, Director of Engineering Excellence at GoDaddy, has a unique perspective on the role of an internal enablement team because he focuses more on people and processes than on tooling. Here he shares his perspective on org structure, the role of agile coaches, and his response to some of the negative views toward Agile.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p><a href="https://www.linkedin.com/in/brentstrange/">Brent Strange</a>, Director of Engineering Excellence at GoDaddy, has a unique perspective on the role of an internal enablement team because he focuses more on people and processes than on tooling. Here he shares his perspective on org structure, the role of agile coaches, and his response to some of the negative views toward Agile.</p>]]>
      </content:encoded>
      <pubDate>Wed, 24 Aug 2022 16:19:02 -0600</pubDate>
      <author>DX</author>
      <enclosure url="https://media.transistor.fm/eb1f9628/11a4eb63.mp3" length="77726200" type="audio/mpeg"/>
      <itunes:author>DX</itunes:author>
      <itunes:duration>1940</itunes:duration>
      <itunes:summary>Brent Strange, Director of Engineering Excellence at GoDaddy, has a unique perspective on the role of an internal enablement team because he focuses more on people and processes than on tooling. Here he shares his perspective on org structure, the role of agile coaches, and his response to some of the negative views toward Agile.</itunes:summary>
      <itunes:subtitle>Brent Strange, Director of Engineering Excellence at GoDaddy, has a unique perspective on the role of an internal enablement team because he focuses more on the people and processes instead of tooling. Here he shares his perspective on org structure, as w</itunes:subtitle>
      <itunes:keywords>Software Engineering, Developer Experience, Developer Productivity, Engineering Leadership</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.linkedin.com/in/abinoda/" img="https://img.transistorcdn.com/3JLLtbOaHGiInF_ixHI50J6hrbPvsmkptOyXbjqu2gw/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vMDNiMDZmMjkt/ZmQxMy00NmNlLWE5/YzEtOTAzODQ4YTE2/YjgzLzE2NjU0MzA2/ODktaW1hZ2UuanBn.jpg">Abi Noda</podcast:person>
    </item>
    <item>
      <title>Moving Slack's development experience to remote environments | Sylvestor George (Slack)</title>
      <itunes:episode>14</itunes:episode>
      <podcast:episode>14</podcast:episode>
      <itunes:title>Moving Slack's development experience to remote environments | Sylvestor George (Slack)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">f5ef4ebf-f360-4a8d-aabd-dae61bbfc3cc</guid>
      <link>https://share.transistor.fm/s/d4c845c8</link>
      <description>
        <![CDATA[<p>Sylvestor George (Staff Software Engineer on Slack’s Internal Tools Team) led a project to move the entire development experience to remote environments, which was widely regarded as a “dramatically better experience”. Here he shares the full story of that project, including how they identified the problem, the solution they created, and how they convinced engineers to adopt the new workflow. </p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Sylvestor George (Staff Software Engineer on Slack’s Internal Tools Team) led a project to move the entire development experience to remote environments, which was widely regarded as a “dramatically better experience”. Here he shares the full story of that project, including how they identified the problem, the solution they created, and how they convinced engineers to adopt the new workflow. </p>]]>
      </content:encoded>
      <pubDate>Wed, 10 Aug 2022 17:00:25 -0600</pubDate>
      <author>DX</author>
      <enclosure url="https://media.transistor.fm/d4c845c8/cd88c33d.mp3" length="89772713" type="audio/mpeg"/>
      <itunes:author>DX</itunes:author>
      <itunes:duration>2241</itunes:duration>
      <itunes:summary>Sylvestor George led a project to move Slack's entire development experience to remote environments, which was widely regarded as a “dramatically better experience”. Here he shares the full story of that project, including how they identified the problem, the solution they created, and how they convinced engineers to adopt the new workflow. </itunes:summary>
      <itunes:subtitle>Sylvestor George led a project to move Slack's entire development experience to remote environments, which was widely regarded as a “dramatically better experience”. Here he shares the full story of that project, including how they identified the problem,</itunes:subtitle>
      <itunes:keywords>Software Engineering, Developer Experience, Developer Productivity, Engineering Leadership</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.linkedin.com/in/abinoda/" img="https://img.transistorcdn.com/3JLLtbOaHGiInF_ixHI50J6hrbPvsmkptOyXbjqu2gw/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vMDNiMDZmMjkt/ZmQxMy00NmNlLWE5/YzEtOTAzODQ4YTE2/YjgzLzE2NjU0MzA2/ODktaW1hZ2UuanBn.jpg">Abi Noda</podcast:person>
    </item>
    <item>
      <title>Snyk’s journey with developer experience and productivity | Crystal Hirschorn (Snyk)</title>
      <itunes:episode>13</itunes:episode>
      <podcast:episode>13</podcast:episode>
      <itunes:title>Snyk’s journey with developer experience and productivity | Crystal Hirschorn (Snyk)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">15146cd8-e730-465b-ad0f-c3d2bab79cc0</guid>
      <link>https://share.transistor.fm/s/733faad1</link>
      <description>
        <![CDATA[<p>In this episode, Abi Noda is joined by Crystal Hirschorn, who leads Platform Infrastructure, SRE, and Developer Experience at Snyk. In their conversation, Crystal shares the story behind the recently founded Developer Experience group, including why they named the team Developer Experience, how she calculates the cost of the problems they solve, and how they partner with engineering teams.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode, Abi Noda is joined by Crystal Hirschorn, who leads Platform Infrastructure, SRE, and Developer Experience at Snyk. In their conversation, Crystal shares the story behind the recently founded Developer Experience group, including why they named the team Developer Experience, how she calculates the cost of the problems they solve, and how they partner with engineering teams.</p>]]>
      </content:encoded>
      <pubDate>Wed, 03 Aug 2022 21:24:03 -0600</pubDate>
      <author>DX</author>
      <enclosure url="https://media.transistor.fm/733faad1/58b00ef4.mp3" length="79835889" type="audio/mpeg"/>
      <itunes:author>DX</itunes:author>
      <itunes:duration>1993</itunes:duration>
      <itunes:summary>In this episode, Abi Noda is joined by Crystal Hirschorn, who leads Platform Infrastructure, SRE, and Developer Experience at Snyk. In their conversation, Crystal shares the story behind the recently founded Developer Experience group, including why they named the team Developer Experience, how she calculates the cost of the problems they solve, and how they partner with engineering teams.</itunes:summary>
      <itunes:subtitle>In this episode, Abi Noda is joined by Crystal Hirschorn, who leads Platform Infrastructure, SRE, and Developer Experience at Snyk. In their conversation, Crystal shares the story behind the recently founded Developer Experience group, including why they n</itunes:subtitle>
      <itunes:keywords>Software Engineering, Developer Experience, Developer Productivity, Engineering Leadership</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.linkedin.com/in/abinoda/" img="https://img.transistorcdn.com/3JLLtbOaHGiInF_ixHI50J6hrbPvsmkptOyXbjqu2gw/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vMDNiMDZmMjkt/ZmQxMy00NmNlLWE5/YzEtOTAzODQ4YTE2/YjgzLzE2NjU0MzA2/ODktaW1hZ2UuanBn.jpg">Abi Noda</podcast:person>
    </item>
    <item>
      <title>Supporting 100,000 engineers | Max Pugliese (IBM)</title>
      <itunes:episode>12</itunes:episode>
      <podcast:episode>12</podcast:episode>
      <itunes:title>Supporting 100,000 engineers | Max Pugliese (IBM)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">963e2dda-00a1-4e6b-a97c-a61d8d58401f</guid>
      <link>https://share.transistor.fm/s/719f0f87</link>
      <description>
        <![CDATA[<p><a href="https://www.linkedin.com/in/jmaxpugliese/">Max Pugliese</a>, formerly the Director of Developer Experience at IBM, offers a look at what it’s like to support tens of thousands of engineers. He explains why it’s important to think about the culture and processes surrounding the tooling changes a team tries to implement, how to stay close to developers, and more. </p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p><a href="https://www.linkedin.com/in/jmaxpugliese/">Max Pugliese</a>, formerly the Director of Developer Experience at IBM, offers a look at what it’s like to support tens of thousands of engineers. He explains why it’s important to think about the culture and processes surrounding the tooling changes a team tries to implement, how to stay close to developers, and more. </p>]]>
      </content:encoded>
      <pubDate>Thu, 28 Jul 2022 07:28:53 -0600</pubDate>
      <author>DX</author>
      <enclosure url="https://media.transistor.fm/719f0f87/4dc1b1df.mp3" length="74566321" type="audio/mpeg"/>
      <itunes:author>DX</itunes:author>
      <itunes:duration>1861</itunes:duration>
      <itunes:summary>Max Pugliese, formerly the Director of Developer Experience at IBM, offers a look at what it’s like to support tens of thousands of engineers. He explains why it’s important to think about the culture and processes surrounding the tooling changes a team tries to implement, how to stay close to developers, and more. </itunes:summary>
      <itunes:subtitle>Max Pugliese, formerly the Director of Developer Experience at IBM, offers a look at what it’s like to support tens of thousands of engineers. He explains why it’s important to think about the culture and processes surrounding the tooling changes a team t</itunes:subtitle>
      <itunes:keywords>Software Engineering, Developer Experience, Developer Productivity, Engineering Leadership</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.linkedin.com/in/abinoda/" img="https://img.transistorcdn.com/3JLLtbOaHGiInF_ixHI50J6hrbPvsmkptOyXbjqu2gw/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vMDNiMDZmMjkt/ZmQxMy00NmNlLWE5/YzEtOTAzODQ4YTE2/YjgzLzE2NjU0MzA2/ODktaW1hZ2UuanBn.jpg">Abi Noda</podcast:person>
    </item>
    <item>
      <title>The value of having a PM on a platform team | Jelmer Borst (Picnic Technologies)</title>
      <itunes:episode>11</itunes:episode>
      <podcast:episode>11</podcast:episode>
      <itunes:title>The value of having a PM on a platform team | Jelmer Borst (Picnic Technologies)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">7a27dc59-cd4b-4db1-b9ba-8c23c5a00d9e</guid>
      <link>https://share.transistor.fm/s/dfe50b5e</link>
      <description>
        <![CDATA[<p>In this episode, Abi speaks with Jelmer Borst, Product Manager for Picnic Technologies’ Platform group. Jelmer explains the value of having a PM on an internal-facing team and shares his process for gathering feedback from developers to understand where they’re experiencing friction.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode, Abi speaks with Jelmer Borst, Product Manager for Picnic Technologies’ Platform group. Jelmer explains the value of having a PM on an internal-facing team and shares his process for gathering feedback from developers to understand where they’re experiencing friction.</p>]]>
      </content:encoded>
      <pubDate>Wed, 13 Jul 2022 17:39:10 -0600</pubDate>
      <author>DX</author>
      <enclosure url="https://media.transistor.fm/dfe50b5e/2beca091.mp3" length="81723794" type="audio/mpeg"/>
      <itunes:author>DX</itunes:author>
      <itunes:duration>2040</itunes:duration>
      <itunes:summary>In this episode, Abi speaks with Jelmer Borst, Product Manager for Picnic Technologies’ Platform group. Jelmer explains the value of having a PM on an internal-facing team and shares his process for gathering feedback from developers to understand where they’re experiencing friction.</itunes:summary>
      <itunes:subtitle>In this episode, Abi speaks with Jelmer Borst, Product Manager for Picnic Technologies’ Platform group. Jelmer explains the value of having a PM on an internal-facing team and shares his process for gathering feedback from developers to understand</itunes:subtitle>
      <itunes:keywords>Software Engineering, Developer Experience, Developer Productivity, Engineering Leadership</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.linkedin.com/in/abinoda/" img="https://img.transistorcdn.com/3JLLtbOaHGiInF_ixHI50J6hrbPvsmkptOyXbjqu2gw/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vMDNiMDZmMjkt/ZmQxMy00NmNlLWE5/YzEtOTAzODQ4YTE2/YjgzLzE2NjU0MzA2/ODktaW1hZ2UuanBn.jpg">Abi Noda</podcast:person>
    </item>
    <item>
      <title>Common pitfalls in adopting engineering metrics | Mojtaba Hosseini (Zapier)</title>
      <itunes:episode>10</itunes:episode>
      <podcast:episode>10</podcast:episode>
      <itunes:title>Common pitfalls in adopting engineering metrics | Mojtaba Hosseini (Zapier)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">409872d1-605d-4c15-9455-7a3cafec7f6e</guid>
      <link>https://share.transistor.fm/s/4e677172</link>
      <description>
        <![CDATA[<p>In this interview, Mojtaba Hosseini (Director of Engineering at Zapier) talks about how to approach using metrics, pitfalls teams run into, and the common evolution teams go through as they adopt metrics.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this interview, Mojtaba Hosseini (Director of Engineering at Zapier) talks about how to approach using metrics, pitfalls teams run into, and the common evolution teams go through as they adopt metrics.</p>]]>
      </content:encoded>
      <pubDate>Wed, 06 Jul 2022 15:37:10 -0600</pubDate>
      <author>DX</author>
      <enclosure url="https://media.transistor.fm/4e677172/40a1e14b.mp3" length="110658988" type="audio/mpeg"/>
      <itunes:author>DX</itunes:author>
      <itunes:duration>2763</itunes:duration>
      <itunes:summary>In this interview, Mojtaba Hosseini (Director of Engineering at Zapier) talks about how to approach using metrics, pitfalls teams run into, and the common evolution teams go through as they adopt metrics.</itunes:summary>
      <itunes:subtitle>In this interview, Mojtaba Hosseini (Director of Engineering at Zapier) talks about how to approach using metrics, pitfalls teams run into, and the common evolution teams go through as they adopt metrics.</itunes:subtitle>
      <itunes:keywords>Software Engineering, Developer Experience, Developer Productivity, Engineering Leadership</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.linkedin.com/in/abinoda/" img="https://img.transistorcdn.com/3JLLtbOaHGiInF_ixHI50J6hrbPvsmkptOyXbjqu2gw/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vMDNiMDZmMjkt/ZmQxMy00NmNlLWE5/YzEtOTAzODQ4YTE2/YjgzLzE2NjU0MzA2/ODktaW1hZ2UuanBn.jpg">Abi Noda</podcast:person>
    </item>
    <item>
      <title>Advocating for the voice of the developer | Julio Santana (Workday)</title>
      <itunes:episode>9</itunes:episode>
      <podcast:episode>9</podcast:episode>
      <itunes:title>Advocating for the voice of the developer | Julio Santana (Workday)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">c554b6e6-e6f3-4bad-a9f7-59fae1aa4e3f</guid>
      <link>https://share.transistor.fm/s/73f3030e</link>
      <description>
        <![CDATA[<p><a href="https://www.linkedin.com/in/jsantanaiii/">Julio Santana</a> from Workday shares how he thinks about the ideal scope of a Developer Experience team, getting buy-in for DX initiatives, how his team gathers feedback from developers, and more. </p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p><a href="https://www.linkedin.com/in/jsantanaiii/">Julio Santana</a> from Workday shares how he thinks about the ideal scope of a Developer Experience team, getting buy-in for DX initiatives, how his team gathers feedback from developers, and more. </p>]]>
      </content:encoded>
      <pubDate>Wed, 29 Jun 2022 13:55:54 -0600</pubDate>
      <author>DX</author>
      <enclosure url="https://media.transistor.fm/73f3030e/88f923cb.mp3" length="37961016" type="audio/mpeg"/>
      <itunes:author>DX</itunes:author>
      <itunes:duration>2366</itunes:duration>
      <itunes:summary>Julio Santana from Workday shares how he thinks about the ideal scope of a Developer Experience team, getting buy-in for DX initiatives, how his team gathers feedback from developers, and more. </itunes:summary>
      <itunes:subtitle>Julio Santana from Workday shares how he thinks about the ideal scope of a Developer Experience team, getting buy-in for DX initiatives, how his team gathers feedback from developers, and more. </itunes:subtitle>
      <itunes:keywords>Software Engineering, Developer Experience, Developer Productivity, Engineering Leadership</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.linkedin.com/in/abinoda/" img="https://img.transistorcdn.com/3JLLtbOaHGiInF_ixHI50J6hrbPvsmkptOyXbjqu2gw/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vMDNiMDZmMjkt/ZmQxMy00NmNlLWE5/YzEtOTAzODQ4YTE2/YjgzLzE2NjU0MzA2/ODktaW1hZ2UuanBn.jpg">Abi Noda</podcast:person>
    </item>
    <item>
      <title>Why founding a DevEx team is like starting a startup | Minh Pham, Titus Stone (Ibotta)</title>
      <itunes:episode>8</itunes:episode>
      <podcast:episode>8</podcast:episode>
      <itunes:title>Why founding a DevEx team is like starting a startup | Minh Pham, Titus Stone (Ibotta)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">4b22b6e7-7379-4bcf-9e85-10201f8bdd28</guid>
      <link>https://share.transistor.fm/s/9695a99c</link>
      <description>
        <![CDATA[<p>In this episode we’re joined by Minh Pham and Titus Stone from Ibotta’s Developer Experience team. You’ll hear their story about how the DX team came into existence, why they view a DX team as a “startup within a startup”, and their vision for what DX at Ibotta will become.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode we’re joined by Minh Pham and Titus Stone from Ibotta’s Developer Experience team. You’ll hear their story about how the DX team came into existence, why they view a DX team as a “startup within a startup”, and their vision for what DX at Ibotta will become.</p>]]>
      </content:encoded>
      <pubDate>Wed, 22 Jun 2022 20:42:20 -0600</pubDate>
      <author>DX</author>
      <enclosure url="https://media.transistor.fm/9695a99c/2a1bc614.mp3" length="94877982" type="audio/mpeg"/>
      <itunes:author>DX</itunes:author>
      <itunes:duration>2369</itunes:duration>
      <itunes:summary>In this episode we’re joined by Minh Pham and Titus Stone from Ibotta’s Developer Experience team. You’ll hear their story about how the DX team came into existence, why they view a DX team as a “startup within a startup”, and their vision for what DX at Ibotta will become.</itunes:summary>
      <itunes:subtitle>In this episode we’re joined by Minh Pham and Titus Stone from Ibotta’s Developer Experience team. You’ll hear their story about how the DX team came into existence, why they view a DX team as a “startup within a startup”, and their vision for what DX at </itunes:subtitle>
      <itunes:keywords>Software Engineering, Developer Experience, Developer Productivity, Engineering Leadership</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.linkedin.com/in/abinoda/" img="https://img.transistorcdn.com/3JLLtbOaHGiInF_ixHI50J6hrbPvsmkptOyXbjqu2gw/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vMDNiMDZmMjkt/ZmQxMy00NmNlLWE5/YzEtOTAzODQ4YTE2/YjgzLzE2NjU0MzA2/ODktaW1hZ2UuanBn.jpg">Abi Noda</podcast:person>
    </item>
    <item>
      <title>The ultimate guide on Engineering Operations | Ryan Atkins (Asana, Stripe, Dropbox)</title>
      <itunes:episode>7</itunes:episode>
      <podcast:episode>7</podcast:episode>
      <itunes:title>The ultimate guide on Engineering Operations | Ryan Atkins (Asana, Stripe, Dropbox)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">73d8f888-b025-4fc3-a6cd-e4605c628597</guid>
      <link>https://share.transistor.fm/s/748ab2b6</link>
      <description>
        <![CDATA[<p>In this episode Abi Noda speaks with <a href="https://www.linkedin.com/in/ryan-atkins-9503659/">Ryan Atkins</a>, Asana’s Head of Engineering Operations. They talk about the role of EngOps and when it’s needed, founding an EngOps team, how these teams work in large companies, and more. </p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode Abi Noda speaks with <a href="https://www.linkedin.com/in/ryan-atkins-9503659/">Ryan Atkins</a>, Asana’s Head of Engineering Operations. They talk about the role of EngOps and when it’s needed, founding an EngOps team, how these teams work in large companies, and more. </p>]]>
      </content:encoded>
      <pubDate>Thu, 16 Jun 2022 19:21:50 -0600</pubDate>
      <author>DX</author>
      <enclosure url="https://media.transistor.fm/748ab2b6/a864cfbb.mp3" length="112075888" type="audio/mpeg"/>
      <itunes:author>DX</itunes:author>
      <itunes:duration>2799</itunes:duration>
      <itunes:summary>In this episode Abi Noda speaks with Ryan Atkins, Asana’s Head of Engineering Operations. They talk about the role of EngOps and when it’s needed, founding an EngOps team, how these teams work in large companies, and more. </itunes:summary>
      <itunes:subtitle>In this episode Abi Noda speaks with Ryan Atkins, Asana’s Head of Engineering Operations. They talk about the role of EngOps and when it’s needed, founding an EngOps team, how these teams work in large companies, and more. </itunes:subtitle>
      <itunes:keywords>Software Engineering, Developer Experience, Developer Productivity, Engineering Leadership</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.linkedin.com/in/abinoda/" img="https://img.transistorcdn.com/3JLLtbOaHGiInF_ixHI50J6hrbPvsmkptOyXbjqu2gw/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vMDNiMDZmMjkt/ZmQxMy00NmNlLWE5/YzEtOTAzODQ4YTE2/YjgzLzE2NjU0MzA2/ODktaW1hZ2UuanBn.jpg">Abi Noda</podcast:person>
    </item>
    <item>
      <title>Staffing infrastructure teams | Will Larson (Calm, Stripe, Uber)</title>
      <itunes:episode>6</itunes:episode>
      <podcast:episode>6</podcast:episode>
      <itunes:title>Staffing infrastructure teams | Will Larson (Calm, Stripe, Uber)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">a64a0721-0de4-4626-8f2b-b336f5583cd8</guid>
      <link>https://share.transistor.fm/s/24f6f774</link>
      <description>
        <![CDATA[<p>Will Larson, the CTO at Calm, covers a wide range of topics including whether Infrastructure Engineering is chronically understaffed, the role of Eng Ops, how his opinion on the “build vs buy” question has changed, his thoughts on metrics, and more. </p><p>Helpful resources:</p><ul><li>Will's <a href="https://infraeng.dev/">Infraeng book</a></li><li>Will's article, <a href="https://lethain.com/infrastructure-planning/">Infrastructure planning</a></li><li>Will's article, <a href="https://lethain.com/how-to-invest-technical-infrastructure/">How to invest in infrastructure</a></li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Will Larson, the CTO at Calm, covers a wide range of topics including whether Infrastructure Engineering is chronically understaffed, the role of Eng Ops, how his opinion on the “build vs buy” question has changed, his thoughts on metrics, and more. </p><p>Helpful resources:</p><ul><li>Will's <a href="https://infraeng.dev/">Infraeng book</a></li><li>Will's article, <a href="https://lethain.com/infrastructure-planning/">Infrastructure planning</a></li><li>Will's article, <a href="https://lethain.com/how-to-invest-technical-infrastructure/">How to invest in infrastructure</a></li></ul>]]>
      </content:encoded>
      <pubDate>Wed, 08 Jun 2022 14:55:25 -0600</pubDate>
      <author>DX</author>
      <enclosure url="https://media.transistor.fm/24f6f774/f81fc614.mp3" length="97430664" type="audio/mpeg"/>
      <itunes:author>DX</itunes:author>
      <itunes:duration>2433</itunes:duration>
      <itunes:summary>Will Larson, the CTO at Calm, covers a wide range of topics including whether Infrastructure Engineering is chronically understaffed, the role of Eng Ops, how his opinion on the “build vs buy” question has changed, his thoughts on metrics, and more. </itunes:summary>
      <itunes:subtitle>Will Larson, the CTO at Calm, covers a wide range of topics including whether Infrastructure Engineering is chronically understaffed, the role of Eng Ops, how his opinion on the “build vs buy” question has changed, his thoughts on metrics, and more. </itunes:subtitle>
      <itunes:keywords>Software Engineering, Developer Experience, Developer Productivity, Engineering Leadership</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.linkedin.com/in/abinoda/" img="https://img.transistorcdn.com/3JLLtbOaHGiInF_ixHI50J6hrbPvsmkptOyXbjqu2gw/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vMDNiMDZmMjkt/ZmQxMy00NmNlLWE5/YzEtOTAzODQ4YTE2/YjgzLzE2NjU0MzA2/ODktaW1hZ2UuanBn.jpg">Abi Noda</podcast:person>
    </item>
    <item>
      <title>Supporting autonomous teams | Victoria Morgan-Smith (Financial Times)</title>
      <itunes:episode>5</itunes:episode>
      <podcast:episode>5</podcast:episode>
      <itunes:title>Supporting autonomous teams | Victoria Morgan-Smith (Financial Times)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">d6c57d6e-7bbc-4ce7-b159-90b780fcbaec</guid>
      <link>https://share.transistor.fm/s/30119381</link>
      <description>
        <![CDATA[<p>Joining us for this episode is <a href="https://www.linkedin.com/in/victoriamorgansmith/">Victoria Morgan-Smith</a>, the Director of Delivery for Engineering Enablement at the Financial Times. Victoria shares some of the tradeoffs in having an autonomous, “you build it, you run it” culture. She also shares how her group equips engineering teams with metrics, best practices, and more.</p><p>Follow Victoria on <a href="https://www.linkedin.com/in/victoriamorgansmith/">LinkedIn</a></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Joining us for this episode is <a href="https://www.linkedin.com/in/victoriamorgansmith/">Victoria Morgan-Smith</a>, the Director of Delivery for Engineering Enablement at the Financial Times. Victoria shares some of the tradeoffs in having an autonomous, “you build it, you run it” culture. She also shares how her group equips engineering teams with metrics, best practices, and more.</p><p>Follow Victoria on <a href="https://www.linkedin.com/in/victoriamorgansmith/">LinkedIn</a></p>]]>
      </content:encoded>
      <pubDate>Wed, 25 May 2022 21:08:10 -0600</pubDate>
      <author>DX</author>
      <enclosure url="https://media.transistor.fm/30119381/9c113b9e.mp3" length="33995632" type="audio/mpeg"/>
      <itunes:author>DX</itunes:author>
      <itunes:duration>2118</itunes:duration>
      <itunes:summary>Joining us for this episode is Victoria Morgan-Smith, the Director of Delivery for Engineering Enablement at the Financial Times. Victoria shares some of the tradeoffs in having an autonomous, “you build it, you run it” culture. She also shares how her group equips engineering teams with metrics, best practices, and more.</itunes:summary>
      <itunes:subtitle>Joining us for this episode is Victoria Morgan-Smith, the Director of Delivery for Engineering Enablement at the Financial Times. Victoria shares some of the tradeoffs in having an autonomous, “you build it, you run it” culture. She also shares how her group </itunes:subtitle>
      <itunes:keywords>Software Engineering, Developer Experience, Developer Productivity, Engineering Leadership</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.linkedin.com/in/abinoda/" img="https://img.transistorcdn.com/3JLLtbOaHGiInF_ixHI50J6hrbPvsmkptOyXbjqu2gw/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vMDNiMDZmMjkt/ZmQxMy00NmNlLWE5/YzEtOTAzODQ4YTE2/YjgzLzE2NjU0MzA2/ODktaW1hZ2UuanBn.jpg">Abi Noda</podcast:person>
    </item>
    <item>
      <title>What it looks like to hire Engineering Effectiveness too late | Peter Seibel (ex-Twitter)</title>
      <itunes:episode>4</itunes:episode>
      <podcast:episode>4</podcast:episode>
      <itunes:title>What it looks like to hire Engineering Effectiveness too late | Peter Seibel (ex-Twitter)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">45229827-19e0-4615-ab01-466c6cd286e5</guid>
      <link>https://share.transistor.fm/s/6d197436</link>
      <description>
        <![CDATA[<p>In this episode Abi talks with Peter Seibel. Peter previously was the Director of Engineering for the Democratic National Committee, and before that led Twitter’s Engineering Effectiveness (EE) team. In this interview, Peter reflects on his experience at Twitter, sharing why it’s better to invest in EE early and his vision for how EE teams can fulfill their potential.  </p><p>Useful links:</p><ul><li>Follow Peter on <a href="https://twitter.com/peterseibel">Twitter</a> and <a href="https://www.linkedin.com/in/peter-seibel-39518a72/">LinkedIn</a></li><li>Read Peter's post about leading Engineering Effectiveness at Twitter: <a href="https://gigamonkeys.com/flowers/">Let a 1,000 Flowers Bloom. Then Rip 999 of Them Out by the Roots </a></li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode Abi talks with Peter Seibel. Peter previously was the Director of Engineering for the Democratic National Committee, and before that led Twitter’s Engineering Effectiveness (EE) team. In this interview, Peter reflects on his experience at Twitter, sharing why it’s better to invest in EE early and his vision for how EE teams can fulfill their potential.  </p><p>Useful links:</p><ul><li>Follow Peter on <a href="https://twitter.com/peterseibel">Twitter</a> and <a href="https://www.linkedin.com/in/peter-seibel-39518a72/">LinkedIn</a></li><li>Read Peter's post about leading Engineering Effectiveness at Twitter: <a href="https://gigamonkeys.com/flowers/">Let a 1,000 Flowers Bloom. Then Rip 999 of Them Out by the Roots </a></li></ul>]]>
      </content:encoded>
      <pubDate>Thu, 12 May 2022 08:11:11 -0600</pubDate>
      <author>DX</author>
      <enclosure url="https://media.transistor.fm/6d197436/ebe65b92.mp3" length="36918116" type="audio/mpeg"/>
      <itunes:author>DX</itunes:author>
      <itunes:duration>2300</itunes:duration>
      <itunes:summary>In this episode Abi talks with Peter Seibel. Peter previously was the Director of Engineering for the Democratic National Committee, and before that led Twitter’s Engineering Effectiveness (EE) team. In this interview, Peter reflects on his experience at Twitter, sharing why it’s better to invest in EE early and his vision for how EE teams can fulfill their potential.  </itunes:summary>
      <itunes:subtitle>In this episode Abi talks with Peter Seibel. Peter previously was the Director of Engineering for the Democratic National Committee, and before that led Twitter’s Engineering Effectiveness (EE) team. In this interview, Peter reflects on his experience at </itunes:subtitle>
      <itunes:keywords>Software Engineering, Developer Experience, Developer Productivity, Engineering Leadership</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.linkedin.com/in/abinoda/" img="https://img.transistorcdn.com/3JLLtbOaHGiInF_ixHI50J6hrbPvsmkptOyXbjqu2gw/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vMDNiMDZmMjkt/ZmQxMy00NmNlLWE5/YzEtOTAzODQ4YTE2/YjgzLzE2NjU0MzA2/ODktaW1hZ2UuanBn.jpg">Abi Noda</podcast:person>
    </item>
    <item>
      <title>Tactics for driving service adoption | Varun Achar (Razorpay)</title>
      <itunes:episode>3</itunes:episode>
      <podcast:episode>3</podcast:episode>
      <itunes:title>Tactics for driving service adoption | Varun Achar (Razorpay)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">a01c6e22-d357-4623-b552-6efcceba009a</guid>
      <link>https://share.transistor.fm/s/39e35efe</link>
      <description>
        <![CDATA[<p>In this episode, Varun Achar (Director of Engineering at Razorpay) explains how the Platform org has grown from a 15-person team owning everything, to 3 separate subteams. He also shares how they think about creating a culture of productivity, and some of the tactics they’ve used for increasing service adoption.</p><p>Helpful links: </p><ul><li>Connect with Varun on <a href="https://www.linkedin.com/in/varunachar/">LinkedIn</a></li><li>Read Varun's blog post, <a href="https://engineering.razorpay.com/the-platform-engineer-db2b21434911">The Platform Engineer</a></li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode, Varun Achar (Director of Engineering at Razorpay) explains how the Platform org has grown from a 15-person team owning everything, to 3 separate subteams. He also shares how they think about creating a culture of productivity, and some of the tactics they’ve used for increasing service adoption.</p><p>Helpful links: </p><ul><li>Connect with Varun on <a href="https://www.linkedin.com/in/varunachar/">LinkedIn</a></li><li>Read Varun's blog post, <a href="https://engineering.razorpay.com/the-platform-engineer-db2b21434911">The Platform Engineer</a></li></ul>]]>
      </content:encoded>
      <pubDate>Wed, 04 May 2022 16:00:00 -0600</pubDate>
      <author>DX</author>
      <enclosure url="https://media.transistor.fm/39e35efe/9b0dd3ef.mp3" length="35327244" type="audio/mpeg"/>
      <itunes:author>DX</itunes:author>
      <itunes:duration>2201</itunes:duration>
      <itunes:summary>In this episode, Varun Achar (Director of Engineering at Razorpay) explains how the Platform org has grown from a 15-person team owning everything, to 3 separate subteams. He also shares how they think about creating a culture of productivity, and some of the tactics they’ve used for increasing service adoption.</itunes:summary>
      <itunes:subtitle>In this episode, Varun Achar (Director of Engineering at Razorpay) explains how the Platform org has grown from a 15-person team owning everything, to 3 separate subteams. He also shares how they think about creating a culture of productivity, and some of</itunes:subtitle>
      <itunes:keywords>Software Engineering, Developer Experience, Developer Productivity, Engineering Leadership</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.linkedin.com/in/abinoda/" img="https://img.transistorcdn.com/3JLLtbOaHGiInF_ixHI50J6hrbPvsmkptOyXbjqu2gw/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vMDNiMDZmMjkt/ZmQxMy00NmNlLWE5/YzEtOTAzODQ4YTE2/YjgzLzE2NjU0MzA2/ODktaW1hZ2UuanBn.jpg">Abi Noda</podcast:person>
    </item>
    <item>
      <title>How DoorDash’s developer productivity team prioritizes projects | Marco Chirico (DoorDash)</title>
      <itunes:episode>2</itunes:episode>
      <podcast:episode>2</podcast:episode>
      <itunes:title>How DoorDash’s developer productivity team prioritizes projects | Marco Chirico (DoorDash)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">417ef27c-3547-4211-a8a4-628c08991486</guid>
      <link>https://share.transistor.fm/s/0708b29c</link>
      <description>
        <![CDATA[<p>In this episode, Marco Chirico shares the strategies DoorDash’s Developer Productivity group uses to prioritize their work. He also explains how the Developer Productivity group has evolved over time, and how they measure their success today.  </p><p>Useful links: </p><ul><li>Connect with Marco on <a href="https://www.linkedin.com/in/chiricomarco/">LinkedIn</a></li><li>Read DoorDash's <a href="https://doordash.engineering/">Engineering blog</a></li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode, Marco Chirico shares the strategies DoorDash’s Developer Productivity group uses to prioritize their work. He also explains how the Developer Productivity group has evolved over time, and how they measure their success today.  </p><p>Useful links: </p><ul><li>Connect with Marco on <a href="https://www.linkedin.com/in/chiricomarco/">LinkedIn</a></li><li>Read DoorDash's <a href="https://doordash.engineering/">Engineering blog</a></li></ul>]]>
      </content:encoded>
      <pubDate>Wed, 20 Apr 2022 09:56:22 -0600</pubDate>
      <author>DX</author>
      <enclosure url="https://media.transistor.fm/0708b29c/e8f4443e.mp3" length="21702084" type="audio/mpeg"/>
      <itunes:author>DX</itunes:author>
      <itunes:duration>1349</itunes:duration>
      <itunes:summary>In this episode, Marco Chirico shares the strategies DoorDash’s Developer Productivity group uses to prioritize their work. He also explains how the Developer Productivity group has evolved over time, and how they measure their success today.  </itunes:summary>
      <itunes:subtitle>In this episode, Marco Chirico shares the strategies DoorDash’s Developer Productivity group uses to prioritize their work. He also explains how the Developer Productivity group has evolved over time, and how they measure their success today.  </itunes:subtitle>
      <itunes:keywords>Software Engineering, Developer Experience, Developer Productivity, Engineering Leadership</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.linkedin.com/in/abinoda/" img="https://img.transistorcdn.com/3JLLtbOaHGiInF_ixHI50J6hrbPvsmkptOyXbjqu2gw/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vMDNiMDZmMjkt/ZmQxMy00NmNlLWE5/YzEtOTAzODQ4YTE2/YjgzLzE2NjU0MzA2/ODktaW1hZ2UuanBn.jpg">Abi Noda</podcast:person>
    </item>
    <item>
      <title>How GitHub’s developer experience team has evolved | Liz Saling (GitHub)</title>
      <itunes:episode>1</itunes:episode>
      <podcast:episode>1</podcast:episode>
      <itunes:title>How GitHub’s developer experience team has evolved | Liz Saling (GitHub)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">5b51ed66-ae1e-4c6d-afd9-1c94feaa6b28</guid>
      <link>https://share.transistor.fm/s/22a0d9d8</link>
      <description>
        <![CDATA[<p>Liz Saling, Director of Engineering at GitHub, shares the story of how the Developer Experience group was founded and why GitHub paused features for a quarter to focus on making developer experience improvements. </p><p><strong>Helpful links:</strong></p><ul><li>Watch Liz’s GitHub Universe talk, “<a href="https://www.youtube.com/watch?v=3XR7qeAosFs">Paying Down Technical Debt</a>”</li><li>Find Liz on <a href="https://www.linkedin.com/in/lizsaling/">LinkedIn</a> or <a href="https://twitter.com/LizSaling">Twitter</a></li><li>Read Liz’s blog at <a href="https://lizsaling.com/">lizsaling.com</a></li><li>Read <a href="https://github.blog/category/engineering/">GitHub’s engineering blog </a> </li></ul><p><br></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Liz Saling, Director of Engineering at GitHub, shares the story of how the Developer Experience group was founded and why GitHub paused features for a quarter to focus on making developer experience improvements. </p><p><strong>Helpful links:</strong></p><ul><li>Watch Liz’s GitHub Universe talk, “<a href="https://www.youtube.com/watch?v=3XR7qeAosFs">Paying Down Technical Debt</a>”</li><li>Find Liz on <a href="https://www.linkedin.com/in/lizsaling/">LinkedIn</a> or <a href="https://twitter.com/LizSaling">Twitter</a></li><li>Read Liz’s blog at <a href="https://lizsaling.com/">lizsaling.com</a></li><li>Read <a href="https://github.blog/category/engineering/">GitHub’s engineering blog </a> </li></ul><p><br></p>]]>
      </content:encoded>
      <pubDate>Wed, 13 Apr 2022 10:52:46 -0600</pubDate>
      <author>DX</author>
      <enclosure url="https://media.transistor.fm/22a0d9d8/caad3fb2.mp3" length="41480754" type="audio/mpeg"/>
      <itunes:author>DX</itunes:author>
      <itunes:duration>2586</itunes:duration>
      <itunes:summary>Liz Saling, Director of Engineering at GitHub, shares the story of how the Developer Experience group was founded and why GitHub paused features for a quarter to focus on making developer experience improvements. </itunes:summary>
      <itunes:subtitle>Liz Saling, Director of Engineering at GitHub, shares the story of how the Developer Experience group was founded and why GitHub paused features for a quarter to focus on making developer experience improvements. </itunes:subtitle>
      <itunes:keywords>Software Engineering, Developer Experience, Developer Productivity, Engineering Leadership</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://www.linkedin.com/in/abinoda/" img="https://img.transistorcdn.com/3JLLtbOaHGiInF_ixHI50J6hrbPvsmkptOyXbjqu2gw/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9wZXJz/b24vMDNiMDZmMjkt/ZmQxMy00NmNlLWE5/YzEtOTAzODQ4YTE2/YjgzLzE2NjU0MzA2/ODktaW1hZ2UuanBn.jpg">Abi Noda</podcast:person>
    </item>
  </channel>
</rss>
