<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet href="/stylesheet.xsl" type="text/xsl"?>
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:sy="http://purl.org/rss/1.0/modules/syndication/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:podcast="https://podcastindex.org/namespace/1.0">
  <channel>
    <atom:link rel="self" type="application/atom+xml" href="https://feeds.transistor.fm/ai-x-devops-by-facets-cloud" title="MP3 Audio"/>
    <atom:link rel="hub" href="https://pubsubhubbub.appspot.com/"/>
    <podcast:podping usesPodping="true"/>
    <title>AI x DevOps by Facets.cloud</title>
    <generator>Transistor (https://transistor.fm)</generator>
    <itunes:new-feed-url>https://feeds.transistor.fm/ai-x-devops-by-facets-cloud</itunes:new-feed-url>
    <description>Engineering teams are under pressure to move faster, do more with less, and stay ahead of an increasingly complex stack. AI is becoming a key piece of that equation — not just as a tool, but as a shift in how DevOps is done.

At Facets.cloud, we’re building infrastructure orchestration for the AI era. And with the AI x DevOps Podcast, we’re creating the space for honest, technical, forward-looking conversations about that shift, from early experiments to long-term visions.

This podcast is about sharing what’s real: what’s working, what’s not, and what’s next. Whether you’re building internal copilots, streamlining CI/CD with AI, or rethinking developer experience — we want to learn from your story.</description>
    <copyright>© 2026 Facets.cloud</copyright>
    <podcast:guid>a2d7e013-4ced-55cb-8a24-76017b83a343</podcast:guid>
    <podcast:locked>yes</podcast:locked>
    <language>en</language>
    <pubDate>Wed, 11 Feb 2026 06:35:57 -0800</pubDate>
    <lastBuildDate>Wed, 11 Feb 2026 06:36:11 -0800</lastBuildDate>
    <link>https://www.facets.cloud/</link>
    <image>
      <url>https://img.transistorcdn.com/a7QgUAhvZINX00FtcV8V_OUXCDSfi9ahidtJ0Ik3f4o/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9kMDI3/OGE5MzBiYzc2MDQ4/NmIyOTYyZDg2MjZm/NDc2Ni5wbmc.jpg</url>
      <title>AI x DevOps by Facets.cloud</title>
      <link>https://www.facets.cloud/</link>
    </image>
    <itunes:category text="Technology"/>
    <itunes:type>episodic</itunes:type>
    <itunes:author>Facets.cloud</itunes:author>
    <itunes:image href="https://img.transistorcdn.com/a7QgUAhvZINX00FtcV8V_OUXCDSfi9ahidtJ0Ik3f4o/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9kMDI3/OGE5MzBiYzc2MDQ4/NmIyOTYyZDg2MjZm/NDc2Ni5wbmc.jpg"/>
    <itunes:summary>Engineering teams are under pressure to move faster, do more with less, and stay ahead of an increasingly complex stack. AI is becoming a key piece of that equation — not just as a tool, but as a shift in how DevOps is done.

At Facets.cloud, we’re building infrastructure orchestration for the AI era. And with the AI x DevOps Podcast, we’re creating the space for honest, technical, forward-looking conversations about that shift, from early experiments to long-term visions.

This podcast is about sharing what’s real: what’s working, what’s not, and what’s next. Whether you’re building internal copilots, streamlining CI/CD with AI, or rethinking developer experience — we want to learn from your story.</itunes:summary>
    <itunes:subtitle>Engineering teams are under pressure to move faster, do more with less, and stay ahead of an increasingly complex stack.</itunes:subtitle>
    <itunes:keywords></itunes:keywords>
    <itunes:owner>
      <itunes:name>Facets.cloud</itunes:name>
      <itunes:email>marketing@facets.cloud</itunes:email>
    </itunes:owner>
    <itunes:complete>No</itunes:complete>
    <itunes:explicit>No</itunes:explicit>
    <item>
      <title>What Engineering Productivity Means Now: The DORA Lens on AI</title>
      <itunes:episode>6</itunes:episode>
      <podcast:episode>6</podcast:episode>
      <itunes:title>What Engineering Productivity Means Now: The DORA Lens on AI</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">b412b31c-46a2-4c8a-a05e-018c9e678f75</guid>
      <link>https://share.transistor.fm/s/8669c525</link>
      <description>
        <![CDATA[<p>In the 6th episode of the AI x DevOps podcast, host Rohit Raveendran sits down with Nathen Harvey, who leads DORA at Google Cloud, to dissect the groundbreaking findings of the 2025 DORA Report.</p><p><strong>Key topics covered:</strong></p><p>• <strong>AI as an Amplifier:</strong> Learn why AI is categorized as an "amplifier" rather than a "magic wand," requiring solid existing practices to truly yield results.</p><p>• <strong>The Platform Engineering Boom:</strong> A look into why 90% of survey respondents have now adopted platform engineering.</p><p>• <strong>The J-Curve of Productivity:</strong> How to navigate the initial performance dip during a transformation to reach higher stability and efficiency.</p><p>• <strong>AI-Centric UX:</strong> Discussing whether platforms should be redesigned to serve AI agents as primary users.</p><p>• <strong>Measuring Success:</strong> Moving beyond static dashboards toward team reflection and experimentation to improve software delivery.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In the 6th episode of the AI x DevOps podcast, host Rohit Raveendran sits down with Nathen Harvey, who leads DORA at Google Cloud, to dissect the groundbreaking findings of the 2025 DORA Report.</p><p><strong>Key topics covered:</strong></p><p>• <strong>AI as an Amplifier:</strong> Learn why AI is categorized as an "amplifier" rather than a "magic wand," requiring solid existing practices to truly yield results.</p><p>• <strong>The Platform Engineering Boom:</strong> A look into why 90% of survey respondents have now adopted platform engineering.</p><p>• <strong>The J-Curve of Productivity:</strong> How to navigate the initial performance dip during a transformation to reach higher stability and efficiency.</p><p>• <strong>AI-Centric UX:</strong> Discussing whether platforms should be redesigned to serve AI agents as primary users.</p><p>• <strong>Measuring Success:</strong> Moving beyond static dashboards toward team reflection and experimentation to improve software delivery.</p>]]>
      </content:encoded>
      <pubDate>Wed, 11 Feb 2026 06:26:03 -0800</pubDate>
      <author>Facets.cloud</author>
      <enclosure url="https://media.transistor.fm/8669c525/b606ec21.mp3" length="54674118" type="audio/mpeg"/>
      <itunes:author>Facets.cloud</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/Tftie2Qik5FCPpwomPPRNmzmxRwDXV-1cmqDnYJOfn0/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS83Yjdi/OGVlZDY4YjVhZDZj/NDY5MDkxYzY3NTE1/MmQ0Zi5wbmc.jpg"/>
      <itunes:duration>3414</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In the 6th episode of the AI x DevOps podcast, host Rohit Raveendran sits down with Nathen Harvey, who leads DORA at Google Cloud, to dissect the groundbreaking findings of the 2025 DORA Report.</p><p><strong>Key topics covered:</strong></p><p>• <strong>AI as an Amplifier:</strong> Learn why AI is categorized as an "amplifier" rather than a "magic wand," requiring solid existing practices to truly yield results.</p><p>• <strong>The Platform Engineering Boom:</strong> A look into why 90% of survey respondents have now adopted platform engineering.</p><p>• <strong>The J-Curve of Productivity:</strong> How to navigate the initial performance dip during a transformation to reach higher stability and efficiency.</p><p>• <strong>AI-Centric UX:</strong> Discussing whether platforms should be redesigned to serve AI agents as primary users.</p><p>• <strong>Measuring Success:</strong> Moving beyond static dashboards toward team reflection and experimentation to improve software delivery.</p>]]>
      </itunes:summary>
      <itunes:keywords>DevOps, AI SRE, MCP, Platform Engineering, AI, DORA, GCP</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://aixdevops.transistor.fm/people/rohit-raveendran">Rohit Raveendran</podcast:person>
    </item>
    <item>
      <title>AI Meets MLOps: Making Sense of the Mess</title>
      <itunes:episode>5</itunes:episode>
      <podcast:episode>5</podcast:episode>
      <itunes:title>AI Meets MLOps: Making Sense of the Mess</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">879ceba4-4a96-47ea-b0ef-60a49c554e82</guid>
      <link>https://share.transistor.fm/s/b8e4dd21</link>
      <description>
        <![CDATA[<p>In this episode of AI x DevOps, Rohit sits down with Görkem Ercan, CTO at Jozu, a company building a DevOps platform for AI agents and models. Görkem, a veteran with over two decades of software experience (including contributions to the Eclipse Foundation), explains why MLOps is fundamentally different from traditional, deterministic DevOps—leading to extreme pipeline fragmentation.</p><p>Here are some of our favourite takeaways:</p><p>• Standardization is Key: Why OCI is the recognized standard for packaging AI/ML artifacts, and how the Model Packs project (with ByteDance, Red Hat, and Docker) is defining the artifact structure.</p><p>• Open Source Headaches: The critical challenge maintainers face when receiving large amounts of untested, verbose, AI-generated code.</p><p>• LLM Economics: Discover why running small, fine-tuned LLMs in-house can be cheaper and provide more predictable, consistent results than generic large providers.</p><p>• KitOps Solution: How KitOps creates an abstraction that allows data scientists to focus on training while leveraging existing DevOps platforms for deployment.</p><p>Tune in now to understand the standardization movement reshaping the future of AI development!</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode of AI x DevOps, Rohit sits down with Görkem Ercan, CTO at Jozu, a company building a DevOps platform for AI agents and models. Görkem, a veteran with over two decades of software experience (including contributions to the Eclipse Foundation), explains why MLOps is fundamentally different from traditional, deterministic DevOps—leading to extreme pipeline fragmentation.</p><p>Here are some of our favourite takeaways:</p><p>• Standardization is Key: Why OCI is the recognized standard for packaging AI/ML artifacts, and how the Model Packs project (with ByteDance, Red Hat, and Docker) is defining the artifact structure.</p><p>• Open Source Headaches: The critical challenge maintainers face when receiving large amounts of untested, verbose, AI-generated code.</p><p>• LLM Economics: Discover why running small, fine-tuned LLMs in-house can be cheaper and provide more predictable, consistent results than generic large providers.</p><p>• KitOps Solution: How KitOps creates an abstraction that allows data scientists to focus on training while leveraging existing DevOps platforms for deployment.</p><p>Tune in now to understand the standardization movement reshaping the future of AI development!</p>]]>
      </content:encoded>
      <pubDate>Thu, 06 Nov 2025 03:22:16 -0800</pubDate>
      <author>Facets.cloud</author>
      <enclosure url="https://media.transistor.fm/b8e4dd21/b4200856.mp3" length="171210840" type="audio/mpeg"/>
      <itunes:author>Facets.cloud</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/-zFbh9zhy6vWoisMIB8joo9FPTKX3UGH8JjJHz85mlQ/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS83MWI2/NzEwOTkyMjdjZTRj/YzkyMmI0MTRlNTgx/MWQyNS5wbmc.jpg"/>
      <itunes:duration>4268</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode of AI x DevOps, Rohit sits down with Görkem Ercan, CTO at Jozu, a company building a DevOps platform for AI agents and models. Görkem, a veteran with over two decades of software experience (including contributions to the Eclipse Foundation), explains why MLOps is fundamentally different from traditional, deterministic DevOps—leading to extreme pipeline fragmentation.</p><p>Here are some of our favourite takeaways:</p><p>• Standardization is Key: Why OCI is the recognized standard for packaging AI/ML artifacts, and how the Model Packs project (with ByteDance, Red Hat, and Docker) is defining the artifact structure.</p><p>• Open Source Headaches: The critical challenge maintainers face when receiving large amounts of untested, verbose, AI-generated code.</p><p>• LLM Economics: Discover why running small, fine-tuned LLMs in-house can be cheaper and provide more predictable, consistent results than generic large providers.</p><p>• KitOps Solution: How KitOps creates an abstraction that allows data scientists to focus on training while leveraging existing DevOps platforms for deployment.</p><p>Tune in now to understand the standardization movement reshaping the future of AI development!</p>]]>
      </itunes:summary>
      <itunes:keywords>mcp, devops, mlops, aiops, chatgpt, claude, openai, anthropic</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://aixdevops.transistor.fm/people/rohit-raveendran">Rohit Raveendran</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/b8e4dd21/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>AI DevOps in Practice: A Solutions Architect's View</title>
      <itunes:episode>4</itunes:episode>
      <podcast:episode>4</podcast:episode>
      <itunes:title>AI DevOps in Practice: A Solutions Architect's View</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">36b1b251-7bea-4a63-bd73-0af415d5efca</guid>
      <link>https://share.transistor.fm/s/79539e9a</link>
      <description>
        <![CDATA[<p>Join host Rohit (Facets Cloud) in conversation with Sanjeev Ganjihal, Senior Specialist Solutions Architect - Containers at AWS and early Kubernetes expert. They discuss the rapid evolution of AI and DevOps, Kubernetes as the new operating system, generative AI in engineering, and the shifting landscape of roles like DevOps, SRE, and AIOps. Sanjeev shares practical advice on using AI assistants, agentic tools, self-hosted models, and the balancing act between automation, productivity, and upskilling in today’s cloud-native world.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Join host Rohit (Facets Cloud) in conversation with Sanjeev Ganjihal, Senior Specialist Solutions Architect - Containers at AWS and early Kubernetes expert. They discuss the rapid evolution of AI and DevOps, Kubernetes as the new operating system, generative AI in engineering, and the shifting landscape of roles like DevOps, SRE, and AIOps. Sanjeev shares practical advice on using AI assistants, agentic tools, self-hosted models, and the balancing act between automation, productivity, and upskilling in today’s cloud-native world.</p>]]>
      </content:encoded>
      <pubDate>Mon, 08 Sep 2025 02:41:50 -0700</pubDate>
      <author>Facets.cloud</author>
      <enclosure url="https://media.transistor.fm/79539e9a/7e6284a1.mp3" length="159375980" type="audio/mpeg"/>
      <itunes:author>Facets.cloud</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/dcFXZ1rXnhTBypUTtaz4UMSsU5zR2zrMcZgFZ-WN6oo/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8yNDMx/MzBiYjBiNDI3N2I0/NzJiMjEzNTE4Y2M3/MDQ3MS5wbmc.jpg"/>
      <itunes:duration>3974</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Join host Rohit (Facets Cloud) in conversation with Sanjeev Ganjihal, Senior Specialist Solutions Architect - Containers at AWS and early Kubernetes expert. They discuss the rapid evolution of AI and DevOps, Kubernetes as the new operating system, generative AI in engineering, and the shifting landscape of roles like DevOps, SRE, and AIOps. Sanjeev shares practical advice on using AI assistants, agentic tools, self-hosted models, and the balancing act between automation, productivity, and upskilling in today’s cloud-native world.</p>]]>
      </itunes:summary>
      <itunes:keywords>AI, kubernetes, MCP, LLM</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://aixdevops.transistor.fm/people/rohit-raveendran">Rohit Raveendran</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/79539e9a/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>AI Security Reality Check</title>
      <itunes:episode>3</itunes:episode>
      <podcast:episode>3</podcast:episode>
      <itunes:title>AI Security Reality Check</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">a67b22a8-87a5-4df0-8cbc-427dc3940032</guid>
      <link>https://share.transistor.fm/s/148f2a54</link>
      <description>
        <![CDATA[<p>This podcast features a discussion with <strong>Nathan Hamiel</strong>, Director of Research at Kudelski Security, an expert with 25 years in the cybersecurity space, focusing specifically on <strong>AI security</strong>.</p><p>The conversation centers on navigating the <strong>generative AI revolution</strong> with a grounded and security-first perspective, particularly for product developers and the security community. Key topics explored include:</p><ul><li><strong>The balance between AI adoption and skepticism</strong>: Nathan discusses how his security outlook influences his professional adoption of AI tools, emphasizing understanding capabilities and evaluating benefits versus trade-offs before production.</li><li><strong>AI productivity and its challenges</strong>: The speakers touch upon Google DORA reports, noting that while AI improves personal coding productivity, its impact on <em>valuable</em> work or features can be negligible or even negative, highlighting the difference between <em>feeling</em> productive and <em>being</em> productive.</li><li><strong>Positive and negative impacts of AI in cybersecurity</strong>: They discuss AI's potential to enhance security tools for code scanning and auto-remediation, such as augmenting traditional fuzzing with large language models. However, they also raise concerns about the <strong>resurgence of conventional vulnerabilities</strong> in AI-generated code.</li><li><strong>Emerging AI-native risks</strong>: The podcast delves into new threats like "<strong>slop squatting</strong>," or "hallucinated dependencies," where LLMs might be tricked into using malicious or non-existent libraries. <strong>Prompt injection</strong> is highlighted as "the vulnerability of generative AI," exploiting the model's inability to differentiate system instructions from user input.</li><li><strong>Addressing AI security vulnerabilities</strong>: Nathan advocates for <strong>architectural changes</strong> and <strong>reducing the attack surface</strong> as the best defense against prompt injection, outlining his "RRT" (refrain, restrict, trap) approach. The need for human oversight and deterministic checks in AI development workflows is also stressed.</li><li><strong>The urgency of security in AI product development</strong>: Both speakers express concern over the rush to market AI products without adequately addressing security issues, leading to unacknowledged vulnerabilities.</li><li><strong>The nature of AI mistakes</strong>: A unique insight is provided on how <strong>AI mistakes differ from human errors</strong>; while human mistakes are predictable (e.g., fatigue), AI mistakes can be random and apply across all complexity levels, making them harder to predict and mitigate. The potential for "hallucinated data of today" to become "facts of tomorrow" due to AI-generated output tainting the web is also discussed.</li><li><strong>Future of AI advancements</strong>: The conversation concludes by suggesting that AI improvements might be <strong>plateauing rather than growing exponentially</strong>, and that new fundamental innovations are needed to push AI forward beyond current capabilities.</li></ul><p>Ultimately, the podcast serves as a <strong>grounding discussion for product engineers</strong> on how to build and integrate AI solutions in a secure and responsible manner, emphasizing that AI tools should be used to solve tasks effectively rather than chasing a path to superintelligence.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This podcast features a discussion with <strong>Nathan Hamiel</strong>, Director of Research at Kudelski Security, an expert with 25 years in the cybersecurity space, focusing specifically on <strong>AI security</strong>.</p><p>The conversation centers on navigating the <strong>generative AI revolution</strong> with a grounded and security-first perspective, particularly for product developers and the security community. Key topics explored include:</p><ul><li><strong>The balance between AI adoption and skepticism</strong>: Nathan discusses how his security outlook influences his professional adoption of AI tools, emphasizing understanding capabilities and evaluating benefits versus trade-offs before production.</li><li><strong>AI productivity and its challenges</strong>: The speakers touch upon Google DORA reports, noting that while AI improves personal coding productivity, its impact on <em>valuable</em> work or features can be negligible or even negative, highlighting the difference between <em>feeling</em> productive and <em>being</em> productive.</li><li><strong>Positive and negative impacts of AI in cybersecurity</strong>: They discuss AI's potential to enhance security tools for code scanning and auto-remediation, such as augmenting traditional fuzzing with large language models. However, they also raise concerns about the <strong>resurgence of conventional vulnerabilities</strong> in AI-generated code.</li><li><strong>Emerging AI-native risks</strong>: The podcast delves into new threats like "<strong>slop squatting</strong>," or "hallucinated dependencies," where LLMs might be tricked into using malicious or non-existent libraries. <strong>Prompt injection</strong> is highlighted as "the vulnerability of generative AI," exploiting the model's inability to differentiate system instructions from user input.</li><li><strong>Addressing AI security vulnerabilities</strong>: Nathan advocates for <strong>architectural changes</strong> and <strong>reducing the attack surface</strong> as the best defense against prompt injection, outlining his "RRT" (refrain, restrict, trap) approach. The need for human oversight and deterministic checks in AI development workflows is also stressed.</li><li><strong>The urgency of security in AI product development</strong>: Both speakers express concern over the rush to market AI products without adequately addressing security issues, leading to unacknowledged vulnerabilities.</li><li><strong>The nature of AI mistakes</strong>: A unique insight is provided on how <strong>AI mistakes differ from human errors</strong>; while human mistakes are predictable (e.g., fatigue), AI mistakes can be random and apply across all complexity levels, making them harder to predict and mitigate. The potential for "hallucinated data of today" to become "facts of tomorrow" due to AI-generated output tainting the web is also discussed.</li><li><strong>Future of AI advancements</strong>: The conversation concludes by suggesting that AI improvements might be <strong>plateauing rather than growing exponentially</strong>, and that new fundamental innovations are needed to push AI forward beyond current capabilities.</li></ul><p>Ultimately, the podcast serves as a <strong>grounding discussion for product engineers</strong> on how to build and integrate AI solutions in a secure and responsible manner, emphasizing that AI tools should be used to solve tasks effectively rather than chasing a path to superintelligence.</p>]]>
      </content:encoded>
      <pubDate>Mon, 14 Jul 2025 22:24:22 -0700</pubDate>
      <author>Facets.cloud</author>
      <enclosure url="https://media.transistor.fm/148f2a54/c3ca7e31.mp3" length="143505458" type="audio/mpeg"/>
      <itunes:author>Facets.cloud</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/1_Jb0EGUtnaV4oGF2ahy-sSGVXvCMmBWdZG-a3G1OH4/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9hZDYw/OGFkYzE1ZTJlNzc3/Mjk4YTM2YTE0MDNk/MGZlNC5wbmc.jpg"/>
      <itunes:duration>3577</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This podcast features a discussion with <strong>Nathan Hamiel</strong>, Director of Research at Kudelski Security, an expert with 25 years in the cybersecurity space, focusing specifically on <strong>AI security</strong>.</p><p>The conversation centers on navigating the <strong>generative AI revolution</strong> with a grounded and security-first perspective, particularly for product developers and the security community. Key topics explored include:</p><ul><li><strong>The balance between AI adoption and skepticism</strong>: Nathan discusses how his security outlook influences his professional adoption of AI tools, emphasizing understanding capabilities and evaluating benefits versus trade-offs before production.</li><li><strong>AI productivity and its challenges</strong>: The speakers touch upon Google DORA reports, noting that while AI improves personal coding productivity, its impact on <em>valuable</em> work or features can be negligible or even negative, highlighting the difference between <em>feeling</em> productive and <em>being</em> productive.</li><li><strong>Positive and negative impacts of AI in cybersecurity</strong>: They discuss AI's potential to enhance security tools for code scanning and auto-remediation, such as augmenting traditional fuzzing with large language models. However, they also raise concerns about the <strong>resurgence of conventional vulnerabilities</strong> in AI-generated code.</li><li><strong>Emerging AI-native risks</strong>: The podcast delves into new threats like "<strong>slop squatting</strong>," or "hallucinated dependencies," where LLMs might be tricked into using malicious or non-existent libraries. <strong>Prompt injection</strong> is highlighted as "the vulnerability of generative AI," exploiting the model's inability to differentiate system instructions from user input.</li><li><strong>Addressing AI security vulnerabilities</strong>: Nathan advocates for <strong>architectural changes</strong> and <strong>reducing the attack surface</strong> as the best defense against prompt injection, outlining his "RRT" (refrain, restrict, trap) approach. The need for human oversight and deterministic checks in AI development workflows is also stressed.</li><li><strong>The urgency of security in AI product development</strong>: Both speakers express concern over the rush to market AI products without adequately addressing security issues, leading to unacknowledged vulnerabilities.</li><li><strong>The nature of AI mistakes</strong>: A unique insight is provided on how <strong>AI mistakes differ from human errors</strong>; while human mistakes are predictable (e.g., fatigue), AI mistakes can be random and apply across all complexity levels, making them harder to predict and mitigate. The potential for "hallucinated data of today" to become "facts of tomorrow" due to AI-generated output tainting the web is also discussed.</li><li><strong>Future of AI advancements</strong>: The conversation concludes by suggesting that AI improvements might be <strong>plateauing rather than growing exponentially</strong>, and that new fundamental innovations are needed to push AI forward beyond current capabilities.</li></ul><p>Ultimately, the podcast serves as a <strong>grounding discussion for product engineers</strong> on how to build and integrate AI solutions in a secure and responsible manner, emphasizing that AI tools should be used to solve tasks effectively rather than chasing a path to superintelligence.</p>]]>
      </itunes:summary>
      <itunes:keywords>AI, security, MCP, Prompts, DevOps, IT</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://aixdevops.transistor.fm/people/rohit-raveendran">Rohit Raveendran</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/148f2a54/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>MCP Without the Hype: Founders' Take</title>
      <itunes:episode>2</itunes:episode>
      <podcast:episode>2</podcast:episode>
      <itunes:title>MCP Without the Hype: Founders' Take</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">10060ae4-9739-4461-9c03-ce825b3c7622</guid>
      <link>https://share.transistor.fm/s/25e2acde</link>
      <description>
        <![CDATA[<p>In this episode, Facets.cloud co-founders Rohit and Anshul dive deep into the Model Context Protocol (MCP), tracing the evolution from basic chat assistants to standardized tool connectors for AI-driven DevOps. You’ll learn best practices for designing MCP servers, naming conventions that reduce hallucinations, dry-run workflows for safe automation, and insights on when and why to adopt MCP within your organization.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode, Facets.cloud co-founders Rohit and Anshul dive deep into the Model Context Protocol (MCP), tracing the evolution from basic chat assistants to standardized tool connectors for AI-driven DevOps. You’ll learn best practices for designing MCP servers, naming conventions that reduce hallucinations, dry-run workflows for safe automation, and insights on when and why to adopt MCP within your organization.</p>]]>
      </content:encoded>
      <pubDate>Tue, 03 Jun 2025 22:29:01 -0700</pubDate>
      <author>Facets.cloud</author>
      <enclosure url="https://media.transistor.fm/25e2acde/c7951cc3.mp3" length="121632221" type="audio/mpeg"/>
      <itunes:author>Facets.cloud</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/fZjHTF_C20zDENDyJjVFk3OZHaKpknAkC7ukhaOfSUU/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8zOTQx/ZjM0NGM1ZTQ1OTgy/Zjg4ZDljNzZiM2Nl/YWE4Ni5wbmc.jpg"/>
      <itunes:duration>3030</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode, Facets.cloud co-founders Rohit and Anshul dive deep into the Model Context Protocol (MCP), tracing the evolution from basic chat assistants to standardized tool connectors for AI-driven DevOps. You’ll learn best practices for designing MCP servers, naming conventions that reduce hallucinations, dry-run workflows for safe automation, and insights on when and why to adopt MCP within your organization.</p>]]>
      </itunes:summary>
      <itunes:keywords>AI, DevOps, MCP, Model Context Protocol, Platform Engineering</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://aixdevops.transistor.fm/people/rohit-raveendran">Rohit Raveendran</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/25e2acde/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>From Click-Ops to Chat-Ops: AI's Double-Edged Promise</title>
      <itunes:episode>1</itunes:episode>
      <podcast:episode>1</podcast:episode>
      <itunes:title>From Click-Ops to Chat-Ops: AI's Double-Edged Promise</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">b584823c-25ae-47b8-9f48-3b22ded7410c</guid>
      <link>https://share.transistor.fm/s/c73d6688</link>
      <description>
        <![CDATA[<p>In the very first episode of the AI x DevOps Podcast, we dive into how AI is actually changing infrastructure, not hypothetically, but line by line.</p><p>Rohit Raveendran is joined by Vincent De Smet, DevOps engineer at Handshakes.ai, and together they explore what happens when LLMs start writing Terraform, the difference between deterministic and vibe-coded infra, and why CDK might offer a more AI-friendly future than raw HCL.</p><p>They talk about the trade-offs of trust, the future of platform engineering in an AI-powered world, and how inner-sourced guardrails could become the foundation for safe, scalable self-service. And yes, they touch on the scary parts too, like what happens when your AI agent starts doing more than you asked.</p><p>If you're wondering what it <em>actually</em> looks like to bring AI into DevOps without losing control, this one’s for you.</p><p>Wondering how AI-ready your DevOps is? Take a 2-minute survey <a href="https://ai-ready.facets.cloud/?utm_source=transistor&amp;utm_medium=podcast&amp;utm_campaign=ai_readiness&amp;utm_id=episode1">here</a> to find out.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In the very first episode of the AI x DevOps Podcast, we dive into how AI is actually changing infrastructure, not hypothetically, but line by line.</p><p>Rohit Raveendran is joined by Vincent De Smet, DevOps engineer at Handshakes.ai, and together they explore what happens when LLMs start writing Terraform, the difference between deterministic and vibe-coded infra, and why CDK might offer a more AI-friendly future than raw HCL.</p><p>They talk about the trade-offs of trust, the future of platform engineering in an AI-powered world, and how inner-sourced guardrails could become the foundation for safe, scalable self-service. And yes, they touch on the scary parts too, like what happens when your AI agent starts doing more than you asked.</p><p>If you're wondering what it <em>actually</em> looks like to bring AI into DevOps without losing control, this one’s for you.</p><p>Wondering how AI-ready your DevOps is? Take a 2-minute survey <a href="https://ai-ready.facets.cloud/?utm_source=transistor&amp;utm_medium=podcast&amp;utm_campaign=ai_readiness&amp;utm_id=episode1">here</a> to find out.</p>]]>
      </content:encoded>
      <pubDate>Thu, 08 May 2025 22:56:33 -0700</pubDate>
      <author>Facets.cloud</author>
      <enclosure url="https://media.transistor.fm/c73d6688/b63c608c.mp3" length="137823222" type="audio/mpeg"/>
      <itunes:author>Facets.cloud</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/REHAYuEGFxiu4c_DWoXJqI97uyPiifFLVBYMshu-Vi4/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS80ZTU4/ZWMyNmNlYTE0MGYx/MmNkZDhiNTk3NzA4/YmU1Yy5wbmc.jpg"/>
      <itunes:duration>3438</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In the very first episode of the AI x DevOps Podcast, we dive into how AI is actually changing infrastructure, not hypothetically, but line by line.</p><p>Rohit Raveendran is joined by Vincent De Smet, DevOps engineer at Handshakes.ai, and together they explore what happens when LLMs start writing Terraform, the difference between deterministic and vibe-coded infra, and why CDK might offer a more AI-friendly future than raw HCL.</p><p>They talk about the trade-offs of trust, the future of platform engineering in an AI-powered world, and how inner-sourced guardrails could become the foundation for safe, scalable self-service. And yes, they touch on the scary parts too, like what happens when your AI agent starts doing more than you asked.</p><p>If you're wondering what it <em>actually</em> looks like to bring AI into DevOps without losing control, this one’s for you.</p><p>Wondering how AI-ready your DevOps is? Take a 2-minute survey <a href="https://ai-ready.facets.cloud/?utm_source=transistor&amp;utm_medium=podcast&amp;utm_campaign=ai_readiness&amp;utm_id=episode1">here</a> to find out.</p>]]>
      </itunes:summary>
      <itunes:keywords>AI, DevOps, Infrastructure automation, Infrastructure orchestration, platform engineering</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://aixdevops.transistor.fm/people/rohit-raveendran">Rohit Raveendran</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/c73d6688/transcript.txt" type="text/plain"/>
    </item>
  </channel>
</rss>
