<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet href="/stylesheet.xsl" type="text/xsl"?>
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:sy="http://purl.org/rss/1.0/modules/syndication/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:podcast="https://podcastindex.org/namespace/1.0">
  <channel>
    <atom:link rel="self" type="application/rss+xml" href="https://feeds.transistor.fm/the-datastorage-com-podcast-building-for-tomorrows-cloud-infrastructure" title="MP3 Audio"/>
    <atom:link rel="hub" href="https://pubsubhubbub.appspot.com/"/>
    <podcast:podping usesPodping="true"/>
    <title>The Datastorage.com Podcast: Building for Tomorrow's Cloud Infrastructure</title>
    <generator>Transistor (https://transistor.fm)</generator>
    <itunes:new-feed-url>https://feeds.transistor.fm/the-datastorage-com-podcast-building-for-tomorrows-cloud-infrastructure</itunes:new-feed-url>
    <description>The Data Storage Podcast explores the infrastructure powering AI, cloud computing, and the next generation of data-driven companies.

Each episode features deep conversations with founders, engineers, investors, and infrastructure leaders building the future of AI workloads, multi-cloud architecture, synthetic data, storage economics, and distributed systems.

We go beyond surface-level cloud talk to examine:
	•	AI infrastructure strategy
	•	Multi-cloud architecture decisions
	•	Cloud cost optimization
	•	Data storage economics
	•	Synthetic data &amp; robotics
	•	Emerging “neocloud” providers
	•	Enterprise storage modernization

If you’re a technical founder, infrastructure leader, system architect, or investor tracking the evolution of cloud and AI, this podcast is built for you.

Produced by Datastorage.com.</description>
    <copyright>© 2026 Datastorage.com</copyright>
    <podcast:guid>951fb8b1-30a1-5406-9779-f152f088efaf</podcast:guid>
    <podcast:locked>yes</podcast:locked>
    <language>en</language>
    <pubDate>Tue, 12 May 2026 10:21:11 -0400</pubDate>
    <lastBuildDate>Tue, 12 May 2026 10:22:13 -0400</lastBuildDate>
    <link>https://datastorage.com/podcasts/</link>
    <image>
      <url>https://img.transistorcdn.com/zDdtwNN2i0ML2EvzbEF6LCKvKf1bwnDHR4f5xBkff4c/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9iNjU0/MmQ3NGU4ZWZkMjAw/NzA5ZTY0YmJkZDNh/YTZmNC5wbmc.jpg</url>
      <title>The Datastorage.com Podcast: Building for Tomorrow's Cloud Infrastructure</title>
      <link>https://datastorage.com/podcasts/</link>
    </image>
    <itunes:category text="Technology"/>
    <itunes:category text="Business">
      <itunes:category text="Management"/>
    </itunes:category>
    <itunes:type>episodic</itunes:type>
    <itunes:author>Datastorage.com</itunes:author>
    <itunes:image href="https://img.transistorcdn.com/zDdtwNN2i0ML2EvzbEF6LCKvKf1bwnDHR4f5xBkff4c/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9iNjU0/MmQ3NGU4ZWZkMjAw/NzA5ZTY0YmJkZDNh/YTZmNC5wbmc.jpg"/>
    <itunes:summary>The Data Storage Podcast explores the infrastructure powering AI, cloud computing, and the next generation of data-driven companies.

Each episode features deep conversations with founders, engineers, investors, and infrastructure leaders building the future of AI workloads, multi-cloud architecture, synthetic data, storage economics, and distributed systems.

We go beyond surface-level cloud talk to examine:
	•	AI infrastructure strategy
	•	Multi-cloud architecture decisions
	•	Cloud cost optimization
	•	Data storage economics
	•	Synthetic data &amp; robotics
	•	Emerging “neocloud” providers
	•	Enterprise storage modernization

If you’re a technical founder, infrastructure leader, system architect, or investor tracking the evolution of cloud and AI, this podcast is built for you.

Produced by Datastorage.com.</itunes:summary>
    <itunes:subtitle>The Data Storage Podcast explores the infrastructure powering AI, cloud computing, and the next generation of data-driven companies.</itunes:subtitle>
    <itunes:keywords>AI Infrastructure, Cloud Storage, Multi-cloud, Data infrastructure, Cloud economics, Infrastructure venture capital, machine learning</itunes:keywords>
    <itunes:owner>
      <itunes:name>Datastorage.com</itunes:name>
    </itunes:owner>
    <itunes:complete>No</itunes:complete>
    <itunes:explicit>No</itunes:explicit>
    <item>
      <title>Inside the GPU Carrier Layer: Sunny Smith of Massed Compute on NeoClouds, AI Infrastructure &amp; the Future of Enterprise Compute</title>
      <itunes:episode>7</itunes:episode>
      <podcast:episode>7</podcast:episode>
      <itunes:title>Inside the GPU Carrier Layer: Sunny Smith of Massed Compute on NeoClouds, AI Infrastructure &amp; the Future of Enterprise Compute</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">37a82ecd-bae7-43db-9d5c-8accf1795f45</guid>
      <link>https://datastorage.com/podcasts/inside-the-gpu-carrier-layer-sunny-smith-of-mass-compute-on-neo-clouds-ai-infrastructure-the-future-of-enterprise-compute/</link>
      <description>
        <![CDATA[<p>What does it actually mean to own the GPU layer of AI? In this episode, John Kostros sits down with Sunny Smith, Founder and CTO of Massed Compute, one of the few operators building at the true infrastructure layer: owning hardware, deploying GPUs at scale, and powering what he calls the "underlying carrier layer" for next-gen cloud.</p><p>Sunny has lived through every major tech wave: the PC revolution, the internet era, big data, and now AI. He brings a rare perspective: a full-stack executive who still codes, and who went from selling an ISP in the 90s to buying 20 servers and 160 GPUs and building a multi-tiered GPU cloud from the ground up.<br>This episode is a no-fluff deep dive into the real economics and operational realities of AI infrastructure in 2025.</p><p>In this episode, we cover:</p><p>Why most "Neo Clouds" don't actually own their GPUs, and why that matters for you as a buyer<br>The "dropshipping infrastructure" problem and how it breaks down on the support side<br>Why storage is the true Achilles heel of AI (not compute)<br>The data locality problem: why you're landlocked by where your data lives<br>How Massed Compute's "Local Metal" offering solves the enterprise on-prem AI dilemma<br>Why agentic AI is now running real-time cluster optimization, replacing what used to take days of human effort<br>The GPU scarcity market dynamics: preemptible contracts, 40% down payments, and a seller's market unlike anything Sunny has seen in 30 years<br>What "post-Kubernetes" infrastructure might look like, and why optimization is the real moat<br>Why enterprise GPU demand could 10x once Fortune 1000 companies finally lean in<br>The capital question: AI has now consumed more inflation-adjusted dollars than building the entire US interstate highway system</p><p>Who Should Listen:</p><p>CTOs, Engineering Leaders &amp; Cloud Architects: If you're evaluating GPU infrastructure providers, this episode will teach you the right due diligence questions to ask, starting with "Do you actually own the GPU?"</p><p>FinOps &amp; Procurement Teams: Understand why NVMe prices have tripled, why storage is no longer a rounding error on your cloud bill, and how to think about OpEx vs. CapEx for GPU infrastructure.</p><p>AI Builders &amp; Developers: Whether you're fine-tuning models, building RAG pipelines, or deploying inference workloads, Sunny breaks down the infrastructure decisions that are silently costing you performance.</p><p>Enterprise Decision-Makers in FinTech, Healthcare &amp; Regulated Industries: Learn why the "move everything to the cloud" era is over for IP-sensitive data, and what the next-gen on-prem + cloud hybrid model looks like.</p><p>Founders &amp; Operators in the Neo Cloud Space: Get a rare insider view of how a bootstrapped GPU infrastructure company competed, survived, and evolved alongside the hyperscalers.</p><p>About Massed Compute:<br>Massed Compute is a GPU infrastructure company that owns and operates its hardware, offering bare metal, virtual machines, and Local Metal (on-prem GPU leasing). They serve developers, AI companies, and enterprises needing high-performance compute with white-glove, layer-7 support.<br>About DataStorage.com:<br>DataStorage.com is the go-to resource for builders navigating cloud infrastructure, GPU economics, and AI data strategy. We go deep with the operators, founders, and engineers actually shaping the future: no hype, just signal.</p><p>🔔 Subscribe for new episodes every month.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>What does it actually mean to own the GPU layer of AI? In this episode, John Kostros sits down with Sunny Smith, Founder and CTO of Massed Compute, one of the few operators building at the true infrastructure layer: owning hardware, deploying GPUs at scale, and powering what he calls the "underlying carrier layer" for next-gen cloud.</p><p>Sunny has lived through every major tech wave: the PC revolution, the internet era, big data, and now AI. He brings a rare perspective: a full-stack executive who still codes, and who went from selling an ISP in the 90s to buying 20 servers and 160 GPUs and building a multi-tiered GPU cloud from the ground up.<br>This episode is a no-fluff deep dive into the real economics and operational realities of AI infrastructure in 2025.</p><p>In this episode, we cover:</p><p>Why most "Neo Clouds" don't actually own their GPUs, and why that matters for you as a buyer<br>The "dropshipping infrastructure" problem and how it breaks down on the support side<br>Why storage is the true Achilles heel of AI (not compute)<br>The data locality problem: why you're landlocked by where your data lives<br>How Massed Compute's "Local Metal" offering solves the enterprise on-prem AI dilemma<br>Why agentic AI is now running real-time cluster optimization, replacing what used to take days of human effort<br>The GPU scarcity market dynamics: preemptible contracts, 40% down payments, and a seller's market unlike anything Sunny has seen in 30 years<br>What "post-Kubernetes" infrastructure might look like, and why optimization is the real moat<br>Why enterprise GPU demand could 10x once Fortune 1000 companies finally lean in<br>The capital question: AI has now consumed more inflation-adjusted dollars than building the entire US interstate highway system</p><p>Who Should Listen:</p><p>CTOs, Engineering Leaders &amp; Cloud Architects: If you're evaluating GPU infrastructure providers, this episode will teach you the right due diligence questions to ask, starting with "Do you actually own the GPU?"</p><p>FinOps &amp; Procurement Teams: Understand why NVMe prices have tripled, why storage is no longer a rounding error on your cloud bill, and how to think about OpEx vs. CapEx for GPU infrastructure.</p><p>AI Builders &amp; Developers: Whether you're fine-tuning models, building RAG pipelines, or deploying inference workloads, Sunny breaks down the infrastructure decisions that are silently costing you performance.</p><p>Enterprise Decision-Makers in FinTech, Healthcare &amp; Regulated Industries: Learn why the "move everything to the cloud" era is over for IP-sensitive data, and what the next-gen on-prem + cloud hybrid model looks like.</p><p>Founders &amp; Operators in the Neo Cloud Space: Get a rare insider view of how a bootstrapped GPU infrastructure company competed, survived, and evolved alongside the hyperscalers.</p><p>About Massed Compute:<br>Massed Compute is a GPU infrastructure company that owns and operates its hardware, offering bare metal, virtual machines, and Local Metal (on-prem GPU leasing). They serve developers, AI companies, and enterprises needing high-performance compute with white-glove, layer-7 support.<br>About DataStorage.com:<br>DataStorage.com is the go-to resource for builders navigating cloud infrastructure, GPU economics, and AI data strategy. We go deep with the operators, founders, and engineers actually shaping the future: no hype, just signal.</p><p>🔔 Subscribe for new episodes every month.</p>]]>
      </content:encoded>
      <pubDate>Tue, 12 May 2026 10:20:35 -0400</pubDate>
      <author>Datastorage.com</author>
      <enclosure url="https://media.transistor.fm/c1cba70e/8196fe28.mp3" length="45292422" type="audio/mpeg"/>
      <itunes:author>Datastorage.com</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/grCDhNSg9u3GbmXNd_ALhJ0lDgwbKlIrxrSR-aCD6uI/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8xNDgy/MTE4NDNjM2Q5YzMy/YTMxOWM5YTQyNDJm/MDhhZS5wbmc.jpg"/>
      <itunes:duration>2827</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>What does it actually mean to own the GPU layer of AI? In this episode, John Kostros sits down with Sunny Smith, Founder and CTO of Massed Compute, one of the few operators building at the true infrastructure layer: owning hardware, deploying GPUs at scale, and powering what he calls the "underlying carrier layer" for next-gen cloud.</p><p>Sunny has lived through every major tech wave: the PC revolution, the internet era, big data, and now AI. He brings a rare perspective: a full-stack executive who still codes, and who went from selling an ISP in the 90s to buying 20 servers and 160 GPUs and building a multi-tiered GPU cloud from the ground up.<br>This episode is a no-fluff deep dive into the real economics and operational realities of AI infrastructure in 2025.</p><p>In this episode, we cover:</p><p>Why most "Neo Clouds" don't actually own their GPUs, and why that matters for you as a buyer<br>The "dropshipping infrastructure" problem and how it breaks down on the support side<br>Why storage is the true Achilles heel of AI (not compute)<br>The data locality problem: why you're landlocked by where your data lives<br>How Massed Compute's "Local Metal" offering solves the enterprise on-prem AI dilemma<br>Why agentic AI is now running real-time cluster optimization, replacing what used to take days of human effort<br>The GPU scarcity market dynamics: preemptible contracts, 40% down payments, and a seller's market unlike anything Sunny has seen in 30 years<br>What "post-Kubernetes" infrastructure might look like, and why optimization is the real moat<br>Why enterprise GPU demand could 10x once Fortune 1000 companies finally lean in<br>The capital question: AI has now consumed more inflation-adjusted dollars than building the entire US interstate highway system</p><p>Who Should Listen:</p><p>CTOs, Engineering Leaders &amp; Cloud Architects: If you're evaluating GPU infrastructure providers, this episode will teach you the right due diligence questions to ask, starting with "Do you actually own the GPU?"</p><p>FinOps &amp; Procurement Teams: Understand why NVMe prices have tripled, why storage is no longer a rounding error on your cloud bill, and how to think about OpEx vs. CapEx for GPU infrastructure.</p><p>AI Builders &amp; Developers: Whether you're fine-tuning models, building RAG pipelines, or deploying inference workloads, Sunny breaks down the infrastructure decisions that are silently costing you performance.</p><p>Enterprise Decision-Makers in FinTech, Healthcare &amp; Regulated Industries: Learn why the "move everything to the cloud" era is over for IP-sensitive data, and what the next-gen on-prem + cloud hybrid model looks like.</p><p>Founders &amp; Operators in the Neo Cloud Space: Get a rare insider view of how a bootstrapped GPU infrastructure company competed, survived, and evolved alongside the hyperscalers.</p><p>About Massed Compute:<br>Massed Compute is a GPU infrastructure company that owns and operates its hardware, offering bare metal, virtual machines, and Local Metal (on-prem GPU leasing). They serve developers, AI companies, and enterprises needing high-performance compute with white-glove, layer-7 support.<br>About DataStorage.com:<br>DataStorage.com is the go-to resource for builders navigating cloud infrastructure, GPU economics, and AI data strategy. We go deep with the operators, founders, and engineers actually shaping the future: no hype, just signal.</p><p>🔔 Subscribe for new episodes every month.</p>]]>
      </itunes:summary>
      <itunes:keywords>NeoClouds, AI Infrastructure</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
    </item>
    <item>
      <title>Fusion Fund’s Lu Zhang on AI Infrastructure, Data Quality, Edge AI &amp; the Future of Venture</title>
      <itunes:episode>6</itunes:episode>
      <podcast:episode>6</podcast:episode>
      <itunes:title>Fusion Fund’s Lu Zhang on AI Infrastructure, Data Quality, Edge AI &amp; the Future of Venture</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">0d7f9a72-24c9-4c73-a0cd-00913c6d7a2f</guid>
      <link>https://share.transistor.fm/s/773d705e</link>
      <description>
        <![CDATA[<p>In this episode of the DataStorage.com podcast, John sits down with Lu Zhang, Founder and Managing Partner of Fusion Fund, to talk about where AI infrastructure, data quality, cloud architecture, and venture capital are headed next.</p><p>Lu shares her perspective as both a former founder and an early-stage investor backing deep tech, healthcare AI, infrastructure, and enterprise startups. The conversation explores what actually matters in AI today: high-quality data, inference cost, architecture design, multimodal data, edge AI, governance, and the growing importance of specialized agents over one-size-fits-all models.</p><p>They also discuss how startups should think about hyperscalers vs. neo-cloud providers, why data curation is becoming mission-critical, how regulated industries like healthcare and finance are approaching AI adoption, and what Lu learned from this year’s NVIDIA GTC.</p><p>Topics covered:<br> • What Lu Zhang looks for in early-stage AI founders<br> • Why data quality matters more than data quantity<br> • AI infrastructure vs. AI applications<br> • Inference cost, model efficiency, and architecture design<br> • Edge AI and the future of AI deployment<br> • Data governance, compliance, and regulated industries<br> • Multimodal data and enterprise memory graphs<br> • Otter, MCP, conversational data, and AI agents<br> • Hyperscalers vs. 
alternative cloud and storage providers<br> • The future of consolidation in cloud infrastructure<br> • AI’s environmental impact and the role of efficient data systems<br> • NVIDIA GTC, healthcare AI, and where venture is heading next</p><p>If you’re a founder, investor, infrastructure leader, or enterprise operator trying to understand where AI is really going, this episode is packed with practical insights.</p><p>Subscribe to DataStorage.com for more conversations on AI infrastructure, data storage, cloud, and the future of enterprise technology.</p><p>#AIInfrastructure #LuZhang #FusionFund #DataQuality #EdgeAI #CloudInfrastructure #VentureCapital #HealthcareAI #MultimodalAI #DataStorage</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode of the DataStorage.com podcast, John sits down with Lu Zhang, Founder and Managing Partner of Fusion Fund, to talk about where AI infrastructure, data quality, cloud architecture, and venture capital are headed next.</p><p>Lu shares her perspective as both a former founder and an early-stage investor backing deep tech, healthcare AI, infrastructure, and enterprise startups. The conversation explores what actually matters in AI today: high-quality data, inference cost, architecture design, multimodal data, edge AI, governance, and the growing importance of specialized agents over one-size-fits-all models.</p><p>They also discuss how startups should think about hyperscalers vs. neo-cloud providers, why data curation is becoming mission-critical, how regulated industries like healthcare and finance are approaching AI adoption, and what Lu learned from this year’s NVIDIA GTC.</p><p>Topics covered:<br> • What Lu Zhang looks for in early-stage AI founders<br> • Why data quality matters more than data quantity<br> • AI infrastructure vs. AI applications<br> • Inference cost, model efficiency, and architecture design<br> • Edge AI and the future of AI deployment<br> • Data governance, compliance, and regulated industries<br> • Multimodal data and enterprise memory graphs<br> • Otter, MCP, conversational data, and AI agents<br> • Hyperscalers vs. 
alternative cloud and storage providers<br> • The future of consolidation in cloud infrastructure<br> • AI’s environmental impact and the role of efficient data systems<br> • NVIDIA GTC, healthcare AI, and where venture is heading next</p><p>If you’re a founder, investor, infrastructure leader, or enterprise operator trying to understand where AI is really going, this episode is packed with practical insights.</p><p>Subscribe to DataStorage.com for more conversations on AI infrastructure, data storage, cloud, and the future of enterprise technology.</p><p>#AIInfrastructure #LuZhang #FusionFund #DataQuality #EdgeAI #CloudInfrastructure #VentureCapital #HealthcareAI #MultimodalAI #DataStorage</p>]]>
      </content:encoded>
      <pubDate>Tue, 31 Mar 2026 00:39:39 -0400</pubDate>
      <author>Datastorage.com</author>
      <enclosure url="https://media.transistor.fm/773d705e/39432176.mp3" length="42126664" type="audio/mpeg"/>
      <itunes:author>Datastorage.com</itunes:author>
      <itunes:duration>2631</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode of the DataStorage.com podcast, John sits down with Lu Zhang, Founder and Managing Partner of Fusion Fund, to talk about where AI infrastructure, data quality, cloud architecture, and venture capital are headed next.</p><p>Lu shares her perspective as both a former founder and an early-stage investor backing deep tech, healthcare AI, infrastructure, and enterprise startups. The conversation explores what actually matters in AI today: high-quality data, inference cost, architecture design, multimodal data, edge AI, governance, and the growing importance of specialized agents over one-size-fits-all models.</p><p>They also discuss how startups should think about hyperscalers vs. neo-cloud providers, why data curation is becoming mission-critical, how regulated industries like healthcare and finance are approaching AI adoption, and what Lu learned from this year’s NVIDIA GTC.</p><p>Topics covered:<br> • What Lu Zhang looks for in early-stage AI founders<br> • Why data quality matters more than data quantity<br> • AI infrastructure vs. AI applications<br> • Inference cost, model efficiency, and architecture design<br> • Edge AI and the future of AI deployment<br> • Data governance, compliance, and regulated industries<br> • Multimodal data and enterprise memory graphs<br> • Otter, MCP, conversational data, and AI agents<br> • Hyperscalers vs. 
alternative cloud and storage providers<br> • The future of consolidation in cloud infrastructure<br> • AI’s environmental impact and the role of efficient data systems<br> • NVIDIA GTC, healthcare AI, and where venture is heading next</p><p>If you’re a founder, investor, infrastructure leader, or enterprise operator trying to understand where AI is really going, this episode is packed with practical insights.</p><p>Subscribe to DataStorage.com for more conversations on AI infrastructure, data storage, cloud, and the future of enterprise technology.</p><p>#AIInfrastructure #LuZhang #FusionFund #DataQuality #EdgeAI #CloudInfrastructure #VentureCapital #HealthcareAI #MultimodalAI #DataStorage</p>]]>
      </itunes:summary>
      <itunes:keywords>AI Infrastructure, Cloud Storage, Multi-cloud, Data infrastructure, Cloud economics, Infrastructure venture capital, machine learning</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
    </item>
    <item>
      <title>The Future of AI Infrastructure: GPUs, Neo-Clouds, and Why Data Will Anchor the AI Economy | Russ Artzt</title>
      <itunes:episode>5</itunes:episode>
      <podcast:episode>5</podcast:episode>
      <itunes:title>The Future of AI Infrastructure: GPUs, Neo-Clouds, and Why Data Will Anchor the AI Economy | Russ Artzt</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">dab8b53e-ef49-49ee-9258-b855bca33cb6</guid>
      <link>https://share.transistor.fm/s/ed05044c</link>
      <description>
        <![CDATA[<p>Russ Artzt, co-founder of Computer Associates (CA Technologies), joins the DataStorage.com podcast to discuss how AI is reshaping enterprise infrastructure.</p><p><br></p><p>Drawing on decades of experience spanning the mainframe era, the rise of SaaS, cloud computing, and today’s AI boom, Russ explains why the current shift may be the most transformative yet.</p><p><br></p><p>In this episode we cover:</p><p>• Why GPUs changed the economics of AI infrastructure</p><p>• The rise of specialized <strong>neo-cloud providers</strong> like CoreWeave</p><p>• How hyperscalers, multi-cloud, and hybrid architectures will evolve</p><p>• Why <strong>data and storage are becoming the anchor of the AI ecosystem</strong></p><p>• The role of alternative storage providers and the impact of egress fees</p><p>• How companies should design infrastructure for the next decade of AI</p><p><br></p><p>If you’re building AI applications, managing infrastructure, or evaluating alternatives to hyperscaler cloud providers, this conversation offers a strategic look at where the industry is heading.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Russ Artzt, co-founder of Computer Associates (CA Technologies), joins the DataStorage.com podcast to discuss how AI is reshaping enterprise infrastructure.</p><p><br></p><p>Drawing on decades of experience spanning the mainframe era, the rise of SaaS, cloud computing, and today’s AI boom, Russ explains why the current shift may be the most transformative yet.</p><p><br></p><p>In this episode we cover:</p><p>• Why GPUs changed the economics of AI infrastructure</p><p>• The rise of specialized <strong>neo-cloud providers</strong> like CoreWeave</p><p>• How hyperscalers, multi-cloud, and hybrid architectures will evolve</p><p>• Why <strong>data and storage are becoming the anchor of the AI ecosystem</strong></p><p>• The role of alternative storage providers and the impact of egress fees</p><p>• How companies should design infrastructure for the next decade of AI</p><p><br></p><p>If you’re building AI applications, managing infrastructure, or evaluating alternatives to hyperscaler cloud providers, this conversation offers a strategic look at where the industry is heading.</p>]]>
      </content:encoded>
      <pubDate>Tue, 10 Mar 2026 15:50:01 -0400</pubDate>
      <author>Datastorage.com</author>
      <enclosure url="https://media.transistor.fm/ed05044c/021f9e16.mp3" length="15747619" type="audio/mpeg"/>
      <itunes:author>Datastorage.com</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/zuHNuNQAlOGVf_LPUNkcr2HTeNXUz_xR6jH9y1ko5U4/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8wMjc4/MDllNGI4MDU3ODI4/OTFjZTRhNmVlZDhj/ZWIzOS5wbmc.jpg"/>
      <itunes:duration>1963</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Russ Artzt, co-founder of Computer Associates (CA Technologies), joins the DataStorage.com podcast to discuss how AI is reshaping enterprise infrastructure.</p><p><br></p><p>Drawing on decades of experience spanning the mainframe era, the rise of SaaS, cloud computing, and today’s AI boom, Russ explains why the current shift may be the most transformative yet.</p><p><br></p><p>In this episode we cover:</p><p>• Why GPUs changed the economics of AI infrastructure</p><p>• The rise of specialized <strong>neo-cloud providers</strong> like CoreWeave</p><p>• How hyperscalers, multi-cloud, and hybrid architectures will evolve</p><p>• Why <strong>data and storage are becoming the anchor of the AI ecosystem</strong></p><p>• The role of alternative storage providers and the impact of egress fees</p><p>• How companies should design infrastructure for the next decade of AI</p><p><br></p><p>If you’re building AI applications, managing infrastructure, or evaluating alternatives to hyperscaler cloud providers, this conversation offers a strategic look at where the industry is heading.</p>]]>
      </itunes:summary>
      <itunes:keywords>AI Infrastructure, Cloud Storage, Multi-cloud, Data infrastructure, Cloud economics, Infrastructure venture capital, machine learning</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
    </item>
    <item>
      <title>Synthetic Data, Robotics &amp; AI: Symage on How NASA, Warehouses &amp; LLMs Are Shaping the Future</title>
      <itunes:episode>4</itunes:episode>
      <podcast:episode>4</podcast:episode>
      <itunes:title>Synthetic Data, Robotics &amp; AI: Symage on How NASA, Warehouses &amp; LLMs Are Shaping the Future</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">d90969f4-4ff0-425a-a450-995889939506</guid>
      <link>https://share.transistor.fm/s/027c0a27</link>
      <description>
        <![CDATA[<p>Synthetic data is rapidly becoming one of the most important building blocks in AI — from training autonomous vehicles to powering robotics on Mars.</p><p>In this episode, we sit down with Brian Geisel, Founder of Symage from Geisel Software, to explore how synthetic data, robotics, machine learning, and large language models (LLMs) are reshaping AI infrastructure.</p><p>We cover:<br> 🚀 How NASA trained Mars rovers using synthetic environments<br> 🤖 Why humanoid robots may not be the future<br> 📦 How robotics is transforming warehouse automation and micro-fulfillment centers<br> 🧠 Why foundation models must rely on synthetic data going forward<br> 📊 The hidden storage challenges behind AI training<br> 🔐 How synthetic data solves PII and regulated industry problems<br> ⚡ The balance between real-world data and AI-generated data</p><p>Brian explains why synthetic data isn’t just about generating more data — it’s about generating better, targeted data that improves model performance while reducing waste.</p><p>We also dive into:<br> • AI model training strategies<br> • GPU vs object storage considerations<br> • High-throughput data movement<br> • Robotics + physical AI<br> • The future of multimodal synthetic generation</p><p>If you’re building AI systems, training models, managing data infrastructure, or investing in AI and robotics, this conversation is packed with insights.</p><p>Explore more content like this at datastorage.com </p><p>Learn more about Symage: <br>https://geisel.software/synthetic-data-storage-strategies/?utm_source=datastorage&amp;utm_medium=podcast</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Synthetic data is rapidly becoming one of the most important building blocks in AI — from training autonomous vehicles to powering robotics on Mars.</p><p>In this episode, we sit down with Brian Geisel, Founder of Symage from Geisel Software, to explore how synthetic data, robotics, machine learning, and large language models (LLMs) are reshaping AI infrastructure.</p><p>We cover:<br> 🚀 How NASA trained Mars rovers using synthetic environments<br> 🤖 Why humanoid robots may not be the future<br> 📦 How robotics is transforming warehouse automation and micro-fulfillment centers<br> 🧠 Why foundation models must rely on synthetic data going forward<br> 📊 The hidden storage challenges behind AI training<br> 🔐 How synthetic data solves PII and regulated industry problems<br> ⚡ The balance between real-world data and AI-generated data</p><p>Brian explains why synthetic data isn’t just about generating more data — it’s about generating better, targeted data that improves model performance while reducing waste.</p><p>We also dive into:<br> • AI model training strategies<br> • GPU vs object storage considerations<br> • High-throughput data movement<br> • Robotics + physical AI<br> • The future of multimodal synthetic generation</p><p>If you’re building AI systems, training models, managing data infrastructure, or investing in AI and robotics, this conversation is packed with insights.</p><p>Explore more content like this at datastorage.com </p><p>Learn more about Symage: <br>https://geisel.software/synthetic-data-storage-strategies/?utm_source=datastorage&amp;utm_medium=podcast</p>]]>
      </content:encoded>
      <pubDate>Tue, 24 Feb 2026 09:09:22 -0500</pubDate>
      <author>Datastorage.com</author>
      <enclosure url="https://media.transistor.fm/027c0a27/614b5ee6.mp3" length="57575373" type="audio/mpeg"/>
      <itunes:author>Datastorage.com</itunes:author>
      <itunes:duration>2398</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Synthetic data is rapidly becoming one of the most important building blocks in AI — from training autonomous vehicles to powering robotics on Mars.</p><p>In this episode, we sit down with Brian Geisel, Founder of Symage from Geisel Software, to explore how synthetic data, robotics, machine learning, and large language models (LLMs) are reshaping AI infrastructure.</p><p>We cover:<br> 🚀 How NASA trained Mars rovers using synthetic environments<br> 🤖 Why humanoid robots may not be the future<br> 📦 How robotics is transforming warehouse automation and micro-fulfillment centers<br> 🧠 Why foundation models must rely on synthetic data going forward<br> 📊 The hidden storage challenges behind AI training<br> 🔐 How synthetic data solves PII and regulated industry problems<br> ⚡ The balance between real-world data and AI-generated data</p><p>Brian explains why synthetic data isn’t just about generating more data — it’s about generating better, targeted data that improves model performance while reducing waste.</p><p>We also dive into:<br> • AI model training strategies<br> • GPU vs object storage considerations<br> • High-throughput data movement<br> • Robotics + physical AI<br> • The future of multimodal synthetic generation</p><p>If you’re building AI systems, training models, managing data infrastructure, or investing in AI and robotics, this conversation is packed with insights.</p><p>Explore more content like this at datastorage.com </p><p>Learn more about Symage: <br>https://geisel.software/synthetic-data-storage-strategies/?utm_source=datastorage&amp;utm_medium=podcast</p>]]>
      </itunes:summary>
      <itunes:keywords>AI Infrastructure, Cloud Storage, Multi-cloud, Data infrastructure, Cloud economics, Infrastructure venture capital, machine learning</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
    </item>
    <item>
      <title>How IONOS Is Challenging Hyperscalers with $4.99/TB Object Storage &amp; AI-Ready Cloud Infrastructure</title>
      <itunes:episode>3</itunes:episode>
      <podcast:episode>3</podcast:episode>
      <itunes:title>How IONOS Is Challenging Hyperscalers with $4.99/TB Object Storage &amp; AI-Ready Cloud Infrastructure</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">de4f1d04-e466-4b6a-ac5f-927991149e01</guid>
      <link>https://share.transistor.fm/s/c975c3f0</link>
      <description>
        <![CDATA[<p>What does cloud infrastructure look like after hyperscalers—and how does AI change the rules?</p><p>In this episode of the Datastorage.com podcast, we’re joined by Seth Helgesen, Solutions Architect at IONOS, to unpack how cloud, object storage, and AI infrastructure are evolving—and why simplicity, performance, and transparency are becoming the real differentiators.</p><p>IONOS started as a domains and hosting company in Europe and has grown into a full-stack cloud provider offering always-hot object storage, CPU and GPU compute, block storage, and hybrid cloud infrastructure—all at a fraction of hyperscaler costs.</p><p>What we cover in this episode:<br> • Why always-hot object storage matters in the AI era<br> • How IONOS delivers $4.99/TB object storage with low egress fees<br> • S3 compatibility, IAM support, and migrating from AWS or other hyperscalers<br> • The rise of hybrid and multi-cloud architectures for AI workloads<br> • Why data proximity is critical for GPU training and inference<br> • Edge computing, smaller LLMs, and the future of multimodal AI<br> • How IONOS simplifies cloud infrastructure without hyperscaler complexity<br> • Who IONOS is best positioned to serve: MSPs, SaaS platforms, media, AI, and enterprise IT</p><p>If you’re evaluating alternatives to AWS, thinking about hybrid cloud strategies, or building AI-driven products that require fast, affordable access to data, this conversation is for you.</p><p>👉 Learn more at Datastorage.com<br>👉 Explore IONOS cloud infrastructure and object storage: <br>👉 Subscribe for more conversations on cloud, AI, and modern data infrastructure</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>What does cloud infrastructure look like after hyperscalers—and how does AI change the rules?</p><p>In this episode of the Datastorage.com podcast, we’re joined by Seth Helgesen, Solutions Architect at IONOS, to unpack how cloud, object storage, and AI infrastructure are evolving—and why simplicity, performance, and transparency are becoming the real differentiators.</p><p>IONOS started as a domains and hosting company in Europe and has grown into a full-stack cloud provider offering always-hot object storage, CPU and GPU compute, block storage, and hybrid cloud infrastructure—all at a fraction of hyperscaler costs.</p><p>What we cover in this episode:<br> • Why always-hot object storage matters in the AI era<br> • How IONOS delivers $4.99/TB object storage with low egress fees<br> • S3 compatibility, IAM support, and migrating from AWS or other hyperscalers<br> • The rise of hybrid and multi-cloud architectures for AI workloads<br> • Why data proximity is critical for GPU training and inference<br> • Edge computing, smaller LLMs, and the future of multimodal AI<br> • How IONOS simplifies cloud infrastructure without hyperscaler complexity<br> • Who IONOS is best positioned to serve: MSPs, SaaS platforms, media, AI, and enterprise IT</p><p>If you’re evaluating alternatives to AWS, thinking about hybrid cloud strategies, or building AI-driven products that require fast, affordable access to data, this conversation is for you.</p><p>👉 Learn more at Datastorage.com<br>👉 Explore IONOS cloud infrastructure and object storage: <br>👉 Subscribe for more conversations on cloud, AI, and modern data infrastructure</p>]]>
      </content:encoded>
      <pubDate>Tue, 24 Feb 2026 09:06:41 -0500</pubDate>
      <author>Datastorage.com</author>
      <enclosure url="https://media.transistor.fm/c975c3f0/22dee136.mp3" length="68119511" type="audio/mpeg"/>
      <itunes:author>Datastorage.com</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/d-1w85DlZJ81xv62tIFk80tSoF1xsk_0tJElPLLCAYc/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9jZDcx/ZTliNmNkMWM0NTAy/NWE4MTc2NGNmYTVh/MGNlOC5qcGc.jpg"/>
      <itunes:duration>2837</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>What does cloud infrastructure look like after hyperscalers—and how does AI change the rules?</p><p>In this episode of the Datastorage.com podcast, we’re joined by Seth Helgesen, Solutions Architect at IONOS, to unpack how cloud, object storage, and AI infrastructure are evolving—and why simplicity, performance, and transparency are becoming the real differentiators.</p><p>IONOS started as a domains and hosting company in Europe and has grown into a full-stack cloud provider offering always-hot object storage, CPU and GPU compute, block storage, and hybrid cloud infrastructure—all at a fraction of hyperscaler costs.</p><p>What we cover in this episode:<br> • Why always-hot object storage matters in the AI era<br> • How IONOS delivers $4.99/TB object storage with low egress fees<br> • S3 compatibility, IAM support, and migrating from AWS or other hyperscalers<br> • The rise of hybrid and multi-cloud architectures for AI workloads<br> • Why data proximity is critical for GPU training and inference<br> • Edge computing, smaller LLMs, and the future of multimodal AI<br> • How IONOS simplifies cloud infrastructure without hyperscaler complexity<br> • Who IONOS is best positioned to serve: MSPs, SaaS platforms, media, AI, and enterprise IT</p><p>If you’re evaluating alternatives to AWS, thinking about hybrid cloud strategies, or building AI-driven products that require fast, affordable access to data, this conversation is for you.</p><p>👉 Learn more at Datastorage.com<br>👉 Explore IONOS cloud infrastructure and object storage: <br>👉 Subscribe for more conversations on cloud, AI, and modern data infrastructure</p>]]>
      </itunes:summary>
      <itunes:keywords>AI Infrastructure, Cloud Storage, Multi-cloud, Data infrastructure, Cloud economics, Infrastructure venture capital, machine learning</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
    </item>
    <item>
      <title>Why Traditional AI Fails in Drug Discovery - Elucidata's Data-Centric AI Approach</title>
      <itunes:episode>2</itunes:episode>
      <podcast:episode>2</podcast:episode>
      <itunes:title>Why Traditional AI Fails in Drug Discovery - Elucidata's Data-Centric AI Approach</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">879f7151-0ee6-4b5c-bbbd-f9aa67753b69</guid>
      <link>https://share.transistor.fm/s/96ed3ef8</link>
      <description>
        <![CDATA[<p>AI is great at pattern matching — but what happens when the most valuable insights don’t fit the pattern?</p><p>In this episode of DataStorage.com, we sit down with Abhishek "AJ" Jha, Founder &amp; CEO of Elucidata, to break down why traditional AI approaches fail in drug discovery — and how data-centric AI is reshaping the future of pharma, healthcare, and beyond.</p><p>AJ shares Elucidata’s journey from resisting the “AI” label to fully embracing it in the post-LLM era — not by building bigger models, but by focusing on data quality, governance, and out-of-distribution problems that actually matter in regulated industries.</p><p>🔍 In this conversation, we cover:<br> • Why pattern-matching AI breaks down in drug discovery and healthcare<br> • What “out-of-distribution” problems are — and why they’re so valuable<br> • How data-centric AI differs from model-centric AI<br> • The role of human-in-the-loop AI in high-stakes industries<br> • Preparing multimodal data (text, tabular, imaging) for AI-ready use cases<br> • Why starting with the use case beats “cleaning all the data”<br> • AI infrastructure decisions, cloud costs, GPUs, and egress challenges<br> • Deploying AI securely in regulated environments (HIPAA, SOC 2, GDPR)<br> • Whether companies should build their own models or fine-tune existing ones</p><p>This episode is a must-watch for:<br> • Pharma &amp; biotech leaders<br> • AI and data infrastructure teams<br> • Founders building AI products in regulated industries<br> • Anyone questioning whether bigger models = better AI</p><p>⸻</p><p>🎧 Subscribe to DataStorage.com for deep conversations on AI, cloud infrastructure, data storage, and the real systems powering the AI economy.</p><p>🔗 Learn more about Elucidata: https://elucidata.io<br>🔗 More episodes: https://datastorage.com</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>AI is great at pattern matching — but what happens when the most valuable insights don’t fit the pattern?</p><p>In this episode of DataStorage.com, we sit down with Abhishek "AJ" Jha, Founder &amp; CEO of Elucidata, to break down why traditional AI approaches fail in drug discovery — and how data-centric AI is reshaping the future of pharma, healthcare, and beyond.</p><p>AJ shares Elucidata’s journey from resisting the “AI” label to fully embracing it in the post-LLM era — not by building bigger models, but by focusing on data quality, governance, and out-of-distribution problems that actually matter in regulated industries.</p><p>🔍 In this conversation, we cover:<br> • Why pattern-matching AI breaks down in drug discovery and healthcare<br> • What “out-of-distribution” problems are — and why they’re so valuable<br> • How data-centric AI differs from model-centric AI<br> • The role of human-in-the-loop AI in high-stakes industries<br> • Preparing multimodal data (text, tabular, imaging) for AI-ready use cases<br> • Why starting with the use case beats “cleaning all the data”<br> • AI infrastructure decisions, cloud costs, GPUs, and egress challenges<br> • Deploying AI securely in regulated environments (HIPAA, SOC 2, GDPR)<br> • Whether companies should build their own models or fine-tune existing ones</p><p>This episode is a must-watch for:<br> • Pharma &amp; biotech leaders<br> • AI and data infrastructure teams<br> • Founders building AI products in regulated industries<br> • Anyone questioning whether bigger models = better AI</p><p>⸻</p><p>🎧 Subscribe to DataStorage.com for deep conversations on AI, cloud infrastructure, data storage, and the real systems powering the AI economy.</p><p>🔗 Learn more about Elucidata: https://elucidata.io<br>🔗 More episodes: https://datastorage.com</p>]]>
      </content:encoded>
      <pubDate>Tue, 24 Feb 2026 09:04:34 -0500</pubDate>
      <author>Datastorage.com</author>
      <enclosure url="https://media.transistor.fm/96ed3ef8/76cc6a7b.mp3" length="56693785" type="audio/mpeg"/>
      <itunes:author>Datastorage.com</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/BSHEG3sAI4rmX_6IJX95I4oFXqDtdVPI9wvOYeaAT68/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8xYzgz/YjE4NGZkN2VlMGE0/Y2Y0NWYwZTNhNzM0/Nzk3Yi53ZWJw.jpg"/>
      <itunes:duration>2361</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>AI is great at pattern matching — but what happens when the most valuable insights don’t fit the pattern?</p><p>In this episode of DataStorage.com, we sit down with Abhishek "AJ" Jha, Founder &amp; CEO of Elucidata, to break down why traditional AI approaches fail in drug discovery — and how data-centric AI is reshaping the future of pharma, healthcare, and beyond.</p><p>AJ shares Elucidata’s journey from resisting the “AI” label to fully embracing it in the post-LLM era — not by building bigger models, but by focusing on data quality, governance, and out-of-distribution problems that actually matter in regulated industries.</p><p>🔍 In this conversation, we cover:<br> • Why pattern-matching AI breaks down in drug discovery and healthcare<br> • What “out-of-distribution” problems are — and why they’re so valuable<br> • How data-centric AI differs from model-centric AI<br> • The role of human-in-the-loop AI in high-stakes industries<br> • Preparing multimodal data (text, tabular, imaging) for AI-ready use cases<br> • Why starting with the use case beats “cleaning all the data”<br> • AI infrastructure decisions, cloud costs, GPUs, and egress challenges<br> • Deploying AI securely in regulated environments (HIPAA, SOC 2, GDPR)<br> • Whether companies should build their own models or fine-tune existing ones</p><p>This episode is a must-watch for:<br> • Pharma &amp; biotech leaders<br> • AI and data infrastructure teams<br> • Founders building AI products in regulated industries<br> • Anyone questioning whether bigger models = better AI</p><p>⸻</p><p>🎧 Subscribe to DataStorage.com for deep conversations on AI, cloud infrastructure, data storage, and the real systems powering the AI economy.</p><p>🔗 Learn more about Elucidata: https://elucidata.io<br>🔗 More episodes: https://datastorage.com</p>]]>
      </itunes:summary>
      <itunes:keywords>Data-centric AI, AI in drug discovery, AI in healthcare, Out-of-distribution AI, Human-in-the-loop AI, AI for pharma, Regulated AI deployment</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
    </item>
    <item>
      <title>Rewriting the Cloud Playbook with Gleb Budman (Backblaze) - Datastorage.com</title>
      <itunes:episode>1</itunes:episode>
      <podcast:episode>1</podcast:episode>
      <itunes:title>Rewriting the Cloud Playbook with Gleb Budman (Backblaze) - Datastorage.com</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">5520d641-bf8c-4ff7-b5fc-d1cab74cd19c</guid>
      <link>https://share.transistor.fm/s/fd960c08</link>
      <description>
        <![CDATA[<p>In this episode of The Data Storage Podcast, we sit down with the CEO of Backblaze to discuss the broader state of cloud computing — from hyperscaler dominance to the rise of AI infrastructure and multi-cloud strategy.</p><p><br></p><p>We explore:</p><ul><li>How the cloud market is evolving in the AI era</li><li>Whether hyperscaler pricing models are sustainable</li><li>The pressure egress fees create for modern workloads</li><li>The rise of alternative cloud providers (“neoclouds”)</li><li>What founders and infrastructure leaders should consider in 2026</li></ul><p><br></p><p>This conversation goes beyond product features to examine the structural shifts reshaping cloud economics and infrastructure strategy.</p><p><br></p><p>If you’re building or managing AI workloads, evaluating multi-cloud architecture, or watching the next phase of cloud competition unfold, this episode provides a high-level executive perspective.</p><p><br></p><p>Produced by Datastorage.com.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode of The Data Storage Podcast, we sit down with the CEO of Backblaze to discuss the broader state of cloud computing — from hyperscaler dominance to the rise of AI infrastructure and multi-cloud strategy.</p><p><br></p><p>We explore:</p><ul><li>How the cloud market is evolving in the AI era</li><li>Whether hyperscaler pricing models are sustainable</li><li>The pressure egress fees create for modern workloads</li><li>The rise of alternative cloud providers (“neoclouds”)</li><li>What founders and infrastructure leaders should consider in 2026</li></ul><p><br></p><p>This conversation goes beyond product features to examine the structural shifts reshaping cloud economics and infrastructure strategy.</p><p><br></p><p>If you’re building or managing AI workloads, evaluating multi-cloud architecture, or watching the next phase of cloud competition unfold, this episode provides a high-level executive perspective.</p><p><br></p><p>Produced by Datastorage.com.</p>]]>
      </content:encoded>
      <pubDate>Tue, 24 Feb 2026 08:59:54 -0500</pubDate>
      <author>Datastorage.com</author>
      <enclosure url="https://media.transistor.fm/fd960c08/96a09192.mp3" length="78608381" type="audio/mpeg"/>
      <itunes:author>Datastorage.com</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/mqflGYoTVBs4Cll-2IE17vHr070emSjGTIgr1rP5_7k/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9kYTI0/ODQ5ZmIxZDNjZGYx/OTVjY2U1YzRlNjE5/NjU4NC5qcGc.jpg"/>
      <itunes:duration>3273</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode of The Data Storage Podcast, we sit down with the CEO of Backblaze to discuss the broader state of cloud computing — from hyperscaler dominance to the rise of AI infrastructure and multi-cloud strategy.</p><p><br></p><p>We explore:</p><ul><li>How the cloud market is evolving in the AI era</li><li>Whether hyperscaler pricing models are sustainable</li><li>The pressure egress fees create for modern workloads</li><li>The rise of alternative cloud providers (“neoclouds”)</li><li>What founders and infrastructure leaders should consider in 2026</li></ul><p><br></p><p>This conversation goes beyond product features to examine the structural shifts reshaping cloud economics and infrastructure strategy.</p><p><br></p><p>If you’re building or managing AI workloads, evaluating multi-cloud architecture, or watching the next phase of cloud competition unfold, this episode provides a high-level executive perspective.</p><p><br></p><p>Produced by Datastorage.com.</p>]]>
      </itunes:summary>
      <itunes:keywords>State of cloud computing, Cloud market trends, AI infrastructure, Multi-cloud strategy, Hyperscalers, Cloud economics, Cloud pricing, Egress fees, Object storage, Cloud competition, Neocloud, Backblaze CEO</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
    </item>
  </channel>
</rss>
