<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet href="/stylesheet.xsl" type="text/xsl"?>
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:sy="http://purl.org/rss/1.0/modules/syndication/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:podcast="https://podcastindex.org/namespace/1.0">
  <channel>
    <atom:link rel="self" type="application/rss+xml" href="https://feeds.transistor.fm/margin-of-thought" title="MP3 Audio"/>
    <atom:link rel="hub" href="https://pubsubhubbub.appspot.com/"/>
    <podcast:podping usesPodping="true"/>
    <title>Margin of Thought with Priten</title>
    <generator>Transistor (https://transistor.fm)</generator>
    <itunes:new-feed-url>https://feeds.transistor.fm/margin-of-thought</itunes:new-feed-url>
    <description>Margin of Thought is a podcast about the questions we don’t always make time for but should.

Hosted by Priten Soundar-Shah, the show features wide-ranging conversations with educators, civic leaders, technologists, academics, and students.

Each season centers on a key tension in modern life that affects how we raise and educate our children.

Learn more about Priten and his upcoming book, Ethical Ed Tech: How Educators Can Lead on Digital Safety &amp; AI in K-12, at priten.org and ethicaledtech.org.</description>
    <copyright>© 2026 Priten Soundar-Shah</copyright>
    <podcast:guid>50abecba-c9bc-5d02-be0c-807b27413690</podcast:guid>
    <podcast:locked>yes</podcast:locked>
    <itunes:applepodcastsverify>f0cd4610-0142-11f1-abd8-f99525f5ad11</itunes:applepodcastsverify>
    <podcast:person role="Guest" href="https://ethicaledtech.org/" img="https://img.transistorcdn.com/nI-yYtsz6CbMWfhgCUuk3r5MitxiueKOI4RNPyTUhAE/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8zZGE3/N2I4ODZhODRkYzFi/NTQwMTI0NTllY2E3/ZGFkZS5wbmc.jpg">Ethical Ed Tech: How Educators Can Lead on Digital Safety &amp; AI in K-12</podcast:person>
    <podcast:person role="Host" href="https://priten.org" img="https://img.transistorcdn.com/JlYpjP0PmtU6_HZmHSsgaaNQgWcMD1eEmlB3smilNvk/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8yYTQz/ZDUzOTFhOTA0MDRl/OTBjMGEyMzhiNGYw/N2FmNy5qcGc.jpg">Priten Soundar-Shah</podcast:person>
    <language>en</language>
    <pubDate>Wed, 15 Apr 2026 08:26:34 -0400</pubDate>
    <lastBuildDate>Wed, 15 Apr 2026 08:27:10 -0400</lastBuildDate>
    <link>https://listen.priten.org</link>
    <image>
      <url>https://img.transistorcdn.com/wX4SVsqkbuRg9PAsBFB24OGDN1AjLovk8WofhngxyPQ/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8yOTNk/OTcyZTcxOWE5MGIw/ZTY0MjU4ZGNlN2U5/NjM3My5wbmc.jpg</url>
      <title>Margin of Thought with Priten</title>
      <link>https://listen.priten.org</link>
    </image>
    <itunes:category text="Education"/>
    <itunes:category text="Society &amp; Culture">
      <itunes:category text="Philosophy"/>
    </itunes:category>
    <itunes:type>episodic</itunes:type>
    <itunes:author>Priten Soundar-Shah</itunes:author>
    <itunes:image href="https://img.transistorcdn.com/wX4SVsqkbuRg9PAsBFB24OGDN1AjLovk8WofhngxyPQ/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8yOTNk/OTcyZTcxOWE5MGIw/ZTY0MjU4ZGNlN2U5/NjM3My5wbmc.jpg"/>
    <itunes:summary>Margin of Thought is a podcast about the questions we don’t always make time for but should.

Hosted by Priten Soundar-Shah, the show features wide-ranging conversations with educators, civic leaders, technologists, academics, and students.

Each season centers on a key tension in modern life that affects how we raise and educate our children.

Learn more about Priten and his upcoming book, Ethical Ed Tech: How Educators Can Lead on Digital Safety &amp; AI in K-12, at priten.org and ethicaledtech.org.</itunes:summary>
    <itunes:subtitle>Margin of Thought is a podcast about the questions we don’t always make time for but should.</itunes:subtitle>
    <itunes:keywords>ai education, philosophy, parenting, education, ai ethics</itunes:keywords>
    <itunes:owner>
      <itunes:name>Priten Soundar-Shah</itunes:name>
      <itunes:email>mot@bepodcast.network</itunes:email>
    </itunes:owner>
    <itunes:complete>No</itunes:complete>
    <itunes:explicit>No</itunes:explicit>
    <item>
      <title>What Is Age-Appropriate AI in Education? - Megan Barnes</title>
      <itunes:episode>21</itunes:episode>
      <podcast:episode>21</podcast:episode>
      <itunes:title>What Is Age-Appropriate AI in Education? - Megan Barnes</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">6c09106b-2f2d-4768-9b35-bd2a1f49b875</guid>
      <link>https://listen.priten.org/s1/21</link>
      <description>
        <![CDATA[<p>In this episode, Priten speaks with Megan Barnes, a PhD student in learning technologies at the University of North Texas and a K-12 librarian with 14 years of experience, about what age-appropriate AI in education actually means. Megan holds dual roles as library director and director of educational technology for early childhood through fourth grade in Dallas, and her research draws on cognitive and affective neuroscience to evaluate how emerging tools interact with child development. The conversation moves through the real-versus-synthetic distinction that young children struggle with, the attention economy driving AI product design, information literacy as a foundation for AI literacy, and why curiosity may be the most important thing educators need to protect.</p><p><strong>Key Takeaways:</strong></p><ul><li><strong>Before children can use chatbots, they need a solid concept of real versus not real.</strong> Most kindergartners interact with AI through voice and animated characters, adding layers of anthropomorphization that make it nearly impossible for them to distinguish a computer from a person. Megan argues that chatbot-based AI is not developmentally appropriate at this age, and any exposure should be adult-controlled and side-by-side, consistent with American Academy of Pediatrics guidance on co-viewing media.</li><li><strong>The attention economy is becoming a relational economy—and children are the target.</strong> The same design logic that removed page numbers from Google search results is now being applied to conversational AI. If a child builds five years of chat history with a platform before adulthood, that relationship becomes a powerful lock-in mechanism. 
Megan also raises the concern that chat histories are now being used to drive advertising, meaning the tools students use for learning are simultaneously selling to them.</li><li><strong>AI literacy in elementary school means information literacy, not prompt engineering.</strong> Rather than teaching young students how to use AI tools directly, Megan focuses on helping them understand who generates information, who validates it, and where AI is already present in their daily lives. During morning announcements, she points out the background remover tool and tells students, "This is AI right here." The goal is building foundational skills for evaluating any new technology, not training on a specific product.</li><li><strong>Every generation of creative technology triggers the same panic—and the pattern holds.</strong> Megan draws on her background as a violinist and recording arts student. When Apple's GarageBand launched during her final semester, her synthesizer professor declared it the downfall of music. Instead, it democratized creativity. More people creating doesn't mean everything produced is good, but the tool itself is not the threat. AI follows the same arc.</li><li><strong>Curiosity doesn't need to be taught—it needs to be protected.</strong> Young children arrive with natural wonder intact. Megan distinguishes between formal classroom learning and the informal learning space of the library, where autonomy and exploration still drive engagement. The job of early education is not to instill curiosity but to give children frameworks for approaching new things with wonder while still thinking critically, so that instinct survives into adulthood.</li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode, Priten speaks with Megan Barnes, a PhD student in learning technologies at the University of North Texas and a K-12 librarian with 14 years of experience, about what age-appropriate AI in education actually means. Megan holds dual roles as library director and director of educational technology for early childhood through fourth grade in Dallas, and her research draws on cognitive and affective neuroscience to evaluate how emerging tools interact with child development. The conversation moves through the real-versus-synthetic distinction that young children struggle with, the attention economy driving AI product design, information literacy as a foundation for AI literacy, and why curiosity may be the most important thing educators need to protect.</p><p><strong>Key Takeaways:</strong></p><ul><li><strong>Before children can use chatbots, they need a solid concept of real versus not real.</strong> Most kindergartners interact with AI through voice and animated characters, adding layers of anthropomorphization that make it nearly impossible for them to distinguish a computer from a person. Megan argues that chatbot-based AI is not developmentally appropriate at this age, and any exposure should be adult-controlled and side-by-side, consistent with American Academy of Pediatrics guidance on co-viewing media.</li><li><strong>The attention economy is becoming a relational economy—and children are the target.</strong> The same design logic that removed page numbers from Google search results is now being applied to conversational AI. If a child builds five years of chat history with a platform before adulthood, that relationship becomes a powerful lock-in mechanism. 
Megan also raises the concern that chat histories are now being used to drive advertising, meaning the tools students use for learning are simultaneously selling to them.</li><li><strong>AI literacy in elementary school means information literacy, not prompt engineering.</strong> Rather than teaching young students how to use AI tools directly, Megan focuses on helping them understand who generates information, who validates it, and where AI is already present in their daily lives. During morning announcements, she points out the background remover tool and tells students, "This is AI right here." The goal is building foundational skills for evaluating any new technology, not training on a specific product.</li><li><strong>Every generation of creative technology triggers the same panic—and the pattern holds.</strong> Megan draws on her background as a violinist and recording arts student. When Apple's GarageBand launched during her final semester, her synthesizer professor declared it the downfall of music. Instead, it democratized creativity. More people creating doesn't mean everything produced is good, but the tool itself is not the threat. AI follows the same arc.</li><li><strong>Curiosity doesn't need to be taught—it needs to be protected.</strong> Young children arrive with natural wonder intact. Megan distinguishes between formal classroom learning and the informal learning space of the library, where autonomy and exploration still drive engagement. The job of early education is not to instill curiosity but to give children frameworks for approaching new things with wonder while still thinking critically, so that instinct survives into adulthood.</li></ul>]]>
      </content:encoded>
      <pubDate>Tue, 14 Apr 2026 20:08:00 -0400</pubDate>
      <author>Priten Soundar-Shah</author>
      <enclosure url="https://media.transistor.fm/e4d43693/b407aa13.mp3" length="41983899" type="audio/mpeg"/>
      <itunes:author>Priten Soundar-Shah</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/x_aQjdEE6hepDCsiRRc17gKRE0puU181nbpLWh9vHgs/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS84NTE5/NmZkY2YxMzkzOGRh/YmQ1OGZmYWFmYWEy/YTYwMC5wbmc.jpg"/>
      <itunes:duration>2623</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode, Priten speaks with Megan Barnes, a PhD student in learning technologies at the University of North Texas and a K-12 librarian with 14 years of experience, about what age-appropriate AI in education actually means. Megan holds dual roles as library director and director of educational technology for early childhood through fourth grade in Dallas, and her research draws on cognitive and affective neuroscience to evaluate how emerging tools interact with child development. The conversation moves through the real-versus-synthetic distinction that young children struggle with, the attention economy driving AI product design, information literacy as a foundation for AI literacy, and why curiosity may be the most important thing educators need to protect.</p><p><strong>Key Takeaways:</strong></p><ul><li><strong>Before children can use chatbots, they need a solid concept of real versus not real.</strong> Most kindergartners interact with AI through voice and animated characters, adding layers of anthropomorphization that make it nearly impossible for them to distinguish a computer from a person. Megan argues that chatbot-based AI is not developmentally appropriate at this age, and any exposure should be adult-controlled and side-by-side, consistent with American Academy of Pediatrics guidance on co-viewing media.</li><li><strong>The attention economy is becoming a relational economy—and children are the target.</strong> The same design logic that removed page numbers from Google search results is now being applied to conversational AI. If a child builds five years of chat history with a platform before adulthood, that relationship becomes a powerful lock-in mechanism. 
Megan also raises the concern that chat histories are now being used to drive advertising, meaning the tools students use for learning are simultaneously selling to them.</li><li><strong>AI literacy in elementary school means information literacy, not prompt engineering.</strong> Rather than teaching young students how to use AI tools directly, Megan focuses on helping them understand who generates information, who validates it, and where AI is already present in their daily lives. During morning announcements, she points out the background remover tool and tells students, "This is AI right here." The goal is building foundational skills for evaluating any new technology, not training on a specific product.</li><li><strong>Every generation of creative technology triggers the same panic—and the pattern holds.</strong> Megan draws on her background as a violinist and recording arts student. When Apple's GarageBand launched during her final semester, her synthesizer professor declared it the downfall of music. Instead, it democratized creativity. More people creating doesn't mean everything produced is good, but the tool itself is not the threat. AI follows the same arc.</li><li><strong>Curiosity doesn't need to be taught—it needs to be protected.</strong> Young children arrive with natural wonder intact. Megan distinguishes between formal classroom learning and the informal learning space of the library, where autonomy and exploration still drive engagement. The job of early education is not to instill curiosity but to give children frameworks for approaching new things with wonder while still thinking critically, so that instinct survives into adulthood.</li></ul>]]>
      </itunes:summary>
      <itunes:keywords>age-appropriate ai,information literacy,elementary education,child development,edtech ethics</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Guest" href="https://ethicaledtech.org/" img="https://img.transistorcdn.com/nI-yYtsz6CbMWfhgCUuk3r5MitxiueKOI4RNPyTUhAE/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8zZGE3/N2I4ODZhODRkYzFi/NTQwMTI0NTllY2E3/ZGFkZS5wbmc.jpg">Ethical Ed Tech: How Educators Can Lead on Digital Safety &amp; AI in K-12</podcast:person>
      <podcast:person role="Host" href="https://priten.org" img="https://img.transistorcdn.com/JlYpjP0PmtU6_HZmHSsgaaNQgWcMD1eEmlB3smilNvk/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8yYTQz/ZDUzOTFhOTA0MDRl/OTBjMGEyMzhiNGYw/N2FmNy5qcGc.jpg">Priten Soundar-Shah</podcast:person>
      <podcast:person role="Guest" href="https://listen.priten.org/people/megan-barnes" img="https://img.transistorcdn.com/5KSev38rXb96OGPmjzp_wmKn0XEkfHAyclIWqaoI8_k/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8zZThm/MDVlMGE3MThjMzA1/ZWY0YmZlZmY1YWIy/MDAyOS5wbmc.jpg">Megan Barnes</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/e4d43693/transcript.vtt" type="text/vtt" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/e4d43693/transcript.json" type="application/json"/>
      <podcast:transcript url="https://share.transistor.fm/s/e4d43693/transcript.txt" type="text/plain"/>
      <podcast:socialInteract protocol="atproto" uri="at://did:plc:lp33httd3l7fnkvwnv5kpei2/app.bsky.feed.post/3mjim3phzik23"/>
    </item>
    <item>
      <title>Is AI Literacy the New Professional Credential? - Anna Zendell</title>
      <itunes:episode>20</itunes:episode>
      <podcast:episode>20</podcast:episode>
      <itunes:title>Is AI Literacy the New Professional Credential? - Anna Zendell</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">807da2de-67d4-4077-b55b-432aa3efad56</guid>
      <link>https://listen.priten.org/s1/20</link>
      <description>
        <![CDATA[<p>In this episode, Priten speaks with Anna Zendell, a social worker turned educator who oversees healthcare management, human services, and wellness programs at Bay Path University, about what it takes to rebuild a curriculum around AI when the stakes are patient outcomes. Zendell is currently piloting an AI-enhanced program from the ground up, designing courses where a closed AI system mentors students through interactive activities while faculty retain grading authority and instructional presence. The conversation covers why traditional learning outcomes don't translate cleanly into AI-driven instruction, how adult learners in healthcare face unique pressure to acquire AI literacy for careers that already demand it, and the trust gaps between students, faculty, and administrators that complicate adoption.</p><p><strong>Key Takeaways:</strong></p><ul><li><strong>Curriculum doesn't absorb AI -- it has to be rebuilt for it.</strong> Zendell found that standard learning outcomes written with Bloom's Taxonomy are too broad for an AI system to use as mentoring scaffolds. Her team breaks each outcome into granular component steps, essentially teaching the AI how to guide a student the way an experienced instructor would.</li><li><strong>AI is the first classroom technology to split faculty, students, and administration into opposing camps.</strong> Some faculty add zero-tolerance rubric rows while others experiment eagerly. Students range from uneasy to already reliant. Zendell describes a three-way perception gap she hasn't seen with any previous technology, including the transition to online learning.</li><li><strong>Healthcare employers aren't waiting for higher ed to figure this out.</strong> Zendell regularly scans job postings for healthcare leadership roles and finds AI literacy and AI tool proficiency appearing with increasing frequency, particularly in informatics, clinical data analytics, and healthcare finance. 
Her students are asking for these skills and feeling the urgency themselves.</li><li><strong>A student tester changed the entire design process.</strong> Zendell recruited an informatics student with an interest in healthcare AI to take each module as a learner before it goes live. That feedback loop -- where the student flags where prompts mislead or where the AI drifts into unproductive territory -- became central to how the team iterates on course design.</li><li><strong>The real danger isn't AI itself -- it's losing the habit of questioning it.</strong> Zendell's deepest concern is dependency: that convenience erodes the capacity to critically evaluate AI output. In healthcare especially, where students might default to ChatGPT instead of dedicated clinical interfaces, the gap between accessible and appropriate matters.</li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode, Priten speaks with Anna Zendell, a social worker turned educator who oversees healthcare management, human services, and wellness programs at Bay Path University, about what it takes to rebuild a curriculum around AI when the stakes are patient outcomes. Zendell is currently piloting an AI-enhanced program from the ground up, designing courses where a closed AI system mentors students through interactive activities while faculty retain grading authority and instructional presence. The conversation covers why traditional learning outcomes don't translate cleanly into AI-driven instruction, how adult learners in healthcare face unique pressure to acquire AI literacy for careers that already demand it, and the trust gaps between students, faculty, and administrators that complicate adoption.</p><p><strong>Key Takeaways:</strong></p><ul><li><strong>Curriculum doesn't absorb AI -- it has to be rebuilt for it.</strong> Zendell found that standard learning outcomes written with Bloom's Taxonomy are too broad for an AI system to use as mentoring scaffolds. Her team breaks each outcome into granular component steps, essentially teaching the AI how to guide a student the way an experienced instructor would.</li><li><strong>AI is the first classroom technology to split faculty, students, and administration into opposing camps.</strong> Some faculty add zero-tolerance rubric rows while others experiment eagerly. Students range from uneasy to already reliant. Zendell describes a three-way perception gap she hasn't seen with any previous technology, including the transition to online learning.</li><li><strong>Healthcare employers aren't waiting for higher ed to figure this out.</strong> Zendell regularly scans job postings for healthcare leadership roles and finds AI literacy and AI tool proficiency appearing with increasing frequency, particularly in informatics, clinical data analytics, and healthcare finance. 
Her students are asking for these skills and feeling the urgency themselves.</li><li><strong>A student tester changed the entire design process.</strong> Zendell recruited an informatics student with an interest in healthcare AI to take each module as a learner before it goes live. That feedback loop -- where the student flags where prompts mislead or where the AI drifts into unproductive territory -- became central to how the team iterates on course design.</li><li><strong>The real danger isn't AI itself -- it's losing the habit of questioning it.</strong> Zendell's deepest concern is dependency: that convenience erodes the capacity to critically evaluate AI output. In healthcare especially, where students might default to ChatGPT instead of dedicated clinical interfaces, the gap between accessible and appropriate matters.</li></ul>]]>
      </content:encoded>
      <pubDate>Thu, 09 Apr 2026 01:47:00 -0400</pubDate>
      <author>Priten Soundar-Shah</author>
      <enclosure url="https://media.transistor.fm/ab1d4568/df6e1709.mp3" length="26534056" type="audio/mpeg"/>
      <itunes:author>Priten Soundar-Shah</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/JcICgDhteTs-A2ej4lp6KLc5ez6Htbwx1PwN2ZvS6lM/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS83NmU3/ZDRlNmRhYjQyN2Rh/MmY2YzQyOTEwOTFi/Y2ZiZS5wbmc.jpg"/>
      <itunes:duration>1657</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode, Priten speaks with Anna Zendell, a social worker turned educator who oversees healthcare management, human services, and wellness programs at Bay Path University, about what it takes to rebuild a curriculum around AI when the stakes are patient outcomes. Zendell is currently piloting an AI-enhanced program from the ground up, designing courses where a closed AI system mentors students through interactive activities while faculty retain grading authority and instructional presence. The conversation covers why traditional learning outcomes don't translate cleanly into AI-driven instruction, how adult learners in healthcare face unique pressure to acquire AI literacy for careers that already demand it, and the trust gaps between students, faculty, and administrators that complicate adoption.</p><p><strong>Key Takeaways:</strong></p><ul><li><strong>Curriculum doesn't absorb AI -- it has to be rebuilt for it.</strong> Zendell found that standard learning outcomes written with Bloom's Taxonomy are too broad for an AI system to use as mentoring scaffolds. Her team breaks each outcome into granular component steps, essentially teaching the AI how to guide a student the way an experienced instructor would.</li><li><strong>AI is the first classroom technology to split faculty, students, and administration into opposing camps.</strong> Some faculty add zero-tolerance rubric rows while others experiment eagerly. Students range from uneasy to already reliant. Zendell describes a three-way perception gap she hasn't seen with any previous technology, including the transition to online learning.</li><li><strong>Healthcare employers aren't waiting for higher ed to figure this out.</strong> Zendell regularly scans job postings for healthcare leadership roles and finds AI literacy and AI tool proficiency appearing with increasing frequency, particularly in informatics, clinical data analytics, and healthcare finance. 
Her students are asking for these skills and feeling the urgency themselves.</li><li><strong>A student tester changed the entire design process.</strong> Zendell recruited an informatics student with an interest in healthcare AI to take each module as a learner before it goes live. That feedback loop -- where the student flags where prompts mislead or where the AI drifts into unproductive territory -- became central to how the team iterates on course design.</li><li><strong>The real danger isn't AI itself -- it's losing the habit of questioning it.</strong> Zendell's deepest concern is dependency: that convenience erodes the capacity to critically evaluate AI output. In healthcare especially, where students might default to ChatGPT instead of dedicated clinical interfaces, the gap between accessible and appropriate matters.</li></ul>]]>
      </itunes:summary>
      <itunes:keywords>ai literacy,professional credential,workforce readiness,educator certification,digital competency</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Guest" href="https://ethicaledtech.org/" img="https://img.transistorcdn.com/nI-yYtsz6CbMWfhgCUuk3r5MitxiueKOI4RNPyTUhAE/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8zZGE3/N2I4ODZhODRkYzFi/NTQwMTI0NTllY2E3/ZGFkZS5wbmc.jpg">Ethical Ed Tech: How Educators Can Lead on Digital Safety &amp; AI in K-12</podcast:person>
      <podcast:person role="Host" href="https://priten.org" img="https://img.transistorcdn.com/JlYpjP0PmtU6_HZmHSsgaaNQgWcMD1eEmlB3smilNvk/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8yYTQz/ZDUzOTFhOTA0MDRl/OTBjMGEyMzhiNGYw/N2FmNy5qcGc.jpg">Priten Soundar-Shah</podcast:person>
      <podcast:person role="Guest" href="https://listen.priten.org/people/anna-zendall" img="https://img.transistorcdn.com/TPSFutu5LUgxKH18IKjD45DYhxxxofClYLafeniMVZs/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS84ZTBj/Y2Q5NDRkYzllY2Fl/NzhhY2Y4MDRlYTNh/NzhjOC5wbmc.jpg">Anna Zendell</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/ab1d4568/transcript.txt" type="text/plain"/>
      <podcast:socialInteract protocol="atproto" uri="at://did:plc:lp33httd3l7fnkvwnv5kpei2/app.bsky.feed.post/3mj6zh4gubz27"/>
    </item>
    <item>
      <title>What's the Line Between Research Integrity and Using AI as a Tool? - Kari Weaver</title>
      <itunes:episode>19</itunes:episode>
      <podcast:episode>19</podcast:episode>
      <itunes:title>What's the Line Between Research Integrity and Using AI as a Tool? - Kari Weaver</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">d35de46d-9fe5-4ca3-8ea2-e161d0b8917d</guid>
      <link>https://listen.priten.org/s1/19</link>
      <description>
        <![CDATA[<p>In this episode, Priten speaks with Kari Weaver, a librarian educator and program manager for the Artificial Intelligence and Machine Learning Initiative at the Ontario Council of University Libraries (OCUL), about why existing tools like citation and methodology sections can't capture how AI is actually being used in research and learning -- and what a structured disclosure standard might look like instead. Weaver, who also teaches graduate students at the University of Toronto and created the AID Framework for AI disclosure, walks through the practical and philosophical challenges of building trust infrastructure for an ecosystem that doesn't have bright lines yet. The conversation covers disciplinary divides in how AI use is understood, the global effort to establish a disclosure standard, and why the authorship question remains genuinely unresolved.</p><p><strong>Key Takeaways:</strong></p><ul><li><strong>Citation can't bridge the gap between AI-generated ideas and their sources.</strong> Traditional citation connects ideas to a discrete, traceable origin. AI severs that connection by synthesizing across sources in ways that can't be pinpointed. Weaver notes this is structurally similar to what Western scholarship has long done to traditional and lived knowledge -- and now researchers are experiencing that same disconnection applied to their own work.</li><li><strong>A global AI disclosure standard is actively being built.</strong> Weaver is co-leading a large-scale effort with the European Network of Research Integrity Offices, the International Science Council, and the Committee on Publication Ethics to develop a consistent disclosure framework through the World Conferences on Research Integrity. 
The goal is to stop researchers from having to tailor disclosures to each journal's idiosyncratic requirements.</li><li><strong>AI use in research often falls outside methodology entirely.</strong> A researcher translating articles from an unfamiliar language using AI is a real and beneficial use case, but it doesn't fit neatly into a methods section. These peripheral uses still shape how researchers interact with and think about their material, which is exactly why disclosure needs to be broader than methodological reporting.</li><li><strong>Separating the disclosure from the assignment makes students more likely to do it.</strong> At the undergraduate level, voluntary disclosure is hard to get. Weaver recommends having students submit a disclosure rubric alongside their assignment in a separate dropbox. This treats disclosure as a professional skill worth practicing on its own, and it gives instructors a reference point if questions arise about how an assignment was produced.</li><li><strong>Authorship will likely settle at the disciplinary level, not the universal one.</strong> Weaver is candid that she doesn't have an answer to the authorship question. In qualitative research, she sees coding as irreplaceable human work. In STEM fields, AI-assisted analysis may be more readily accepted. She expects discourse communities will develop their own standards -- but that shouldn't delay building consistent disclosure practices across all of them.</li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode, Priten speaks with Kari Weaver, a librarian educator and program manager for the Artificial Intelligence and Machine Learning Initiative at the Ontario Council of University Libraries (OCUL), about why existing tools like citation and methodology sections can't capture how AI is actually being used in research and learning -- and what a structured disclosure standard might look like instead. Weaver, who also teaches graduate students at the University of Toronto and created the AID Framework for AI disclosure, walks through the practical and philosophical challenges of building trust infrastructure for an ecosystem that doesn't have bright lines yet. The conversation covers disciplinary divides in how AI use is understood, the global effort to establish a disclosure standard, and why the authorship question remains genuinely unresolved.</p><p><strong>Key Takeaways:</strong></p><ul><li><strong>Citation can't bridge the gap between AI-generated ideas and their sources.</strong> Traditional citation connects ideas to a discrete, traceable origin. AI severs that connection by synthesizing across sources in ways that can't be pinpointed. Weaver notes this is structurally similar to what Western scholarship has long done to traditional and lived knowledge -- and now researchers are experiencing that same disconnection applied to their own work.</li><li><strong>A global AI disclosure standard is actively being built.</strong> Weaver is co-leading a large-scale effort with the European Network of Research Integrity Offices, the International Science Council, and the Committee on Publication Ethics to develop a consistent disclosure framework through the World Conferences on Research Integrity. 
The goal is to stop researchers from having to tailor disclosures to each journal's idiosyncratic requirements.</li><li><strong>AI use in research often falls outside methodology entirely.</strong> A researcher translating articles from an unfamiliar language using AI is a real and beneficial use case, but it doesn't fit neatly into a methods section. These peripheral uses still shape how researchers interact with and think about their material, which is exactly why disclosure needs to be broader than methodological reporting.</li><li><strong>Separating the disclosure from the assignment makes students more likely to do it.</strong> At the undergraduate level, voluntary disclosure is hard to get. Weaver recommends having students submit a disclosure rubric alongside their assignment in a separate dropbox. This treats disclosure as a professional skill worth practicing on its own, and it gives instructors a reference point if questions arise about how an assignment was produced.</li><li><strong>Authorship will likely settle at the disciplinary level, not the universal one.</strong> Weaver is candid that she doesn't have an answer to the authorship question. In qualitative research, she sees coding as irreplaceable human work. In STEM fields, AI-assisted analysis may be more readily accepted. She expects discourse communities will develop their own standards -- but that shouldn't delay building consistent disclosure practices across all of them.</li></ul>]]>
      </content:encoded>
      <pubDate>Tue, 07 Apr 2026 17:45:53 -0400</pubDate>
      <author>Priten Soundar-Shah</author>
      <enclosure url="https://media.transistor.fm/0976a29d/64102a11.mp3" length="36970052" type="audio/mpeg"/>
      <itunes:author>Priten Soundar-Shah</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/kGKBH1RidFLde2Qwhd0s1Qf1gp0ZfE9cBQnhntq_7MM/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9iZDc1/MGViZTg4NWExYzBl/NWVkY2UwOTliZmI5/MGE2Yy5wbmc.jpg"/>
      <itunes:duration>2309</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode, Priten speaks with Kari Weaver, a librarian educator and program manager for the Artificial Intelligence and Machine Learning Initiative at the Ontario Council of University Libraries (OCUL), about why existing tools like citation and methodology sections can't capture how AI is actually being used in research and learning -- and what a structured disclosure standard might look like instead. Weaver, who also teaches graduate students at the University of Toronto and created the AID Framework for AI disclosure, walks through the practical and philosophical challenges of building trust infrastructure for an ecosystem that doesn't have bright lines yet. The conversation covers disciplinary divides in how AI use is understood, the global effort to establish a disclosure standard, and why the authorship question remains genuinely unresolved.</p><p><strong>Key Takeaways:</strong></p><ul><li><strong>Citation can't bridge the gap between AI-generated ideas and their sources.</strong> Traditional citation connects ideas to a discrete, traceable origin. AI severs that connection by synthesizing across sources in ways that can't be pinpointed. Weaver notes this is structurally similar to what Western scholarship has long done to traditional and lived knowledge -- and now researchers are experiencing that same disconnection applied to their own work.</li><li><strong>A global AI disclosure standard is actively being built.</strong> Weaver is co-leading a large-scale effort with the European Network of Research Integrity Offices, the International Science Council, and the Committee on Publication Ethics to develop a consistent disclosure framework through the World Conferences on Research Integrity. 
The goal is to stop researchers from having to tailor disclosures to each journal's idiosyncratic requirements.</li><li><strong>AI use in research often falls outside methodology entirely.</strong> A researcher translating articles from an unfamiliar language using AI is a real and beneficial use case, but it doesn't fit neatly into a methods section. These peripheral uses still shape how researchers interact with and think about their material, which is exactly why disclosure needs to be broader than methodological reporting.</li><li><strong>Separating the disclosure from the assignment makes students more likely to do it.</strong> At the undergraduate level, voluntary disclosure is hard to get. Weaver recommends having students submit a disclosure rubric alongside their assignment in a separate dropbox. This treats disclosure as a professional skill worth practicing on its own, and it gives instructors a reference point if questions arise about how an assignment was produced.</li><li><strong>Authorship will likely settle at the disciplinary level, not the universal one.</strong> Weaver is candid that she doesn't have an answer to the authorship question. In qualitative research, she sees coding as irreplaceable human work. In STEM fields, AI-assisted analysis may be more readily accepted. She expects discourse communities will develop their own standards -- but that shouldn't delay building consistent disclosure practices across all of them.</li></ul>]]>
      </itunes:summary>
      <itunes:keywords>research integrity,ai as tool,academic research,citation ethics,scholarly writing</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Guest" href="https://ethicaledtech.org/" img="https://img.transistorcdn.com/nI-yYtsz6CbMWfhgCUuk3r5MitxiueKOI4RNPyTUhAE/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8zZGE3/N2I4ODZhODRkYzFi/NTQwMTI0NTllY2E3/ZGFkZS5wbmc.jpg">Ethical Ed Tech: How Educators Can Lead on Digital Safety &amp;amp; AI in K-12</podcast:person>
      <podcast:person role="Host" href="https://priten.org" img="https://img.transistorcdn.com/JlYpjP0PmtU6_HZmHSsgaaNQgWcMD1eEmlB3smilNvk/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8yYTQz/ZDUzOTFhOTA0MDRl/OTBjMGEyMzhiNGYw/N2FmNy5qcGc.jpg">Priten Soundar-Shah</podcast:person>
      <podcast:person role="Guest" href="https://listen.priten.org/people/kari-weaver" img="https://img.transistorcdn.com/3C2Vfx_SMVnK3eopNyN1PoHykDQT60907iJp8OekBvo/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lNGVi/NzkyY2Q1ZjI4YmYx/ZTExNmFmYjBiNzIx/NDlkZi5wbmc.jpg">Kari Weaver</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/0976a29d/transcript.txt" type="text/plain"/>
      <podcast:socialInteract protocol="atproto" uri="at://did:plc:lp33httd3l7fnkvwnv5kpei2/app.bsky.feed.post/3miwqvka63l24"/>
    </item>
    <item>
      <title>What Does Medicine Look Like When AI Is in the Room? - Jack Kincaid</title>
      <itunes:episode>18</itunes:episode>
      <podcast:episode>18</podcast:episode>
      <itunes:title>What Does Medicine Look Like When AI Is in the Room? - Jack Kincaid</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">731c94ba-6f8e-4b11-a8e3-e82f56259e89</guid>
      <link>https://listen.priten.org/s1/18</link>
      <description>
        <![CDATA[<p>In this episode, Priten speaks with Jack Kincaid, a third-year medical student at Harvard Medical School, about navigating clinical training in an era of powerful AI tools. Jack shares his perspective on Open Evidence (a medical LLM), Harvard's AI Sandbox, and the tension between leveraging new technology and developing as a physician.</p><p><strong>Key Takeaways:</strong></p><ul><li><strong>AI tools can accelerate diagnostic reasoning—but training still requires struggle.</strong> Platforms like Open Evidence can reliably synthesize evidence and suggest diagnoses, but reflexively reaching for them risks stunting the critical thinking that clinical practice demands. The goal should be building heuristics strong enough to stay present with patients, not offloading cognition.</li><li><strong>Transparency about surveillance matters.</strong> From Canvas quiz monitoring in college to clinical logging systems, students often don't know what's being tracked. Jack's experience as a TA revealed the extent of visibility administrators have—and raised questions about whether strategic ambiguity helps maintain standards or just breeds anxiety.</li><li><strong>Institutions are starting to take AI governance seriously.</strong> Harvard Medical School's AI Sandbox gives trainees access to multiple LLMs in a secure environment that protects curriculum materials and personal data (though it's not HIPAA compliant). This kind of infrastructure signals that leadership is thinking carefully about responsible use.</li><li><strong>Career concerns about AI replacement are real.</strong> For students considering imaging-heavy specialties like radiology or radiation oncology, the specter of AI "scope creep" is a recurring topic in conversations with attendings and senior trainees. 
It's not paranoia—it's a practical factor in career planning.</li><li><strong>Discovery often happens peer-to-peer.</strong> Jack first learned about Open Evidence by glancing at a classmate's screen during a simulation exercise. The most impactful tools aren't always introduced through formal curricula—they spread through observation and word of mouth.</li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode, Priten speaks with Jack Kincaid, a third-year medical student at Harvard Medical School, about navigating clinical training in an era of powerful AI tools. Jack shares his perspective on Open Evidence (a medical LLM), Harvard's AI Sandbox, and the tension between leveraging new technology and developing as a physician.</p><p><strong>Key Takeaways:</strong></p><ul><li><strong>AI tools can accelerate diagnostic reasoning—but training still requires struggle.</strong> Platforms like Open Evidence can reliably synthesize evidence and suggest diagnoses, but reflexively reaching for them risks stunting the critical thinking that clinical practice demands. The goal should be building heuristics strong enough to stay present with patients, not offloading cognition.</li><li><strong>Transparency about surveillance matters.</strong> From Canvas quiz monitoring in college to clinical logging systems, students often don't know what's being tracked. Jack's experience as a TA revealed the extent of visibility administrators have—and raised questions about whether strategic ambiguity helps maintain standards or just breeds anxiety.</li><li><strong>Institutions are starting to take AI governance seriously.</strong> Harvard Medical School's AI Sandbox gives trainees access to multiple LLMs in a secure environment that protects curriculum materials and personal data (though it's not HIPAA compliant). This kind of infrastructure signals that leadership is thinking carefully about responsible use.</li><li><strong>Career concerns about AI replacement are real.</strong> For students considering imaging-heavy specialties like radiology or radiation oncology, the specter of AI "scope creep" is a recurring topic in conversations with attendings and senior trainees. 
It's not paranoia—it's a practical factor in career planning.</li><li><strong>Discovery often happens peer-to-peer.</strong> Jack first learned about Open Evidence by glancing at a classmate's screen during a simulation exercise. The most impactful tools aren't always introduced through formal curricula—they spread through observation and word of mouth.</li></ul>]]>
      </content:encoded>
      <pubDate>Thu, 02 Apr 2026 00:20:00 -0400</pubDate>
      <author>Priten Soundar-Shah</author>
      <enclosure url="https://media.transistor.fm/6e9413da/fb8ab01c.mp3" length="21768695" type="audio/mpeg"/>
      <itunes:author>Priten Soundar-Shah</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/cCfVX7WoADEfNQKBHolsABExLnVugqjZe9lQAhvlyaw/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS85Y2Nk/MDQ0MjE0NGZiZWFi/MmZjNDM1ZDg3MTc5/ZjMxMy5wbmc.jpg"/>
      <itunes:duration>1359</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode, Priten speaks with Jack Kincaid, a third-year medical student at Harvard Medical School, about navigating clinical training in an era of powerful AI tools. Jack shares his perspective on Open Evidence (a medical LLM), Harvard's AI Sandbox, and the tension between leveraging new technology and developing as a physician.</p><p><strong>Key Takeaways:</strong></p><ul><li><strong>AI tools can accelerate diagnostic reasoning—but training still requires struggle.</strong> Platforms like Open Evidence can reliably synthesize evidence and suggest diagnoses, but reflexively reaching for them risks stunting the critical thinking that clinical practice demands. The goal should be building heuristics strong enough to stay present with patients, not offloading cognition.</li><li><strong>Transparency about surveillance matters.</strong> From Canvas quiz monitoring in college to clinical logging systems, students often don't know what's being tracked. Jack's experience as a TA revealed the extent of visibility administrators have—and raised questions about whether strategic ambiguity helps maintain standards or just breeds anxiety.</li><li><strong>Institutions are starting to take AI governance seriously.</strong> Harvard Medical School's AI Sandbox gives trainees access to multiple LLMs in a secure environment that protects curriculum materials and personal data (though it's not HIPAA compliant). This kind of infrastructure signals that leadership is thinking carefully about responsible use.</li><li><strong>Career concerns about AI replacement are real.</strong> For students considering imaging-heavy specialties like radiology or radiation oncology, the specter of AI "scope creep" is a recurring topic in conversations with attendings and senior trainees. 
It's not paranoia—it's a practical factor in career planning.</li><li><strong>Discovery often happens peer-to-peer.</strong> Jack first learned about Open Evidence by glancing at a classmate's screen during a simulation exercise. The most impactful tools aren't always introduced through formal curricula—they spread through observation and word of mouth.</li></ul>]]>
      </itunes:summary>
      <itunes:keywords>ai in medicine,clinical training,medical education,diagnostic reasoning,healthcare ai</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Guest" href="https://ethicaledtech.org/" img="https://img.transistorcdn.com/nI-yYtsz6CbMWfhgCUuk3r5MitxiueKOI4RNPyTUhAE/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8zZGE3/N2I4ODZhODRkYzFi/NTQwMTI0NTllY2E3/ZGFkZS5wbmc.jpg">Ethical Ed Tech: How Educators Can Lead on Digital Safety &amp;amp; AI in K-12</podcast:person>
      <podcast:person role="Host" href="https://priten.org" img="https://img.transistorcdn.com/JlYpjP0PmtU6_HZmHSsgaaNQgWcMD1eEmlB3smilNvk/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8yYTQz/ZDUzOTFhOTA0MDRl/OTBjMGEyMzhiNGYw/N2FmNy5qcGc.jpg">Priten Soundar-Shah</podcast:person>
      <podcast:person role="Guest" href="https://listen.priten.org/people/jack-kincaid" img="https://img.transistorcdn.com/Gzywah8Fy7EzBkep6f-QkXFUeuwIdaxNzPCUXqBgWQc/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9kMjhk/YjNkZjRlOTA4NTEx/MzY0YTEzNjRjNmU4/MTcxNi5wbmc.jpg">Jack Kincaid</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/6e9413da/transcript.vtt" type="text/vtt" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/6e9413da/transcript.txt" type="text/plain"/>
      <podcast:socialInteract protocol="atproto" uri="at://did:plc:lp33httd3l7fnkvwnv5kpei2/app.bsky.feed.post/3miie4rxp5s2y"/>
    </item>
    <item>
      <title>Who Builds the Tools Teachers Are Asked to Use? - Yanni Chen</title>
      <itunes:episode>17</itunes:episode>
      <podcast:episode>17</podcast:episode>
      <itunes:title>Who Builds the Tools Teachers Are Asked to Use? - Yanni Chen</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">90d8c733-81a8-4a9d-adae-bb0974297c77</guid>
      <link>https://listen.priten.org/s1/17</link>
      <description>
        <![CDATA[<p>In this episode, Priten and Yanni Chen explore what it actually looks like to build AI tools that support learning rather than shortcut it. Yanni, a master's student at Harvard Graduate School of Education and product developer at Deep Brain Academy, shares her experience creating an AI math tutor with a genuine commitment to scaffolding, cultural inclusivity, and keeping teachers central to the learning process.</p><p><strong>Key Takeaways:</strong></p><ul><li><strong>Scaffolding matters more than speed.</strong> AI tools often give direct answers because that's what they're engineered for. But real learning requires guiding students through the thinking process—something teachers do that AI cannot replicate. Educators should look for tools that provide step-by-step guidance rather than instant solutions.</li><li><strong>Teacher skepticism is healthy—and often fades with use.</strong> Most teachers approach AI with skepticism, which is appropriate. But just like PowerPoint and video once were new classroom tools, AI becomes less intimidating through hands-on experience. The recommendation: start with personal, low-stakes use before thinking about classroom implementation.</li><li><strong>Gen Alpha's AI fluency makes teacher presence more important, not less.</strong> Students are already fluent AI users. This doesn't diminish the teacher's role—it elevates it. Teachers need to help students navigate bias, develop critical thinking, and understand when AI is appropriate and when it isn't.</li><li><strong>We lack clear guidelines—so educators must set their own.</strong> In the absence of federal or state AI policies, individual educators need to establish clear ethical boundaries around data security, safety, and appropriate use. 
The technology is moving faster than regulation can keep up with it.</li><li><strong>Creative technologies extend beyond chatbots.</strong> From 3D printing and laser cutting that let students build physical objects to AR/VR simulations for medical training, there's a whole landscape of educational technology that emphasizes hands-on learning and creative exploration—not just AI conversation.</li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode, Priten and Yanni Chen explore what it actually looks like to build AI tools that support learning rather than shortcut it. Yanni, a master's student at Harvard Graduate School of Education and product developer at Deep Brain Academy, shares her experience creating an AI math tutor with a genuine commitment to scaffolding, cultural inclusivity, and keeping teachers central to the learning process.</p><p><strong>Key Takeaways:</strong></p><ul><li><strong>Scaffolding matters more than speed.</strong> AI tools often give direct answers because that's what they're engineered for. But real learning requires guiding students through the thinking process—something teachers do that AI cannot replicate. Educators should look for tools that provide step-by-step guidance rather than instant solutions.</li><li><strong>Teacher skepticism is healthy—and often fades with use.</strong> Most teachers approach AI with skepticism, which is appropriate. But just like PowerPoint and video once were new classroom tools, AI becomes less intimidating through hands-on experience. The recommendation: start with personal, low-stakes use before thinking about classroom implementation.</li><li><strong>Gen Alpha's AI fluency makes teacher presence more important, not less.</strong> Students are already fluent AI users. This doesn't diminish the teacher's role—it elevates it. Teachers need to help students navigate bias, develop critical thinking, and understand when AI is appropriate and when it isn't.</li><li><strong>We lack clear guidelines—so educators must set their own.</strong> In the absence of federal or state AI policies, individual educators need to establish clear ethical boundaries around data security, safety, and appropriate use. 
The technology is moving faster than regulation can keep up with it.</li><li><strong>Creative technologies extend beyond chatbots.</strong> From 3D printing and laser cutting that let students build physical objects to AR/VR simulations for medical training, there's a whole landscape of educational technology that emphasizes hands-on learning and creative exploration—not just AI conversation.</li></ul>]]>
      </content:encoded>
      <pubDate>Tue, 31 Mar 2026 00:08:00 -0400</pubDate>
      <author>Priten Soundar-Shah</author>
      <enclosure url="https://media.transistor.fm/b9dccc3f/3096cd47.mp3" length="28907123" type="audio/mpeg"/>
      <itunes:author>Priten Soundar-Shah</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/sslsDvxPRGmA7N5C9q5Of3mauL1ivkgM6OuZiDFYAaE/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8xMzhh/ZTZlYTM4ZGM1NmM1/NmE0NzE5NjFjMWFl/Y2UxZC5wbmc.jpg"/>
      <itunes:duration>1806</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode, Priten and Yanni Chen explore what it actually looks like to build AI tools that support learning rather than shortcut it. Yanni, a master's student at Harvard Graduate School of Education and product developer at Deep Brain Academy, shares her experience creating an AI math tutor with a genuine commitment to scaffolding, cultural inclusivity, and keeping teachers central to the learning process.</p><p><strong>Key Takeaways:</strong></p><ul><li><strong>Scaffolding matters more than speed.</strong> AI tools often give direct answers because that's what they're engineered for. But real learning requires guiding students through the thinking process—something teachers do that AI cannot replicate. Educators should look for tools that provide step-by-step guidance rather than instant solutions.</li><li><strong>Teacher skepticism is healthy—and often fades with use.</strong> Most teachers approach AI with skepticism, which is appropriate. But just like PowerPoint and video once were new classroom tools, AI becomes less intimidating through hands-on experience. The recommendation: start with personal, low-stakes use before thinking about classroom implementation.</li><li><strong>Gen Alpha's AI fluency makes teacher presence more important, not less.</strong> Students are already fluent AI users. This doesn't diminish the teacher's role—it elevates it. Teachers need to help students navigate bias, develop critical thinking, and understand when AI is appropriate and when it isn't.</li><li><strong>We lack clear guidelines—so educators must set their own.</strong> In the absence of federal or state AI policies, individual educators need to establish clear ethical boundaries around data security, safety, and appropriate use. 
The technology is moving faster than regulation can keep up with it.</li><li><strong>Creative technologies extend beyond chatbots.</strong> From 3D printing and laser cutting that let students build physical objects to AR/VR simulations for medical training, there's a whole landscape of educational technology that emphasizes hands-on learning and creative exploration—not just AI conversation.</li></ul>]]>
      </itunes:summary>
      <itunes:keywords>edtech design,teacher input,product development,scaffolding,ai math tutor</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Guest" href="https://ethicaledtech.org/" img="https://img.transistorcdn.com/nI-yYtsz6CbMWfhgCUuk3r5MitxiueKOI4RNPyTUhAE/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8zZGE3/N2I4ODZhODRkYzFi/NTQwMTI0NTllY2E3/ZGFkZS5wbmc.jpg">Ethical Ed Tech: How Educators Can Lead on Digital Safety &amp;amp; AI in K-12</podcast:person>
      <podcast:person role="Host" href="https://priten.org" img="https://img.transistorcdn.com/JlYpjP0PmtU6_HZmHSsgaaNQgWcMD1eEmlB3smilNvk/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8yYTQz/ZDUzOTFhOTA0MDRl/OTBjMGEyMzhiNGYw/N2FmNy5qcGc.jpg">Priten Soundar-Shah</podcast:person>
      <podcast:person role="Guest" href="https://listen.priten.org/people/yanni-chen" img="https://img.transistorcdn.com/lLYiTzQ8wYY6eCw5l4GPeBsoJVJAxCIE4t7TDme-1KU/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8wZmM3/MmQzNWEyM2IzMDI4/ZDBmOWNhZWQ0ZDBm/YmQ3MC5qcGVn.jpg">Yanni Chen</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/b9dccc3f/transcript.vtt" type="text/vtt" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/b9dccc3f/transcript.txt" type="text/plain"/>
      <podcast:socialInteract protocol="atproto" uri="at://did:plc:lp33httd3l7fnkvwnv5kpei2/app.bsky.feed.post/3midcjp7s5b2v"/>
    </item>
    <item>
      <title>Is Surveillance Culture Ruining Trust in Schools? - Jessica Maddry</title>
      <itunes:episode>16</itunes:episode>
      <podcast:episode>16</podcast:episode>
      <itunes:title>Is Surveillance Culture Ruining Trust in Schools? - Jessica Maddry</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">c8f77edf-9be5-40f7-ad63-9da69955aad5</guid>
      <link>https://listen.priten.org/s1/16</link>
      <description>
        <![CDATA[<p>In this episode, Priten and Jessica Maddry examine how surveillance culture and rigid policy enforcement are eroding trust and genuine learning in schools. From cell phone bans that criminalize normal behavior to reading programs that strip away the joy of stories, they explore how the gap between written policies and their ethical implementation has created environments of control rather than connection. The conversation spans zero-tolerance enforcement, AI detection tools, and the critical importance of human relationships in education.</p><p><strong>Key Takeaways:</strong></p><ul><li><strong>Policies should serve ethics, not replace them.</strong> Following rules isn't the same as doing the right thing. When a student has their phone off in their pocket but gets suspended because it's not in their backpack, the punishment no longer serves the policy's original intent of reducing distraction.</li><li><strong>Surveillance culture damages the learning environment.</strong> Constant monitoring and zero-tolerance enforcement create an atmosphere where students feel unsafe and disengaged. When students associate school with punishment rather than growth, absenteeism and mental health crises follow naturally.</li><li><strong>Deep literacy is becoming a privilege again.</strong> Many students no longer read books from start to finish, instead consuming only passages for standardized tests. 
This loss of story-based learning strips away both the joy of reading and critical thinking skills.</li><li><strong>AI detection is an unwinnable arms race.</strong> The cycle of AI detectors, humanizers, and humanizer-detectors demonstrates a fundamental misunderstanding of how to address academic integrity—tools cannot replace the trust and relationships needed for genuine learning.</li><li><strong>Human connection is irreplaceable in education.</strong> Whether it's a professor scrapping class to process a difficult moment with students, or a teacher stepping aside to comfort a struggling child, the most impactful educational experiences come from authentic human relationships—something no technology can replicate.</li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode, Priten and Jessica Maddry examine how surveillance culture and rigid policy enforcement are eroding trust and genuine learning in schools. From cell phone bans that criminalize normal behavior to reading programs that strip away the joy of stories, they explore how the gap between written policies and their ethical implementation has created environments of control rather than connection. The conversation spans zero-tolerance enforcement, AI detection tools, and the critical importance of human relationships in education.</p><p><strong>Key Takeaways:</strong></p><ul><li><strong>Policies should serve ethics, not replace them.</strong> Following rules isn't the same as doing the right thing. When a student has their phone off in their pocket but gets suspended because it's not in their backpack, the punishment no longer serves the policy's original intent of reducing distraction.</li><li><strong>Surveillance culture damages the learning environment.</strong> Constant monitoring and zero-tolerance enforcement create an atmosphere where students feel unsafe and disengaged. When students associate school with punishment rather than growth, absenteeism and mental health crises follow naturally.</li><li><strong>Deep literacy is becoming a privilege again.</strong> Many students no longer read books from start to finish, instead consuming only passages for standardized tests. 
This loss of story-based learning strips away both the joy of reading and critical thinking skills.</li><li><strong>AI detection is an unwinnable arms race.</strong> The cycle of AI detectors, humanizers, and humanizer-detectors demonstrates a fundamental misunderstanding of how to address academic integrity—tools cannot replace the trust and relationships needed for genuine learning.</li><li><strong>Human connection is irreplaceable in education.</strong> Whether it's a professor scrapping class to process a difficult moment with students, or a teacher stepping aside to comfort a struggling child, the most impactful educational experiences come from authentic human relationships—something no technology can replicate.</li></ul>]]>
      </content:encoded>
      <pubDate>Thu, 26 Mar 2026 22:38:23 -0400</pubDate>
      <author>Priten Soundar-Shah</author>
      <enclosure url="https://media.transistor.fm/556355c9/e97c6c60.mp3" length="30942209" type="audio/mpeg"/>
      <itunes:author>Priten Soundar-Shah</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/W7hjMia53vHBuahXy2lE6ix20VQsTTDanydW9OXDmHs/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS82NDM2/MTZiZTk5ZDIxNTRm/NzI4ZWI5ZDQ1ZmIy/ZTgzYS5wbmc.jpg"/>
      <itunes:duration>1933</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode, Priten and Jessica Maddry examine how surveillance culture and rigid policy enforcement are eroding trust and genuine learning in schools. From cell phone bans that criminalize normal behavior to reading programs that strip away the joy of stories, they explore how the gap between written policies and their ethical implementation has created environments of control rather than connection. The conversation spans zero-tolerance enforcement, AI detection tools, and the critical importance of human relationships in education.</p><p><strong>Key Takeaways:</strong></p><ul><li><strong>Policies should serve ethics, not replace them.</strong> Following rules isn't the same as doing the right thing. When a student has their phone off in their pocket but gets suspended because it's not in their backpack, the punishment no longer serves the policy's original intent of reducing distraction.</li><li><strong>Surveillance culture damages the learning environment.</strong> Constant monitoring and zero-tolerance enforcement create an atmosphere where students feel unsafe and disengaged. When students associate school with punishment rather than growth, absenteeism and mental health crises follow naturally.</li><li><strong>Deep literacy is becoming a privilege again.</strong> Many students no longer read books from start to finish, instead consuming only passages for standardized tests. 
This loss of story-based learning strips away both the joy of reading and critical thinking skills.</li><li><strong>AI detection is an unwinnable arms race.</strong> The cycle of AI detectors, humanizers, and humanizer-detectors demonstrates a fundamental misunderstanding of how to address academic integrity—tools cannot replace the trust and relationships needed for genuine learning.</li><li><strong>Human connection is irreplaceable in education.</strong> Whether it's a professor scrapping class to process a difficult moment with students, or a teacher stepping aside to comfort a struggling child, the most impactful educational experiences come from authentic human relationships—something no technology can replicate.</li></ul>]]>
      </itunes:summary>
      <itunes:keywords>surveillance culture,school trust,student monitoring,privacy in schools,proctoring software</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Guest" href="https://ethicaledtech.org/" img="https://img.transistorcdn.com/nI-yYtsz6CbMWfhgCUuk3r5MitxiueKOI4RNPyTUhAE/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8zZGE3/N2I4ODZhODRkYzFi/NTQwMTI0NTllY2E3/ZGFkZS5wbmc.jpg">Ethical Ed Tech: How Educators Can Lead on Digital Safety &amp; AI in K-12</podcast:person>
      <podcast:person role="Host" href="https://priten.org" img="https://img.transistorcdn.com/JlYpjP0PmtU6_HZmHSsgaaNQgWcMD1eEmlB3smilNvk/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8yYTQz/ZDUzOTFhOTA0MDRl/OTBjMGEyMzhiNGYw/N2FmNy5qcGc.jpg">Priten Soundar-Shah</podcast:person>
      <podcast:person role="Guest" href="https://listen.priten.org/people/jessica-maddry" img="https://img.transistorcdn.com/c4TPsG6Ig3lzM7kwAgGAC_p6NA6ZoXlF5lcBspHNZjc/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8xNjQz/N2Q3NDM5NmQ3OTY2/Mjg5NjdhNjY5NTc5/OTg0Zi5wbmc.jpg">Jessica Maddry</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/556355c9/transcript.vtt" type="text/vtt" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/556355c9/transcript.txt" type="text/plain"/>
      <podcast:socialInteract protocol="atproto" uri="at://did:plc:lp33httd3l7fnkvwnv5kpei2/app.bsky.feed.post/3mhz3mzvnv22v"/>
    </item>
    <item>
      <title>What Does Representative Governance Mean for Our Future? - Nathán Goldberg</title>
      <itunes:episode>15</itunes:episode>
      <podcast:episode>15</podcast:episode>
      <itunes:title>What Does Representative Governance Mean for Our Future? - Nathán Goldberg</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">64f605ac-ad23-4420-9745-8257a3b61586</guid>
      <link>https://listen.priten.org/s1/15</link>
      <description>
        <![CDATA[<p>In this episode, Priten speaks with Nathán Goldberg, a philosopher-statistician whose career weaves together two unlikely threads: professional soccer and democratic activism. As Vice President of the US Soccer Federation and founder of both Harvard Forward and Bluebonnet Data, Nathán has spent years thinking about who gets to sit in the rooms where decisions are made—and why it matters.</p><p><strong>Key Takeaways:</strong></p><ul><li><strong>Voting isn't enough—perspective is.</strong> The people impacted by decisions need to be in the rooms where those decisions get made.</li><li><strong>Outsiders can win.</strong> Harvard Forward gathered 4,500 signatures on parchment paper, won board seats, and a decade of resistance to divestment collapsed within a year.</li><li><strong>Institutions resist until they can't.</strong> Harvard ignored them, then attacked them. It didn't work.</li><li><strong>The model scales.</strong> The same playbook worked at Yale and Penn State. One elected climate scientist shifted Penn State's investment policy.</li><li><strong>Soccer has the same problem.</strong> 4 million youth players, zero recent youth players in governance. </li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode, Priten speaks with Nathán Goldberg, a philosopher-statistician whose career weaves together two unlikely threads: professional soccer and democratic activism. As Vice President of the US Soccer Federation and founder of both Harvard Forward and Bluebonnet Data, Nathán has spent years thinking about who gets to sit in the rooms where decisions are made—and why it matters.</p><p><strong>Key Takeaways:</strong></p><ul><li><strong>Voting isn't enough—perspective is.</strong> The people impacted by decisions need to be in the rooms where those decisions get made.</li><li><strong>Outsiders can win.</strong> Harvard Forward gathered 4,500 signatures on parchment paper, won board seats, and a decade of resistance to divestment collapsed within a year.</li><li><strong>Institutions resist until they can't.</strong> Harvard ignored them, then attacked them. It didn't work.</li><li><strong>The model scales.</strong> The same playbook worked at Yale and Penn State. One elected climate scientist shifted Penn State's investment policy.</li><li><strong>Soccer has the same problem.</strong> 4 million youth players, zero recent youth players in governance. </li></ul>]]>
      </content:encoded>
      <pubDate>Tue, 24 Mar 2026 23:59:07 -0400</pubDate>
      <author>Priten Soundar-Shah</author>
      <enclosure url="https://media.transistor.fm/fc57e2ad/1f9a88e0.mp3" length="46657625" type="audio/mpeg"/>
      <itunes:author>Priten Soundar-Shah</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/cYRu84uEPSDevfd2ggSnKmVTGsbLm4tqtQtx488etpE/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS80Yjhh/MjE3OGFjZjE0ZGQ4/NGUxNjUzOTVjODE3/ODQ1NC5wbmc.jpg"/>
      <itunes:duration>2915</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode, Priten speaks with Nathán Goldberg, a philosopher-statistician whose career weaves together two unlikely threads: professional soccer and democratic activism. As Vice President of the US Soccer Federation and founder of both Harvard Forward and Bluebonnet Data, Nathán has spent years thinking about who gets to sit in the rooms where decisions are made—and why it matters.</p><p><strong>Key Takeaways:</strong></p><ul><li><strong>Voting isn't enough—perspective is.</strong> The people impacted by decisions need to be in the rooms where those decisions get made.</li><li><strong>Outsiders can win.</strong> Harvard Forward gathered 4,500 signatures on parchment paper, won board seats, and a decade of resistance to divestment collapsed within a year.</li><li><strong>Institutions resist until they can't.</strong> Harvard ignored them, then attacked them. It didn't work.</li><li><strong>The model scales.</strong> The same playbook worked at Yale and Penn State. One elected climate scientist shifted Penn State's investment policy.</li><li><strong>Soccer has the same problem.</strong> 4 million youth players, zero recent youth players in governance. </li></ul>]]>
      </itunes:summary>
      <itunes:keywords>representative governance,civic education,democracy,student voice,civic participation</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://priten.org" img="https://img.transistorcdn.com/JlYpjP0PmtU6_HZmHSsgaaNQgWcMD1eEmlB3smilNvk/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8yYTQz/ZDUzOTFhOTA0MDRl/OTBjMGEyMzhiNGYw/N2FmNy5qcGc.jpg">Priten Soundar-Shah</podcast:person>
      <podcast:person role="Guest" href="https://listen.priten.org/people/nathan-goldberg" img="https://img.transistorcdn.com/s9Nx_YLECFOOSZltyHzFE3faI5Harkx9j_sR8kywIcM/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9jYzY0/ODQwMGE5ZjhhZjIz/NTE4NWRmY2JkMTZj/YTM1NS5qcGc.jpg">Nathán Goldberg</podcast:person>
      <podcast:person role="Guest" href="https://new.academy4sc.org" img="https://img.transistorcdn.com/8-6dfw2nE1hkunZpu0jTBdTfzx2arWjLlCBNXi5-Au4/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9iZWYw/NGMyY2NjNGYzYjE2/MzM4Njc3MmEzZGRj/NmM4ZS5wbmc.jpg">Academy 4 Social Civics</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/fc57e2ad/transcript.vtt" type="text/vtt" rel="captions"/>
      <podcast:transcript url="https://share.transistor.fm/s/fc57e2ad/transcript.txt" type="text/plain"/>
      <podcast:socialInteract protocol="atproto" uri="at://did:plc:lp33httd3l7fnkvwnv5kpei2/app.bsky.feed.post/3mhu7a44g7e2d"/>
    </item>
    <item>
      <title>How Do We Teach the Journey When AI Offers the Destination? - Varun Gupta</title>
      <itunes:episode>14</itunes:episode>
      <podcast:episode>14</podcast:episode>
      <itunes:title>How Do We Teach the Journey When AI Offers the Destination? - Varun Gupta</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">857e84d8-104c-4b7a-9dd9-17ef4e09412c</guid>
      <link>https://listen.priten.org/s1/14</link>
      <description>
        <![CDATA[<p>In this episode, Priten speaks with Varun Gupta, an Accounting and Economics professor at Wharton County Junior College in the Houston area who has been teaching since 2007. Varun is refreshingly candid about his own complicated relationship with AI—he uses it extensively for lesson planning, assignment creation, and communication, but worries deeply about what happens when students skip the grind entirely. </p><p><strong>Key Takeaways:</strong></p><ul><li><strong>The helicopter problem is real.</strong> Using AI to get answers without effort is like taking a helicopter to the top of Mount Everest. You get there, but you missed the point. The grind, the failure, the figuring-it-out—that's where the learning lives.</li><li><strong>Cognitive offloading is already happening to teachers, too.</strong> Varun no longer does mental math. He GPS's the airport he's been to hundreds of times. AI is next. The concern isn't hypothetical—it's already underway for him personally.</li><li><strong>Post-COVID is the bigger shift, not post-ChatGPT.</strong> Students who came through COVID developed habits of not showing up, not following through, and not asking questions. That behavioral shift is more visible than any change attributable to AI alone.</li><li><strong>The stress is gone—and that's the tell.</strong> Before ChatGPT, students peppered him with term paper questions all semester. Now? Silence. They're not less anxious because they're more prepared. They're less anxious because they've already decided how they'll produce the paper.</li><li><strong>There's inherent hypocrisy in the dynamic—and it's worth naming.</strong> Using AI to create assignments while discouraging students from using it to complete them isn't perfectly clean. Varun acknowledges it. 
The distinction is in where the journey matters: for the teacher creating the prompt, or for the student doing the thinking.</li><li><strong>The human value is in the face-to-face.</strong> In asynchronous online courses, the line between professor and bot is thin. Where Varun sees his irreplaceable value is in the in-person relationship—lived experience, empathy, career conversations, and the daily modeling of what professional effort actually looks like.</li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode, Priten speaks with Varun Gupta, an Accounting and Economics professor at Wharton County Junior College in the Houston area who has been teaching since 2007. Varun is refreshingly candid about his own complicated relationship with AI—he uses it extensively for lesson planning, assignment creation, and communication, but worries deeply about what happens when students skip the grind entirely. </p><p><strong>Key Takeaways:</strong></p><ul><li><strong>The helicopter problem is real.</strong> Using AI to get answers without effort is like taking a helicopter to the top of Mount Everest. You get there, but you missed the point. The grind, the failure, the figuring-it-out—that's where the learning lives.</li><li><strong>Cognitive offloading is already happening to teachers, too.</strong> Varun no longer does mental math. He GPS's the airport he's been to hundreds of times. AI is next. The concern isn't hypothetical—it's already underway for him personally.</li><li><strong>Post-COVID is the bigger shift, not post-ChatGPT.</strong> Students who came through COVID developed habits of not showing up, not following through, and not asking questions. That behavioral shift is more visible than any change attributable to AI alone.</li><li><strong>The stress is gone—and that's the tell.</strong> Before ChatGPT, students peppered him with term paper questions all semester. Now? Silence. They're not less anxious because they're more prepared. They're less anxious because they've already decided how they'll produce the paper.</li><li><strong>There's inherent hypocrisy in the dynamic—and it's worth naming.</strong> Using AI to create assignments while discouraging students from using it to complete them isn't perfectly clean. Varun acknowledges it. 
The distinction is in where the journey matters: for the teacher creating the prompt, or for the student doing the thinking.</li><li><strong>The human value is in the face-to-face.</strong> In asynchronous online courses, the line between professor and bot is thin. Where Varun sees his irreplaceable value is in the in-person relationship—lived experience, empathy, career conversations, and the daily modeling of what professional effort actually looks like.</li></ul>]]>
      </content:encoded>
      <pubDate>Thu, 19 Mar 2026 00:08:00 -0400</pubDate>
      <author>Priten Soundar-Shah</author>
      <enclosure url="https://media.transistor.fm/6fd895e2/54bfa954.mp3" length="27687173" type="audio/mpeg"/>
      <itunes:author>Priten Soundar-Shah</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/ODtamm52OKoXluhQC6u9WOdytq7uva_-KUX-Go5xaAo/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS83ZGVh/MjVhMTAzYjJjZDc2/ZDFmYTE0MGVhOTdj/OGRhMy5wbmc.jpg"/>
      <itunes:duration>1729</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode, Priten speaks with Varun Gupta, an Accounting and Economics professor at Wharton County Junior College in the Houston area who has been teaching since 2007. Varun is refreshingly candid about his own complicated relationship with AI—he uses it extensively for lesson planning, assignment creation, and communication, but worries deeply about what happens when students skip the grind entirely. </p><p><strong>Key Takeaways:</strong></p><ul><li><strong>The helicopter problem is real.</strong> Using AI to get answers without effort is like taking a helicopter to the top of Mount Everest. You get there, but you missed the point. The grind, the failure, the figuring-it-out—that's where the learning lives.</li><li><strong>Cognitive offloading is already happening to teachers, too.</strong> Varun no longer does mental math. He GPS's the airport he's been to hundreds of times. AI is next. The concern isn't hypothetical—it's already underway for him personally.</li><li><strong>Post-COVID is the bigger shift, not post-ChatGPT.</strong> Students who came through COVID developed habits of not showing up, not following through, and not asking questions. That behavioral shift is more visible than any change attributable to AI alone.</li><li><strong>The stress is gone—and that's the tell.</strong> Before ChatGPT, students peppered him with term paper questions all semester. Now? Silence. They're not less anxious because they're more prepared. They're less anxious because they've already decided how they'll produce the paper.</li><li><strong>There's inherent hypocrisy in the dynamic—and it's worth naming.</strong> Using AI to create assignments while discouraging students from using it to complete them isn't perfectly clean. Varun acknowledges it. 
The distinction is in where the journey matters: for the teacher creating the prompt, or for the student doing the thinking.</li><li><strong>The human value is in the face-to-face.</strong> In asynchronous online courses, the line between professor and bot is thin. Where Varun sees his irreplaceable value is in the in-person relationship—lived experience, empathy, career conversations, and the daily modeling of what professional effort actually looks like.</li></ul>]]>
      </itunes:summary>
      <itunes:keywords>learning process,ai shortcuts,productive struggle,student growth,destination vs journey</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://priten.org" img="https://img.transistorcdn.com/JlYpjP0PmtU6_HZmHSsgaaNQgWcMD1eEmlB3smilNvk/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8yYTQz/ZDUzOTFhOTA0MDRl/OTBjMGEyMzhiNGYw/N2FmNy5qcGc.jpg">Priten Soundar-Shah</podcast:person>
      <podcast:person role="Guest" href="https://ethicaledtech.org/" img="https://img.transistorcdn.com/nI-yYtsz6CbMWfhgCUuk3r5MitxiueKOI4RNPyTUhAE/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8zZGE3/N2I4ODZhODRkYzFi/NTQwMTI0NTllY2E3/ZGFkZS5wbmc.jpg">Ethical Ed Tech: How Educators Can Lead on Digital Safety &amp; AI in K-12</podcast:person>
      <podcast:person role="Guest" href="https://listen.priten.org/people/varun-gupta" img="https://img.transistorcdn.com/RGlvoiWrbzmwzYn6ey6l5_h8mHMqg9F9dWQoUiIBMdY/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8wZjJm/NjRlZTRkY2Y1NTBi/OGVjOGI1NWJkMDk2/Mzk0ZC5wbmc.jpg">Varun Gupta</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/6fd895e2/transcript.txt" type="text/plain"/>
      <podcast:transcript url="https://share.transistor.fm/s/6fd895e2/transcript.vtt" type="text/vtt" rel="captions"/>
      <podcast:socialInteract protocol="atproto" uri="at://did:plc:lp33httd3l7fnkvwnv5kpei2/app.bsky.feed.post/3mhfabcppkn22"/>
    </item>
    <item>
      <title>Can We Preserve Core Classroom Values While Integrating Ed Tech? - Brian Tash</title>
      <itunes:episode>13</itunes:episode>
      <podcast:episode>13</podcast:episode>
      <itunes:title>Can We Preserve Core Classroom Values While Integrating Ed Tech? - Brian Tash</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">3863a1db-7c78-4ea1-933f-fe952296bc11</guid>
      <link>https://listen.priten.org/s1/13</link>
      <description>
        <![CDATA[<p>In this episode, Priten speaks with Brian Tash, an elementary school teacher with nearly 30 years of experience who has witnessed the complete arc of education technology—from Scantrons to Google Classroom to AI. Brian shares how he balances technology integration with preserving fundamental skills like reading stamina and handwriting. The conversation covers his transparent approach to using AI for faster student feedback, why he's concerned about declining empathy and attention spans post-COVID, how he teaches prompt engineering to third and fourth graders, and his hope that educators will become more mindful about <em>why</em> they're using technology rather than just adopting everything new. He argues that personal connection, problem-solving, and collaboration are what students need most—and those can't come from a screen.</p><p><strong>Key Takeaways:</strong></p><ul><li><strong>Follow the 80-20 rule with AI.</strong> AI gets you 80% of the way—the other 20% is you adding your own elements. This applies to teachers giving feedback and students creating work.</li><li><strong>Transparency builds trust.</strong> When students understand <em>why</em> you're using AI for feedback, they embrace it. Brian's study found 90% of students were in favor once they understood the reasoning.</li><li><strong>Technology can't replace human connection.</strong> Students need to learn how to talk to each other, problem-solve collaboratively, and develop empathy—skills that don't come from screens.</li><li><strong>Stamina is the real crisis.</strong> Post-COVID students struggle to push through hard things. The growth mindset isn't there. Writing a paragraph makes their hands hurt.</li><li><strong>Teach prompting, not just usage.</strong> Focus on prompt engineering—how to get what you want from AI. 
Experiment with students: change the words, add details, see what happens.</li><li><strong>Standards-based grading may help.</strong> With clear standards, teachers can focus instruction, use AI to target specific skills, and have more time for the human elements once mastery is achieved.</li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode, Priten speaks with Brian Tash, an elementary school teacher with nearly 30 years of experience who has witnessed the complete arc of education technology—from Scantrons to Google Classroom to AI. Brian shares how he balances technology integration with preserving fundamental skills like reading stamina and handwriting. The conversation covers his transparent approach to using AI for faster student feedback, why he's concerned about declining empathy and attention spans post-COVID, how he teaches prompt engineering to third and fourth graders, and his hope that educators will become more mindful about <em>why</em> they're using technology rather than just adopting everything new. He argues that personal connection, problem-solving, and collaboration are what students need most—and those can't come from a screen.</p><p><strong>Key Takeaways:</strong></p><ul><li><strong>Follow the 80-20 rule with AI.</strong> AI gets you 80% of the way—the other 20% is you adding your own elements. This applies to teachers giving feedback and students creating work.</li><li><strong>Transparency builds trust.</strong> When students understand <em>why</em> you're using AI for feedback, they embrace it. Brian's study found 90% of students were in favor once they understood the reasoning.</li><li><strong>Technology can't replace human connection.</strong> Students need to learn how to talk to each other, problem-solve collaboratively, and develop empathy—skills that don't come from screens.</li><li><strong>Stamina is the real crisis.</strong> Post-COVID students struggle to push through hard things. The growth mindset isn't there. Writing a paragraph makes their hands hurt.</li><li><strong>Teach prompting, not just usage.</strong> Focus on prompt engineering—how to get what you want from AI. 
Experiment with students: change the words, add details, see what happens.</li><li><strong>Standards-based grading may help.</strong> With clear standards, teachers can focus instruction, use AI to target specific skills, and have more time for the human elements once mastery is achieved.</li></ul>]]>
      </content:encoded>
      <pubDate>Tue, 17 Mar 2026 00:14:00 -0400</pubDate>
      <author>Priten Soundar-Shah</author>
      <enclosure url="https://media.transistor.fm/7a3d3e5c/85559dcb.mp3" length="27326109" type="audio/mpeg"/>
      <itunes:author>Priten Soundar-Shah</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/y-EVd9vSkSUAf3Ax2jNz6QebdgAhBTc2arLDzU8JKRQ/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8xNjZi/OTQyMjUxNWU2YjNl/YTcwMWY3M2Q3OWJk/ODcwYi5wbmc.jpg"/>
      <itunes:duration>1707</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode, Priten speaks with Brian Tash, an elementary school teacher with nearly 30 years of experience who has witnessed the complete arc of education technology—from Scantrons to Google Classroom to AI. Brian shares how he balances technology integration with preserving fundamental skills like reading stamina and handwriting. The conversation covers his transparent approach to using AI for faster student feedback, why he's concerned about declining empathy and attention spans post-COVID, how he teaches prompt engineering to third and fourth graders, and his hope that educators will become more mindful about <em>why</em> they're using technology rather than just adopting everything new. He argues that personal connection, problem-solving, and collaboration are what students need most—and those can't come from a screen.</p><p><strong>Key Takeaways:</strong></p><ul><li><strong>Follow the 80-20 rule with AI.</strong> AI gets you 80% of the way—the other 20% is you adding your own elements. This applies to teachers giving feedback and students creating work.</li><li><strong>Transparency builds trust.</strong> When students understand <em>why</em> you're using AI for feedback, they embrace it. Brian's study found 90% of students were in favor once they understood the reasoning.</li><li><strong>Technology can't replace human connection.</strong> Students need to learn how to talk to each other, problem-solve collaboratively, and develop empathy—skills that don't come from screens.</li><li><strong>Stamina is the real crisis.</strong> Post-COVID students struggle to push through hard things. The growth mindset isn't there. Writing a paragraph makes their hands hurt.</li><li><strong>Teach prompting, not just usage.</strong> Focus on prompt engineering—how to get what you want from AI. 
Experiment with students: change the words, add details, see what happens.</li><li><strong>Standards-based grading may help.</strong> With clear standards, teachers can focus instruction, use AI to target specific skills, and have more time for the human elements once mastery is achieved.</li></ul>]]>
      </itunes:summary>
      <itunes:keywords>classroom values,edtech integration,teacher autonomy,core pedagogy,educational technology</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://priten.org" img="https://img.transistorcdn.com/JlYpjP0PmtU6_HZmHSsgaaNQgWcMD1eEmlB3smilNvk/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8yYTQz/ZDUzOTFhOTA0MDRl/OTBjMGEyMzhiNGYw/N2FmNy5qcGc.jpg">Priten Soundar-Shah</podcast:person>
      <podcast:person role="Guest" href="https://ethicaledtech.org/" img="https://img.transistorcdn.com/nI-yYtsz6CbMWfhgCUuk3r5MitxiueKOI4RNPyTUhAE/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8zZGE3/N2I4ODZhODRkYzFi/NTQwMTI0NTllY2E3/ZGFkZS5wbmc.jpg">Ethical Ed Tech: How Educators Can Lead on Digital Safety &amp; AI in K-12</podcast:person>
      <podcast:person role="Guest" href="https://listen.priten.org/people/brian-tash">Brian Tash</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/7a3d3e5c/transcript.txt" type="text/plain"/>
      <podcast:transcript url="https://share.transistor.fm/s/7a3d3e5c/transcript.vtt" type="text/vtt" rel="captions"/>
      <podcast:socialInteract protocol="atproto" uri="at://did:plc:lp33httd3l7fnkvwnv5kpei2/app.bsky.feed.post/3mha4de4nlv2t"/>
    </item>
    <item>
      <title>Why Do We Teach Foreign Languages When AI is Multilingual? - Noelia Pozo</title>
      <itunes:episode>12</itunes:episode>
      <podcast:episode>12</podcast:episode>
      <itunes:title>Why Do We Teach Foreign Languages When AI is Multilingual? - Noelia Pozo</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">614f0dd9-2e17-4272-9fc8-f4e8d06f8291</guid>
      <link>https://listen.priten.org/s1/12</link>
      <description>
        <![CDATA[<p>In this episode, Priten speaks with Noelia Pozo, a high school Spanish and French teacher with nearly two decades of experience who now heads the Foreign Language and Classical Department at her school. Noelia shares how she transformed her classroom by using AI openly alongside students rather than policing it. The conversation covers how she handles AI-generated work through relationship-building rather than detection tools, why she collects phones in a "Telephone Hotel," how exploring AI bias with students sparked deeper learning than lectures, and her frustration with colleagues who refuse to adapt while hypocritically using AI themselves. She argues that the question isn't whether to engage with these tools, but how to do so while preserving human connection, critical thinking, and genuine learning.</p><p><strong>Key Takeaways:</strong></p><ul><li><strong>Show students language is already in their lives.</strong> From "in lieu of" to Chipotle menus—they're already speaking foreign languages without realizing it. Recognition breeds respect.</li><li><strong>AI can't replace human connection.</strong> You can't build trust through a machine. Professional relationships require authentic communication, not a technological relay.</li><li><strong>Create honesty, not surveillance.</strong> Use AI openly alongside students and ask only for transparency. When trust flows both ways, students voluntarily admit mistakes—and learn from them.</li><li><strong>Teach students to verify AI output.</strong> AI isn't infallible. Once you put something in your paper, you own it—right or wrong.</li><li><strong>Explore AI bias together.</strong> "Nobody looks like me" in AI images sparked deeper conversations about bias and better prompting than any lecture could.</li><li><strong>Adapt or be replaced.</strong> Teachers won't lose jobs to AI—but they may lose them to teachers who use AI well.</li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode, Priten speaks with Noelia Pozo, a high school Spanish and French teacher with nearly two decades of experience who now heads the Foreign Language and Classical Department at her school. Noelia shares how she transformed her classroom by using AI openly alongside students rather than policing it. The conversation covers how she handles AI-generated work through relationship-building rather than detection tools, why she collects phones in a "Telephone Hotel," how exploring AI bias with students sparked deeper learning than lectures, and her frustration with colleagues who refuse to adapt while hypocritically using AI themselves. She argues that the question isn't whether to engage with these tools, but how to do so while preserving human connection, critical thinking, and genuine learning.</p><p><strong>Key Takeaways:</strong></p><ul><li><strong>Show students language is already in their lives.</strong> From "in lieu of" to Chipotle menus—they're already speaking foreign languages without realizing it. Recognition breeds respect.</li><li><strong>AI can't replace human connection.</strong> You can't build trust through a machine. Professional relationships require authentic communication, not a technological relay.</li><li><strong>Create honesty, not surveillance.</strong> Use AI openly alongside students and ask only for transparency. When trust flows both ways, students voluntarily admit mistakes—and learn from them.</li><li><strong>Teach students to verify AI output.</strong> AI isn't infallible. Once you put something in your paper, you own it—right or wrong.</li><li><strong>Explore AI bias together.</strong> "Nobody looks like me" in AI images sparked deeper conversations about bias and better prompting than any lecture could.</li><li><strong>Adapt or be replaced.</strong> Teachers won't lose jobs to AI—but they may lose them to teachers who use AI well.</li></ul>]]>
      </content:encoded>
      <pubDate>Thu, 12 Mar 2026 00:06:00 -0400</pubDate>
      <author>Priten Soundar-Shah</author>
      <enclosure url="https://media.transistor.fm/d92c13a7/41bfc133.mp3" length="27802774" type="audio/mpeg"/>
      <itunes:author>Priten Soundar-Shah</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/safIHMnUX_JknS0NKLdvyQQicyeXVhFGYUWvfj10bJI/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS82ZGJj/NDM1N2IyYzEwMmM2/Zjk4NDE4ZGU1YmRl/YjFlZi5wbmc.jpg"/>
      <itunes:duration>1736</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode, Priten speaks with Noelia Pozo, a high school Spanish and French teacher with nearly two decades of experience who now heads the Foreign Language and Classical Department at her school. Noelia shares how she transformed her classroom by using AI openly alongside students rather than policing it. The conversation covers how she handles AI-generated work through relationship-building rather than detection tools, why she collects phones in a "Telephone Hotel," how exploring AI bias with students sparked deeper learning than lectures, and her frustration with colleagues who refuse to adapt while hypocritically using AI themselves. She argues that the question isn't whether to engage with these tools, but how to do so while preserving human connection, critical thinking, and genuine learning.</p><p><strong>Key Takeaways:</strong></p><ul><li><strong>Show students language is already in their lives.</strong> From "in lieu of" to Chipotle menus—they're already speaking foreign languages without realizing it. Recognition breeds respect.</li><li><strong>AI can't replace human connection.</strong> You can't build trust through a machine. Professional relationships require authentic communication, not a technological relay.</li><li><strong>Create honesty, not surveillance.</strong> Use AI openly alongside students and ask only for transparency. When trust flows both ways, students voluntarily admit mistakes—and learn from them.</li><li><strong>Teach students to verify AI output.</strong> AI isn't infallible. Once you put something in your paper, you own it—right or wrong.</li><li><strong>Explore AI bias together.</strong> "Nobody looks like me" in AI images sparked deeper conversations about bias and better prompting than any lecture could.</li><li><strong>Adapt or be replaced.</strong> Teachers won't lose jobs to AI—but they may lose them to teachers who use AI well.</li></ul>]]>
      </itunes:summary>
      <itunes:keywords>foreign language education,ai translation,multilingual ai,language pedagogy,cultural competence</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://priten.org" img="https://img.transistorcdn.com/JlYpjP0PmtU6_HZmHSsgaaNQgWcMD1eEmlB3smilNvk/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8yYTQz/ZDUzOTFhOTA0MDRl/OTBjMGEyMzhiNGYw/N2FmNy5qcGc.jpg">Priten Soundar-Shah</podcast:person>
      <podcast:person role="Guest" href="https://ethicaledtech.org/" img="https://img.transistorcdn.com/nI-yYtsz6CbMWfhgCUuk3r5MitxiueKOI4RNPyTUhAE/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8zZGE3/N2I4ODZhODRkYzFi/NTQwMTI0NTllY2E3/ZGFkZS5wbmc.jpg">Ethical Ed Tech: How Educators Can Lead on Digital Safety &amp; AI in K-12</podcast:person>
      <podcast:person role="Guest" href="https://listen.priten.org/people/noelia-pozo">Noelia Pozo</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/d92c13a7/transcript.txt" type="text/plain"/>
      <podcast:transcript url="https://share.transistor.fm/s/d92c13a7/transcript.vtt" type="text/vtt" rel="captions"/>
      <podcast:socialInteract protocol="atproto" uri="at://did:plc:lp33httd3l7fnkvwnv5kpei2/app.bsky.feed.post/3mgtjkwkxhx27"/>
    </item>
    <item>
      <title>Do Kids Need Phones? - Shon Holland</title>
      <itunes:episode>11</itunes:episode>
      <podcast:episode>11</podcast:episode>
      <itunes:title>Do Kids Need Phones? - Shon Holland</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">f86eef82-483f-4ac5-bc89-3213bb944bd1</guid>
      <link>https://listen.priten.org/s1/11</link>
      <description>
        <![CDATA[<p>In this episode, Priten speaks with Shon Holland, a middle school science teacher at Sells Middle School in Dublin, Ohio. After a first career in hazardous waste management and environmental health and safety, Shon made the leap to education about 20 years ago. His experience with both seventh and eighth graders gives him frontline insight into how adolescents interact with technology. The conversation explores his balanced approach to tools like GoGuardian—using technology to monitor without creating surveillance culture—why he believes giving students responsibility actually lightens a teacher's load, and his blunt assessment that smartphones simply aren't healthy for middle schoolers.</p><p><strong>Key Takeaways:</strong></p><ul><li><strong>Misuse is inevitable—guidance is the goal.</strong> Middle schoolers can misuse anything from rulers to AI. Instead of trying to eliminate misuse, focus on teaching students how to make tools work for them and guiding them when they stumble.</li><li><strong>Relationships trump detection tools.</strong> Teachers who know their students can spot AI-generated work by recognizing when writing doesn't match a student's voice or level—no software required. Treat violations as learning moments, not punishments.</li><li><strong>Give responsibility to gain freedom.</strong> When you trust students with responsibility and show them consequences aren't personal, they give you space to actually teach. The more ownership they have, the less you need to police.</li><li><strong>Parents need to parent.</strong> The research on smartphones and adolescent brains is irrefutable. Kids don't need iPhones—they need dumb phones, landlines, and parents willing to set boundaries even when their children push back.</li><li><strong>Know the time and place.</strong> Technology and AI are fantastic tools that can differentiate instruction, translate languages, and unlock learning. But sometimes you just need human brain power. The skill is knowing when to use tech and when to walk away.</li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode, Priten speaks with Shon Holland, a middle school science teacher at Sells Middle School in Dublin, Ohio. After a first career in hazardous waste management and environmental health and safety, Shon made the leap to education about 20 years ago. His experience with both seventh and eighth graders gives him frontline insight into how adolescents interact with technology. The conversation explores his balanced approach to tools like GoGuardian—using technology to monitor without creating surveillance culture—why he believes giving students responsibility actually lightens a teacher's load, and his blunt assessment that smartphones simply aren't healthy for middle schoolers.</p><p><strong>Key Takeaways:</strong></p><ul><li><strong>Misuse is inevitable—guidance is the goal.</strong> Middle schoolers can misuse anything from rulers to AI. Instead of trying to eliminate misuse, focus on teaching students how to make tools work for them and guiding them when they stumble.</li><li><strong>Relationships trump detection tools.</strong> Teachers who know their students can spot AI-generated work by recognizing when writing doesn't match a student's voice or level—no software required. Treat violations as learning moments, not punishments.</li><li><strong>Give responsibility to gain freedom.</strong> When you trust students with responsibility and show them consequences aren't personal, they give you space to actually teach. The more ownership they have, the less you need to police.</li><li><strong>Parents need to parent.</strong> The research on smartphones and adolescent brains is irrefutable. Kids don't need iPhones—they need dumb phones, landlines, and parents willing to set boundaries even when their children push back.</li><li><strong>Know the time and place.</strong> Technology and AI are fantastic tools that can differentiate instruction, translate languages, and unlock learning. But sometimes you just need human brain power. The skill is knowing when to use tech and when to walk away.</li></ul>]]>
      </content:encoded>
      <pubDate>Tue, 10 Mar 2026 22:35:06 -0400</pubDate>
      <author>Priten Soundar-Shah</author>
      <enclosure url="https://media.transistor.fm/53c29894/d00ad131.mp3" length="25424418" type="audio/mpeg"/>
      <itunes:author>Priten Soundar-Shah</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/bYGXCRRWqtms2OYFTcvBW-YRU2lU4RFGzWXCN0GlwCM/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8yNGUz/NGUzMDU2MzE4ZjJm/YmMyNzE5MWYxMWM3/OTBmYi5wbmc.jpg"/>
      <itunes:duration>1588</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode, Priten speaks with Shon Holland, a middle school science teacher at Sells Middle School in Dublin, Ohio. After a first career in hazardous waste management and environmental health and safety, Shon made the leap to education about 20 years ago. His experience with both seventh and eighth graders gives him frontline insight into how adolescents interact with technology. The conversation explores his balanced approach to tools like GoGuardian—using technology to monitor without creating surveillance culture—why he believes giving students responsibility actually lightens a teacher's load, and his blunt assessment that smartphones simply aren't healthy for middle schoolers.</p><p><strong>Key Takeaways:</strong></p><ul><li><strong>Misuse is inevitable—guidance is the goal.</strong> Middle schoolers can misuse anything from rulers to AI. Instead of trying to eliminate misuse, focus on teaching students how to make tools work for them and guiding them when they stumble.</li><li><strong>Relationships trump detection tools.</strong> Teachers who know their students can spot AI-generated work by recognizing when writing doesn't match a student's voice or level—no software required. Treat violations as learning moments, not punishments.</li><li><strong>Give responsibility to gain freedom.</strong> When you trust students with responsibility and show them consequences aren't personal, they give you space to actually teach. The more ownership they have, the less you need to police.</li><li><strong>Parents need to parent.</strong> The research on smartphones and adolescent brains is irrefutable. Kids don't need iPhones—they need dumb phones, landlines, and parents willing to set boundaries even when their children push back.</li><li><strong>Know the time and place.</strong> Technology and AI are fantastic tools that can differentiate instruction, translate languages, and unlock learning. But sometimes you just need human brain power. The skill is knowing when to use tech and when to walk away.</li></ul>]]>
      </itunes:summary>
      <itunes:keywords>phone bans,youth and technology,parenting and screens,student attention,device policy</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://priten.org" img="https://img.transistorcdn.com/JlYpjP0PmtU6_HZmHSsgaaNQgWcMD1eEmlB3smilNvk/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8yYTQz/ZDUzOTFhOTA0MDRl/OTBjMGEyMzhiNGYw/N2FmNy5qcGc.jpg">Priten Soundar-Shah</podcast:person>
      <podcast:person role="Guest" href="https://ethicaledtech.org/" img="https://img.transistorcdn.com/nI-yYtsz6CbMWfhgCUuk3r5MitxiueKOI4RNPyTUhAE/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8zZGE3/N2I4ODZhODRkYzFi/NTQwMTI0NTllY2E3/ZGFkZS5wbmc.jpg">Ethical Ed Tech: How Educators Can Lead on Digital Safety &amp; AI in K-12</podcast:person>
      <podcast:person role="Guest" href="https://listen.priten.org/people/shon-holland" img="https://img.transistorcdn.com/DBAY6SuL4f1d-ltIvWlXDF4LSs1c36jj2oRwSbgTv-U/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS81MTY1/M2VmODM4MWQyYTBj/MzI1NjA4MjZhZjVj/NzZiZi5wbmc.jpg">Shon Holland</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/53c29894/transcript.txt" type="text/plain"/>
      <podcast:transcript url="https://share.transistor.fm/s/53c29894/transcript.vtt" type="text/vtt" rel="captions"/>
      <podcast:socialInteract protocol="atproto" uri="at://did:plc:lp33httd3l7fnkvwnv5kpei2/app.bsky.feed.post/3mgojvpa7ne2t"/>
    </item>
    <item>
      <title>How Can AI Support Writing Instruction? - Kim Cowperthwaite</title>
      <itunes:episode>10</itunes:episode>
      <podcast:episode>10</podcast:episode>
      <itunes:title>How Can AI Support Writing Instruction? - Kim Cowperthwaite</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">389e6b80-d515-41ee-bfe4-83431ac222fb</guid>
      <link>https://listen.priten.org/s1/10</link>
      <description>
        <![CDATA[<p>In this episode, Priten speaks with Kim Cowperthwaite, an English Language Arts teacher at Freeport Middle School in Maine who has been teaching for over 20 years. Growing up in a tech-forward household in the 1970s and later working in the newspaper industry as it faced digital disruption, Kim brings a unique perspective on technological change. She was among the first teachers in the nation to work in Maine's pioneering one-to-one laptop program starting in 2004. The conversation explores her unconventional approach to AI in the classroom—treating it like "a book or a pencil"—why she believes building community and relationships matters more than policing technology use, and how she helps students recognize when AI has written their work without making it punitive.</p><p><strong>Key Takeaways:</strong></p><ul><li><strong>Know your students better than any detector.</strong> Teachers who build relationships with their students can identify AI-generated work by recognizing changes in sentence length, structure, and voice—no detection tools required.</li><li><strong>Make AI conversations transparent, not secretive.</strong> Rather than creating a surveillance culture, openly discuss how AI works, when it's appropriate, and how you can tell when it's been used—students respond better to honesty than to policing.</li><li><strong>Technology should amplify human expression, not replace it.</strong> Start with handwritten journals and personal ideas first, then bring in technology as a tool to enhance what students have already created on their own.</li><li><strong>Teaching self-control is lifelong.</strong> Help students recognize their own impulse patterns with technology—the habit of drifting to games during a thinking pause—because they'll need to manage this their whole lives.</li><li><strong>Focus on the goal, then find the tool.</strong> Instead of teaching specific AI technologies that come and go, teach students to identify what they want to achieve first, then select appropriate tools—this approach works for both students and teachers in professional development.</li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode, Priten speaks with Kim Cowperthwaite, an English Language Arts teacher at Freeport Middle School in Maine who has been teaching for over 20 years. Growing up in a tech-forward household in the 1970s and later working in the newspaper industry as it faced digital disruption, Kim brings a unique perspective on technological change. She was among the first teachers in the nation to work in Maine's pioneering one-to-one laptop program starting in 2004. The conversation explores her unconventional approach to AI in the classroom—treating it like "a book or a pencil"—why she believes building community and relationships matters more than policing technology use, and how she helps students recognize when AI has written their work without making it punitive.</p><p><strong>Key Takeaways:</strong></p><ul><li><strong>Know your students better than any detector.</strong> Teachers who build relationships with their students can identify AI-generated work by recognizing changes in sentence length, structure, and voice—no detection tools required.</li><li><strong>Make AI conversations transparent, not secretive.</strong> Rather than creating a surveillance culture, openly discuss how AI works, when it's appropriate, and how you can tell when it's been used—students respond better to honesty than to policing.</li><li><strong>Technology should amplify human expression, not replace it.</strong> Start with handwritten journals and personal ideas first, then bring in technology as a tool to enhance what students have already created on their own.</li><li><strong>Teaching self-control is lifelong.</strong> Help students recognize their own impulse patterns with technology—the habit of drifting to games during a thinking pause—because they'll need to manage this their whole lives.</li><li><strong>Focus on the goal, then find the tool.</strong> Instead of teaching specific AI technologies that come and go, teach students to identify what they want to achieve first, then select appropriate tools—this approach works for both students and teachers in professional development.</li></ul>]]>
      </content:encoded>
      <pubDate>Thu, 05 Mar 2026 00:38:00 -0500</pubDate>
      <author>Priten Soundar-Shah</author>
      <enclosure url="https://media.transistor.fm/7f4b583a/261e588d.mp3" length="24108019" type="audio/mpeg"/>
      <itunes:author>Priten Soundar-Shah</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/VPp4bvSyeX7pxmRl9JuLvsScMVNq1fa2gWUZRe7xLdw/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lODQz/MTIwMmFkOTJmNGYw/ZGRkMzBlMWNmNjA1/NjRhMC5wbmc.jpg"/>
      <itunes:duration>1506</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode, Priten speaks with Kim Cowperthwaite, an English Language Arts teacher at Freeport Middle School in Maine who has been teaching for over 20 years. Growing up in a tech-forward household in the 1970s and later working in the newspaper industry as it faced digital disruption, Kim brings a unique perspective on technological change. She was among the first teachers in the nation to work in Maine's pioneering one-to-one laptop program starting in 2004. The conversation explores her unconventional approach to AI in the classroom—treating it like "a book or a pencil"—why she believes building community and relationships matters more than policing technology use, and how she helps students recognize when AI has written their work without making it punitive.</p><p><strong>Key Takeaways:</strong></p><ul><li><strong>Know your students better than any detector.</strong> Teachers who build relationships with their students can identify AI-generated work by recognizing changes in sentence length, structure, and voice—no detection tools required.</li><li><strong>Make AI conversations transparent, not secretive.</strong> Rather than creating a surveillance culture, openly discuss how AI works, when it's appropriate, and how you can tell when it's been used—students respond better to honesty than to policing.</li><li><strong>Technology should amplify human expression, not replace it.</strong> Start with handwritten journals and personal ideas first, then bring in technology as a tool to enhance what students have already created on their own.</li><li><strong>Teaching self-control is lifelong.</strong> Help students recognize their own impulse patterns with technology—the habit of drifting to games during a thinking pause—because they'll need to manage this their whole lives.</li><li><strong>Focus on the goal, then find the tool.</strong> Instead of teaching specific AI technologies that come and go, teach students to identify what they want to achieve first, then select appropriate tools—this approach works for both students and teachers in professional development.</li></ul>]]>
      </itunes:summary>
      <itunes:keywords>writing instruction,ai writing tools,composition pedagogy,feedback,literacy education</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://priten.org" img="https://img.transistorcdn.com/JlYpjP0PmtU6_HZmHSsgaaNQgWcMD1eEmlB3smilNvk/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8yYTQz/ZDUzOTFhOTA0MDRl/OTBjMGEyMzhiNGYw/N2FmNy5qcGc.jpg">Priten Soundar-Shah</podcast:person>
      <podcast:person role="Guest" href="https://ethicaledtech.org/" img="https://img.transistorcdn.com/nI-yYtsz6CbMWfhgCUuk3r5MitxiueKOI4RNPyTUhAE/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8zZGE3/N2I4ODZhODRkYzFi/NTQwMTI0NTllY2E3/ZGFkZS5wbmc.jpg">Ethical Ed Tech: How Educators Can Lead on Digital Safety &amp; AI in K-12</podcast:person>
      <podcast:person role="Guest" href="https://listen.priten.org/people/kim-cowperthwaite" img="https://img.transistorcdn.com/o0D5rUkzBISKZMQA_De3EJ5eQYX75veSO6R1Ev5e1V0/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8yZDI0/Mzg0MTVhYjhiNTA4/NWI2NjU3OTIzNTA0/YjFjYy5qcGc.jpg">Kim Cowperthwaite</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/7f4b583a/transcript.txt" type="text/plain"/>
      <podcast:transcript url="https://share.transistor.fm/s/7f4b583a/transcript.vtt" type="text/vtt" rel="captions"/>
      <podcast:socialInteract protocol="atproto" uri="at://did:plc:lp33httd3l7fnkvwnv5kpei2/app.bsky.feed.post/3mgc3gcarut2h"/>
    </item>
    <item>
      <title>Should Students Be Trusted With Phones During Exams? - Dini Arini</title>
      <itunes:episode>9</itunes:episode>
      <podcast:episode>9</podcast:episode>
      <itunes:title>Should Students Be Trusted With Phones During Exams? - Dini Arini</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">185a4db0-56d9-41c3-a40f-46b171387467</guid>
      <link>https://listen.priten.org/s1/9</link>
      <description>
        <![CDATA[<p>In this episode, Priten speaks with Dini Arini, a PhD candidate in language literacy and technology at Washington State University who has been teaching for over 15 years. Growing up in Indonesia without access to English courses that her classmates had, Dini experienced firsthand the anxiety of being left behind—an experience that now fuels her optimism about AI's potential to democratize education. The conversation explores her unconventional approach to classroom technology, including allowing students to use phones during exams, why she believes teachers who truly know their students don't need AI detectors, and how her research into AI ethics policy is uncovering the gap between institutional guidelines and classroom reality. Dini also shares what genuinely worries her: emerging research suggesting that over-reliance on AI may be physically changing our brains.</p><p><strong>Key Takeaways:</strong></p><ul><li><strong>Know your students better than any detector.</strong> Teachers who truly understand their students' abilities and writing styles can identify AI-generated work without relying on detection tools—you become the filter.</li><li><strong>Technology can bridge access gaps.</strong> For students without resources for tutoring or courses, AI tools can serve as supplementary learning support that was previously unavailable.</li><li><strong>Trust can work as enforcement.</strong> Having students acknowledge an honor statement and knowing their baseline abilities can be as effective as surveillance—students often rise to the expectation of integrity.</li><li><strong>Adapt assessments to what you're testing.</strong> Use technology-enabled tests when appropriate, but return to pen-and-paper or presentations when the skill being assessed requires it.</li><li><strong>Stay creative ahead of AI.</strong> As AI improves, teachers must develop AI-resistant assignments and varied assessment methods rather than abandoning technology entirely.</li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode, Priten speaks with Dini Arini, a PhD candidate in language literacy and technology at Washington State University who has been teaching for over 15 years. Growing up in Indonesia without access to English courses that her classmates had, Dini experienced firsthand the anxiety of being left behind—an experience that now fuels her optimism about AI's potential to democratize education. The conversation explores her unconventional approach to classroom technology, including allowing students to use phones during exams, why she believes teachers who truly know their students don't need AI detectors, and how her research into AI ethics policy is uncovering the gap between institutional guidelines and classroom reality. Dini also shares what genuinely worries her: emerging research suggesting that over-reliance on AI may be physically changing our brains.</p><p><strong>Key Takeaways:</strong></p><ul><li><strong>Know your students better than any detector.</strong> Teachers who truly understand their students' abilities and writing styles can identify AI-generated work without relying on detection tools—you become the filter.</li><li><strong>Technology can bridge access gaps.</strong> For students without resources for tutoring or courses, AI tools can serve as supplementary learning support that was previously unavailable.</li><li><strong>Trust can work as enforcement.</strong> Having students acknowledge an honor statement and knowing their baseline abilities can be as effective as surveillance—students often rise to the expectation of integrity.</li><li><strong>Adapt assessments to what you're testing.</strong> Use technology-enabled tests when appropriate, but return to pen-and-paper or presentations when the skill being assessed requires it.</li><li><strong>Stay creative ahead of AI.</strong> As AI improves, teachers must develop AI-resistant assignments and varied assessment methods rather than abandoning technology entirely.</li></ul>]]>
      </content:encoded>
      <pubDate>Tue, 03 Mar 2026 00:11:00 -0500</pubDate>
      <author>Priten Soundar-Shah</author>
      <enclosure url="https://media.transistor.fm/38623b76/58e7301d.mp3" length="22043907" type="audio/mpeg"/>
      <itunes:author>Priten Soundar-Shah</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/rCuKmqPTGWaabtz5-S7g-xzLge4GyTE8tDopaRYq_GI/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9hNWI0/NDNhM2VjZTliZjg3/NWYzMDc3M2M0YWIy/NzBlMi5wbmc.jpg"/>
      <itunes:duration>1377</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode, Priten speaks with Dini Arini, a PhD candidate in language literacy and technology at Washington State University who has been teaching for over 15 years. Growing up in Indonesia without access to English courses that her classmates had, Dini experienced firsthand the anxiety of being left behind—an experience that now fuels her optimism about AI's potential to democratize education. The conversation explores her unconventional approach to classroom technology, including allowing students to use phones during exams, why she believes teachers who truly know their students don't need AI detectors, and how her research into AI ethics policy is uncovering the gap between institutional guidelines and classroom reality. Dini also shares what genuinely worries her: emerging research suggesting that over-reliance on AI may be physically changing our brains.</p><p><strong>Key Takeaways:</strong></p><ul><li><strong>Know your students better than any detector.</strong> Teachers who truly understand their students' abilities and writing styles can identify AI-generated work without relying on detection tools—you become the filter.</li><li><strong>Technology can bridge access gaps.</strong> For students without resources for tutoring or courses, AI tools can serve as supplementary learning support that was previously unavailable.</li><li><strong>Trust can work as enforcement.</strong> Having students acknowledge an honor statement and knowing their baseline abilities can be as effective as surveillance—students often rise to the expectation of integrity.</li><li><strong>Adapt assessments to what you're testing.</strong> Use technology-enabled tests when appropriate, but return to pen-and-paper or presentations when the skill being assessed requires it.</li><li><strong>Stay creative ahead of AI.</strong> As AI improves, teachers must develop AI-resistant assignments and varied assessment methods rather than abandoning technology entirely.</li></ul>]]>
      </itunes:summary>
      <itunes:keywords>phone policy,exam integrity,student trust,academic honesty,classroom management</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://priten.org" img="https://img.transistorcdn.com/JlYpjP0PmtU6_HZmHSsgaaNQgWcMD1eEmlB3smilNvk/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8yYTQz/ZDUzOTFhOTA0MDRl/OTBjMGEyMzhiNGYw/N2FmNy5qcGc.jpg">Priten Soundar-Shah</podcast:person>
      <podcast:person role="Guest" href="https://ethicaledtech.org/" img="https://img.transistorcdn.com/nI-yYtsz6CbMWfhgCUuk3r5MitxiueKOI4RNPyTUhAE/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8zZGE3/N2I4ODZhODRkYzFi/NTQwMTI0NTllY2E3/ZGFkZS5wbmc.jpg">Ethical Ed Tech: How Educators Can Lead on Digital Safety &amp; AI in K-12</podcast:person>
      <podcast:person role="Guest" href="https://listen.priten.org/people/dini-arini" img="https://img.transistorcdn.com/00ZaCmAwYh18ZEwwyfUCESvmCSRNg5kKSD7q_GKwIKA/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS82ODZm/NzE1M2U0ZmIwMTY5/NWFjZTIwZTQxYjMy/NTk4Ni5wbmc.jpg">Dini Arini</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/38623b76/transcript.txt" type="text/plain"/>
      <podcast:transcript url="https://share.transistor.fm/s/38623b76/transcript.vtt" type="text/vtt" rel="captions"/>
      <podcast:socialInteract protocol="atproto" uri="at://did:plc:lp33httd3l7fnkvwnv5kpei2/app.bsky.feed.post/3mg4yy24yps2u"/>
    </item>
    <item>
      <title>What If the Answer to Technology Overload Isn't Better Tech But Real Relationships? - Nate Otey</title>
      <itunes:episode>8</itunes:episode>
      <podcast:episode>8</podcast:episode>
      <itunes:title>What If the Answer to Technology Overload Isn't Better Tech But Real Relationships? - Nate Otey</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">e6bb0a48-810e-4be1-8c77-70f732c95d9e</guid>
      <link>https://listen.priten.org/s1/8</link>
      <description>
        <![CDATA[<p>In this episode, Priten speaks with Nate Otey, a ninth-grade humanities, statistics, and calculus teacher at Boston Trinity Academy, a school that has deliberately chosen a low-tech approach. Nate shares how his school has banned phones for students up to 10th grade, with parents and students largely on board. The conversation explores what happens when a school community prioritizes relationality over connectivity, why friction in human relationships might be essential rather than something to eliminate, and how faith-based education can provide a framework for understanding why face-to-face connection matters. Nate reflects on the practical challenges of enforcing device policies, how teachers can use AI ethically while modeling integrity for students, and the coming wave of emotionally convincing AI that may challenge our understanding of human relationships.</p><p><strong>Key Takeaways:</strong></p><ul><li><strong>Students often want the boundaries.</strong> Research shows many students know phones are bad for them and appreciate when schools take them away—they just can't opt out alone due to social pressure.</li><li><strong>Use the "would I tell my students?" heuristic.</strong> Teachers can ethically use AI for lesson prep and practice exercises, but should avoid using it for grading or tasks where students would feel cheated if they knew.</li><li><strong>Relationships require friction.</strong> Technology is designed to eliminate friction, but meaningful human connection is inherently awkward and difficult—that's what makes it valuable.</li><li><strong>Consistent enforcement matters more than strict rules.</strong> Students accept boundaries when they're applied fairly and uniformly; arbitrary enforcement breeds resentment.</li><li><strong>The next wave isn't intellectual—it's emotional.</strong> AI that perfectly imitates consciousness will soon challenge how we help students distinguish between real relationships and convincing simulations.</li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode, Priten speaks with Nate Otey, a ninth-grade humanities, statistics, and calculus teacher at Boston Trinity Academy, a school that has deliberately chosen a low-tech approach. Nate shares how his school has banned phones for students up to 10th grade, with parents and students largely on board. The conversation explores what happens when a school community prioritizes relationality over connectivity, why friction in human relationships might be essential rather than something to eliminate, and how faith-based education can provide a framework for understanding why face-to-face connection matters. Nate reflects on the practical challenges of enforcing device policies, how teachers can use AI ethically while modeling integrity for students, and the coming wave of emotionally convincing AI that may challenge our understanding of human relationships.</p><p><strong>Key Takeaways:</strong></p><ul><li><strong>Students often want the boundaries.</strong> Research shows many students know phones are bad for them and appreciate when schools take them away—they just can't opt out alone due to social pressure.</li><li><strong>Use the "would I tell my students?" heuristic.</strong> Teachers can ethically use AI for lesson prep and practice exercises, but should avoid using it for grading or tasks where students would feel cheated if they knew.</li><li><strong>Relationships require friction.</strong> Technology is designed to eliminate friction, but meaningful human connection is inherently awkward and difficult—that's what makes it valuable.</li><li><strong>Consistent enforcement matters more than strict rules.</strong> Students accept boundaries when they're applied fairly and uniformly; arbitrary enforcement breeds resentment.</li><li><strong>The next wave isn't intellectual—it's emotional.</strong> AI that perfectly imitates consciousness will soon challenge how we help students distinguish between real relationships and convincing simulations.</li></ul>]]>
      </content:encoded>
      <pubDate>Thu, 26 Feb 2026 19:05:37 -0500</pubDate>
      <author>Priten Soundar-Shah</author>
      <enclosure url="https://media.transistor.fm/c16211fc/90e627d2.mp3" length="24641465" type="audio/mpeg"/>
      <itunes:author>Priten Soundar-Shah</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/g3LMfGwi2x5KgSD2aWWkulW8G5k2jeRuYrbM7ug3XDo/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9kZDY1/MzUzMzFjZDdmNjZh/ZGJlYmFhZmEzOWI4/OTVhZS5wbmc.jpg"/>
      <itunes:duration>1539</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode, Priten speaks with Nate Otey, a ninth-grade humanities, statistics, and calculus teacher at Boston Trinity Academy, a school that has deliberately chosen a low-tech approach. Nate shares how his school has banned phones for students up to 10th grade, with parents and students largely on board. The conversation explores what happens when a school community prioritizes relationality over connectivity, why friction in human relationships might be essential rather than something to eliminate, and how faith-based education can provide a framework for understanding why face-to-face connection matters. Nate reflects on the practical challenges of enforcing device policies, how teachers can use AI ethically while modeling integrity for students, and the coming wave of emotionally convincing AI that may challenge our understanding of human relationships.</p><p><strong>Key Takeaways:</strong></p><ul><li><strong>Students often want the boundaries.</strong> Research shows many students know phones are bad for them and appreciate when schools take them away—they just can't opt out alone due to social pressure.</li><li><strong>Use the "would I tell my students?" heuristic.</strong> Teachers can ethically use AI for lesson prep and practice exercises, but should avoid using it for grading or tasks where students would feel cheated if they knew.</li><li><strong>Relationships require friction.</strong> Technology is designed to eliminate friction, but meaningful human connection is inherently awkward and difficult—that's what makes it valuable.</li><li><strong>Consistent enforcement matters more than strict rules.</strong> Students accept boundaries when they're applied fairly and uniformly; arbitrary enforcement breeds resentment.</li><li><strong>The next wave isn't intellectual—it's emotional.</strong> AI that perfectly imitates consciousness will soon challenge how we help students distinguish between real relationships and convincing simulations.</li></ul>]]>
      </itunes:summary>
      <itunes:keywords>technology overload,real relationships,student wellbeing,screen time,human connection</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://priten.org" img="https://img.transistorcdn.com/JlYpjP0PmtU6_HZmHSsgaaNQgWcMD1eEmlB3smilNvk/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8yYTQz/ZDUzOTFhOTA0MDRl/OTBjMGEyMzhiNGYw/N2FmNy5qcGc.jpg">Priten Soundar-Shah</podcast:person>
      <podcast:person role="Guest" href="https://ethicaledtech.org/" img="https://img.transistorcdn.com/nI-yYtsz6CbMWfhgCUuk3r5MitxiueKOI4RNPyTUhAE/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8zZGE3/N2I4ODZhODRkYzFi/NTQwMTI0NTllY2E3/ZGFkZS5wbmc.jpg">Ethical Ed Tech: How Educators Can Lead on Digital Safety &amp; AI in K-12</podcast:person>
      <podcast:person role="Guest" href="https://listen.priten.org/people/nate-otey" img="https://img.transistorcdn.com/o0xU3md8iDKyDs75PAxy_y1hP1vzPu6YPDNBpOcteA4/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS85N2Rk/MjczZjZiNDBjYjMw/MTg5NTFlMzUzNzk4/MmVlMS5qcGVn.jpg">Nate Otey</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/c16211fc/transcript.txt" type="text/plain"/>
      <podcast:transcript url="https://share.transistor.fm/s/c16211fc/transcript.vtt" type="text/vtt" rel="captions"/>
      <podcast:socialInteract protocol="atproto" uri="at://did:plc:lp33httd3l7fnkvwnv5kpei2/app.bsky.feed.post/3mfsg2ukiar2g"/>
    </item>
    <item>
      <title>How Can AI Support Inclusive Education? - Tamsyn Smith</title>
      <itunes:episode>7</itunes:episode>
      <podcast:episode>7</podcast:episode>
      <itunes:title>How Can AI Support Inclusive Education? - Tamsyn Smith</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">46b4f49b-9cc7-4f1e-bf3f-37ff2bc4b563</guid>
      <link>https://listen.priten.org/s1/7</link>
      <description>
        <![CDATA[<p>In this episode, Priten speaks with Tamsyn Smith, Senior Learning Designer and Team Lead at the University of Southampton, who is halfway through a PhD investigating how generative AI can support inclusive education. Tamsyn shares her journey from childhood programming to classroom teaching to higher ed learning design, and reflects on how COVID-19 and AI arrived as dual "cataclysmic shifts" that educators are still navigating. The conversation explores data privacy pitfalls, the myth of digitally-native students, and why Universal Design for Learning matters more than ever—ultimately landing on a hopeful note: most students are ethical, and the real question isn't whether they're cheating, but whether we're giving them meaningful reasons to learn.</p><p><strong>Key Takeaways:</strong></p><ul><li><strong>Students still need foundational skills.</strong> Just as calculators didn't eliminate the need to understand math, AI doesn't eliminate the need to write well—you can't evaluate output you couldn't create yourself.</li><li><strong>Don't assume students are cheating.</strong> Research shows most students use AI ethically; if they're over-relying on it, ask whether assignments are meaningful or just busy work.</li><li><strong>Read the terms and conditions.</strong> Before asking students to use any tool, educators must understand what data it collects and where that data goes.</li><li><strong>Use a simple privacy heuristic.</strong> If you wouldn't post it on social media, don't put it into a generative AI tool.</li><li><strong>Technology should open doors, not add burdens.</strong> Universal Design for Learning means educators do the work to minimize barriers—not hand students another tool and call it support.</li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode, Priten speaks with Tamsyn Smith, Senior Learning Designer and Team Lead at the University of Southampton, who is halfway through a PhD investigating how generative AI can support inclusive education. Tamsyn shares her journey from childhood programming to classroom teaching to higher ed learning design, and reflects on how COVID-19 and AI arrived as dual "cataclysmic shifts" that educators are still navigating. The conversation explores data privacy pitfalls, the myth of digitally-native students, and why Universal Design for Learning matters more than ever—ultimately landing on a hopeful note: most students are ethical, and the real question isn't whether they're cheating, but whether we're giving them meaningful reasons to learn.</p><p><strong>Key Takeaways:</strong></p><ul><li><strong>Students still need foundational skills.</strong> Just as calculators didn't eliminate the need to understand math, AI doesn't eliminate the need to write well—you can't evaluate output you couldn't create yourself.</li><li><strong>Don't assume students are cheating.</strong> Research shows most students use AI ethically; if they're over-relying on it, ask whether assignments are meaningful or just busy work.</li><li><strong>Read the terms and conditions.</strong> Before asking students to use any tool, educators must understand what data it collects and where that data goes.</li><li><strong>Use a simple privacy heuristic.</strong> If you wouldn't post it on social media, don't put it into a generative AI tool.</li><li><strong>Technology should open doors, not add burdens.</strong> Universal Design for Learning means educators do the work to minimize barriers—not hand students another tool and call it support.</li></ul>]]>
      </content:encoded>
      <pubDate>Tue, 24 Feb 2026 00:56:00 -0500</pubDate>
      <author>Priten Soundar-Shah</author>
      <enclosure url="https://media.transistor.fm/278f1177/da8d704d.mp3" length="25754430" type="audio/mpeg"/>
      <itunes:author>Priten Soundar-Shah</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/rWtgpiK6C6kWCZlN5RMooKb1EVAzKa2H73_WCbDeaMA/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8xM2E5/ZDVhYWQyYzBlYmVh/YmFlMGM0N2E1Mjg2/NTcxNS5wbmc.jpg"/>
      <itunes:duration>1609</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode, Priten speaks with Tamsyn Smith, Senior Learning Designer and Team Lead at the University of Southampton, who is halfway through a PhD investigating how generative AI can support inclusive education. Tamsyn shares her journey from childhood programming to classroom teaching to higher ed learning design, and reflects on how COVID-19 and AI arrived as dual "cataclysmic shifts" that educators are still navigating. The conversation explores data privacy pitfalls, the myth of digitally-native students, and why Universal Design for Learning matters more than ever—ultimately landing on a hopeful note: most students are ethical, and the real question isn't whether they're cheating, but whether we're giving them meaningful reasons to learn.</p><p><strong>Key Takeaways:</strong></p><ul><li><strong>Students still need foundational skills.</strong> Just as calculators didn't eliminate the need to understand math, AI doesn't eliminate the need to write well—you can't evaluate output you couldn't create yourself.</li><li><strong>Don't assume students are cheating.</strong> Research shows most students use AI ethically; if they're over-relying on it, ask whether assignments are meaningful or just busy work.</li><li><strong>Read the terms and conditions.</strong> Before asking students to use any tool, educators must understand what data it collects and where that data goes.</li><li><strong>Use a simple privacy heuristic.</strong> If you wouldn't post it on social media, don't put it into a generative AI tool.</li><li><strong>Technology should open doors, not add burdens.</strong> Universal Design for Learning means educators do the work to minimize barriers—not hand students another tool and call it support.</li></ul>]]>
      </itunes:summary>
      <itunes:keywords>inclusive education,ai accessibility,special needs,assistive technology,universal design</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://priten.org" img="https://img.transistorcdn.com/JlYpjP0PmtU6_HZmHSsgaaNQgWcMD1eEmlB3smilNvk/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8yYTQz/ZDUzOTFhOTA0MDRl/OTBjMGEyMzhiNGYw/N2FmNy5qcGc.jpg">Priten Soundar-Shah</podcast:person>
      <podcast:person role="Guest" href="https://ethicaledtech.org/" img="https://img.transistorcdn.com/nI-yYtsz6CbMWfhgCUuk3r5MitxiueKOI4RNPyTUhAE/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8zZGE3/N2I4ODZhODRkYzFi/NTQwMTI0NTllY2E3/ZGFkZS5wbmc.jpg">Ethical Ed Tech: How Educators Can Lead on Digital Safety &amp; AI in K-12</podcast:person>
      <podcast:person role="Guest" href="https://listen.priten.org/people/tamsyn-smith" img="https://img.transistorcdn.com/4vkGu6Wq7qgTpgRr2zwpj2sty6F-hVxShcAYwdVpQc8/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8yODQ2/YzQxMDI0N2I3MjIz/NTdjZTJmN2VkMTZh/YzUyYi5qcGc.jpg">Tamsyn Smith</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/278f1177/transcript.txt" type="text/plain"/>
      <podcast:transcript url="https://share.transistor.fm/s/278f1177/transcript.vtt" type="text/vtt" rel="captions"/>
      <podcast:socialInteract protocol="atproto" uri="at://did:plc:lp33httd3l7fnkvwnv5kpei2/app.bsky.feed.post/3mflia3jany2y"/>
    </item>
    <item>
      <title>How Might AI Support Early Education Interventions in India? - Ratna Gill</title>
      <itunes:episode>6</itunes:episode>
      <podcast:episode>6</podcast:episode>
      <itunes:title>How Might AI Support Early Education Interventions in India? - Ratna Gill</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">54fa7c07-7f6e-4681-8a72-711e42479e99</guid>
      <link>https://listen.priten.org/s1/6</link>
      <description>
        <![CDATA[<p>In this episode, Priten speaks with Ratna Gill, who supports the partnerships team at Rocket Learning, a nonprofit tackling early childhood education in India through WhatsApp. Ratna shares her journey from child safety work to early childhood education and explains how Rocket Learning delivers bite-sized educational content to caregivers and Anganwadi workers serving 5 million children who lack access to early stimulation. The conversation explores their AI-powered personalized tutor (APU), the importance of cultural contextualization, and what ethical ed tech looks like when working with resource-constrained communities—ultimately landing on a hopeful note: technology can expand access to education without replacing the irreplaceable human connections that make learning joyful.</p><p><strong>Key Takeaways:</strong></p><ul><li><strong>Meet communities where they already are.</strong> Rocket uses WhatsApp because families are already there—no new apps, no tech burden.</li><li><strong>Technology should supplement, not replace, human interaction.</strong> APU is capped at 15-20 minutes daily to preserve parent-child engagement.</li><li><strong>Context matters more than content.</strong> Effective ed tech adapts cultural references, not just language, for each region.</li><li><strong>Test slowly, learn deeply.</strong> Field testing revealed that background noise breaks speech-to-text—rushing would have shipped a broken product.</li><li><strong>Parents are the most transformative tool.</strong> AI can model joyful pedagogy, but it can't replace human connection.</li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode, Priten speaks with Ratna Gill, who supports the partnerships team at Rocket Learning, a nonprofit tackling early childhood education in India through WhatsApp. Ratna shares her journey from child safety work to early childhood education and explains how Rocket Learning delivers bite-sized educational content to caregivers and Anganwadi workers serving 5 million children who lack access to early stimulation. The conversation explores their AI-powered personalized tutor (APU), the importance of cultural contextualization, and what ethical ed tech looks like when working with resource-constrained communities—ultimately landing on a hopeful note: technology can expand access to education without replacing the irreplaceable human connections that make learning joyful.</p><p><strong>Key Takeaways:</strong></p><ul><li><strong>Meet communities where they already are.</strong> Rocket uses WhatsApp because families are already there—no new apps, no tech burden.</li><li><strong>Technology should supplement, not replace, human interaction.</strong> APU is capped at 15-20 minutes daily to preserve parent-child engagement.</li><li><strong>Context matters more than content.</strong> Effective ed tech adapts cultural references, not just language, for each region.</li><li><strong>Test slowly, learn deeply.</strong> Field testing revealed that background noise breaks speech-to-text—rushing would have shipped a broken product.</li><li><strong>Parents are the most transformative tool.</strong> AI can model joyful pedagogy, but it can't replace human connection.</li></ul>]]>
      </content:encoded>
      <pubDate>Thu, 19 Feb 2026 00:28:00 -0500</pubDate>
      <author>Priten Soundar-Shah</author>
      <enclosure url="https://media.transistor.fm/d88784cf/fe35b594.mp3" length="30031554" type="audio/mpeg"/>
      <itunes:author>Priten Soundar-Shah</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/WeOZzxBKKt436YzXhwpkhhX_TeyitDQiSly9oKUOPHM/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8zNTk1/ZGQ1OTYxNzI0Zjc2/YjRmZjRjNTIyZWE1/NDU5Ni5wbmc.jpg"/>
      <itunes:duration>1876</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode, Priten speaks with Ratna Gill, who supports the partnerships team at Rocket Learning, a nonprofit tackling early childhood education in India through WhatsApp. Ratna shares her journey from child safety work to early childhood education and explains how Rocket Learning delivers bite-sized educational content to caregivers and Anganwadi workers serving 5 million children who lack access to early stimulation. The conversation explores their AI-powered personalized tutor (APU), the importance of cultural contextualization, and what ethical ed tech looks like when working with resource-constrained communities—ultimately landing on a hopeful note: technology can expand access to education without replacing the irreplaceable human connections that make learning joyful.</p><p><strong>Key Takeaways:</strong></p><ul><li><strong>Meet communities where they already are.</strong> Rocket uses WhatsApp because families are already there—no new apps, no tech burden.</li><li><strong>Technology should supplement, not replace, human interaction.</strong> APU is capped at 15-20 minutes daily to preserve parent-child engagement.</li><li><strong>Context matters more than content.</strong> Effective ed tech adapts cultural references, not just language, for each region.</li><li><strong>Test slowly, learn deeply.</strong> Field testing revealed that background noise breaks speech-to-text—rushing would have shipped a broken product.</li><li><strong>Parents are the most transformative tool.</strong> AI can model joyful pedagogy, but it can't replace human connection.</li></ul>]]>
      </itunes:summary>
      <itunes:keywords>early education,ai interventions,india education,early childhood,educational equity</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://priten.org" img="https://img.transistorcdn.com/JlYpjP0PmtU6_HZmHSsgaaNQgWcMD1eEmlB3smilNvk/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8yYTQz/ZDUzOTFhOTA0MDRl/OTBjMGEyMzhiNGYw/N2FmNy5qcGc.jpg">Priten Soundar-Shah</podcast:person>
      <podcast:person role="Guest" href="https://ethicaledtech.org/" img="https://img.transistorcdn.com/nI-yYtsz6CbMWfhgCUuk3r5MitxiueKOI4RNPyTUhAE/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8zZGE3/N2I4ODZhODRkYzFi/NTQwMTI0NTllY2E3/ZGFkZS5wbmc.jpg">Ethical Ed Tech: How Educators Can Lead on Digital Safety &amp; AI in K-12</podcast:person>
      <podcast:person role="Guest" href="https://listen.priten.org/people/ratna-gill" img="https://img.transistorcdn.com/JnsAcAmjl3e35qPnBcWNvJz1LxAgOcaG7rsspGNYpLg/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8zMGIx/YzUwYzBkZTQ3NjE1/NjFkNTlkNDg3ZGQ5/MzU2Ni5qcGVn.jpg">Ratna Gill</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/d88784cf/transcript.txt" type="text/plain"/>
      <podcast:transcript url="https://share.transistor.fm/s/d88784cf/transcript.vtt" type="text/vtt" rel="captions"/>
      <podcast:socialInteract protocol="atproto" uri="at://did:plc:lp33httd3l7fnkvwnv5kpei2/app.bsky.feed.post/3mf6udjpuqv22"/>
    </item>
    <item>
      <title>How Can We Center Pedagogy During the AI Tech Wave? - Lance Eaton</title>
      <itunes:episode>5</itunes:episode>
      <podcast:episode>5</podcast:episode>
      <itunes:title>How Can We Center Pedagogy During the AI Tech Wave? - Lance Eaton</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">42dcd015-0f3c-4847-8ad6-6af489036db6</guid>
      <link>https://listen.priten.org/s1/5</link>
      <description>
        <![CDATA[<p>In this episode, Priten speaks with Lance Eaton, Senior Associate Director of AI and Teaching and Learning at Northeastern University, about navigating the integration of AI and educational technology in higher education. Lance shares his 15-year journey through instructional design—from community colleges to Ivy League institutions—and offers practical wisdom on how educators can thoughtfully adopt AI without losing sight of pedagogy. The conversation explores everything from reflection bots and embodied learning to the tension between commercial tech platforms and educational values, ultimately landing on a hopeful note: we've navigated dozens of technological shifts before, and we can figure this one out too.</p><p><strong>Key Takeaways:</strong></p><ul><li><strong>Start small and ground AI in learning goals.</strong> Like any instructional design challenge, begin with what you want students to demonstrate—then find where AI fits naturally.</li><li><strong>Use AI to deepen reflection, not replace it.</strong> A "reflection bot" that asks follow-up questions can help students dig deeper than a one-time submission ever could.</li><li><strong>Pick two or three tools and stick with them.</strong> The app explosion taught us this lesson—chasing every new AI tool leads to burnout, not better teaching.</li><li><strong>AI literacy is discipline-specific.</strong> Every field will be impacted differently; the goal isn't generic AI skills but understanding what AI means for your particular context.</li><li><strong>We've been here before.</strong> Higher ed has absorbed 80+ technologies since the 1970s. The playbooks exist—we just need to adapt them for this moment.</li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode, Priten speaks with Lance Eaton, Senior Associate Director of AI and Teaching and Learning at Northeastern University, about navigating the integration of AI and educational technology in higher education. Lance shares his 15-year journey through instructional design—from community colleges to Ivy League institutions—and offers practical wisdom on how educators can thoughtfully adopt AI without losing sight of pedagogy. The conversation explores everything from reflection bots and embodied learning to the tension between commercial tech platforms and educational values, ultimately landing on a hopeful note: we've navigated dozens of technological shifts before, and we can figure this one out too.</p><p><strong>Key Takeaways:</strong></p><ul><li><strong>Start small and ground AI in learning goals.</strong> Like any instructional design challenge, begin with what you want students to demonstrate—then find where AI fits naturally.</li><li><strong>Use AI to deepen reflection, not replace it.</strong> A "reflection bot" that asks follow-up questions can help students dig deeper than a one-time submission ever could.</li><li><strong>Pick two or three tools and stick with them.</strong> The app explosion taught us this lesson—chasing every new AI tool leads to burnout, not better teaching.</li><li><strong>AI literacy is discipline-specific.</strong> Every field will be impacted differently; the goal isn't generic AI skills but understanding what AI means for your particular context.</li><li><strong>We've been here before.</strong> Higher ed has absorbed 80+ technologies since the 1970s. The playbooks exist—we just need to adapt them for this moment.</li></ul>]]>
      </content:encoded>
      <pubDate>Tue, 17 Feb 2026 00:01:00 -0500</pubDate>
      <author>Priten Soundar-Shah</author>
      <enclosure url="https://media.transistor.fm/4faba971/7ea2510c.mp3" length="32225794" type="audio/mpeg"/>
      <itunes:author>Priten Soundar-Shah</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/I9i63TUfOL8TNHvFwLVAKc76dKHr2kvvl3ldSv3Suso/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9hOTkz/NzEzMGE5MzhlN2Mw/NDBmNTA3MDA2NmQ2/MjQyZC5wbmc.jpg"/>
      <itunes:duration>2013</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode, Priten speaks with Lance Eaton, Senior Associate Director of AI and Teaching and Learning at Northeastern University, about navigating the integration of AI and educational technology in higher education. Lance shares his 15-year journey through instructional design—from community colleges to Ivy League institutions—and offers practical wisdom on how educators can thoughtfully adopt AI without losing sight of pedagogy. The conversation explores everything from reflection bots and embodied learning to the tension between commercial tech platforms and educational values, ultimately landing on a hopeful note: we've navigated dozens of technological shifts before, and we can figure this one out too.</p><p><strong>Key Takeaways:</strong></p><ul><li><strong>Start small and ground AI in learning goals.</strong> Like any instructional design challenge, begin with what you want students to demonstrate—then find where AI fits naturally.</li><li><strong>Use AI to deepen reflection, not replace it.</strong> A "reflection bot" that asks follow-up questions can help students dig deeper than a one-time submission ever could.</li><li><strong>Pick two or three tools and stick with them.</strong> The app explosion taught us this lesson—chasing every new AI tool leads to burnout, not better teaching.</li><li><strong>AI literacy is discipline-specific.</strong> Every field will be impacted differently; the goal isn't generic AI skills but understanding what AI means for your particular context.</li><li><strong>We've been here before.</strong> Higher ed has absorbed 80+ technologies since the 1970s. The playbooks exist—we just need to adapt them for this moment.</li></ul>]]>
      </itunes:summary>
      <itunes:keywords>pedagogy first,ai in education,teaching with technology,instructional design,lance eaton</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://priten.org" img="https://img.transistorcdn.com/JlYpjP0PmtU6_HZmHSsgaaNQgWcMD1eEmlB3smilNvk/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8yYTQz/ZDUzOTFhOTA0MDRl/OTBjMGEyMzhiNGYw/N2FmNy5qcGc.jpg">Priten Soundar-Shah</podcast:person>
      <podcast:person role="Guest" href="https://ethicaledtech.org/" img="https://img.transistorcdn.com/nI-yYtsz6CbMWfhgCUuk3r5MitxiueKOI4RNPyTUhAE/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8zZGE3/N2I4ODZhODRkYzFi/NTQwMTI0NTllY2E3/ZGFkZS5wbmc.jpg">Ethical Ed Tech: How Educators Can Lead on Digital Safety &amp; AI in K-12</podcast:person>
      <podcast:person role="Guest" href="https://www.lanceeaton.com/" img="https://img.transistorcdn.com/EevYqdDgxRJfy-JgG5cF6i8OMN2q-hWOZ3fFFS60S-Y/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS81MWVl/MmFhY2NkYWQ0MWVh/YTM3ZjgwMDdiNzIz/ZDU0YS5qcGc.jpg">Lance Eaton</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/4faba971/transcript.txt" type="text/plain"/>
      <podcast:transcript url="https://share.transistor.fm/s/4faba971/transcript.vtt" type="text/vtt" rel="captions"/>
      <podcast:socialInteract protocol="atproto" uri="at://did:plc:lp33httd3l7fnkvwnv5kpei2/app.bsky.feed.post/3mezrvgpy4r27"/>
    </item>
    <item>
      <title>What Are Some Ethical Tech Integration Strategies for K-12? - Justin Cerenzia</title>
      <itunes:episode>4</itunes:episode>
      <podcast:episode>4</podcast:episode>
      <itunes:title>What Are Some Ethical Tech Integration Strategies for K-12? - Justin Cerenzia</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">846d1fbc-5831-4108-b942-8ed8399cebe1</guid>
      <link>https://listen.priten.org/s1/4</link>
      <description>
        <![CDATA[<p>In this episode, Priten speaks with Justin Cerenzia, Executive Director of the Center for Teaching and Learning at Episcopal Academy, about navigating the complex ethical decisions administrators face when integrating AI and educational technology in K-12 schools. Justin shares his journey from early AI adoption with GPT-3.5 to implementing thoughtful frameworks for tech integration, discussing everything from AI tutors and cell phone policies to the tension between preparing students for the workforce versus fostering deep learning. The conversation explores how schools can balance innovation with pedagogy, the importance of making student thinking visible, and why ethical decision-making requires moving beyond simple policies to embrace experimentation, nuance, and a design mindset that puts learning outcomes first.</p><p><strong>Key Takeaways:</strong></p><ul><li><strong>There's no shared AI experience.</strong> Different platforms and access levels mean students and teachers use fundamentally different tools—making unified policies nearly impossible.</li><li><strong>AI detection is a losing battle.</strong> Focus instead on making student thinking visible through conversations and walled-garden tools like Flint.</li><li><strong>"Do no harm" cuts both ways.</strong> Schools must prevent misuse while also ensuring students aren't left behind on AI literacy.</li><li><strong>Understand learning science before deploying AI.</strong> The key question: are students cognitively offloading the task, or genuinely learning?</li><li><strong>The future is a design problem, not a prediction problem.</strong> Decide what you want from AI and build toward it—don't just react to updates.</li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode, Priten speaks with Justin Cerenzia, Executive Director of the Center for Teaching and Learning at Episcopal Academy, about navigating the complex ethical decisions administrators face when integrating AI and educational technology in K-12 schools. Justin shares his journey from early AI adoption with GPT-3.5 to implementing thoughtful frameworks for tech integration, discussing everything from AI tutors and cell phone policies to the tension between preparing students for the workforce versus fostering deep learning. The conversation explores how schools can balance innovation with pedagogy, the importance of making student thinking visible, and why ethical decision-making requires moving beyond simple policies to embrace experimentation, nuance, and a design mindset that puts learning outcomes first.</p><p><strong>Key Takeaways:</strong></p><ul><li><strong>There's no shared AI experience.</strong> Different platforms and access levels mean students and teachers use fundamentally different tools—making unified policies nearly impossible.</li><li><strong>AI detection is a losing battle.</strong> Focus instead on making student thinking visible through conversations and walled-garden tools like Flint.</li><li><strong>"Do no harm" cuts both ways.</strong> Schools must prevent misuse while also ensuring students aren't left behind on AI literacy.</li><li><strong>Understand learning science before deploying AI.</strong> The key question: are students cognitively offloading the task, or genuinely learning?</li><li><strong>The future is a design problem, not a prediction problem.</strong> Decide what you want from AI and build toward it—don't just react to updates.</li></ul>]]>
      </content:encoded>
      <pubDate>Fri, 13 Feb 2026 00:05:12 -0500</pubDate>
      <author>Priten Soundar-Shah</author>
      <enclosure url="https://media.transistor.fm/36820bf9/3a647f9c.mp3" length="33881841" type="audio/mpeg"/>
      <itunes:author>Priten Soundar-Shah</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/EJ8PCquV0oEVXQRTbLyb50kGDixxryJw2wcCwXvAJZ4/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS81NjRh/ZDU1ZmM0N2JhMjBm/NzUxMDU4YmQzMjQ1/YTVhZC5wbmc.jpg"/>
      <itunes:duration>2117</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode, Priten speaks with Justin Cerenzia, Executive Director of the Center for Teaching and Learning at Episcopal Academy, about navigating the complex ethical decisions administrators face when integrating AI and educational technology in K-12 schools. Justin shares his journey from early AI adoption with GPT-3.5 to implementing thoughtful frameworks for tech integration, discussing everything from AI tutors and cell phone policies to the tension between preparing students for the workforce versus fostering deep learning. The conversation explores how schools can balance innovation with pedagogy, the importance of making student thinking visible, and why ethical decision-making requires moving beyond simple policies to embrace experimentation, nuance, and a design mindset that puts learning outcomes first.</p><p><strong>Key Takeaways:</strong></p><ul><li><strong>There's no shared AI experience.</strong> Different platforms and access levels mean students and teachers use fundamentally different tools—making unified policies nearly impossible.</li><li><strong>AI detection is a losing battle.</strong> Focus instead on making student thinking visible through conversations and walled-garden tools like Flint.</li><li><strong>"Do no harm" cuts both ways.</strong> Schools must prevent misuse while also ensuring students aren't left behind on AI literacy.</li><li><strong>Understand learning science before deploying AI.</strong> The key question: are students cognitively offloading the task, or genuinely learning?</li><li><strong>The future is a design problem, not a prediction problem.</strong> Decide what you want from AI and build toward it—don't just react to updates.</li></ul>]]>
      </itunes:summary>
      <itunes:keywords>k-12 technology,ethical tech integration,classroom technology,digital citizenship,teacher leadership</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://priten.org" img="https://img.transistorcdn.com/JlYpjP0PmtU6_HZmHSsgaaNQgWcMD1eEmlB3smilNvk/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8yYTQz/ZDUzOTFhOTA0MDRl/OTBjMGEyMzhiNGYw/N2FmNy5qcGc.jpg">Priten Soundar-Shah</podcast:person>
      <podcast:person role="Guest" href="https://ethicaledtech.org/" img="https://img.transistorcdn.com/nI-yYtsz6CbMWfhgCUuk3r5MitxiueKOI4RNPyTUhAE/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8zZGE3/N2I4ODZhODRkYzFi/NTQwMTI0NTllY2E3/ZGFkZS5wbmc.jpg">Ethical Ed Tech: How Educators Can Lead on Digital Safety &amp; AI in K-12</podcast:person>
      <podcast:person role="Guest" href="https://listen.priten.org/people/justin-cerenzia" img="https://img.transistorcdn.com/-D5aHSNydWy2pk0q3dzjkqSsQUuInCQqzC3-eTU8JFA/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS82OTY4/MzYxMTQ0NTNhNGFj/YjQ2OTQ0YTcxNmI5/NWU2MC5qcGVn.jpg">Justin Cerenzia</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/36820bf9/transcript.txt" type="text/plain"/>
      <podcast:transcript url="https://share.transistor.fm/s/36820bf9/transcript.vtt" type="text/vtt" rel="captions"/>
      <podcast:socialInteract protocol="atproto" uri="at://did:plc:lp33httd3l7fnkvwnv5kpei2/app.bsky.feed.post/3men7j6f36u2x"/>
    </item>
    <item>
      <title>What Does Values-Driven Education Technology Policy Look Like? - Joe Carver</title>
      <itunes:episode>3</itunes:episode>
      <podcast:episode>3</podcast:episode>
      <itunes:title>What Does Values-Driven Education Technology Policy Look Like? - Joe Carver</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">7ea507c1-cabc-4f17-b879-a2120bea0ee5</guid>
      <link>https://listen.priten.org/s1/3</link>
      <description>
        <![CDATA[<p>In this episode, Priten talks with Joe Carver, Associate Head of School at The Meadow School. Joe shares his unconventional journey from debate coach to technology director to school leadership. He discusses his philosophy of values-driven technology integration—one that involves all stakeholders, resists both hasty adoption and knee-jerk resistance, and centers the teacher-student relationship. He explores how schools can thoughtfully embrace AI and educational technology by using core values as a North Star, building cultures of innovation through targeted adoption, and preparing educators to stay conversant with emerging tools. Joe emphasizes the importance of reverse-engineering what students miss in digital-first communities and advocates for data-informed, iterative decision-making that protects what matters most while navigating what's coming.</p><p><strong>Key Takeaways:</strong></p><ul><li><strong>Schools shouldn't rush to adopt every new technology.</strong> Taking time for thoughtful due diligence and involving all stakeholders (teachers, division directors, student support services) leads to better outcomes than being the first to implement.</li><li><strong>Technology decisions should trace back to institutional core values.</strong> If a tool can't be connected to values like inquiry or community, it's a hard no.</li><li><strong>Implement a three-tier approach: no access for youngest students, guided access for middle grades, and unfettered access for upper school.</strong></li><li><strong>Educators must remain conversant in emerging technologies even if they choose not to adopt them.</strong> You can't effectively guide students away from tools you don't understand.</li><li><strong>Today's students are building digital communities without the face-to-face foundation previous generations had.</strong> Schools must explicitly teach digital norms and social skills that used to develop naturally through in-person interaction.</li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode, Priten talks with Joe Carver, Associate Head of School at The Meadow School. Joe shares his unconventional journey from debate coach to technology director to school leadership. He discusses his philosophy of values-driven technology integration—one that involves all stakeholders, resists both hasty adoption and knee-jerk resistance, and centers the teacher-student relationship. He explores how schools can thoughtfully embrace AI and educational technology by using core values as a North Star, building cultures of innovation through targeted adoption, and preparing educators to stay conversant with emerging tools. Joe emphasizes the importance of reverse-engineering what students miss in digital-first communities and advocates for data-informed, iterative decision-making that protects what matters most while navigating what's coming.</p><p><strong>Key Takeaways:</strong></p><ul><li><strong>Schools shouldn't rush to adopt every new technology.</strong> Taking time for thoughtful due diligence and involving all stakeholders (teachers, division directors, student support services) leads to better outcomes than being the first to implement.</li><li><strong>Technology decisions should trace back to institutional core values.</strong> If a tool can't be connected to values like inquiry or community, it's a hard no.</li><li><strong>Implement a three-tier approach: no access for youngest students, guided access for middle grades, and unfettered access for upper school.</strong></li><li><strong>Educators must remain conversant in emerging technologies even if they choose not to adopt them.</strong> You can't effectively guide students away from tools you don't understand.</li><li><strong>Today's students are building digital communities without the face-to-face foundation previous generations had.</strong> Schools must explicitly teach digital norms and social skills that used to develop naturally through in-person interaction.</li></ul>]]>
      </content:encoded>
      <pubDate>Tue, 10 Feb 2026 00:16:00 -0500</pubDate>
      <author>Priten Soundar-Shah</author>
      <enclosure url="https://media.transistor.fm/7352894c/1a92f356.mp3" length="27982256" type="audio/mpeg"/>
      <itunes:author>Priten Soundar-Shah</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/10slDn-shrvD_-OzL8YFq9xZ20lPR0sMzEUeyWroDBU/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lNDZh/MDg3YjY2ZGNhODIw/NDA0MWQ2ZTBlMTM5/ZDliNy5wbmc.jpg"/>
      <itunes:duration>1748</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode, Priten talks with Joe Carver, Associate Head of School at The Meadow School. Joe shares his unconventional journey from debate coach to technology director to school leadership. He discusses his philosophy of values-driven technology integration—one that involves all stakeholders, resists both hasty adoption and knee-jerk resistance, and centers the teacher-student relationship. He explores how schools can thoughtfully embrace AI and educational technology by using core values as a North Star, building cultures of innovation through targeted adoption, and preparing educators to stay conversant with emerging tools. Joe emphasizes the importance of reverse-engineering what students miss in digital-first communities and advocates for data-informed, iterative decision-making that protects what matters most while navigating what's coming.</p><p><strong>Key Takeaways:</strong></p><ul><li><strong>Schools shouldn't rush to adopt every new technology.</strong> Taking time for thoughtful due diligence and involving all stakeholders (teachers, division directors, student support services) leads to better outcomes than being the first to implement.</li><li><strong>Technology decisions should trace back to institutional core values.</strong> If a tool can't be connected to values like inquiry or community, it's a hard no.</li><li><strong>Implement a three-tier approach: no access for youngest students, guided access for middle grades, and unfettered access for upper school.</strong></li><li><strong>Educators must remain conversant in emerging technologies even if they choose not to adopt them.</strong> You can't effectively guide students away from tools you don't understand.</li><li><strong>Today's students are building digital communities without the face-to-face foundation previous generations had.</strong> Schools must explicitly teach digital norms and social skills that used to develop naturally through in-person interaction.</li></ul>]]>
      </itunes:summary>
      <itunes:keywords>education policy,values-driven technology,edtech policy,school governance,technology integration</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://priten.org" img="https://img.transistorcdn.com/JlYpjP0PmtU6_HZmHSsgaaNQgWcMD1eEmlB3smilNvk/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8yYTQz/ZDUzOTFhOTA0MDRl/OTBjMGEyMzhiNGYw/N2FmNy5qcGc.jpg">Priten Soundar-Shah</podcast:person>
      <podcast:person role="Guest" href="https://ethicaledtech.org/" img="https://img.transistorcdn.com/nI-yYtsz6CbMWfhgCUuk3r5MitxiueKOI4RNPyTUhAE/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8zZGE3/N2I4ODZhODRkYzFi/NTQwMTI0NTllY2E3/ZGFkZS5wbmc.jpg">Ethical Ed Tech: How Educators Can Lead on Digital Safety &amp; AI in K-12</podcast:person>
      <podcast:person role="Guest" href="https://listen.priten.org/people/joe-carver" img="https://img.transistorcdn.com/UDskZh-ToPv8Ribzk2Ctpu2ABIbFTIkD7s4RUlmoEcE/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9kZjRk/ZTFhY2M3OTg5N2Vk/YWYyZDZkYTA0ZjE5/OTk3My5qcGVn.jpg">Joe Carver</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/7352894c/transcript.txt" type="text/plain"/>
      <podcast:transcript url="https://share.transistor.fm/s/7352894c/transcript.vtt" type="text/vtt" rel="captions"/>
      <podcast:socialInteract protocol="atproto" uri="at://did:plc:lp33httd3l7fnkvwnv5kpei2/app.bsky.feed.post/3mei7i25e7b2e"/>
    </item>
    <item>
      <title>Can We Teach Critical Thinking and Not Mindless Clicking? - Aidan Kestigian</title>
      <itunes:episode>2</itunes:episode>
      <podcast:episode>2</podcast:episode>
      <itunes:title>Can We Teach Critical Thinking and Not Mindless Clicking? - Aidan Kestigian</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">a1e3ce53-b69a-46c5-bde9-d0f98a72a2bf</guid>
      <link>https://listen.priten.org/s1/2</link>
      <description>
        <![CDATA[<p>In this episode, Priten speaks with Aidan Kestigian, COO of Thinker Analytix, about why nearly half of college graduates lack basic reasoning skills and how explicit instruction in critical thinking can address this gap. They discuss the ethical commitments that should guide EdTech development, including prioritizing pedagogy over gamification, maintaining transparency with students, and building genuine relationships with educators.</p><p><strong>Key Takeaways:</strong></p><ul><li>Critical thinking requires explicit instruction—it's not automatically developed through traditional coursework</li><li>Ethical EdTech means putting pedagogical goals first, not engagement metrics or "stickiness"</li><li>Reasoning is inherently difficult and requires sustained practice; shortcuts undermine real learning</li><li>Direct accountability between EdTech developers and educators leads to better products and outcomes</li><li>Mission-driven organizations can prioritize both growth and integrity when the mission guides decision-making</li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode, Priten speaks with Aidan Kestigian, COO of Thinker Analytix, about why nearly half of college graduates lack basic reasoning skills and how explicit instruction in critical thinking can address this gap. They discuss the ethical commitments that should guide EdTech development, including prioritizing pedagogy over gamification, maintaining transparency with students, and building genuine relationships with educators.</p><p><strong>Key Takeaways:</strong></p><ul><li>Critical thinking requires explicit instruction—it's not automatically developed through traditional coursework</li><li>Ethical EdTech means putting pedagogical goals first, not engagement metrics or "stickiness"</li><li>Reasoning is inherently difficult and requires sustained practice; shortcuts undermine real learning</li><li>Direct accountability between EdTech developers and educators leads to better products and outcomes</li><li>Mission-driven organizations can prioritize both growth and integrity when the mission guides decision-making</li></ul>]]>
      </content:encoded>
      <pubDate>Thu, 05 Feb 2026 00:22:00 -0500</pubDate>
      <author>Priten Soundar-Shah</author>
      <enclosure url="https://media.transistor.fm/bc6ba92c/dddf2973.mp3" length="31619705" type="audio/mpeg"/>
      <itunes:author>Priten Soundar-Shah</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/81mnz4kgyq4c8t6EPw_hCR4BB8KrFZw2GOkoBMyKbNQ/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9kZjQ2/MzMyYmE2YzE1Yzc1/N2QxMzYyMTA5M2Jk/ZGNhMy5wbmc.jpg"/>
      <itunes:duration>1975</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode, Priten speaks with Aidan Kestigian, COO of Thinker Analytix, about why nearly half of college graduates lack basic reasoning skills and how explicit instruction in critical thinking can address this gap. They discuss the ethical commitments that should guide EdTech development, including prioritizing pedagogy over gamification, maintaining transparency with students, and building genuine relationships with educators.</p><p><strong>Key Takeaways:</strong></p><ul><li>Critical thinking requires explicit instruction—it's not automatically developed through traditional coursework</li><li>Ethical EdTech means putting pedagogical goals first, not engagement metrics or "stickiness"</li><li>Reasoning is inherently difficult and requires sustained practice; shortcuts undermine real learning</li><li>Direct accountability between EdTech developers and educators leads to better products and outcomes</li><li>Mission-driven organizations can prioritize both growth and integrity when the mission guides decision-making</li></ul>]]>
      </itunes:summary>
      <itunes:keywords>critical thinking,digital literacy,mindless clicking,media literacy,student engagement</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://priten.org" img="https://img.transistorcdn.com/JlYpjP0PmtU6_HZmHSsgaaNQgWcMD1eEmlB3smilNvk/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8yYTQz/ZDUzOTFhOTA0MDRl/OTBjMGEyMzhiNGYw/N2FmNy5qcGc.jpg">Priten Soundar-Shah</podcast:person>
      <podcast:person role="Guest" href="https://listen.priten.org/people/aidan-kestigian-ph-d" img="https://img.transistorcdn.com/66qmrKAUxWXhB-qDQW_wquU7I70vsj-nGztuprKRezE/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9mYTdk/M2FjNzcyODAzYjlh/ZWI3MTg4ZDcxNGQw/NzNiYS53ZWJw.jpg">Aidan Kestigian, Ph.D</podcast:person>
      <podcast:person role="Guest" href="https://thinkeranalytix.org" img="https://img.transistorcdn.com/JpE71fzJ9qHtHXI5V86OuDf-9hiJ75oE-mPWKUv20LA/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9mMmI2/MjM2MDdmNTExMmFm/ZWI5OGU4NjBlZjJi/MWRkOS5wbmc.jpg">ThinkerAnalytix</podcast:person>
      <podcast:person role="Guest" href="https://ethicaledtech.org/" img="https://img.transistorcdn.com/nI-yYtsz6CbMWfhgCUuk3r5MitxiueKOI4RNPyTUhAE/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8zZGE3/N2I4ODZhODRkYzFi/NTQwMTI0NTllY2E3/ZGFkZS5wbmc.jpg">Ethical Ed Tech: How Educators Can Lead on Digital Safety &amp; AI in K-12</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/bc6ba92c/transcript.txt" type="text/plain"/>
      <podcast:socialInteract protocol="atproto" uri="at://did:plc:lp33httd3l7fnkvwnv5kpei2/app.bsky.feed.post/3me3nhoqt7a2a"/>
    </item>
    <item>
      <title>What is Margin of Thought? - Priten Soundar-Shah</title>
      <itunes:episode>1</itunes:episode>
      <podcast:episode>1</podcast:episode>
      <itunes:title>What is Margin of Thought? - Priten Soundar-Shah</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">1f5da917-2907-4eab-b38c-159c0f3fe402</guid>
      <link>https://listen.priten.org/s1/1</link>
      <description>
        <![CDATA[<p>In this episode, Priten introduces <em>Margin of Thought</em>, a podcast that creates space for important questions about education, technology, and civic life. This introductory episode explains the show's mission: to explore tensions in modern life that shape how we raise and educate children.</p><p>The podcast will feature conversations with educators, civic leaders, technologists, academics, and students, focusing on two main threads:</p><ol><li>Ethics of Education Technology – Examining AI, surveillance, privacy, and digital safety in K-12 schools (companion to Priten's upcoming book <em>Ethical Ed Tech</em>)</li><li>Civics Education – Exploring how to prepare students for meaningful democratic participation</li></ol><p>At its core, the show asks: <em>How do we preserve what matters while navigating what's coming?</em> Through thoughtful dialogue and deliberate reflection, <em>Margin of Thought</em> aims to help shape institutions that better serve our children.</p><p>Relevant Links:</p><ul><li><a href="https://listen.priten.org">listen.priten.org</a></li><li><a href="https://ethicaledtech.org/">ethicaledtech.org</a></li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode, Priten introduces <em>Margin of Thought</em>, a podcast that creates space for important questions about education, technology, and civic life. This introductory episode explains the show's mission: to explore tensions in modern life that shape how we raise and educate children.</p><p>The podcast will feature conversations with educators, civic leaders, technologists, academics, and students, focusing on two main threads:</p><ol><li>Ethics of Education Technology – Examining AI, surveillance, privacy, and digital safety in K-12 schools (companion to Priten's upcoming book <em>Ethical Ed Tech</em>)</li><li>Civics Education – Exploring how to prepare students for meaningful democratic participation</li></ol><p>At its core, the show asks: <em>How do we preserve what matters while navigating what's coming?</em> Through thoughtful dialogue and deliberate reflection, <em>Margin of Thought</em> aims to help shape institutions that better serve our children.</p><p>Relevant Links:</p><ul><li><a href="https://listen.priten.org">listen.priten.org</a></li><li><a href="https://ethicaledtech.org/">ethicaledtech.org</a></li></ul>]]>
      </content:encoded>
      <pubDate>Tue, 03 Feb 2026 15:02:01 -0500</pubDate>
      <author>Priten Soundar-Shah</author>
      <enclosure url="https://media.transistor.fm/8972f30a/3e8ef713.mp3" length="3584928" type="audio/mpeg"/>
      <itunes:author>Priten Soundar-Shah</itunes:author>
      <itunes:image href="https://img.transistorcdn.com/nNROh0Bl6BGaY646-8yyotYeaSmI_adBhbUFMatZDQg/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS80NDgz/YzExYTg0YWU2Y2M1/OGQ0OGM5ZWJlNDM2/MjcyNi5wbmc.jpg"/>
      <itunes:duration>223</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode, Priten introduces <em>Margin of Thought</em>, a podcast that creates space for important questions about education, technology, and civic life. This introductory episode explains the show's mission: to explore tensions in modern life that shape how we raise and educate children.</p><p>The podcast will feature conversations with educators, civic leaders, technologists, academics, and students, focusing on two main threads:</p><ol><li>Ethics of Education Technology – Examining AI, surveillance, privacy, and digital safety in K-12 schools (companion to Priten's upcoming book <em>Ethical Ed Tech</em>)</li><li>Civics Education – Exploring how to prepare students for meaningful democratic participation</li></ol><p>At its core, the show asks: <em>How do we preserve what matters while navigating what's coming?</em> Through thoughtful dialogue and deliberate reflection, <em>Margin of Thought</em> aims to help shape institutions that better serve our children.</p><p>Relevant Links:</p><ul><li><a href="https://listen.priten.org">listen.priten.org</a></li><li><a href="https://ethicaledtech.org/">ethicaledtech.org</a></li></ul>]]>
      </itunes:summary>
      <itunes:keywords>ethical edtech,podcast introduction,education technology,margin of thought,pedagogy</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:person role="Host" href="https://priten.org" img="https://img.transistorcdn.com/JlYpjP0PmtU6_HZmHSsgaaNQgWcMD1eEmlB3smilNvk/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8yYTQz/ZDUzOTFhOTA0MDRl/OTBjMGEyMzhiNGYw/N2FmNy5qcGc.jpg">Priten Soundar-Shah</podcast:person>
      <podcast:person role="Guest" href="https://ethicaledtech.org/" img="https://img.transistorcdn.com/nI-yYtsz6CbMWfhgCUuk3r5MitxiueKOI4RNPyTUhAE/rs:fill:0:0:1/w:800/h:800/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8zZGE3/N2I4ODZhODRkYzFi/NTQwMTI0NTllY2E3/ZGFkZS5wbmc.jpg">Ethical Ed Tech: How Educators Can Lead on Digital Safety &amp; AI in K-12</podcast:person>
      <podcast:transcript url="https://share.transistor.fm/s/8972f30a/transcript.txt" type="text/plain"/>
    </item>
  </channel>
</rss>
