<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet href="/stylesheet.xsl" type="text/xsl"?>
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:sy="http://purl.org/rss/1.0/modules/syndication/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:podcast="https://podcastindex.org/namespace/1.0">
  <channel>
    <atom:link rel="self" type="application/rss+xml" href="https://feeds.transistor.fm/80k-after-hours" title="MP3 Audio"/>
    <atom:link rel="hub" href="https://pubsubhubbub.appspot.com/"/>
    <podcast:podping usesPodping="true"/>
    <title>80k After Hours</title>
    <generator>Transistor (https://transistor.fm)</generator>
    <itunes:new-feed-url>https://feeds.transistor.fm/80k-after-hours</itunes:new-feed-url>
    <description>Resources on how to do good with your career — and anything else we here at 80,000 Hours feel like releasing.</description>
    <copyright>80000 Hours</copyright>
    <podcast:guid>06d3cfe4-b89a-56f8-b80e-c40951fae872</podcast:guid>
    <podcast:locked owner="podcast@80000hours.org">yes</podcast:locked>
    <podcast:trailer pubdate="Thu, 24 Feb 2022 23:47:37 +0000" url="https://media.transistor.fm/724ead45/5028e787.mp3" length="6535866" type="audio/mpeg">Introducing 80k After Hours</podcast:trailer>
    <language>en</language>
    <pubDate>Fri, 25 Jul 2025 15:02:30 +0000</pubDate>
    <lastBuildDate>Fri, 25 Jul 2025 15:03:03 +0000</lastBuildDate>
    <link>https://80000hours.org/after-hours-podcast/</link>
    <image>
      <url>https://img.transistor.fm/EaeMLNbaO5N-_VTN6fbmGIvljfvq6XJFAlcRTjZjs90/rs:fill:0:0:1/w:3000/h:3000/q:60/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9zaG93/LzI4NjQ1LzE2NDc2/MjkxMjQtYXJ0d29y/ay5qcGc.jpg</url>
      <title>80k After Hours</title>
      <link>https://80000hours.org/after-hours-podcast/</link>
    </image>
    <itunes:category text="Education">
      <itunes:category text="Self-Improvement"/>
    </itunes:category>
    <itunes:category text="Society &amp; Culture">
      <itunes:category text="Documentary"/>
    </itunes:category>
    <itunes:type>episodic</itunes:type>
    <itunes:author>The 80,000 Hours team</itunes:author>
    <itunes:image href="https://img.transistor.fm/EaeMLNbaO5N-_VTN6fbmGIvljfvq6XJFAlcRTjZjs90/rs:fill:0:0:1/w:3000/h:3000/q:60/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9zaG93/LzI4NjQ1LzE2NDc2/MjkxMjQtYXJ0d29y/ay5qcGc.jpg"/>
    <itunes:summary>Resources on how to do good with your career — and anything else we here at 80,000 Hours feel like releasing.</itunes:summary>
    <itunes:subtitle>Resources on how to do good with your career — and anything else we here at 80,000 Hours feel like releasing.</itunes:subtitle>
    <itunes:keywords>effective altruism, 80000 hours, impactful careers, existential risk</itunes:keywords>
    <itunes:owner>
      <itunes:name>Rob Wiblin, Luisa Rodriguez and 80,000 Hours</itunes:name>
      <itunes:email>podcast@80000hours.org</itunes:email>
    </itunes:owner>
    <itunes:complete>No</itunes:complete>
    <itunes:explicit>No</itunes:explicit>
    <item>
      <title>Highlights: #218 – Hugh White on why Trump is abandoning US hegemony – and that’s probably good</title>
      <itunes:title>Highlights: #218 – Hugh White on why Trump is abandoning US hegemony – and that’s probably good</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">5ac66a68-61e4-46ef-8093-45ba20e4ad00</guid>
      <link>https://80000hours.org/podcast/episodes/hugh-white-hard-new-world-end-of-us-global-order/</link>
      <description>
        <![CDATA[<p>For decades, US allies have slept soundly under the protection of America’s overwhelming military might. Donald Trump — with his threats to ditch NATO, seize Greenland, and abandon Taiwan — seems hell-bent on shattering that comfort.</p><p>But according to Hugh White — one of the world's leading strategic thinkers, emeritus professor at the Australian National University, and author of <a href="https://www.audible.ca/pd/Quarterly-Essay-98-Hard-New-World-Audiobook/B0F9P92PD4"><em>Hard New World: Our Post-American Future</em></a> — Trump isn't destroying American hegemony. He's simply revealing that it's already gone.</p><p>These highlights are from episode #218 of <a href="https://80000hours.org/podcast/"><em>The 80,000 Hours Podcast</em></a>: <a href="https://80000hours.org/podcast/episodes/hugh-white-hard-new-world-end-of-us-global-order/"><strong>Hugh White on why Trump is abandoning US hegemony – and that’s probably good</strong></a>, and include:</p><ul><li>America has been all talk, no action when it comes to China and Russia (00:39)</li><li>How Trump has significantly brought forward the inevitable (05:14)</li><li>Westerners always underestimate what China can achieve (10:32)</li><li>We live in a multipolar world; we've got to make a multipolar world work (15:47)</li><li>Trump is half-right that the US was being ripped off (19:06)</li><li>Europe is strong enough to take on Russia, except it lacks nuclear deterrence (22:27)</li><li>A multipolar world is bad, but better than the alternative: nuclear war (28:50)</li><li>Taiwan's position is essentially indefensible — and the rest of the world needs to be honest with them about that (33:24)</li><li>AGI may or may not overcome existing nuclear deterrence (39:16)</li></ul><p>These aren't necessarily the most important or even most entertaining parts of the interview — so if you enjoy this, we strongly recommend checking out the full episode!</p><p>And if you're finding these highlights episodes 
valuable, please let us know by emailing <a href="mailto:podcast@80000hours.org">podcast@80000hours.org</a>.</p><p><em>Highlights put together by Ben Cordell, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>For decades, US allies have slept soundly under the protection of America’s overwhelming military might. Donald Trump — with his threats to ditch NATO, seize Greenland, and abandon Taiwan — seems hell-bent on shattering that comfort.</p><p>But according to Hugh White — one of the world's leading strategic thinkers, emeritus professor at the Australian National University, and author of <a href="https://www.audible.ca/pd/Quarterly-Essay-98-Hard-New-World-Audiobook/B0F9P92PD4"><em>Hard New World: Our Post-American Future</em></a> — Trump isn't destroying American hegemony. He's simply revealing that it's already gone.</p><p>These highlights are from episode #218 of <a href="https://80000hours.org/podcast/"><em>The 80,000 Hours Podcast</em></a>: <a href="https://80000hours.org/podcast/episodes/hugh-white-hard-new-world-end-of-us-global-order/"><strong>Hugh White on why Trump is abandoning US hegemony – and that’s probably good</strong></a>, and include:</p><ul><li>America has been all talk, no action when it comes to China and Russia (00:39)</li><li>How Trump has significantly brought forward the inevitable (05:14)</li><li>Westerners always underestimate what China can achieve (10:32)</li><li>We live in a multipolar world; we've got to make a multipolar world work (15:47)</li><li>Trump is half-right that the US was being ripped off (19:06)</li><li>Europe is strong enough to take on Russia, except it lacks nuclear deterrence (22:27)</li><li>A multipolar world is bad, but better than the alternative: nuclear war (28:50)</li><li>Taiwan's position is essentially indefensible — and the rest of the world needs to be honest with them about that (33:24)</li><li>AGI may or may not overcome existing nuclear deterrence (39:16)</li></ul><p>These aren't necessarily the most important or even most entertaining parts of the interview — so if you enjoy this, we strongly recommend checking out the full episode!</p><p>And if you're finding these highlights episodes 
valuable, please let us know by emailing <a href="mailto:podcast@80000hours.org">podcast@80000hours.org</a>.</p><p><em>Highlights put together by Ben Cordell, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </content:encoded>
      <pubDate>Tue, 22 Jul 2025 14:14:15 +0000</pubDate>
      <author>The 80,000 Hours team</author>
      <enclosure url="https://media.transistor.fm/a743d47e/5bb37554.mp3" length="45359896" type="audio/mpeg"/>
      <itunes:author>The 80,000 Hours team</itunes:author>
      <itunes:image href="https://img.transistor.fm/sklf6X23jeK8xmn15sJ0xq_StLNZ-4cRop29BNINP5w/rs:fill:0:0:1/w:3000/h:3000/q:60/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lMjhi/ODQxYWRmM2EwZTAz/YjgyYjc3MzA3OWU2/NjQzNS5qcGc.jpg"/>
      <itunes:duration>2833</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>For decades, US allies have slept soundly under the protection of America’s overwhelming military might. Donald Trump — with his threats to ditch NATO, seize Greenland, and abandon Taiwan — seems hell-bent on shattering that comfort.</p><p>But according to Hugh White — one of the world's leading strategic thinkers, emeritus professor at the Australian National University, and author of <a href="https://www.audible.ca/pd/Quarterly-Essay-98-Hard-New-World-Audiobook/B0F9P92PD4"><em>Hard New World: Our Post-American Future</em></a> — Trump isn't destroying American hegemony. He's simply revealing that it's already gone.</p><p>These highlights are from episode #218 of <a href="https://80000hours.org/podcast/"><em>The 80,000 Hours Podcast</em></a>: <a href="https://80000hours.org/podcast/episodes/hugh-white-hard-new-world-end-of-us-global-order/"><strong>Hugh White on why Trump is abandoning US hegemony – and that’s probably good</strong></a>, and include:</p><ul><li>America has been all talk, no action when it comes to China and Russia (00:39)</li><li>How Trump has significantly brought forward the inevitable (05:14)</li><li>Westerners always underestimate what China can achieve (10:32)</li><li>We live in a multipolar world; we've got to make a multipolar world work (15:47)</li><li>Trump is half-right that the US was being ripped off (19:06)</li><li>Europe is strong enough to take on Russia, except it lacks nuclear deterrence (22:27)</li><li>A multipolar world is bad, but better than the alternative: nuclear war (28:50)</li><li>Taiwan's position is essentially indefensible — and the rest of the world needs to be honest with them about that (33:24)</li><li>AGI may or may not overcome existing nuclear deterrence (39:16)</li></ul><p>These aren't necessarily the most important or even most entertaining parts of the interview — so if you enjoy this, we strongly recommend checking out the full episode!</p><p>And if you're finding these highlights episodes 
valuable, please let us know by emailing <a href="mailto:podcast@80000hours.org">podcast@80000hours.org</a>.</p><p><em>Highlights put together by Ben Cordell, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </itunes:summary>
      <itunes:keywords>effective altruism, 80000 hours, impactful careers, existential risk</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:chapters url="https://share.transistor.fm/s/a743d47e/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Highlights: #217 – Beth Barnes on the most important graph in AI right now — and the 7-month rule that governs its progress</title>
      <itunes:title>Highlights: #217 – Beth Barnes on the most important graph in AI right now — and the 7-month rule that governs its progress</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">1e9c73b9-6d0c-4c9a-a8fc-bae9954ef2e5</guid>
      <link>https://80000hours.org/podcast/episodes/beth-barnes-ai-safety-evals/</link>
      <description>
        <![CDATA[<p>AI models today have a 50% chance of successfully completing a task that would take an expert human one hour. Seven months ago, that number was roughly 30 minutes — and seven months before that, 15 minutes.</p><p>These are substantial, multi-step tasks requiring sustained focus: building web applications, conducting machine learning research, or solving complex programming challenges.</p><p>Today’s guest, Beth Barnes, is CEO of <a href="https://metr.org/">METR</a> (Model Evaluation &amp; Threat Research) — the leading organisation measuring these capabilities.</p><p>These highlights are from episode #217 of <a href="https://80000hours.org/podcast/"><em>The 80,000 Hours Podcast</em></a>: <a href="https://80000hours.org/podcast/episodes/beth-barnes-ai-safety-evals/"><strong>Beth Barnes on the most important graph in AI right now — and the 7-month rule that governs its progress</strong></a>, and include:</p><ul><li>Can we see AI scheming in the chain of thought? (00:00:34)</li><li>We have to test model honesty even before they're used inside AI companies (00:05:48)</li><li>It's essential to thoroughly test relevant real-world tasks (00:10:13)</li><li>Recursively self-improving AI might even be here in two years — which is alarming (00:16:09)</li><li>Do we need external auditors doing AI safety tests, not just the companies themselves? 
(00:21:55)</li><li>A case against safety-focused people working at frontier AI companies (00:29:30)</li><li>Open-weighting models is often good, and Beth has changed her attitude about it (00:34:57)</li></ul><p>These aren't necessarily the most important or even most entertaining parts of the interview — so if you enjoy this, we strongly recommend checking out the full episode!</p><p>And if you're finding these highlights episodes valuable, please let us know by emailing <a href="mailto:podcast@80000hours.org">podcast@80000hours.org</a>.</p><p><em>Highlights put together by Ben Cordell, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>AI models today have a 50% chance of successfully completing a task that would take an expert human one hour. Seven months ago, that number was roughly 30 minutes — and seven months before that, 15 minutes.</p><p>These are substantial, multi-step tasks requiring sustained focus: building web applications, conducting machine learning research, or solving complex programming challenges.</p><p>Today’s guest, Beth Barnes, is CEO of <a href="https://metr.org/">METR</a> (Model Evaluation &amp; Threat Research) — the leading organisation measuring these capabilities.</p><p>These highlights are from episode #217 of <a href="https://80000hours.org/podcast/"><em>The 80,000 Hours Podcast</em></a>: <a href="https://80000hours.org/podcast/episodes/beth-barnes-ai-safety-evals/"><strong>Beth Barnes on the most important graph in AI right now — and the 7-month rule that governs its progress</strong></a>, and include:</p><ul><li>Can we see AI scheming in the chain of thought? (00:00:34)</li><li>We have to test model honesty even before they're used inside AI companies (00:05:48)</li><li>It's essential to thoroughly test relevant real-world tasks (00:10:13)</li><li>Recursively self-improving AI might even be here in two years — which is alarming (00:16:09)</li><li>Do we need external auditors doing AI safety tests, not just the companies themselves? 
(00:21:55)</li><li>A case against safety-focused people working at frontier AI companies (00:29:30)</li><li>Open-weighting models is often good, and Beth has changed her attitude about it (00:34:57)</li></ul><p>These aren't necessarily the most important or even most entertaining parts of the interview — so if you enjoy this, we strongly recommend checking out the full episode!</p><p>And if you're finding these highlights episodes valuable, please let us know by emailing <a href="mailto:podcast@80000hours.org">podcast@80000hours.org</a>.</p><p><em>Highlights put together by Ben Cordell, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </content:encoded>
      <pubDate>Thu, 26 Jun 2025 17:12:23 +0000</pubDate>
      <author>The 80,000 Hours team</author>
      <enclosure url="https://media.transistor.fm/0b9df3ea/4d2c69e5.mp3" length="39334501" type="audio/mpeg"/>
      <itunes:author>The 80,000 Hours team</itunes:author>
      <itunes:image href="https://img.transistor.fm/j5AE0kfKt8xg_AAttF4gTOT93SXKV4Z4qCmWQGmR3OU/rs:fill:0:0:1/w:3000/h:3000/q:60/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9jYTE3/YzQ2MDlmODdiYjM3/OTVhNTEwNmYyZGRk/ODZiZC5qcGc.jpg"/>
      <itunes:duration>2454</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>AI models today have a 50% chance of successfully completing a task that would take an expert human one hour. Seven months ago, that number was roughly 30 minutes — and seven months before that, 15 minutes.</p><p>These are substantial, multi-step tasks requiring sustained focus: building web applications, conducting machine learning research, or solving complex programming challenges.</p><p>Today’s guest, Beth Barnes, is CEO of <a href="https://metr.org/">METR</a> (Model Evaluation &amp; Threat Research) — the leading organisation measuring these capabilities.</p><p>These highlights are from episode #217 of <a href="https://80000hours.org/podcast/"><em>The 80,000 Hours Podcast</em></a>: <a href="https://80000hours.org/podcast/episodes/beth-barnes-ai-safety-evals/"><strong>Beth Barnes on the most important graph in AI right now — and the 7-month rule that governs its progress</strong></a>, and include:</p><ul><li>Can we see AI scheming in the chain of thought? (00:00:34)</li><li>We have to test model honesty even before they're used inside AI companies (00:05:48)</li><li>It's essential to thoroughly test relevant real-world tasks (00:10:13)</li><li>Recursively self-improving AI might even be here in two years — which is alarming (00:16:09)</li><li>Do we need external auditors doing AI safety tests, not just the companies themselves? 
(00:21:55)</li><li>A case against safety-focused people working at frontier AI companies (00:29:30)</li><li>Open-weighting models is often good, and Beth has changed her attitude about it (00:34:57)</li></ul><p>These aren't necessarily the most important or even most entertaining parts of the interview — so if you enjoy this, we strongly recommend checking out the full episode!</p><p>And if you're finding these highlights episodes valuable, please let us know by emailing <a href="mailto:podcast@80000hours.org">podcast@80000hours.org</a>.</p><p><em>Highlights put together by Ben Cordell, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </itunes:summary>
      <itunes:keywords>effective altruism, 80000 hours, impactful careers, existential risk</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:chapters url="https://share.transistor.fm/s/0b9df3ea/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Highlights: #216 – Ian Dunt on why governments in Britain and elsewhere can’t get anything done – and how to fix it</title>
      <itunes:title>Highlights: #216 – Ian Dunt on why governments in Britain and elsewhere can’t get anything done – and how to fix it</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">ad232785-b34c-4c18-8600-de93004a682e</guid>
      <link>https://80000hours.org/podcast/episodes/ian-dunt-why-governments-fail/</link>
      <description>
        <![CDATA[<p>When you have a system where ministers almost never understand their portfolios, civil servants change jobs every few months, and MPs don’t grasp parliamentary procedure even after decades in office — is the problem the people, or the structure they work in?</p><p>Political journalist <a href="https://www.iandunt.com/">Ian Dunt</a> studies the systemic reasons governments succeed and fail. And in his book <a href="https://www.amazon.com/How-Westminster-Works-Why-Doesnt/dp/139960273X"><em>How Westminster Works …and Why It Doesn’t</em></a>, he argues that Britain’s government dysfunction and multi-decade failure to solve its key problems stems primarily from bad incentives and bad processes.</p><p>These highlights are from episode #216 of <a href="https://80000hours.org/podcast/"><em>The 80,000 Hours Podcast</em></a>: <a href="https://80000hours.org/podcast/episodes/ian-dunt-why-governments-fail/"><strong>Ian Dunt on why governments in Britain and elsewhere can’t get anything done – and how to fix it</strong></a>, and include:</p><ul><li>Rob's intro (00:00:00)</li><li>The UK is governed from a tiny cramped house (00:00:08)</li><li>Replacing political distractions with departmental organisation (00:02:58)</li><li>The profoundly dangerous development of "delegated legislation" (00:06:42)</li><li>Do more independent-minded legislatures actually lead to better outcomes? 
(00:09:08)</li><li>MPs waste much of their time helping constituents with random complaints (00:12:50)</li><li>How to keep expert civil servants (00:15:44)</li><li>Unlikely heroes in the House of Lords (00:18:33)</li><li>Proportional representation and other alternatives to first-past-the-post (00:22:02)</li></ul><p>These aren't necessarily the most important or even most entertaining parts of the interview — so if you enjoy this, we strongly recommend checking out the full episode!</p><p>And if you're finding these highlights episodes valuable, please let us know by emailing <a href="mailto:podcast@80000hours.org">podcast@80000hours.org</a>.</p><p><em>Highlights put together by Ben Cordell, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>When you have a system where ministers almost never understand their portfolios, civil servants change jobs every few months, and MPs don’t grasp parliamentary procedure even after decades in office — is the problem the people, or the structure they work in?</p><p>Political journalist <a href="https://www.iandunt.com/">Ian Dunt</a> studies the systemic reasons governments succeed and fail. And in his book <a href="https://www.amazon.com/How-Westminster-Works-Why-Doesnt/dp/139960273X"><em>How Westminster Works …and Why It Doesn’t</em></a>, he argues that Britain’s government dysfunction and multi-decade failure to solve its key problems stems primarily from bad incentives and bad processes.</p><p>These highlights are from episode #216 of <a href="https://80000hours.org/podcast/"><em>The 80,000 Hours Podcast</em></a>: <a href="https://80000hours.org/podcast/episodes/ian-dunt-why-governments-fail/"><strong>Ian Dunt on why governments in Britain and elsewhere can’t get anything done – and how to fix it</strong></a>, and include:</p><ul><li>Rob's intro (00:00:00)</li><li>The UK is governed from a tiny cramped house (00:00:08)</li><li>Replacing political distractions with departmental organisation (00:02:58)</li><li>The profoundly dangerous development of "delegated legislation" (00:06:42)</li><li>Do more independent-minded legislatures actually lead to better outcomes? 
(00:09:08)</li><li>MPs waste much of their time helping constituents with random complaints (00:12:50)</li><li>How to keep expert civil servants (00:15:44)</li><li>Unlikely heroes in the House of Lords (00:18:33)</li><li>Proportional representation and other alternatives to first-past-the-post (00:22:02)</li></ul><p>These aren't necessarily the most important or even most entertaining parts of the interview — so if you enjoy this, we strongly recommend checking out the full episode!</p><p>And if you're finding these highlights episodes valuable, please let us know by emailing <a href="mailto:podcast@80000hours.org">podcast@80000hours.org</a>.</p><p><em>Highlights put together by Ben Cordell, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </content:encoded>
      <pubDate>Tue, 27 May 2025 14:41:03 +0000</pubDate>
      <author>The 80,000 Hours team</author>
      <enclosure url="https://media.transistor.fm/ed78d68a/10444cc3.mp3" length="29726931" type="audio/mpeg"/>
      <itunes:author>The 80,000 Hours team</itunes:author>
      <itunes:image href="https://img.transistor.fm/e7a0fEtPQTpDFGWh9XuS6rEKkCARJu8LvFZgmLYqzq8/rs:fill:0:0:1/w:3000/h:3000/q:60/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8yYjEw/MjJhZDIwNTFjY2M0/M2E5MWFjYjZjYjc5/NjBiZS5qcGc.jpg"/>
      <itunes:duration>1856</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>When you have a system where ministers almost never understand their portfolios, civil servants change jobs every few months, and MPs don’t grasp parliamentary procedure even after decades in office — is the problem the people, or the structure they work in?</p><p>Political journalist <a href="https://www.iandunt.com/">Ian Dunt</a> studies the systemic reasons governments succeed and fail. And in his book <a href="https://www.amazon.com/How-Westminster-Works-Why-Doesnt/dp/139960273X"><em>How Westminster Works …and Why It Doesn’t</em></a>, he argues that Britain’s government dysfunction and multi-decade failure to solve its key problems stems primarily from bad incentives and bad processes.</p><p>These highlights are from episode #216 of <a href="https://80000hours.org/podcast/"><em>The 80,000 Hours Podcast</em></a>: <a href="https://80000hours.org/podcast/episodes/ian-dunt-why-governments-fail/"><strong>Ian Dunt on why governments in Britain and elsewhere can’t get anything done – and how to fix it</strong></a>, and include:</p><ul><li>Rob's intro (00:00:00)</li><li>The UK is governed from a tiny cramped house (00:00:08)</li><li>Replacing political distractions with departmental organisation (00:02:58)</li><li>The profoundly dangerous development of "delegated legislation" (00:06:42)</li><li>Do more independent-minded legislatures actually lead to better outcomes? 
(00:09:08)</li><li>MPs waste much of their time helping constituents with random complaints (00:12:50)</li><li>How to keep expert civil servants (00:15:44)</li><li>Unlikely heroes in the House of Lords (00:18:33)</li><li>Proportional representation and other alternatives to first-past-the-post (00:22:02)</li></ul><p>These aren't necessarily the most important or even most entertaining parts of the interview — so if you enjoy this, we strongly recommend checking out the full episode!</p><p>And if you're finding these highlights episodes valuable, please let us know by emailing <a href="mailto:podcast@80000hours.org">podcast@80000hours.org</a>.</p><p><em>Highlights put together by Ben Cordell, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </itunes:summary>
      <itunes:keywords>effective altruism, 80000 hours, impactful careers, existential risk</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/ed78d68a/transcript.txt" type="text/plain"/>
      <podcast:chapters url="https://share.transistor.fm/s/ed78d68a/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Highlights: #215 – Tom Davidson on how AI-enabled coups could allow a tiny group to seize power</title>
      <itunes:title>Highlights: #215 – Tom Davidson on how AI-enabled coups could allow a tiny group to seize power</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">03b4c292-052f-4714-a7d3-a1f5a6ac4b9e</guid>
      <link>https://80000hours.org/podcast/episodes/tom-davidson-ai-enabled-human-power-grabs/</link>
      <description>
        <![CDATA[<p>Throughout history, technological revolutions have fundamentally shifted the balance of power in society. The Industrial Revolution created conditions where democracies could dominate for the first time — as nations needed educated, informed, and empowered citizens to deploy advanced technologies and remain competitive.</p><p>Unfortunately, there’s every reason to think artificial general intelligence (AGI) will reverse that trend.</p><p>In a <a href="https://www.forethought.org/research/ai-enabled-coups-how-a-small-group-could-use-ai-to-seize-power">new paper</a>, <a href="https://tomdavidson-ai.github.io/">Tom Davidson</a> — senior research fellow at the <a href="https://www.forethought.org/">Forethought Centre for AI Strategy</a> — argues that advanced AI systems will enable unprecedented power grabs by tiny groups of people, primarily by removing the need for other human beings to participate.</p><p><br>These highlights are from episode #215 of <a href="https://80000hours.org/podcast/"><em>The 80,000 Hours Podcast</em></a>: <a href="https://80000hours.org/podcast/episodes/tom-davidson-ai-enabled-human-power-grabs/"><strong>Tom Davidson on how AI-enabled coups could allow a tiny group to seize power</strong></a>, and include:</p><ul><li>"No person rules alone" — except now they might (00:00:13)</li><li>The 3 threat scenarios (00:06:17)</li><li>Underpinning all 3 threats: Secret AI loyalties (00:10:15)</li><li>Is this common sense or far-fetched? (00:13:46)</li><li>How to automate a military coup (00:17:41)</li><li>If you took over the US, could you take over the whole world? (00:22:44)</li><li>Secret loyalties all the way down (00:26:27)</li><li>Is it important to have more than one powerful AI country? 
(00:29:59)</li><li>What transparency actually looks like (00:33:08)</li></ul><p>These aren't necessarily the most important or even most entertaining parts of the interview — so if you enjoy this, we strongly recommend checking out the full episode!</p><p>And if you're finding these highlights episodes valuable, please let us know by emailing <a href="mailto:podcast@80000hours.org">podcast@80000hours.org</a>.</p><p><em>Highlights put together by Ben Cordell, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Throughout history, technological revolutions have fundamentally shifted the balance of power in society. The Industrial Revolution created conditions where democracies could dominate for the first time — as nations needed educated, informed, and empowered citizens to deploy advanced technologies and remain competitive.</p><p>Unfortunately, there’s every reason to think artificial general intelligence (AGI) will reverse that trend.</p><p>In a <a href="https://www.forethought.org/research/ai-enabled-coups-how-a-small-group-could-use-ai-to-seize-power">new paper</a>, <a href="https://tomdavidson-ai.github.io/">Tom Davidson</a> — senior research fellow at the <a href="https://www.forethought.org/">Forethought Centre for AI Strategy</a> — argues that advanced AI systems will enable unprecedented power grabs by tiny groups of people, primarily by removing the need for other human beings to participate.</p><p><br>These highlights are from episode #215 of <a href="https://80000hours.org/podcast/"><em>The 80,000 Hours Podcast</em></a>: <a href="https://80000hours.org/podcast/episodes/tom-davidson-ai-enabled-human-power-grabs/"><strong>Tom Davidson on how AI-enabled coups could allow a tiny group to seize power</strong></a>, and include:</p><ul><li>"No person rules alone" — except now they might (00:00:13)</li><li>The 3 threat scenarios (00:06:17)</li><li>Underpinning all 3 threats: Secret AI loyalties (00:10:15)</li><li>Is this common sense or far-fetched? (00:13:46)</li><li>How to automate a military coup (00:17:41)</li><li>If you took over the US, could you take over the whole world? (00:22:44)</li><li>Secret loyalties all the way down (00:26:27)</li><li>Is it important to have more than one powerful AI country? 
(00:29:59)</li><li>What transparency actually looks like (00:33:08)</li></ul><p>These aren't necessarily the most important or even most entertaining parts of the interview — so if you enjoy this, we strongly recommend checking out the full episode!</p><p>And if you're finding these highlights episodes valuable, please let us know by emailing <a href="mailto:podcast@80000hours.org">podcast@80000hours.org</a>.</p><p><em>Highlights put together by Ben Cordell, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </content:encoded>
      <pubDate>Fri, 16 May 2025 15:00:13 +0000</pubDate>
      <author>The 80,000 Hours team</author>
      <enclosure url="https://media.transistor.fm/6598b6f8/bac1348d.mp3" length="35864411" type="audio/mpeg"/>
      <itunes:author>The 80,000 Hours team</itunes:author>
      <itunes:image href="https://img.transistor.fm/1XJXgQBmvftWzbHn9UEcssLcIhO2XKgGWy_JgTrG-Zc/rs:fill:0:0:1/w:3000/h:3000/q:60/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS80MGRh/ZGU4MDdmNzZiZjNm/Y2Y3ZWYwMmY1NjAy/OWQwNC5qcGc.jpg"/>
      <itunes:duration>2239</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Throughout history, technological revolutions have fundamentally shifted the balance of power in society. The Industrial Revolution created conditions where democracies could dominate for the first time — as nations needed educated, informed, and empowered citizens to deploy advanced technologies and remain competitive.</p><p>Unfortunately, there’s every reason to think artificial general intelligence (AGI) will reverse that trend.</p><p>In a <a href="https://www.forethought.org/research/ai-enabled-coups-how-a-small-group-could-use-ai-to-seize-power">new paper</a>, <a href="https://tomdavidson-ai.github.io/">Tom Davidson</a> — senior research fellow at the <a href="https://www.forethought.org/">Forethought Centre for AI Strategy</a> — argues that advanced AI systems will enable unprecedented power grabs by tiny groups of people, primarily by removing the need for other human beings to participate.</p><p>These highlights are from episode #215 of <a href="https://80000hours.org/podcast/"><em>The 80,000 Hours Podcast</em></a>: <a href="https://80000hours.org/podcast/episodes/tom-davidson-ai-enabled-human-power-grabs/"><strong>Tom Davidson on how AI-enabled coups could allow a tiny group to seize power</strong></a>, and include:</p><ul><li>"No person rules alone" — except now they might (00:00:13)</li><li>The 3 threat scenarios (00:06:17)</li><li>Underpinning all 3 threats: Secret AI loyalties (00:10:15)</li><li>Is this common sense or far-fetched? (00:13:46)</li><li>How to automate a military coup (00:17:41)</li><li>If you took over the US, could you take over the whole world? (00:22:44)</li><li>Secret loyalties all the way down (00:26:27)</li><li>Is it important to have more than one powerful AI country? (00:29:59)</li><li>What transparency actually looks like (00:33:08)</li></ul><p>These aren't necessarily the most important or even most entertaining parts of the interview — so if you enjoy this, we strongly recommend checking out the full episode!</p><p>And if you're finding these highlights episodes valuable, please let us know by emailing <a href="mailto:podcast@80000hours.org">podcast@80000hours.org</a>.</p><p><em>Highlights put together by Ben Cordell, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </itunes:summary>
      <itunes:keywords>effective altruism, 80000 hours, impactful careers, existential risk</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:chapters url="https://share.transistor.fm/s/6598b6f8/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Highlights: #214 – Buck Shlegeris on controlling AI that wants to take over – so we can use it anyway</title>
      <itunes:title>Highlights: #214 – Buck Shlegeris on controlling AI that wants to take over – so we can use it anyway</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">16a8068c-5bb4-496c-8d4e-c4e1dc6ee8a2</guid>
      <link>https://80000hours.org/podcast/episodes/buck-shlegeris-ai-control-scheming/</link>
      <description>
        <![CDATA[<p>Most AI safety conversations centre on alignment: ensuring AI systems share our values and goals. But despite progress, we’re unlikely to know we’ve solved the problem before the arrival of human-level and superhuman systems in as little as three years.</p><p>So some — including Buck Shlegeris, CEO of <a href="https://www.redwoodresearch.org/">Redwood Research</a> — are developing a backup plan to safely deploy models we fear are actively scheming to harm us: so-called “AI control.” While this may sound mad, given the reluctance of AI companies to delay deploying anything they train, <em>not </em>developing such techniques is probably even crazier. </p><p>These highlights are from episode #214 of <a href="https://80000hours.org/podcast/"><em>The 80,000 Hours Podcast</em></a>: <a href="https://80000hours.org/podcast/episodes/buck-shlegeris-ai-control-scheming/"><strong>Buck Shlegeris on controlling AI that wants to take over – so we can use it anyway</strong></a>, and include:</p><ul><li>What is AI control? (00:00:15)</li><li>One way to catch AIs that are up to no good (00:07:00)</li><li>What do we do once we catch a model trying to escape? (00:13:39)</li><li>Team Human vs Team AI (00:18:24)</li><li>If an AI escapes, is it likely to be able to beat humanity from there? (00:24:59)</li><li>Is alignment still useful? (00:32:10)</li><li>Could 10 safety-focused people in an AGI company do anything useful? (00:35:34)</li></ul><p>These aren't necessarily the most important or even most entertaining parts of the interview — so if you enjoy this, we strongly recommend checking out the full episode!</p><p>And if you're finding these highlights episodes valuable, please let us know by emailing <a href="mailto:podcast@80000hours.org">podcast@80000hours.org</a>. </p><p><em>Highlights put together by Ben Cordell, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Most AI safety conversations centre on alignment: ensuring AI systems share our values and goals. But despite progress, we’re unlikely to know we’ve solved the problem before the arrival of human-level and superhuman systems in as little as three years.</p><p>So some — including Buck Shlegeris, CEO of <a href="https://www.redwoodresearch.org/">Redwood Research</a> — are developing a backup plan to safely deploy models we fear are actively scheming to harm us: so-called “AI control.” While this may sound mad, given the reluctance of AI companies to delay deploying anything they train, <em>not </em>developing such techniques is probably even crazier. </p><p>These highlights are from episode #214 of <a href="https://80000hours.org/podcast/"><em>The 80,000 Hours Podcast</em></a>: <a href="https://80000hours.org/podcast/episodes/buck-shlegeris-ai-control-scheming/"><strong>Buck Shlegeris on controlling AI that wants to take over – so we can use it anyway</strong></a>, and include:</p><ul><li>What is AI control? (00:00:15)</li><li>One way to catch AIs that are up to no good (00:07:00)</li><li>What do we do once we catch a model trying to escape? (00:13:39)</li><li>Team Human vs Team AI (00:18:24)</li><li>If an AI escapes, is it likely to be able to beat humanity from there? (00:24:59)</li><li>Is alignment still useful? (00:32:10)</li><li>Could 10 safety-focused people in an AGI company do anything useful? (00:35:34)</li></ul><p>These aren't necessarily the most important or even most entertaining parts of the interview — so if you enjoy this, we strongly recommend checking out the full episode!</p><p>And if you're finding these highlights episodes valuable, please let us know by emailing <a href="mailto:podcast@80000hours.org">podcast@80000hours.org</a>. </p><p><em>Highlights put together by Ben Cordell, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </content:encoded>
      <pubDate>Fri, 18 Apr 2025 15:05:36 +0000</pubDate>
      <author>The 80,000 Hours team</author>
      <enclosure url="https://media.transistor.fm/0e8cf71c/1b970ece.mp3" length="39816880" type="audio/mpeg"/>
      <itunes:author>The 80,000 Hours team</itunes:author>
      <itunes:image href="https://img.transistor.fm/BoSaLkXOW-qh9ubGA4VMPhR26XdIfRFOXjngJCEKpz8/rs:fill:0:0:1/w:3000/h:3000/q:60/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8wMDkx/NzJiMjllMzA0Nzlh/M2NjODgzOWQxMGRi/YzlmYS5qcGc.jpg"/>
      <itunes:duration>2486</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Most AI safety conversations centre on alignment: ensuring AI systems share our values and goals. But despite progress, we’re unlikely to know we’ve solved the problem before the arrival of human-level and superhuman systems in as little as three years.</p><p>So some — including Buck Shlegeris, CEO of <a href="https://www.redwoodresearch.org/">Redwood Research</a> — are developing a backup plan to safely deploy models we fear are actively scheming to harm us: so-called “AI control.” While this may sound mad, given the reluctance of AI companies to delay deploying anything they train, <em>not </em>developing such techniques is probably even crazier. </p><p>These highlights are from episode #214 of <a href="https://80000hours.org/podcast/"><em>The 80,000 Hours Podcast</em></a>: <a href="https://80000hours.org/podcast/episodes/buck-shlegeris-ai-control-scheming/"><strong>Buck Shlegeris on controlling AI that wants to take over – so we can use it anyway</strong></a>, and include:</p><ul><li>What is AI control? (00:00:15)</li><li>One way to catch AIs that are up to no good (00:07:00)</li><li>What do we do once we catch a model trying to escape? (00:13:39)</li><li>Team Human vs Team AI (00:18:24)</li><li>If an AI escapes, is it likely to be able to beat humanity from there? (00:24:59)</li><li>Is alignment still useful? (00:32:10)</li><li>Could 10 safety-focused people in an AGI company do anything useful? (00:35:34)</li></ul><p>These aren't necessarily the most important or even most entertaining parts of the interview — so if you enjoy this, we strongly recommend checking out the full episode!</p><p>And if you're finding these highlights episodes valuable, please let us know by emailing <a href="mailto:podcast@80000hours.org">podcast@80000hours.org</a>. </p><p><em>Highlights put together by Ben Cordell, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </itunes:summary>
      <itunes:keywords>effective altruism, 80000 hours, impactful careers, existential risk</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/0e8cf71c/transcript.txt" type="text/plain"/>
      <podcast:chapters url="https://share.transistor.fm/s/0e8cf71c/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Off the Clock #8: Leaving Las London with Matt Reardon</title>
      <itunes:title>Off the Clock #8: Leaving Las London with Matt Reardon</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">625326c2-c049-4e5f-b590-314dc1b97ddc</guid>
      <link>https://share.transistor.fm/s/53ddf66a</link>
      <description>
        <![CDATA[<p><a href="https://youtu.be/fJssGodnCQg"><strong>Watch this episode on YouTube!</strong></a> https://youtu.be/fJssGodnCQg</p><p>Conor and Arden sit down with Matt in his farewell episode to discuss the law, their team retreat, his lessons learned from 80k, and the fate of the show.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p><a href="https://youtu.be/fJssGodnCQg"><strong>Watch this episode on YouTube!</strong></a> https://youtu.be/fJssGodnCQg</p><p>Conor and Arden sit down with Matt in his farewell episode to discuss the law, their team retreat, his lessons learned from 80k, and the fate of the show.</p>]]>
      </content:encoded>
      <pubDate>Tue, 01 Apr 2025 18:34:56 +0000</pubDate>
      <author>The 80,000 Hours team</author>
      <enclosure url="https://media.transistor.fm/53ddf66a/68484401.mp3" length="148862467" type="audio/mpeg"/>
      <itunes:author>The 80,000 Hours team</itunes:author>
      <itunes:image href="https://img.transistor.fm/Vrz8m-eRVwD1P_m4e47nPiSeE-jsfGQz6vsrwKe0mBQ/rs:fill:0:0:1/w:3000/h:3000/q:60/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8yNmFl/YjUzZWU5M2ZhNmVh/ZjZiNTRlOTVhOTNh/NDQ4Zi5qcGc.jpg"/>
      <itunes:duration>6201</itunes:duration>
      <itunes:summary>
        <![CDATA[<p><a href="https://youtu.be/fJssGodnCQg"><strong>Watch this episode on YouTube!</strong></a> https://youtu.be/fJssGodnCQg</p><p>Conor and Arden sit down with Matt in his farewell episode to discuss the law, their team retreat, his lessons learned from 80k, and the fate of the show.</p>]]>
      </itunes:summary>
      <itunes:keywords>effective altruism, 80000 hours, impactful careers, existential risk</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/53ddf66a/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>Highlights: #213 – Will MacAskill on AI causing a “century in a decade” — and how we’re completely unprepared</title>
      <itunes:title>Highlights: #213 – Will MacAskill on AI causing a “century in a decade” — and how we’re completely unprepared</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">1db06781-205e-42ca-bb26-a4a40a87f09a</guid>
      <link>https://80000hours.org/podcast/episodes/will-macaskill-century-in-a-decade-navigating-intelligence-explosion/</link>
      <description>
        <![CDATA[<p>The 20th century saw unprecedented change: nuclear weapons, satellites, the rise and fall of communism, third-wave feminism, the internet, postmodernism, game theory, genetic engineering, the Big Bang theory, quantum mechanics, birth control, and more. Now imagine all of it compressed into just 10 years.</p><p>That’s the future Will MacAskill — philosopher and researcher at the <a href="https://www.forethought.org/">Forethought Centre for AI Strategy</a> — argues we need to prepare for in his new paper “<a href="https://www.forethought.org/preparing-for-the-intelligence-explosion">Preparing for the intelligence explosion</a>.” Not in the distant future, but probably in three to seven years.</p><p>These highlights are from episode #213 of <a href="https://80000hours.org/podcast/"><em>The 80,000 Hours Podcast</em></a>: <a href="https://80000hours.org/podcast/episodes/will-macaskill-century-in-a-decade-navigating-intelligence-explosion/"><strong>Will MacAskill on AI causing a “century in a decade” — and how we’re completely unprepared</strong></a>, and include:</p><ul><li>Rob's intro (00:00:00)</li><li>A century of history crammed into a decade (00:00:17)</li><li>What does a good future with AGI even look like? (00:04:48)</li><li>AI takeover might happen anyway — should we rush to load in our values? (00:09:29)</li><li>Lock-in is plausible where it never was before (00:14:40)</li><li>ML researchers are feverishly working to destroy their own power (00:20:07)</li><li>People distrust utopianism for good reason (00:24:30)</li><li>Non-technological disruption (00:29:18)</li><li>The 3 intelligence explosions (00:31:10)</li></ul><p>These aren't necessarily the most important or even most entertaining parts of the interview — so if you enjoy this, we strongly recommend checking out the full episode!</p><p>And if you're finding these highlights episodes valuable, please let us know by emailing <a href="mailto:podcast@80000hours.org">podcast@80000hours.org</a>.</p><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>The 20th century saw unprecedented change: nuclear weapons, satellites, the rise and fall of communism, third-wave feminism, the internet, postmodernism, game theory, genetic engineering, the Big Bang theory, quantum mechanics, birth control, and more. Now imagine all of it compressed into just 10 years.</p><p>That’s the future Will MacAskill — philosopher and researcher at the <a href="https://www.forethought.org/">Forethought Centre for AI Strategy</a> — argues we need to prepare for in his new paper “<a href="https://www.forethought.org/preparing-for-the-intelligence-explosion">Preparing for the intelligence explosion</a>.” Not in the distant future, but probably in three to seven years.</p><p>These highlights are from episode #213 of <a href="https://80000hours.org/podcast/"><em>The 80,000 Hours Podcast</em></a>: <a href="https://80000hours.org/podcast/episodes/will-macaskill-century-in-a-decade-navigating-intelligence-explosion/"><strong>Will MacAskill on AI causing a “century in a decade” — and how we’re completely unprepared</strong></a>, and include:</p><ul><li>Rob's intro (00:00:00)</li><li>A century of history crammed into a decade (00:00:17)</li><li>What does a good future with AGI even look like? (00:04:48)</li><li>AI takeover might happen anyway — should we rush to load in our values? (00:09:29)</li><li>Lock-in is plausible where it never was before (00:14:40)</li><li>ML researchers are feverishly working to destroy their own power (00:20:07)</li><li>People distrust utopianism for good reason (00:24:30)</li><li>Non-technological disruption (00:29:18)</li><li>The 3 intelligence explosions (00:31:10)</li></ul><p>These aren't necessarily the most important or even most entertaining parts of the interview — so if you enjoy this, we strongly recommend checking out the full episode!</p><p>And if you're finding these highlights episodes valuable, please let us know by emailing <a href="mailto:podcast@80000hours.org">podcast@80000hours.org</a>.</p><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </content:encoded>
      <pubDate>Tue, 25 Mar 2025 14:59:49 +0000</pubDate>
      <author>The 80,000 Hours team</author>
      <enclosure url="https://media.transistor.fm/5891d57a/e4259df4.mp3" length="32279306" type="audio/mpeg"/>
      <itunes:author>The 80,000 Hours team</itunes:author>
      <itunes:image href="https://img.transistor.fm/K7zy3lD21NBn6De3NqOfvboClHgACKZznBtBetzlixs/rs:fill:0:0:1/w:3000/h:3000/q:60/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS85MDQ3/OTAyYTIwNmViOTAx/ZDg3MWQ5ODY4ZmI1/YTNjNi5qcGc.jpg"/>
      <itunes:duration>2015</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>The 20th century saw unprecedented change: nuclear weapons, satellites, the rise and fall of communism, third-wave feminism, the internet, postmodernism, game theory, genetic engineering, the Big Bang theory, quantum mechanics, birth control, and more. Now imagine all of it compressed into just 10 years.</p><p>That’s the future Will MacAskill — philosopher and researcher at the <a href="https://www.forethought.org/">Forethought Centre for AI Strategy</a> — argues we need to prepare for in his new paper “<a href="https://www.forethought.org/preparing-for-the-intelligence-explosion">Preparing for the intelligence explosion</a>.” Not in the distant future, but probably in three to seven years.</p><p>These highlights are from episode #213 of <a href="https://80000hours.org/podcast/"><em>The 80,000 Hours Podcast</em></a>: <a href="https://80000hours.org/podcast/episodes/will-macaskill-century-in-a-decade-navigating-intelligence-explosion/"><strong>Will MacAskill on AI causing a “century in a decade” — and how we’re completely unprepared</strong></a>, and include:</p><ul><li>Rob's intro (00:00:00)</li><li>A century of history crammed into a decade (00:00:17)</li><li>What does a good future with AGI even look like? (00:04:48)</li><li>AI takeover might happen anyway — should we rush to load in our values? (00:09:29)</li><li>Lock-in is plausible where it never was before (00:14:40)</li><li>ML researchers are feverishly working to destroy their own power (00:20:07)</li><li>People distrust utopianism for good reason (00:24:30)</li><li>Non-technological disruption (00:29:18)</li><li>The 3 intelligence explosions (00:31:10)</li></ul><p>These aren't necessarily the most important or even most entertaining parts of the interview — so if you enjoy this, we strongly recommend checking out the full episode!</p><p>And if you're finding these highlights episodes valuable, please let us know by emailing <a href="mailto:podcast@80000hours.org">podcast@80000hours.org</a>.</p><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </itunes:summary>
      <itunes:keywords>effective altruism, 80000 hours, impactful careers, existential risk</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/5891d57a/transcript.txt" type="text/plain"/>
      <podcast:chapters url="https://share.transistor.fm/s/5891d57a/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Highlights: #212 – Allan Dafoe on why technology is unstoppable &amp; how to shape AI development anyway</title>
      <itunes:title>Highlights: #212 – Allan Dafoe on why technology is unstoppable &amp; how to shape AI development anyway</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">56a2f27e-7a56-4f5b-8335-8761f92e2354</guid>
      <link>https://80000hours.org/podcast/episodes/allan-dafoe-unstoppable-technology-human-agency-agi/</link>
      <description>
        <![CDATA[<p>Technology doesn’t force us to do anything — it merely opens doors. But military and economic competition pushes us through. That’s how <a href="https://www.allandafoe.com/">Allan Dafoe</a> — director of frontier safety and governance at Google DeepMind — explains one of the deepest patterns in technological history: once a powerful new capability becomes available, societies that adopt it tend to outcompete those that don’t. Those who resist too much can find themselves taken over or rendered irrelevant.</p><p>These highlights are from episode #212 of <a href="https://80000hours.org/podcast/"><em>The 80,000 Hours Podcast</em></a>: <a href="https://80000hours.org/podcast/episodes/allan-dafoe-unstoppable-technology-human-agency-agi/"><strong>Allan Dafoe on why technology is unstoppable &amp; how to shape AI development anyway</strong></a>, and include:</p><ul><li>Who's Allan Dafoe? (00:00:00)</li><li>Astounding patterns in macrohistory (00:00:23)</li><li>Are humans just along for the ride when it comes to technological progress? (00:03:58)</li><li>Flavours of technological determinism (00:07:11)</li><li>The super-cooperative AGI hypothesis and backdoors (00:12:50)</li><li>Could having more cooperative AIs backfire? (00:19:16)</li><li>The offence-defence balance (00:24:23)</li></ul><p>These aren't necessarily the most important or even most entertaining parts of the interview — so if you enjoy this, we strongly recommend checking out the full episode!</p><p>And if you're finding these highlights episodes valuable, please let us know by emailing <a href="mailto:podcast@80000hours.org">podcast@80000hours.org</a>.</p><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Technology doesn’t force us to do anything — it merely opens doors. But military and economic competition pushes us through. That’s how <a href="https://www.allandafoe.com/">Allan Dafoe</a> — director of frontier safety and governance at Google DeepMind — explains one of the deepest patterns in technological history: once a powerful new capability becomes available, societies that adopt it tend to outcompete those that don’t. Those who resist too much can find themselves taken over or rendered irrelevant.</p><p>These highlights are from episode #212 of <a href="https://80000hours.org/podcast/"><em>The 80,000 Hours Podcast</em></a>: <a href="https://80000hours.org/podcast/episodes/allan-dafoe-unstoppable-technology-human-agency-agi/"><strong>Allan Dafoe on why technology is unstoppable &amp; how to shape AI development anyway</strong></a>, and include:</p><ul><li>Who's Allan Dafoe? (00:00:00)</li><li>Astounding patterns in macrohistory (00:00:23)</li><li>Are humans just along for the ride when it comes to technological progress? (00:03:58)</li><li>Flavours of technological determinism (00:07:11)</li><li>The super-cooperative AGI hypothesis and backdoors (00:12:50)</li><li>Could having more cooperative AIs backfire? (00:19:16)</li><li>The offence-defence balance (00:24:23)</li></ul><p>These aren't necessarily the most important or even most entertaining parts of the interview — so if you enjoy this, we strongly recommend checking out the full episode!</p><p>And if you're finding these highlights episodes valuable, please let us know by emailing <a href="mailto:podcast@80000hours.org">podcast@80000hours.org</a>.</p><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </content:encoded>
      <pubDate>Wed, 12 Mar 2025 15:00:16 +0000</pubDate>
      <author>The 80,000 Hours team</author>
      <enclosure url="https://media.transistor.fm/83fb78ae/db94c50b.mp3" length="28214534" type="audio/mpeg"/>
      <itunes:author>The 80,000 Hours team</itunes:author>
      <itunes:image href="https://img.transistor.fm/Xa2tbRxCXRiVRSylnsLMoBUrVZJN1JSz9SrAVKpOKhY/rs:fill:0:0:1/w:3000/h:3000/q:60/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8yMzFh/MjVjNmRjYTRjMzRl/MjNmMjI0ZjMzYTlh/NmY5Mi5qcGc.jpg"/>
      <itunes:duration>1761</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Technology doesn’t force us to do anything — it merely opens doors. But military and economic competition pushes us through. That’s how <a href="https://www.allandafoe.com/">Allan Dafoe</a> — director of frontier safety and governance at Google DeepMind — explains one of the deepest patterns in technological history: once a powerful new capability becomes available, societies that adopt it tend to outcompete those that don’t. Those who resist too much can find themselves taken over or rendered irrelevant.</p><p>These highlights are from episode #212 of <a href="https://80000hours.org/podcast/"><em>The 80,000 Hours Podcast</em></a>: <a href="https://80000hours.org/podcast/episodes/allan-dafoe-unstoppable-technology-human-agency-agi/"><strong>Allan Dafoe on why technology is unstoppable &amp; how to shape AI development anyway</strong></a>, and include:</p><ul><li>Who's Allan Dafoe? (00:00:00)</li><li>Astounding patterns in macrohistory (00:00:23)</li><li>Are humans just along for the ride when it comes to technological progress? (00:03:58)</li><li>Flavours of technological determinism (00:07:11)</li><li>The super-cooperative AGI hypothesis and backdoors (00:12:50)</li><li>Could having more cooperative AIs backfire? (00:19:16)</li><li>The offence-defence balance (00:24:23)</li></ul><p>These aren't necessarily the most important or even most entertaining parts of the interview — so if you enjoy this, we strongly recommend checking out the full episode!</p><p>And if you're finding these highlights episodes valuable, please let us know by emailing <a href="mailto:podcast@80000hours.org">podcast@80000hours.org</a>.</p><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </itunes:summary>
      <itunes:keywords>effective altruism, 80000 hours, impactful careers, existential risk</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/83fb78ae/transcript.txt" type="text/plain"/>
      <podcast:chapters url="https://share.transistor.fm/s/83fb78ae/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Off the Clock #7: Getting on the Crazy Train with Chi Nguyen</title>
      <itunes:title>Off the Clock #7: Getting on the Crazy Train with Chi Nguyen</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">9ed7312e-a983-4250-aff6-26f5584dea19</guid>
      <link>https://share.transistor.fm/s/2ceb5001</link>
      <description>
        <![CDATA[<p><strong>Watch this episode on YouTube! </strong><a href="https://youtu.be/IRRwHCK279E"><strong>https://youtu.be/IRRwHCK279E</strong></a></p><p>Matt, Bella, and Huon sit down with Chi Nguyen to discuss cooperating with aliens, elections of future past, and Bad Billionaires pt. 2.</p><p>Check out: </p><ul><li><a href="https://www.bbc.co.uk/programmes/m0020hlh"><strong>Matt’s summer appearance on the BBC</strong></a> on funding for the arts</li><li><a href="https://forum.effectivealtruism.org/posts/JGazpLa3Gvvter4JW/cooperating-with-aliens-and-agis-an-ecl-explainer"><strong>Chi’s ECL Explainer</strong></a> (get in touch to support!)</li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p><strong>Watch this episode on YouTube! </strong><a href="https://youtu.be/IRRwHCK279E"><strong>https://youtu.be/IRRwHCK279E</strong></a></p><p>Matt, Bella, and Huon sit down with Chi Nguyen to discuss cooperating with aliens, elections of future past, and Bad Billionaires pt. 2.</p><p>Check out: </p><ul><li><a href="https://www.bbc.co.uk/programmes/m0020hlh"><strong>Matt’s summer appearance on the BBC</strong></a> on funding for the arts</li><li><a href="https://forum.effectivealtruism.org/posts/JGazpLa3Gvvter4JW/cooperating-with-aliens-and-agis-an-ecl-explainer"><strong>Chi’s ECL Explainer</strong></a> (get in touch to support!)</li></ul>]]>
      </content:encoded>
      <pubDate>Mon, 13 Jan 2025 15:44:13 +0000</pubDate>
      <author>The 80,000 Hours team</author>
      <enclosure url="https://media.transistor.fm/2ceb5001/187c60ef.mp3" length="162478228" type="audio/mpeg"/>
      <itunes:author>The 80,000 Hours team</itunes:author>
      <itunes:image href="https://img.transistor.fm/VX2GwDRb8OOGkp8qaHBIMumQWXF7C0gtKT36Kj1T-Vc/rs:fill:0:0:1/w:3000/h:3000/q:60/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS80Yzgz/MzQ5YWJmMmZiMTU4/YTU4NGU3YjdiZGMz/M2VhZC5qcGc.jpg"/>
      <itunes:duration>5067</itunes:duration>
      <itunes:summary>
        <![CDATA[<p><strong>Watch this episode on YouTube! </strong><a href="https://youtu.be/IRRwHCK279E"><strong>https://youtu.be/IRRwHCK279E</strong></a></p><p>Matt, Bella, and Huon sit down with Chi Nguyen to discuss cooperating with aliens, elections of future past, and Bad Billionaires pt. 2.</p><p>Check out: </p><ul><li><a href="https://www.bbc.co.uk/programmes/m0020hlh"><strong>Matt’s summer appearance on the BBC</strong></a> on funding for the arts</li><li><a href="https://forum.effectivealtruism.org/posts/JGazpLa3Gvvter4JW/cooperating-with-aliens-and-agis-an-ecl-explainer"><strong>Chi’s ECL Explainer</strong></a> (get in touch to support!)</li></ul>]]>
      </itunes:summary>
      <itunes:keywords>effective altruism, 80000 hours, impactful careers, existential risk</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/2ceb5001/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>Highlights: #211 – Sam Bowman on why housing still isn’t fixed and what would actually work</title>
      <itunes:title>Highlights: #211 – Sam Bowman on why housing still isn’t fixed and what would actually work</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">78ff20a1-3e80-4e5c-b6c2-b5e230942d04</guid>
      <link>https://80000hours.org/podcast/episodes/sam-bowman-overcoming-nimbys-housing-policy-proposals/</link>
      <description>
        <![CDATA[<p>Economist and editor of <em>Works in Progress</em> <a href="https://www.sambowman.co/about">Sam Bowman</a> isn’t content to just condemn the Not In My Back Yard (NIMBY) mentality behind rich countries' construction stagnation. He wants to actually get a tonne of stuff built, and by that standard the strategy of attacking ‘NIMBYs’ has been an abject failure. They are too politically powerful, and if you try to crush them, sooner or later they crush you.</p><p>So Sam lays out three alternative strategies in our full interview with him — including highlights like:</p><ul><li>Rich countries have a crisis of underconstruction (00:00:19)</li><li>The UK builds shockingly little because of its planning permission system (00:04:57)</li><li>Overcoming NIMBYism means fixing incentives (00:07:21)</li><li>NIMBYs aren't wrong: they are often harmed by development (00:10:44)</li><li>Street votes give existing residents a say (00:16:29)</li><li>It's essential to define in advance who gets a say (00:24:37)</li><li>Property tax distribution might be the most important policy you've never heard of (00:28:55)</li><li>Using aesthetics to get buy-in for new construction (00:35:48)</li><li>Locals actually really like having nuclear power plants nearby (00:44:14)</li><li>It can be really useful to let old and new institutions coexist for a while (00:48:27)</li><li>Ozempic and living in the decade that we conquered obesity (00:53:05)</li><li>Northern latitudes still need nuclear power (00:55:30)</li></ul><p>These highlights are from episode #211 of <a href="https://80000hours.org/podcast/"><em>The 80,000 Hours Podcast</em></a>: <a href="https://80000hours.org/podcast/episodes/sam-bowman-overcoming-nimbys-housing-policy-proposals/"><strong>Sam Bowman on why housing still isn’t fixed and what would actually work</strong></a>. These aren't necessarily the most important, or even most entertaining parts of the interview — so if you enjoy this, we strongly recommend checking out the full episode!</p><p>And if you're finding these highlights episodes valuable, please let us know by emailing <a href="mailto:podcast@80000hours.org">podcast@80000hours.org</a>. (And you may have noticed this episode is longer than most of our highlights episodes — let us know if you liked that or not!)</p><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Economist and editor of <em>Works in Progress</em> <a href="https://www.sambowman.co/about">Sam Bowman</a> isn’t content to just condemn the Not In My Back Yard (NIMBY) mentality behind rich countries' construction stagnation. He wants to actually get a tonne of stuff built, and by that standard the strategy of attacking ‘NIMBYs’ has been an abject failure. They are too politically powerful, and if you try to crush them, sooner or later they crush you.</p><p>So Sam lays out three alternative strategies in our full interview with him — including highlights like:</p><ul><li>Rich countries have a crisis of underconstruction (00:00:19)</li><li>The UK builds shockingly little because of its planning permission system (00:04:57)</li><li>Overcoming NIMBYism means fixing incentives (00:07:21)</li><li>NIMBYs aren't wrong: they are often harmed by development (00:10:44)</li><li>Street votes give existing residents a say (00:16:29)</li><li>It's essential to define in advance who gets a say (00:24:37)</li><li>Property tax distribution might be the most important policy you've never heard of (00:28:55)</li><li>Using aesthetics to get buy-in for new construction (00:35:48)</li><li>Locals actually really like having nuclear power plants nearby (00:44:14)</li><li>It can be really useful to let old and new institutions coexist for a while (00:48:27)</li><li>Ozempic and living in the decade that we conquered obesity (00:53:05)</li><li>Northern latitudes still need nuclear power (00:55:30)</li></ul><p>These highlights are from episode #211 of <a href="https://80000hours.org/podcast/"><em>The 80,000 Hours Podcast</em></a>: <a href="https://80000hours.org/podcast/episodes/sam-bowman-overcoming-nimbys-housing-policy-proposals/"><strong>Sam Bowman on why housing still isn’t fixed and what would actually work</strong></a>. These aren't necessarily the most important, or even most entertaining parts of the interview — so if you enjoy this, we strongly recommend checking out the full episode!</p><p>And if you're finding these highlights episodes valuable, please let us know by emailing <a href="mailto:podcast@80000hours.org">podcast@80000hours.org</a>. (And you may have noticed this episode is longer than most of our highlights episodes — let us know if you liked that or not!)</p><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </content:encoded>
      <pubDate>Mon, 06 Jan 2025 16:01:17 +0000</pubDate>
      <author>The 80,000 Hours team</author>
      <enclosure url="https://media.transistor.fm/3ee59fb0/e91beec5.mp3" length="58918042" type="audio/mpeg"/>
      <itunes:author>The 80,000 Hours team</itunes:author>
      <itunes:image href="https://img.transistor.fm/NlmvAEkXGeI4x4q2ATDFtYIfjsxiSdSgpb-aYTuPCf4/rs:fill:0:0:1/w:3000/h:3000/q:60/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS84ZjIy/ZmNhMmI2NGQwNTE2/YWE2NGJkZTc5OTJi/NzNhMy5qcGc.jpg"/>
      <itunes:duration>3680</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Economist and editor of <em>Works in Progress</em> <a href="https://www.sambowman.co/about">Sam Bowman</a> isn’t content to just condemn the Not In My Back Yard (NIMBY) mentality behind rich countries' construction stagnation. He wants to actually get a tonne of stuff built, and by that standard the strategy of attacking ‘NIMBYs’ has been an abject failure. They are too politically powerful, and if you try to crush them, sooner or later they crush you.</p><p>So Sam lays out three alternative strategies in our full interview with him — including highlights like:</p><ul><li>Rich countries have a crisis of underconstruction (00:00:19)</li><li>The UK builds shockingly little because of its planning permission system (00:04:57)</li><li>Overcoming NIMBYism means fixing incentives (00:07:21)</li><li>NIMBYs aren't wrong: they are often harmed by development (00:10:44)</li><li>Street votes give existing residents a say (00:16:29)</li><li>It's essential to define in advance who gets a say (00:24:37)</li><li>Property tax distribution might be the most important policy you've never heard of (00:28:55)</li><li>Using aesthetics to get buy-in for new construction (00:35:48)</li><li>Locals actually really like having nuclear power plants nearby (00:44:14)</li><li>It can be really useful to let old and new institutions coexist for a while (00:48:27)</li><li>Ozempic and living in the decade that we conquered obesity (00:53:05)</li><li>Northern latitudes still need nuclear power (00:55:30)</li></ul><p>These highlights are from episode #211 of <a href="https://80000hours.org/podcast/"><em>The 80,000 Hours Podcast</em></a>: <a href="https://80000hours.org/podcast/episodes/sam-bowman-overcoming-nimbys-housing-policy-proposals/"><strong>Sam Bowman on why housing still isn’t fixed and what would actually work</strong></a>. These aren't necessarily the most important, or even most entertaining parts of the interview — so if you enjoy this, we strongly recommend checking out the full episode!</p><p>And if you're finding these highlights episodes valuable, please let us know by emailing <a href="mailto:podcast@80000hours.org">podcast@80000hours.org</a>. (And you may have noticed this episode is longer than most of our highlights episodes — let us know if you liked that or not!)</p><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </itunes:summary>
      <itunes:keywords>effective altruism, 80000 hours, impactful careers, existential risk</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/3ee59fb0/transcript.txt" type="text/plain"/>
      <podcast:chapters url="https://share.transistor.fm/s/3ee59fb0/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Highlights: #210 – Cameron Meyer Shorb on dismantling the myth that we can’t do anything to help wild animals</title>
      <itunes:title>Highlights: #210 – Cameron Meyer Shorb on dismantling the myth that we can’t do anything to help wild animals</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">3b80550e-bcef-4c2d-bf1e-fbcac6c19c8c</guid>
      <link>https://80000hours.org/podcast/episodes/cameron-meyer-shorb-wild-animal-suffering/</link>
      <description>
        <![CDATA[<p>We explored the cutting edge of wild animal welfare science in <a href="https://80000hours.org/podcast/episodes/cameron-meyer-shorb-wild-animal-suffering"><strong>our full interview with Cameron Meyer Shorb</strong></a>, executive director of Wild Animal Initiative, including highlights like:</p><ul><li>One concrete example of how we might improve wild animal welfare (00:00:16)</li><li>How many wild animals are there, and which animals are they? (00:04:24)</li><li>Why might wild animals be suffering? (00:08:40)</li><li>The objection that we shouldn't meddle in nature because nature is good (00:12:25)</li><li>Vaccines for wild animals (00:17:37)</li><li>Gene drive technologies (00:20:50)</li><li>Optimising for high-welfare landscapes (00:24:52)</li></ul><p>These highlights are from episode #210 of <a href="https://80000hours.org/podcast/"><em>The 80,000 Hours Podcast</em></a>: <a href="https://80000hours.org/podcast/episodes/cameron-meyer-shorb-wild-animal-suffering"><strong>Cameron Meyer Shorb on dismantling the myth that we can’t do anything to help wild animals</strong></a>. These aren't necessarily the most important, or even most entertaining parts of the interview — so if you enjoy this, we strongly recommend checking out the full episode!</p><p>And if you're finding these highlights episodes valuable, please let us know by emailing <a href="mailto:podcast@80000hours.org">podcast@80000hours.org</a>.</p><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>We explored the cutting edge of wild animal welfare science in <a href="https://80000hours.org/podcast/episodes/cameron-meyer-shorb-wild-animal-suffering"><strong>our full interview with Cameron Meyer Shorb</strong></a>, executive director of Wild Animal Initiative, including highlights like:</p><ul><li>One concrete example of how we might improve wild animal welfare (00:00:16)</li><li>How many wild animals are there, and which animals are they? (00:04:24)</li><li>Why might wild animals be suffering? (00:08:40)</li><li>The objection that we shouldn't meddle in nature because nature is good (00:12:25)</li><li>Vaccines for wild animals (00:17:37)</li><li>Gene drive technologies (00:20:50)</li><li>Optimising for high-welfare landscapes (00:24:52)</li></ul><p>These highlights are from episode #210 of <a href="https://80000hours.org/podcast/"><em>The 80,000 Hours Podcast</em></a>: <a href="https://80000hours.org/podcast/episodes/cameron-meyer-shorb-wild-animal-suffering"><strong>Cameron Meyer Shorb on dismantling the myth that we can’t do anything to help wild animals</strong></a>. These aren't necessarily the most important, or even most entertaining parts of the interview — so if you enjoy this, we strongly recommend checking out the full episode!</p><p>And if you're finding these highlights episodes valuable, please let us know by emailing <a href="mailto:podcast@80000hours.org">podcast@80000hours.org</a>.</p><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </content:encoded>
      <pubDate>Fri, 13 Dec 2024 17:02:53 +0000</pubDate>
      <author>The 80,000 Hours team</author>
      <enclosure url="https://media.transistor.fm/aeb80061/32db46e9.mp3" length="28818561" type="audio/mpeg"/>
      <itunes:author>The 80,000 Hours team</itunes:author>
      <itunes:image href="https://img.transistor.fm/6GjUdn8UNO9-vFmi6bWCOFY3SAEk8MHREqhR1DP7Suw/rs:fill:0:0:1/w:3000/h:3000/q:60/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8wY2I3/MjBkMGU3ZmZkYmY0/NDhmYzJhODMzZDA2/MjllNi5qcGVn.jpg"/>
      <itunes:duration>1796</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>We explored the cutting edge of wild animal welfare science in <a href="https://80000hours.org/podcast/episodes/cameron-meyer-shorb-wild-animal-suffering"><strong>our full interview with Cameron Meyer Shorb</strong></a>, executive director of Wild Animal Initiative, including highlights like:</p><ul><li>One concrete example of how we might improve wild animal welfare (00:00:16)</li><li>How many wild animals are there, and which animals are they? (00:04:24)</li><li>Why might wild animals be suffering? (00:08:40)</li><li>The objection that we shouldn't meddle in nature because nature is good (00:12:25)</li><li>Vaccines for wild animals (00:17:37)</li><li>Gene drive technologies (00:20:50)</li><li>Optimising for high-welfare landscapes (00:24:52)</li></ul><p>These highlights are from episode #210 of <a href="https://80000hours.org/podcast/"><em>The 80,000 Hours Podcast</em></a>: <a href="https://80000hours.org/podcast/episodes/cameron-meyer-shorb-wild-animal-suffering"><strong>Cameron Meyer Shorb on dismantling the myth that we can’t do anything to help wild animals</strong></a>. These aren't necessarily the most important, or even most entertaining parts of the interview — so if you enjoy this, we strongly recommend checking out the full episode!</p><p>And if you're finding these highlights episodes valuable, please let us know by emailing <a href="mailto:podcast@80000hours.org">podcast@80000hours.org</a>.</p><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </itunes:summary>
      <itunes:keywords>effective altruism, 80000 hours, impactful careers, existential risk</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/aeb80061/transcript.txt" type="text/plain"/>
      <podcast:chapters url="https://share.transistor.fm/s/aeb80061/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Highlights: #209 – Rose Chan Loui on OpenAI’s gambit to ditch its nonprofit</title>
      <itunes:title>Highlights: #209 – Rose Chan Loui on OpenAI’s gambit to ditch its nonprofit</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">2fb97b35-0cea-4d74-9db4-78d4cf71c791</guid>
      <link>https://80000hours.org/podcast/episodes/rose-chan-loui-openai-breaking-free-nonprofit</link>
      <description>
        <![CDATA[<p>Nonprofit legal expert Rose Chan Loui lays out the legal case and implications of OpenAI's attempt to shed its nonprofit parent. This episode is a selection of highlights from <a href="https://80000hours.org/podcast/episodes/rose-chan-loui-openai-breaking-free-nonprofit"><strong>our full interview with Rose</strong></a>, including:</p><ul><li>How OpenAI carefully chose a complex nonprofit structure (00:00:26)</li><li>The nonprofit board is out-resourced and in a tough spot (00:04:09)</li><li>Is control of OpenAI 'priceless' to the nonprofit in pursuit of its mission? (00:06:47)</li><li>Control of OpenAI is independently incredibly valuable and requires compensation (00:11:06)</li><li>It's very important that the nonprofit gets cash and not just equity (00:16:06)</li><li>How the nonprofit board can best play their hand (00:21:20)</li></ul><p>These highlights are from episode #209 of <a href="https://80000hours.org/podcast/"><em>The 80,000 Hours Podcast</em></a>: <a href="https://80000hours.org/podcast/episodes/rose-chan-loui-openai-breaking-free-nonprofit/"><strong>Rose Chan Loui on OpenAI’s gambit to ditch its nonprofit</strong></a>. These aren't necessarily the most important, or even most entertaining parts of the interview — so if you enjoy this, we strongly recommend checking out the full episode!</p><p>And if you're finding these highlights episodes valuable, please let us know by emailing <a href="mailto:podcast@80000hours.org">podcast@80000hours.org</a>.</p><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Nonprofit legal expert Rose Chan Loui lays out the legal case and implications of OpenAI's attempt to shed its nonprofit parent. This episode is a selection of highlights from <a href="https://80000hours.org/podcast/episodes/rose-chan-loui-openai-breaking-free-nonprofit"><strong>our full interview with Rose</strong></a>, including:</p><ul><li>How OpenAI carefully chose a complex nonprofit structure (00:00:26)</li><li>The nonprofit board is out-resourced and in a tough spot (00:04:09)</li><li>Is control of OpenAI 'priceless' to the nonprofit in pursuit of its mission? (00:06:47)</li><li>Control of OpenAI is independently incredibly valuable and requires compensation (00:11:06)</li><li>It's very important that the nonprofit gets cash and not just equity (00:16:06)</li><li>How the nonprofit board can best play their hand (00:21:20)</li></ul><p>These highlights are from episode #209 of <a href="https://80000hours.org/podcast/"><em>The 80,000 Hours Podcast</em></a>: <a href="https://80000hours.org/podcast/episodes/rose-chan-loui-openai-breaking-free-nonprofit/"><strong>Rose Chan Loui on OpenAI’s gambit to ditch its nonprofit</strong></a>. These aren't necessarily the most important, or even most entertaining parts of the interview — so if you enjoy this, we strongly recommend checking out the full episode!</p><p>And if you're finding these highlights episodes valuable, please let us know by emailing <a href="mailto:podcast@80000hours.org">podcast@80000hours.org</a>.</p><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </content:encoded>
      <pubDate>Wed, 11 Dec 2024 16:21:18 +0000</pubDate>
      <author>The 80,000 Hours team</author>
      <enclosure url="https://media.transistor.fm/9ad9882a/73a03778.mp3" length="17485015" type="audio/mpeg"/>
      <itunes:author>The 80,000 Hours team</itunes:author>
      <itunes:image href="https://img.transistor.fm/Tebq8hxiyFAWNifOPsYY571yydgFgRMkluQQc4auwCE/rs:fill:0:0:1/w:3000/h:3000/q:60/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS83Yjc2/NmExNmM5OTI5ZTY4/NTdhZjFkYjg3ZDU0/YmFkYS5KUEc.jpg"/>
      <itunes:duration>1453</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Nonprofit legal expert Rose Chan Loui lays out the legal case and implications of OpenAI's attempt to shed its nonprofit parent. This episode is a selection of highlights from <a href="https://80000hours.org/podcast/episodes/rose-chan-loui-openai-breaking-free-nonprofit"><strong>our full interview with Rose</strong></a>, including:</p><ul><li>How OpenAI carefully chose a complex nonprofit structure (00:00:26)</li><li>The nonprofit board is out-resourced and in a tough spot (00:04:09)</li><li>Is control of OpenAI 'priceless' to the nonprofit in pursuit of its mission? (00:06:47)</li><li>Control of OpenAI is independently incredibly valuable and requires compensation (00:11:06)</li><li>It's very important that the nonprofit gets cash and not just equity (00:16:06)</li><li>How the nonprofit board can best play their hand (00:21:20)</li></ul><p>These highlights are from episode #209 of <a href="https://80000hours.org/podcast/"><em>The 80,000 Hours Podcast</em></a>: <a href="https://80000hours.org/podcast/episodes/rose-chan-loui-openai-breaking-free-nonprofit/"><strong>Rose Chan Loui on OpenAI’s gambit to ditch its nonprofit</strong></a>. These aren't necessarily the most important, or even most entertaining parts of the interview — so if you enjoy this, we strongly recommend checking out the full episode!</p><p>And if you're finding these highlights episodes valuable, please let us know by emailing <a href="mailto:podcast@80000hours.org">podcast@80000hours.org</a>.</p><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </itunes:summary>
      <itunes:keywords>effective altruism, 80000 hours, impactful careers, existential risk</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/9ad9882a/transcript.txt" type="text/plain"/>
      <podcast:chapters url="https://share.transistor.fm/s/9ad9882a/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Highlights: #208 – Elizabeth Cox on the case that TV shows, movies, and novels can improve the world</title>
      <itunes:title>Highlights: #208 – Elizabeth Cox on the case that TV shows, movies, and novels can improve the world</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">acbddd40-772e-4128-bb8b-d9b62d71d512</guid>
      <link>https://80000hours.org/podcast/episodes/elizabeth-cox-tv-movies-novels-change-the-world/</link>
      <description>
        <![CDATA[<p>Elizabeth Cox — founder of the independent production company <a href="https://www.shouldwestudio.com/">Should We Studio</a> — makes the case that storytelling can improve the world. This episode is a selection of highlights from <a href="https://80000hours.org/podcast/episodes/elizabeth-cox-tv-movies-novels-change-the-world/"><strong>our full interview with Elizabeth</strong></a>, including:</p><ul><li>Keiran’s intro (00:00:00)</li><li>Empirical evidence of the impact of storytelling (00:00:16)</li><li>The hits-based approach to storytelling (00:03:35)</li><li>Debating the merits of thinking about target audiences (00:07:48)</li><li>Ada vs other approaches to impact-focused storytelling (00:13:15)</li><li>Why animation? (00:18:56)</li><li>How long will humans stay relevant as creative writers, given AI advances? (00:22:40)</li></ul><p>These highlights are from episode #208 of <a href="https://80000hours.org/podcast/"><em>The 80,000 Hours Podcast</em></a>: <a href="https://80000hours.org/podcast/episodes/elizabeth-cox-tv-movies-novels-change-the-world"><strong>Elizabeth Cox on the case that TV shows, movies, and novels can improve the world</strong></a>. These aren't necessarily the most important, or even most entertaining parts of the interview — so if you enjoy this, we strongly recommend checking out the full episode!</p><p>And if you're finding these highlights episodes valuable, please let us know by emailing <a href="mailto:podcast@80000hours.org">podcast@80000hours.org</a>.</p><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Elizabeth Cox — founder of the independent production company <a href="https://www.shouldwestudio.com/">Should We Studio</a> — makes the case that storytelling can improve the world. This episode is a selection of highlights from <a href="https://80000hours.org/podcast/episodes/elizabeth-cox-tv-movies-novels-change-the-world/"><strong>our full interview with Elizabeth</strong></a>, including:</p><ul><li>Keiran’s intro (00:00:00)</li><li>Empirical evidence of the impact of storytelling (00:00:16)</li><li>The hits-based approach to storytelling (00:03:35)</li><li>Debating the merits of thinking about target audiences (00:07:48)</li><li>Ada vs other approaches to impact-focused storytelling (00:13:15)</li><li>Why animation? (00:18:56)</li><li>How long will humans stay relevant as creative writers, given AI advances? (00:22:40)</li></ul><p>These highlights are from episode #208 of <a href="https://80000hours.org/podcast/"><em>The 80,000 Hours Podcast</em></a>: <a href="https://80000hours.org/podcast/episodes/elizabeth-cox-tv-movies-novels-change-the-world"><strong>Elizabeth Cox on the case that TV shows, movies, and novels can improve the world</strong></a>. These aren't necessarily the most important, or even most entertaining parts of the interview — so if you enjoy this, we strongly recommend checking out the full episode!</p><p>And if you're finding these highlights episodes valuable, please let us know by emailing <a href="mailto:podcast@80000hours.org">podcast@80000hours.org</a>.</p><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </content:encoded>
      <pubDate>Thu, 05 Dec 2024 16:06:19 +0000</pubDate>
      <author>The 80,000 Hours team</author>
      <enclosure url="https://media.transistor.fm/cf8e0085/11cf3962.mp3" length="28133654" type="audio/mpeg"/>
      <itunes:author>The 80,000 Hours team</itunes:author>
      <itunes:image href="https://img.transistor.fm/nwr0A0DDmhdQGD0NCryXH12pSih6a3-WkOIjN6iSpq8/rs:fill:0:0:1/w:3000/h:3000/q:60/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS82Yzli/YjJiY2I4NjYyMDY4/ZDI2ZDQ3NDNmMTRh/YzIyZC5qcGVn.jpg"/>
      <itunes:duration>1755</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Elizabeth Cox — founder of the independent production company <a href="https://www.shouldwestudio.com/">Should We Studio</a> — makes the case that storytelling can improve the world. This episode is a selection of highlights from <a href="https://80000hours.org/podcast/episodes/elizabeth-cox-tv-movies-novels-change-the-world/"><strong>our full interview with Elizabeth</strong></a>, including:</p><ul><li>Keiran’s intro (00:00:00)</li><li>Empirical evidence of the impact of storytelling (00:00:16)</li><li>The hits-based approach to storytelling (00:03:35)</li><li>Debating the merits of thinking about target audiences (00:07:48)</li><li>Ada vs other approaches to impact-focused storytelling (00:13:15)</li><li>Why animation? (00:18:56)</li><li>How long will humans stay relevant as creative writers, given AI advances? (00:22:40)</li></ul><p>These highlights are from episode #208 of <a href="https://80000hours.org/podcast/"><em>The 80,000 Hours Podcast</em></a>: <a href="https://80000hours.org/podcast/episodes/elizabeth-cox-tv-movies-novels-change-the-world"><strong>Elizabeth Cox on the case that TV shows, movies, and novels can improve the world</strong></a>. These aren't necessarily the most important, or even most entertaining parts of the interview — so if you enjoy this, we strongly recommend checking out the full episode!</p><p>And if you're finding these highlights episodes valuable, please let us know by emailing <a href="mailto:podcast@80000hours.org">podcast@80000hours.org</a>.</p><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </itunes:summary>
      <itunes:keywords>effective altruism, 80000 hours, impactful careers, existential risk</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/cf8e0085/transcript.txt" type="text/plain"/>
      <podcast:chapters url="https://share.transistor.fm/s/cf8e0085/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Highlights: #207 – Sarah Eustis-Guthrie on why she shut down her charity, and why more founders should follow her lead</title>
      <itunes:title>Highlights: #207 – Sarah Eustis-Guthrie on why she shut down her charity, and why more founders should follow her lead</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">1741ec56-5a9b-4c22-9770-b36c7ea5ceab</guid>
      <link>https://80000hours.org/podcast/episodes/sarah-eustis-guthrie-founding-shutting-down-charity/</link>
      <description>
        <![CDATA[<p>Charity founder Sarah Eustis-Guthrie has a candid conversation about her experience starting and running her maternal health charity, and ultimately making the difficult decision to shut down when the programme wasn’t as impactful as hoped. This episode is a selection of highlights from <a href="https://80000hours.org/podcast/episodes/sarah-eustis-guthrie-founding-shutting-down-charity/"><strong>our full interview with Sarah</strong></a>:</p><ul><li>Luisa’s intro (00:00:00)</li><li>What it's like to found a charity (00:00:14)</li><li>Yellow flags and difficult calls (00:03:17)</li><li>Disappointing results (00:06:28)</li><li>The ups and downs of founding an organisation (00:08:37)</li><li>Entrepreneurship and being willing to make risky bets (00:12:58)</li><li>Why aren't more charities shutting down? (00:16:50)</li><li>How to think about shutting down (00:19:39)</li></ul><p>These highlights are from episode #207 of <a href="https://80000hours.org/podcast/"><em>The 80,000 Hours Podcast</em></a>: <a href="https://80000hours.org/podcast/episodes/sarah-eustis-guthrie-founding-shutting-down-charity/"><strong>Sarah Eustis-Guthrie on why she shut down her charity, and why more founders should follow her lead</strong></a>. These aren't necessarily the most important, or even most entertaining parts of the interview — so if you enjoy this, we strongly recommend checking out the full episode!</p><p>And if you're finding these highlights episodes valuable, please let us know by emailing <a href="mailto:podcast@80000hours.org">podcast@80000hours.org</a>.</p><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Charity founder Sarah Eustis-Guthrie has a candid conversation about her experience starting and running her maternal health charity, and ultimately making the difficult decision to shut down when the programme wasn’t as impactful as hoped. This episode is a selection of highlights from <a href="https://80000hours.org/podcast/episodes/sarah-eustis-guthrie-founding-shutting-down-charity/"><strong>our full interview with Sarah</strong></a>:</p><ul><li>Luisa’s intro (00:00:00)</li><li>What it's like to found a charity (00:00:14)</li><li>Yellow flags and difficult calls (00:03:17)</li><li>Disappointing results (00:06:28)</li><li>The ups and downs of founding an organisation (00:08:37)</li><li>Entrepreneurship and being willing to make risky bets (00:12:58)</li><li>Why aren't more charities shutting down? (00:16:50)</li><li>How to think about shutting down (00:19:39)</li></ul><p>These highlights are from episode #207 of <a href="https://80000hours.org/podcast/"><em>The 80,000 Hours Podcast</em></a>: <a href="https://80000hours.org/podcast/episodes/sarah-eustis-guthrie-founding-shutting-down-charity/"><strong>Sarah Eustis-Guthrie on why she shut down her charity, and why more founders should follow her lead</strong></a>. These aren't necessarily the most important, or even most entertaining parts of the interview — so if you enjoy this, we strongly recommend checking out the full episode!</p><p>And if you're finding these highlights episodes valuable, please let us know by emailing <a href="mailto:podcast@80000hours.org">podcast@80000hours.org</a>.</p><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </content:encoded>
      <pubDate>Mon, 02 Dec 2024 15:39:12 +0000</pubDate>
      <author>The 80,000 Hours team</author>
      <enclosure url="https://media.transistor.fm/05e2ec2d/f03cfe8b.mp3" length="21658088" type="audio/mpeg"/>
      <itunes:author>The 80,000 Hours team</itunes:author>
      <itunes:image href="https://img.transistor.fm/yUlMybOWJ9g9yo2U7ZEsbAdmAr4tgso18IjJnDPhHVQ/rs:fill:0:0:1/w:3000/h:3000/q:60/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS85Nzg4/YjI0YzQ5MmEwYmE3/OWY5Y2Y0YTlhYzIy/MzBhZS5qcGc.jpg"/>
      <itunes:duration>1351</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Charity founder Sarah Eustis-Guthrie has a candid conversation about her experience starting and running her maternal health charity, and ultimately making the difficult decision to shut down when the programme wasn’t as impactful as hoped. This episode is a selection of highlights from <a href="https://80000hours.org/podcast/episodes/sarah-eustis-guthrie-founding-shutting-down-charity/"><strong>our full interview with Sarah</strong></a>:</p><ul><li>Luisa’s intro (00:00:00)</li><li>What it's like to found a charity (00:00:14)</li><li>Yellow flags and difficult calls (00:03:17)</li><li>Disappointing results (00:06:28)</li><li>The ups and downs of founding an organisation (00:08:37)</li><li>Entrepreneurship and being willing to make risky bets (00:12:58)</li><li>Why aren't more charities shutting down? (00:16:50)</li><li>How to think about shutting down (00:19:39)</li></ul><p>These highlights are from episode #207 of <a href="https://80000hours.org/podcast/"><em>The 80,000 Hours Podcast</em></a>: <a href="https://80000hours.org/podcast/episodes/sarah-eustis-guthrie-founding-shutting-down-charity/"><strong>Sarah Eustis-Guthrie on why she shut down her charity, and why more founders should follow her lead</strong></a>. These aren't necessarily the most important, or even most entertaining parts of the interview — so if you enjoy this, we strongly recommend checking out the full episode!</p><p>And if you're finding these highlights episodes valuable, please let us know by emailing <a href="mailto:podcast@80000hours.org">podcast@80000hours.org</a>.</p><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </itunes:summary>
      <itunes:keywords>effective altruism, 80000 hours, impactful careers, existential risk</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/05e2ec2d/transcript.txt" type="text/plain"/>
      <podcast:chapters url="https://share.transistor.fm/s/05e2ec2d/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Highlights: #206 – Anil Seth on the predictive brain and how to study consciousness</title>
      <itunes:title>Highlights: #206 – Anil Seth on the predictive brain and how to study consciousness</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">4dca793d-784a-4cdb-b99d-941786817800</guid>
      <link>https://80000hours.org/podcast/episodes/anil-seth-predictive-brain-explaining-consciousness/</link>
      <description>
        <![CDATA[<p>Neuroscientist Anil Seth explains how much we can learn about consciousness by studying the brain in these highlights from <a href="https://80000hours.org/podcast/episodes/anil-seth-predictive-brain-explaining-consciousness/">our full interview</a> — including:</p><ul><li>Luisa’s intro (00:00:00)</li><li>How our brain interprets reality (00:00:15)</li><li>How our brain experiences our organs (00:04:04)</li><li>What psychedelics teach us about consciousness (00:07:37)</li><li>The physical footprint of consciousness in the brain (00:12:10)</li><li>How to study the neural correlates of consciousness (00:15:37)</li></ul><p>This is a selection of highlights from episode #206 of <a href="https://80000hours.org/podcast/"><em>The 80,000 Hours Podcast</em></a>: <a href="https://80000hours.org/podcast/episodes/anil-seth-predictive-brain-explaining-consciousness/"><strong>Anil Seth on the predictive brain and how to study consciousness</strong></a>. These aren't necessarily the most important, or even most entertaining parts of the interview — so if you enjoy this, we strongly recommend checking out the full episode!</p><p>And if you're finding these highlights episodes valuable, please let us know by emailing <a href="mailto:podcast@80000hours.org">podcast@80000hours.org</a>.</p><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Neuroscientist Anil Seth explains how much we can learn about consciousness by studying the brain in these highlights from <a href="https://80000hours.org/podcast/episodes/anil-seth-predictive-brain-explaining-consciousness/">our full interview</a> — including:</p><ul><li>Luisa’s intro (00:00:00)</li><li>How our brain interprets reality (00:00:15)</li><li>How our brain experiences our organs (00:04:04)</li><li>What psychedelics teach us about consciousness (00:07:37)</li><li>The physical footprint of consciousness in the brain (00:12:10)</li><li>How to study the neural correlates of consciousness (00:15:37)</li></ul><p>This is a selection of highlights from episode #206 of <a href="https://80000hours.org/podcast/"><em>The 80,000 Hours Podcast</em></a>: <a href="https://80000hours.org/podcast/episodes/anil-seth-predictive-brain-explaining-consciousness/"><strong>Anil Seth on the predictive brain and how to study consciousness</strong></a>. These aren't necessarily the most important, or even most entertaining parts of the interview — so if you enjoy this, we strongly recommend checking out the full episode!</p><p>And if you're finding these highlights episodes valuable, please let us know by emailing <a href="mailto:podcast@80000hours.org">podcast@80000hours.org</a>.</p><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </content:encoded>
      <pubDate>Fri, 15 Nov 2024 22:04:20 +0000</pubDate>
      <author>The 80,000 Hours team</author>
      <enclosure url="https://media.transistor.fm/b596ca30/3ff598d2.mp3" length="14202521" type="audio/mpeg"/>
      <itunes:author>The 80,000 Hours team</itunes:author>
      <itunes:image href="https://img.transistor.fm/SHErphXX5oTpQk6CyF53WAMv4qhEOCZXCzCdQcIYqos/rs:fill:0:0:1/w:3000/h:3000/q:60/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS85NTRm/ODA2YmQ5YWFiZDJh/ODBmNDlkODIyYjRh/OWEyZS5qcGVn.jpg"/>
      <itunes:duration>1177</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Neuroscientist Anil Seth explains how much we can learn about consciousness by studying the brain in these highlights from <a href="https://80000hours.org/podcast/episodes/anil-seth-predictive-brain-explaining-consciousness/">our full interview</a> — including:</p><ul><li>Luisa’s intro (00:00:00)</li><li>How our brain interprets reality (00:00:15)</li><li>How our brain experiences our organs (00:04:04)</li><li>What psychedelics teach us about consciousness (00:07:37)</li><li>The physical footprint of consciousness in the brain (00:12:10)</li><li>How to study the neural correlates of consciousness (00:15:37)</li></ul><p>This is a selection of highlights from episode #206 of <a href="https://80000hours.org/podcast/"><em>The 80,000 Hours Podcast</em></a>: <a href="https://80000hours.org/podcast/episodes/anil-seth-predictive-brain-explaining-consciousness/"><strong>Anil Seth on the predictive brain and how to study consciousness</strong></a>. These aren't necessarily the most important, or even most entertaining parts of the interview — so if you enjoy this, we strongly recommend checking out the full episode!</p><p>And if you're finding these highlights episodes valuable, please let us know by emailing <a href="mailto:podcast@80000hours.org">podcast@80000hours.org</a>.</p><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </itunes:summary>
      <itunes:keywords>effective altruism, 80000 hours, impactful careers, existential risk</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/b596ca30/transcript.txt" type="text/plain"/>
      <podcast:chapters url="https://share.transistor.fm/s/b596ca30/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Highlights: #205 – Sébastien Moro on the most insane things fish can do</title>
      <itunes:title>Highlights: #205 – Sébastien Moro on the most insane things fish can do</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">773d4a49-9bcf-4f3e-9133-a7bce4c5c8ac</guid>
      <link>https://80000hours.org/podcast/episodes/sebastien-moro-fish-cognition-senses-social-lives/</link>
      <description>
        <![CDATA[<p><a href="https://www.cervelledoiseau.fr/">Science writer</a> and <a href="https://www.youtube.com/cervelledoiseau">video blogger</a> Sébastien Moro blows our minds with the latest research on fish consciousness, intelligence, and potential sentience.</p><p>This is a selection of highlights from episode #205 of <a href="https://80000hours.org/podcast/"><em>The 80,000 Hours Podcast</em></a>: <a href="https://80000hours.org/podcast/episodes/sebastien-moro-fish-cognition-senses-social-lives/"><strong>Sébastien Moro on the most insane things fish can do</strong></a>. These aren't necessarily the most important, or even most entertaining parts of the interview — so if you enjoy this, we strongly recommend checking out the full episode.</p><p>And if you're finding these highlights episodes valuable, please let us know by emailing <a href="mailto:podcast@80000hours.org">podcast@80000hours.org</a>.</p><p>Highlights:</p><ul><li>Luisa’s intro (00:00:00)</li><li>The ingenious innovations of Atlantic cod (00:00:19)</li><li>The mirror test triumph of cleaner wrasses (00:05:46)</li><li>The astounding accuracy of archerfish (00:10:30)</li><li>The magnificent memory of gobies (00:14:25)</li><li>The tactical teamwork of the grouper and moray eel (00:17:42)</li><li>The remarkable relationships of wild guppies (00:22:01)</li><li>Sébastien's take on fish consciousness (00:26:48)</li></ul><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p><a href="https://www.cervelledoiseau.fr/">Science writer</a> and <a href="https://www.youtube.com/cervelledoiseau">video blogger</a> Sébastien Moro blows our minds with the latest research on fish consciousness, intelligence, and potential sentience.</p><p>This is a selection of highlights from episode #205 of <a href="https://80000hours.org/podcast/"><em>The 80,000 Hours Podcast</em></a>: <a href="https://80000hours.org/podcast/episodes/sebastien-moro-fish-cognition-senses-social-lives/"><strong>Sébastien Moro on the most insane things fish can do</strong></a>. These aren't necessarily the most important, or even most entertaining parts of the interview — so if you enjoy this, we strongly recommend checking out the full episode.</p><p>And if you're finding these highlights episodes valuable, please let us know by emailing <a href="mailto:podcast@80000hours.org">podcast@80000hours.org</a>.</p><p>Highlights:</p><ul><li>Luisa’s intro (00:00:00)</li><li>The ingenious innovations of Atlantic cod (00:00:19)</li><li>The mirror test triumph of cleaner wrasses (00:05:46)</li><li>The astounding accuracy of archerfish (00:10:30)</li><li>The magnificent memory of gobies (00:14:25)</li><li>The tactical teamwork of the grouper and moray eel (00:17:42)</li><li>The remarkable relationships of wild guppies (00:22:01)</li><li>Sébastien's take on fish consciousness (00:26:48)</li></ul><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </content:encoded>
      <pubDate>Tue, 12 Nov 2024 16:19:43 +0000</pubDate>
      <author>The 80,000 Hours team</author>
      <enclosure url="https://media.transistor.fm/e07bde94/ccd96dd1.mp3" length="29730331" type="audio/mpeg"/>
      <itunes:author>The 80,000 Hours team</itunes:author>
      <itunes:image href="https://img.transistor.fm/7806-5f-HN_P1MWv1vsD6apyO64QNfscvGtM1MXHEpg/rs:fill:0:0:1/w:3000/h:3000/q:60/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS81NTdm/OTA5NmE3NmVmZGM4/M2Y0ZDUwYmNiNWZh/NWU3ZC5qcGc.jpg"/>
      <itunes:duration>1855</itunes:duration>
      <itunes:summary>
        <![CDATA[<p><a href="https://www.cervelledoiseau.fr/">Science writer</a> and <a href="https://www.youtube.com/cervelledoiseau">video blogger</a> Sébastien Moro blows our minds with the latest research on fish consciousness, intelligence, and potential sentience.</p><p>This is a selection of highlights from episode #205 of <a href="https://80000hours.org/podcast/"><em>The 80,000 Hours Podcast</em></a>: <a href="https://80000hours.org/podcast/episodes/sebastien-moro-fish-cognition-senses-social-lives/"><strong>Sébastien Moro on the most insane things fish can do</strong></a>. These aren't necessarily the most important, or even most entertaining parts of the interview — so if you enjoy this, we strongly recommend checking out the full episode.</p><p>And if you're finding these highlights episodes valuable, please let us know by emailing <a href="mailto:podcast@80000hours.org">podcast@80000hours.org</a>.</p><p>Highlights:</p><ul><li>Luisa’s intro (00:00:00)</li><li>The ingenious innovations of Atlantic cod (00:00:19)</li><li>The mirror test triumph of cleaner wrasses (00:05:46)</li><li>The astounding accuracy of archerfish (00:10:30)</li><li>The magnificent memory of gobies (00:14:25)</li><li>The tactical teamwork of the grouper and moray eel (00:17:42)</li><li>The remarkable relationships of wild guppies (00:22:01)</li><li>Sébastien's take on fish consciousness (00:26:48)</li></ul><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </itunes:summary>
      <itunes:keywords>effective altruism, 80000 hours, impactful careers, existential risk</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/e07bde94/transcript.txt" type="text/plain"/>
      <podcast:chapters url="https://share.transistor.fm/s/e07bde94/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Highlights: #204 – Nate Silver on making sense of SBF, and his biggest critiques of effective altruism</title>
      <itunes:title>Highlights: #204 – Nate Silver on making sense of SBF, and his biggest critiques of effective altruism</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">0a65127b-e928-453c-817e-793dfaf92f3d</guid>
      <link>https://80000hours.org/podcast/episodes/nate-silver-effective-altruism-sbf-art-of-risk/</link>
      <description>
        <![CDATA[<p>Election forecaster Nate Silver gives his takes on: how effective altruism could be better, the stark tradeoffs we faced with COVID, whether the 13 Keys to the White House is "junk science," how to tell whose election predictions are better, and if venture capitalists really take risks.</p><p>This is a selection of highlights from episode #204 of <a href="https://80000hours.org/podcast/"><em>The 80,000 Hours Podcast</em></a>: <a href="https://80000hours.org/podcast/episodes/nate-silver-effective-altruism-sbf-art-of-risk"><strong>Nate Silver on making sense of SBF, and his biggest critiques of effective altruism</strong></a>. These aren't necessarily the most important, or even most entertaining parts of the interview — so if you enjoy this, we strongly recommend checking out the full episode.</p><p>And if you're finding these highlights episodes valuable, please let us know by emailing <a href="mailto:podcast@80000hours.org">podcast@80000hours.org</a>.</p><p>Highlights:</p><ul><li>Rob’s intro (00:00:00)</li><li>Is anyone doing better at "doing good better"? (00:00:29)</li><li>Is effective altruism too big to succeed? (00:02:19)</li><li>The stark tradeoffs we faced with COVID (00:06:02)</li><li>The 13 Keys to the White House (00:07:53)</li><li>Can we tell whose election predictions are better? (00:11:40)</li><li>Do venture capitalists really take risks? (00:16:29)</li></ul><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Election forecaster Nate Silver gives his takes on: how effective altruism could be better, the stark tradeoffs we faced with COVID, whether the 13 Keys to the White House is "junk science," how to tell whose election predictions are better, and if venture capitalists really take risks.</p><p>This is a selection of highlights from episode #204 of <a href="https://80000hours.org/podcast/"><em>The 80,000 Hours Podcast</em></a>: <a href="https://80000hours.org/podcast/episodes/nate-silver-effective-altruism-sbf-art-of-risk"><strong>Nate Silver on making sense of SBF, and his biggest critiques of effective altruism</strong></a>. These aren't necessarily the most important, or even most entertaining parts of the interview — so if you enjoy this, we strongly recommend checking out the full episode.</p><p>And if you're finding these highlights episodes valuable, please let us know by emailing <a href="mailto:podcast@80000hours.org">podcast@80000hours.org</a>.</p><p>Highlights:</p><ul><li>Rob’s intro (00:00:00)</li><li>Is anyone doing better at "doing good better"? (00:00:29)</li><li>Is effective altruism too big to succeed? (00:02:19)</li><li>The stark tradeoffs we faced with COVID (00:06:02)</li><li>The 13 Keys to the White House (00:07:53)</li><li>Can we tell whose election predictions are better? (00:11:40)</li><li>Do venture capitalists really take risks? (00:16:29)</li></ul><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </content:encoded>
      <pubDate>Wed, 30 Oct 2024 14:12:03 +0000</pubDate>
      <author>The 80,000 Hours team</author>
      <enclosure url="https://media.transistor.fm/48c56fc5/b4e73a98.mp3" length="13971720" type="audio/mpeg"/>
      <itunes:author>The 80,000 Hours team</itunes:author>
      <itunes:image href="https://img.transistor.fm/nQo3Dp0D0YNgMgadkvp1tyf-Ecey95tFJuHtgsp5raA/rs:fill:0:0:1/w:3000/h:3000/q:60/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9hOWMw/OTg1Y2NhZTQyYWY2/MmEyODkzYjU0YTJk/YjJlYy5wbmc.jpg"/>
      <itunes:duration>1160</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Election forecaster Nate Silver gives his takes on: how effective altruism could be better, the stark tradeoffs we faced with COVID, whether the 13 Keys to the White House is "junk science," how to tell whose election predictions are better, and if venture capitalists really take risks.</p><p>This is a selection of highlights from episode #204 of <a href="https://80000hours.org/podcast/"><em>The 80,000 Hours Podcast</em></a>: <a href="https://80000hours.org/podcast/episodes/nate-silver-effective-altruism-sbf-art-of-risk"><strong>Nate Silver on making sense of SBF, and his biggest critiques of effective altruism</strong></a>. These aren't necessarily the most important, or even most entertaining parts of the interview — so if you enjoy this, we strongly recommend checking out the full episode.</p><p>And if you're finding these highlights episodes valuable, please let us know by emailing <a href="mailto:podcast@80000hours.org">podcast@80000hours.org</a>.</p><p>Highlights:</p><ul><li>Rob’s intro (00:00:00)</li><li>Is anyone doing better at "doing good better"? (00:00:29)</li><li>Is effective altruism too big to succeed? (00:02:19)</li><li>The stark tradeoffs we faced with COVID (00:06:02)</li><li>The 13 Keys to the White House (00:07:53)</li><li>Can we tell whose election predictions are better? (00:11:40)</li><li>Do venture capitalists really take risks? (00:16:29)</li></ul><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </itunes:summary>
      <itunes:keywords>effective altruism, 80000 hours, impactful careers, existential risk</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/48c56fc5/transcript.txt" type="text/plain"/>
      <podcast:chapters url="https://share.transistor.fm/s/48c56fc5/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Highlights: Luisa and Keiran on free will, and the consequences of never feeling enduring guilt or shame</title>
      <itunes:title>Highlights: Luisa and Keiran on free will, and the consequences of never feeling enduring guilt or shame</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">225322dd-211a-4846-b1fe-4a41348333d6</guid>
      <link>https://80000hours.org/after-hours-podcast/episodes/luisa-keiran-free-will-guilt-shame/</link>
      <description>
        <![CDATA[<p>This is a selection of highlights from our April 2023 episode with host Luisa Rodriguez and producer Keiran Harris on <a href="https://80000hours.org/after-hours-podcast/"><em>80k After Hours</em></a>. These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/after-hours-podcast/episodes/luisa-keiran-free-will-guilt-shame/"><strong>Luisa and Keiran on free will, and the consequences of never feeling enduring guilt or shame</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing <a href="mailto:podcast@80000hours.org">podcast@80000hours.org</a>.</p><p>Highlights:</p><ul><li>Keiran’s intro (00:00:00)</li><li>Jerk Syndrome (00:00:53)</li><li>The basic case for free will being an illusion (00:05:10)</li><li>Feeling bad about not being a different person (00:08:29)</li><li>Implications for the criminal justice system (00:10:57)</li></ul><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This is a selection of highlights from our April 2023 episode with host Luisa Rodriguez and producer Keiran Harris on <a href="https://80000hours.org/after-hours-podcast/"><em>80k After Hours</em></a>. These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/after-hours-podcast/episodes/luisa-keiran-free-will-guilt-shame/"><strong>Luisa and Keiran on free will, and the consequences of never feeling enduring guilt or shame</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing <a href="mailto:podcast@80000hours.org">podcast@80000hours.org</a>.</p><p>Highlights:</p><ul><li>Keiran’s intro (00:00:00)</li><li>Jerk Syndrome (00:00:53)</li><li>The basic case for free will being an illusion (00:05:10)</li><li>Feeling bad about not being a different person (00:08:29)</li><li>Implications for the criminal justice system (00:10:57)</li></ul><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </content:encoded>
      <pubDate>Mon, 21 Oct 2024 15:00:52 +0000</pubDate>
      <author>The 80,000 Hours team</author>
      <enclosure url="https://media.transistor.fm/1deda15e/fb30ed19.mp3" length="9584505" type="audio/mpeg"/>
      <itunes:author>The 80,000 Hours team</itunes:author>
      <itunes:image href="https://img.transistor.fm/wfdTTGFgNB5yvTWtTqxRT9OJ51ICtJjKNt_avUuuMkQ/rs:fill:0:0:1/w:3000/h:3000/q:60/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9hMzcx/Y2RjZGU4Mzc0NDM5/ODRiOWY5MGI4Mzhj/OTEyZC5qcGc.jpg"/>
      <itunes:duration>795</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This is a selection of highlights from our April 2023 episode with host Luisa Rodriguez and producer Keiran Harris on <a href="https://80000hours.org/after-hours-podcast/"><em>80k After Hours</em></a>. These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/after-hours-podcast/episodes/luisa-keiran-free-will-guilt-shame/"><strong>Luisa and Keiran on free will, and the consequences of never feeling enduring guilt or shame</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing <a href="mailto:podcast@80000hours.org">podcast@80000hours.org</a>.</p><p>Highlights:</p><ul><li>Keiran’s intro (00:00:00)</li><li>Jerk Syndrome (00:00:53)</li><li>The basic case for free will being an illusion (00:05:10)</li><li>Feeling bad about not being a different person (00:08:29)</li><li>Implications for the criminal justice system (00:10:57)</li></ul><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </itunes:summary>
      <itunes:keywords>effective altruism, 80000 hours, impactful careers, existential risk</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/1deda15e/transcript.txt" type="text/plain"/>
      <podcast:chapters url="https://share.transistor.fm/s/1deda15e/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Highlights: #203 – Peter Godfrey-Smith on interfering with wild nature, accepting death, and the origin of complex civilisation</title>
      <itunes:title>Highlights: #203 – Peter Godfrey-Smith on interfering with wild nature, accepting death, and the origin of complex civilisation</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">6f9c847a-b6c6-499f-b7e4-2e731902ff0e</guid>
      <link>https://80000hours.org/podcast/episodes/peter-godfrey-smith-wild-animal-suffering-complex-life/</link>
      <description>
        <![CDATA[<p>This is a selection of highlights from episode #203 of <a href="https://80000hours.org/podcast/"><em>The 80,000 Hours Podcast</em></a>. These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/peter-godfrey-smith-wild-animal-suffering-complex-life/"><strong>Peter Godfrey-Smith on interfering with wild nature, accepting death, and the origin of complex civilisation</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing <a href="mailto:podcast@80000hours.org">podcast@80000hours.org</a>.</p><p>Highlights:</p><ul><li>Luisa’s intro (00:00:00)</li><li>Thinking about death (00:00:24)</li><li>Uploads of ourselves (00:05:32)</li><li>Against intervening in wild nature (00:12:36)</li><li>Eliminating the worst experiences in wild nature (00:16:15)</li><li>To be human or wild animal? (00:21:46)</li><li>Challenges for water-based animals (00:27:38)</li></ul><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This is a selection of highlights from episode #203 of <a href="https://80000hours.org/podcast/"><em>The 80,000 Hours Podcast</em></a>. These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/peter-godfrey-smith-wild-animal-suffering-complex-life/"><strong>Peter Godfrey-Smith on interfering with wild nature, accepting death, and the origin of complex civilisation</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing <a href="mailto:podcast@80000hours.org">podcast@80000hours.org</a>.</p><p>Highlights:</p><ul><li>Luisa’s intro (00:00:00)</li><li>Thinking about death (00:00:24)</li><li>Uploads of ourselves (00:05:32)</li><li>Against intervening in wild nature (00:12:36)</li><li>Eliminating the worst experiences in wild nature (00:16:15)</li><li>To be human or wild animal? (00:21:46)</li><li>Challenges for water-based animals (00:27:38)</li></ul><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </content:encoded>
      <pubDate>Fri, 18 Oct 2024 15:00:57 +0000</pubDate>
      <author>The 80,000 Hours team</author>
      <enclosure url="https://media.transistor.fm/f09afae9/b633d84f.mp3" length="24348650" type="audio/mpeg"/>
      <itunes:author>The 80,000 Hours team</itunes:author>
      <itunes:image href="https://img.transistor.fm/OLjSxSm60wRpmY02UN9-4kvIgKy2qoazLDXSokUv7pc/rs:fill:0:0:1/w:3000/h:3000/q:60/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lNDA5/NDIyOTk1ZTdhNTg0/MTMyNGI5MzMxZDgw/MzEzYi5qcGc.jpg"/>
      <itunes:duration>2026</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This is a selection of highlights from episode #203 of <a href="https://80000hours.org/podcast/"><em>The 80,000 Hours Podcast</em></a>. These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/peter-godfrey-smith-wild-animal-suffering-complex-life/"><strong>Peter Godfrey-Smith on interfering with wild nature, accepting death, and the origin of complex civilisation</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing <a href="mailto:podcast@80000hours.org">podcast@80000hours.org</a>.</p><p>Highlights:</p><ul><li>Luisa’s intro (00:00:00)</li><li>Thinking about death (00:00:24)</li><li>Uploads of ourselves (00:05:32)</li><li>Against intervening in wild nature (00:12:36)</li><li>Eliminating the worst experiences in wild nature (00:16:15)</li><li>To be human or wild animal? (00:21:46)</li><li>Challenges for water-based animals (00:27:38)</li></ul><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </itunes:summary>
      <itunes:keywords>effective altruism, 80000 hours, impactful careers, existential risk</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/f09afae9/transcript.txt" type="text/plain"/>
      <podcast:chapters url="https://share.transistor.fm/s/f09afae9/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Off the Clock #6: Starting Small with Conor Barnes</title>
      <itunes:title>Off the Clock #6: Starting Small with Conor Barnes</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">17709137-0f8b-42a3-8ef1-f6758ba7d5ee</guid>
      <link>https://share.transistor.fm/s/ace84848</link>
      <description>
        <![CDATA[<p><strong>Watch this episode on YouTube! </strong><a href="https://youtu.be/yncw2T77OAc"><strong>https://youtu.be/yncw2T77OAc</strong></a></p><p>Matt, Bella, and Huon sit down with Conor Barnes to discuss unlikely journeys, EA criticism, discipline, timeless decision theory, and how to do the most good with a degree in classics. </p><p>Check out:</p><ul><li>Conor’s 100 Tips for a Better Life: <a href="https://www.lesswrong.com/posts/7hFeMWC6Y5eaSixbD/100-tips-for-a-better-life">https://www.lesswrong.com/posts/7hFeMWC6Y5eaSixbD/100-tips-for-a-better-life</a></li><li>Conor’s writing: <a href="https://parhelia.conorbarnes.com/">https://parhelia.conorbarnes.com/</a></li><li>Zvi on timeless decision theory: <a href="https://www.lesswrong.com/posts/scwoBEju75C45W5n3/how-i-lost-100-pounds-using-tdt">https://www.lesswrong.com/posts/scwoBEju75C45W5n3/how-i-lost-100-pounds-using-tdt</a></li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p><strong>Watch this episode on YouTube! </strong><a href="https://youtu.be/yncw2T77OAc"><strong>https://youtu.be/yncw2T77OAc</strong></a></p><p>Matt, Bella, and Huon sit down with Conor Barnes to discuss unlikely journeys, EA criticism, discipline, timeless decision theory, and how to do the most good with a degree in classics. </p><p>Check out:</p><ul><li>Conor’s 100 Tips for a Better Life: <a href="https://www.lesswrong.com/posts/7hFeMWC6Y5eaSixbD/100-tips-for-a-better-life">https://www.lesswrong.com/posts/7hFeMWC6Y5eaSixbD/100-tips-for-a-better-life</a></li><li>Conor’s writing: <a href="https://parhelia.conorbarnes.com/">https://parhelia.conorbarnes.com/</a></li><li>Zvi on timeless decision theory: <a href="https://www.lesswrong.com/posts/scwoBEju75C45W5n3/how-i-lost-100-pounds-using-tdt">https://www.lesswrong.com/posts/scwoBEju75C45W5n3/how-i-lost-100-pounds-using-tdt</a></li></ul>]]>
      </content:encoded>
      <pubDate>Tue, 15 Oct 2024 15:31:47 +0000</pubDate>
      <author>The 80,000 Hours team</author>
      <enclosure url="https://media.transistor.fm/ace84848/e073daf1.mp3" length="158377319" type="audio/mpeg"/>
      <itunes:author>The 80,000 Hours team</itunes:author>
      <itunes:image href="https://img.transistor.fm/LHz5sQlFbBdP8Tbc5n0Zdi2ynFfYin5lSGarII9othM/rs:fill:0:0:1/w:3000/h:3000/q:60/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9kYzBk/NDY5MzNkZjJmMjc0/MWM4NmQ2M2I5MjU3/YTJlMS5qcGc.jpg"/>
      <itunes:duration>3943</itunes:duration>
      <itunes:summary>
        <![CDATA[<p><strong>Watch this episode on YouTube! </strong><a href="https://youtu.be/yncw2T77OAc"><strong>https://youtu.be/yncw2T77OAc</strong></a></p><p>Matt, Bella, and Huon sit down with Conor Barnes to discuss unlikely journeys, EA criticism, discipline, timeless decision theory, and how to do the most good with a degree in classics. </p><p>Check out:</p><ul><li>Conor’s 100 Tips for a Better Life: <a href="https://www.lesswrong.com/posts/7hFeMWC6Y5eaSixbD/100-tips-for-a-better-life">https://www.lesswrong.com/posts/7hFeMWC6Y5eaSixbD/100-tips-for-a-better-life</a></li><li>Conor’s writing: <a href="https://parhelia.conorbarnes.com/">https://parhelia.conorbarnes.com/</a></li><li>Zvi on timeless decision theory: <a href="https://www.lesswrong.com/posts/scwoBEju75C45W5n3/how-i-lost-100-pounds-using-tdt">https://www.lesswrong.com/posts/scwoBEju75C45W5n3/how-i-lost-100-pounds-using-tdt</a></li></ul>]]>
      </itunes:summary>
      <itunes:keywords>effective altruism, 80000 hours, impactful careers, existential risk</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/ace84848/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>Highlights: #202 – Venki Ramakrishnan on the cutting edge of anti-ageing science</title>
      <itunes:title>Highlights: #202 – Venki Ramakrishnan on the cutting edge of anti-ageing science</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">ac61b930-1ee6-4e37-90b4-46ab4e1b4b4a</guid>
      <link>https://80000hours.org/podcast/episodes/venki-ramakrishnan-ageing-life-extension/</link>
      <description>
        <![CDATA[<p>This is a selection of highlights from episode #202 of <a href="https://80000hours.org/podcast/"><em>The 80,000 Hours Podcast</em></a>. These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/venki-ramakrishnan-ageing-life-extension/"><strong>Venki Ramakrishnan on the cutting edge of anti-ageing science</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing <a href="mailto:podcast@80000hours.org">podcast@80000hours.org</a>.</p><p>Highlights:</p><ul><li>Luisa’s intro (00:00:00)</li><li>Is death an inevitable consequence of evolution? (00:00:15)</li><li>How much additional healthspan will the next 20 to 30 years of ageing research buy us? (00:03:10)</li><li>The social impacts of radical life extension (00:05:46)</li><li>Could increased longevity increase inequality? (00:10:06)</li><li>Does injecting an old body with young blood slow ageing? (00:14:23)</li><li>Freezing cells, organs, and bodies (00:18:35)</li></ul><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This is a selection of highlights from episode #202 of <a href="https://80000hours.org/podcast/"><em>The 80,000 Hours Podcast</em></a>. These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/venki-ramakrishnan-ageing-life-extension/"><strong>Venki Ramakrishnan on the cutting edge of anti-ageing science</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing <a href="mailto:podcast@80000hours.org">podcast@80000hours.org</a>.</p><p>Highlights:</p><ul><li>Luisa’s intro (00:00:00)</li><li>Is death an inevitable consequence of evolution? (00:00:15)</li><li>How much additional healthspan will the next 20 to 30 years of ageing research buy us? (00:03:10)</li><li>The social impacts of radical life extension (00:05:46)</li><li>Could increased longevity increase inequality? (00:10:06)</li><li>Does injecting an old body with young blood slow ageing? (00:14:23)</li><li>Freezing cells, organs, and bodies (00:18:35)</li></ul><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </content:encoded>
      <pubDate>Fri, 04 Oct 2024 14:53:40 +0000</pubDate>
      <author>The 80,000 Hours team</author>
      <enclosure url="https://media.transistor.fm/de1b5ad2/aea756f9.mp3" length="16715661" type="audio/mpeg"/>
      <itunes:author>The 80,000 Hours team</itunes:author>
      <itunes:image href="https://img.transistor.fm/Id1ziqbAwA1k3UrLBZ-xF5QKrBN2C891ozjzAZXxJow/rs:fill:0:0:1/w:3000/h:3000/q:60/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS82Mjhj/ZGNjYWQ3YjYwNWEz/ZDJjYTAxM2RjNTU5/NzYwZS5qcGc.jpg"/>
      <itunes:duration>1390</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This is a selection of highlights from episode #202 of <a href="https://80000hours.org/podcast/"><em>The 80,000 Hours Podcast</em></a>. These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/venki-ramakrishnan-ageing-life-extension/"><strong>Venki Ramakrishnan on the cutting edge of anti-ageing science</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing <a href="mailto:podcast@80000hours.org">podcast@80000hours.org</a>.</p><p>Highlights:</p><ul><li>Luisa’s intro (00:00:00)</li><li>Is death an inevitable consequence of evolution? (00:00:15)</li><li>How much additional healthspan will the next 20 to 30 years of ageing research buy us? (00:03:10)</li><li>The social impacts of radical life extension (00:05:46)</li><li>Could increased longevity increase inequality? (00:10:06)</li><li>Does injecting an old body with young blood slow ageing? (00:14:23)</li><li>Freezing cells, organs, and bodies (00:18:35)</li></ul><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </itunes:summary>
      <itunes:keywords>effective altruism, 80000 hours, impactful careers, existential risk</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/de1b5ad2/transcript.txt" type="text/plain"/>
      <podcast:chapters url="https://share.transistor.fm/s/de1b5ad2/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Highlights: #201 – Ken Goldberg on why your robot butler isn’t here yet</title>
      <itunes:title>Highlights: #201 – Ken Goldberg on why your robot butler isn’t here yet</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">8e0d6fd6-a5a1-4df9-8cfa-f24fa20414b1</guid>
      <link>https://80000hours.org/podcast/episodes/ken-goldberg-robotics/</link>
      <description>
        <![CDATA[<p>This is a selection of highlights from episode #201 of <a href="https://80000hours.org/podcast/"><em>The 80,000 Hours Podcast</em></a>. These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/ken-goldberg-robotics/"><strong>Ken Goldberg on why your robot butler isn’t here yet</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing <a href="mailto:podcast@80000hours.org">podcast@80000hours.org</a>.</p><p>Highlights:</p><ul><li>Luisa’s intro (00:00:00)</li><li>Moravec's paradox (00:00:22)</li><li>Successes in robotics to date (00:03:51)</li><li>Why perception is a big challenge for robotics (00:07:02)</li><li>Why low fault tolerance makes some skills extra hard to automate (00:12:29)</li><li>How might robot labour affect the job market? (00:17:19)</li></ul><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This is a selection of highlights from episode #201 of <a href="https://80000hours.org/podcast/"><em>The 80,000 Hours Podcast</em></a>. These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/ken-goldberg-robotics/"><strong>Ken Goldberg on why your robot butler isn’t here yet</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing <a href="mailto:podcast@80000hours.org">podcast@80000hours.org</a>.</p><p>Highlights:</p><ul><li>Luisa’s intro (00:00:00)</li><li>Moravec's paradox (00:00:22)</li><li>Successes in robotics to date (00:03:51)</li><li>Why perception is a big challenge for robotics (00:07:02)</li><li>Why low fault tolerance makes some skills extra hard to automate (00:12:29)</li><li>How might robot labour affect the job market? (00:17:19)</li></ul><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </content:encoded>
      <pubDate>Mon, 30 Sep 2024 14:18:05 +0000</pubDate>
      <author>The 80,000 Hours team</author>
      <enclosure url="https://media.transistor.fm/ddf44bbe/b055a047.mp3" length="16167802" type="audio/mpeg"/>
      <itunes:author>The 80,000 Hours team</itunes:author>
      <itunes:image href="https://img.transistor.fm/Utz5MlVS1WQaBqbI3LRGnHVbbg4xxgFkMC7j4rYMM7o/rs:fill:0:0:1/w:3000/h:3000/q:60/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS85ZjIy/NGJjNmQ5Y2YxZjI5/ZjRhYWUyMTQ0MmVk/MzM4Zi5qcGc.jpg"/>
      <itunes:duration>1345</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This is a selection of highlights from episode #201 of <a href="https://80000hours.org/podcast/"><em>The 80,000 Hours Podcast</em></a>. These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/ken-goldberg-robotics/"><strong>Ken Goldberg on why your robot butler isn’t here yet</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing <a href="mailto:podcast@80000hours.org">podcast@80000hours.org</a>.</p><p>Highlights:</p><ul><li>Luisa’s intro (00:00:00)</li><li>Moravec's paradox (00:00:22)</li><li>Successes in robotics to date (00:03:51)</li><li>Why perception is a big challenge for robotics (00:07:02)</li><li>Why low fault tolerance makes some skills extra hard to automate (00:12:29)</li><li>How might robot labour affect the job market? (00:17:19)</li></ul><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </itunes:summary>
      <itunes:keywords>effective altruism, 80000 hours, impactful careers, existential risk</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/ddf44bbe/transcript.txt" type="text/plain"/>
      <podcast:chapters url="https://share.transistor.fm/s/ddf44bbe/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Highlights: #200 – Ezra Karger on what superforecasters and experts think about existential risks</title>
      <itunes:title>Highlights: #200 – Ezra Karger on what superforecasters and experts think about existential risks</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">624d0cf0-b24e-4f71-b561-482df8e15ba6</guid>
      <link>https://80000hours.org/podcast/episodes/ezra-karger-forecasting-existential-risks/</link>
      <description>
        <![CDATA[<p>This is a selection of highlights from episode #200 of <a href="https://80000hours.org/podcast/"><em>The 80,000 Hours Podcast</em></a>. These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/ezra-karger-forecasting-existential-risks/"><strong>Ezra Karger on what superforecasters and experts think about existential risks</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing <a href="mailto:podcast@80000hours.org">podcast@80000hours.org</a>.</p><p>Highlights:</p><ul><li>Luisa’s intro (00:00:00)</li><li>Why we need forecasts about existential risks (00:00:26)</li><li>Headline estimates of existential and catastrophic risks (00:02:43)</li><li>What explains disagreements about AI risks? (00:06:18)</li><li>Learning more doesn't resolve disagreements about AI risks (00:08:59)</li><li>A lot of disagreement about AI risks is about when AI will pose risks (00:11:31)</li><li>Cruxes about AI risks (00:15:17)</li><li>Is forecasting actually useful in the real world? (00:18:24)</li></ul><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This is a selection of highlights from episode #200 of <a href="https://80000hours.org/podcast/"><em>The 80,000 Hours Podcast</em></a>. These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/ezra-karger-forecasting-existential-risks/"><strong>Ezra Karger on what superforecasters and experts think about existential risks</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing <a href="mailto:podcast@80000hours.org">podcast@80000hours.org</a>.</p><p>Highlights:</p><ul><li>Luisa’s intro (00:00:00)</li><li>Why we need forecasts about existential risks (00:00:26)</li><li>Headline estimates of existential and catastrophic risks (00:02:43)</li><li>What explains disagreements about AI risks? (00:06:18)</li><li>Learning more doesn't resolve disagreements about AI risks (00:08:59)</li><li>A lot of disagreement about AI risks is about when AI will pose risks (00:11:31)</li><li>Cruxes about AI risks (00:15:17)</li><li>Is forecasting actually useful in the real world? (00:18:24)</li></ul><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </content:encoded>
      <pubDate>Wed, 18 Sep 2024 14:06:00 +0000</pubDate>
      <author>The 80,000 Hours team</author>
      <enclosure url="https://media.transistor.fm/27dc4b77/8d5b66fe.mp3" length="16521875" type="audio/mpeg"/>
      <itunes:author>The 80,000 Hours team</itunes:author>
      <itunes:image href="https://img.transistor.fm/bg0jD4ExD7RUtMtc7Ir-KEKRCdPBm_92J4nMkFIeuMA/rs:fill:0:0:1/w:3000/h:3000/q:60/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS84YzBm/ZGRmN2ZkMDhmZjZh/NTAzY2JkNWIwYzY5/M2UxMS5qcGc.jpg"/>
      <itunes:duration>1374</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This is a selection of highlights from episode #200 of <a href="https://80000hours.org/podcast/"><em>The 80,000 Hours Podcast</em></a>. These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/ezra-karger-forecasting-existential-risks/"><strong>Ezra Karger on what superforecasters and experts think about existential risks</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing <a href="mailto:podcast@80000hours.org">podcast@80000hours.org</a>.</p><p>Highlights:</p><ul><li>Luisa’s intro (00:00:00)</li><li>Why we need forecasts about existential risks (00:00:26)</li><li>Headline estimates of existential and catastrophic risks (00:02:43)</li><li>What explains disagreements about AI risks? (00:06:18)</li><li>Learning more doesn't resolve disagreements about AI risks (00:08:59)</li><li>A lot of disagreement about AI risks is about when AI will pose risks (00:11:31)</li><li>Cruxes about AI risks (00:15:17)</li><li>Is forecasting actually useful in the real world? (00:18:24)</li></ul><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </itunes:summary>
      <itunes:keywords>effective altruism, 80000 hours, impactful careers, existential risk</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/27dc4b77/transcript.txt" type="text/plain"/>
      <podcast:chapters url="https://share.transistor.fm/s/27dc4b77/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Highlights: #199 – Nathan Calvin on California’s AI bill SB 1047 and its potential to shape US AI policy</title>
      <itunes:title>Highlights: #199 – Nathan Calvin on California’s AI bill SB 1047 and its potential to shape US AI policy</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">8874968b-9d74-43cc-8ac1-b6134d9f76d3</guid>
      <link>https://80000hours.org/podcast/episodes/nathan-calvin-sb-1047-california-ai-safety-bill/</link>
      <description>
        <![CDATA[<p>This is a selection of highlights from episode #199 of <a href="https://80000hours.org/podcast/"><em>The 80,000 Hours Podcast</em></a>. These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/nathan-calvin-sb-1047-california-ai-safety-bill/"><strong>Nathan Calvin on California’s AI bill SB 1047 and its potential to shape US AI policy</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing <a href="mailto:podcast@80000hours.org">podcast@80000hours.org</a>.</p><p>Highlights:</p><ul><li>Luisa’s intro (00:00:00)</li><li>Why we can't count on AI companies to self-regulate (00:00:21)</li><li>SB 1047's impact on open source models (00:04:24)</li><li>Why it's not "too early" for AI policies (00:07:54)</li><li>Why working on state-level policy could have an outsized impact (00:11:47)</li></ul><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This is a selection of highlights from episode #199 of <a href="https://80000hours.org/podcast/"><em>The 80,000 Hours Podcast</em></a>. These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/nathan-calvin-sb-1047-california-ai-safety-bill/"><strong>Nathan Calvin on California’s AI bill SB 1047 and its potential to shape US AI policy</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing <a href="mailto:podcast@80000hours.org">podcast@80000hours.org</a>.</p><p>Highlights:</p><ul><li>Luisa’s intro (00:00:00)</li><li>Why we can't count on AI companies to self-regulate (00:00:21)</li><li>SB 1047's impact on open source models (00:04:24)</li><li>Why it's not "too early" for AI policies (00:07:54)</li><li>Why working on state-level policy could have an outsized impact (00:11:47)</li></ul><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </content:encoded>
      <pubDate>Thu, 12 Sep 2024 17:35:49 +0000</pubDate>
      <author>The 80,000 Hours team</author>
      <enclosure url="https://media.transistor.fm/dcf0f24c/44796d7a.mp3" length="11077298" type="audio/mpeg"/>
      <itunes:author>The 80,000 Hours team</itunes:author>
      <itunes:image href="https://img.transistor.fm/Y7Iat_G5gegdFAcWptf00SfwXuiCfkOtsOY7fJBqRLg/rs:fill:0:0:1/w:3000/h:3000/q:60/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lNjgz/YWI2YWNjNGQzODU5/OTAxZjg4Y2IwMzA1/ZmI0ZC5qcGVn.jpg"/>
      <itunes:duration>918</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This is a selection of highlights from episode #199 of <a href="https://80000hours.org/podcast/"><em>The 80,000 Hours Podcast</em></a>. These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/nathan-calvin-sb-1047-california-ai-safety-bill/"><strong>Nathan Calvin on California’s AI bill SB 1047 and its potential to shape US AI policy</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing <a href="mailto:podcast@80000hours.org">podcast@80000hours.org</a>.</p><p>Highlights:</p><ul><li>Luisa’s intro (00:00:00)</li><li>Why we can't count on AI companies to self-regulate (00:00:21)</li><li>SB 1047's impact on open source models (00:04:24)</li><li>Why it's not "too early" for AI policies (00:07:54)</li><li>Why working on state-level policy could have an outsized impact (00:11:47)</li></ul><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </itunes:summary>
      <itunes:keywords>effective altruism, 80000 hours, impactful careers, existential risk</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/dcf0f24c/transcript.txt" type="text/plain"/>
      <podcast:chapters url="https://share.transistor.fm/s/dcf0f24c/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Highlights: #198 – Meghan Barrett on challenging our assumptions about insects</title>
      <itunes:title>Highlights: #198 – Meghan Barrett on challenging our assumptions about insects</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">99fe7a9f-759d-42d6-b7e8-c35c09b4525b</guid>
      <link>https://80000hours.org/podcast/episodes/meghan-barrett-insect-pain-consciousness-sentience/</link>
      <description>
        <![CDATA[<p>This is a selection of highlights from episode #198 of <a href="https://80000hours.org/podcast/"><em>The 80,000 Hours Podcast</em></a>. These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/meghan-barrett-insect-pain-consciousness-sentience/"><strong>Meghan Barrett on challenging our assumptions about insects</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing <a href="mailto:podcast@80000hours.org">podcast@80000hours.org</a>.</p><p>Highlights:</p><ul><li>Luisa’s intro (00:00:00)</li><li>Size diversity (00:00:16)</li><li>Offspring, parental investment, and lifespan (00:03:18)</li><li>Headless cockroaches (00:06:13)</li><li>Is self-protective behaviour a reflex? (00:08:50)</li><li>If insects feel pain, is it mild or severe? (00:11:54)</li><li>Evolutionary perspective on insect sentience (00:16:53)</li><li>How likely is insect sentience? (00:20:25)</li></ul><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This is a selection of highlights from episode #198 of <a href="https://80000hours.org/podcast/"><em>The 80,000 Hours Podcast</em></a>. These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/meghan-barrett-insect-pain-consciousness-sentience/"><strong>Meghan Barrett on challenging our assumptions about insects</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing <a href="mailto:podcast@80000hours.org">podcast@80000hours.org</a>.</p><p>Highlights:</p><ul><li>Luisa’s intro (00:00:00)</li><li>Size diversity (00:00:16)</li><li>Offspring, parental investment, and lifespan (00:03:18)</li><li>Headless cockroaches (00:06:13)</li><li>Is self-protective behaviour a reflex? (00:08:50)</li><li>If insects feel pain, is it mild or severe? (00:11:54)</li><li>Evolutionary perspective on insect sentience (00:16:53)</li><li>How likely is insect sentience? (00:20:25)</li></ul><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </content:encoded>
      <pubDate>Mon, 09 Sep 2024 15:15:12 +0000</pubDate>
      <author>The 80,000 Hours team</author>
      <enclosure url="https://media.transistor.fm/1a474af4/470c64c0.mp3" length="17232409" type="audio/mpeg"/>
      <itunes:author>The 80,000 Hours team</itunes:author>
      <itunes:image href="https://img.transistor.fm/sZRylGtFu1_l8_iM2UlQte2Xhjo7o0bqktLx3nWSJFU/rs:fill:0:0:1/w:3000/h:3000/q:60/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9mNDM0/YzA4NzVjMjhhYzkx/NWJkODUyYzk3MGJm/YzhjYy5qcGVn.jpg"/>
      <itunes:duration>1433</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This is a selection of highlights from episode #198 of <a href="https://80000hours.org/podcast/"><em>The 80,000 Hours Podcast</em></a>. These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/meghan-barrett-insect-pain-consciousness-sentience/"><strong>Meghan Barrett on challenging our assumptions about insects</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing <a href="mailto:podcast@80000hours.org">podcast@80000hours.org</a>.</p><p>Highlights:</p><ul><li>Luisa’s intro (00:00:00)</li><li>Size diversity (00:00:16)</li><li>Offspring, parental investment, and lifespan (00:03:18)</li><li>Headless cockroaches (00:06:13)</li><li>Is self-protective behaviour a reflex? (00:08:50)</li><li>If insects feel pain, is it mild or severe? (00:11:54)</li><li>Evolutionary perspective on insect sentience (00:16:53)</li><li>How likely is insect sentience? (00:20:25)</li></ul><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </itunes:summary>
      <itunes:keywords>effective altruism, 80000 hours, impactful careers, existential risk</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/1a474af4/transcript.txt" type="text/plain"/>
      <podcast:chapters url="https://share.transistor.fm/s/1a474af4/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Highlights: #197 – Nick Joseph on whether Anthropic’s AI safety policy is up to the task</title>
      <itunes:title>Highlights: #197 – Nick Joseph on whether Anthropic’s AI safety policy is up to the task</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">9ab75afc-9121-490d-8e4a-4228257d138b</guid>
      <link>https://80000hours.org/podcast/episodes/nick-joseph-anthropic-safety-approach-responsible-scaling/</link>
      <description>
        <![CDATA[<p>This is a selection of highlights from episode #197 of <a href="https://80000hours.org/podcast/"><em>The 80,000 Hours Podcast</em></a>. These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/nick-joseph-anthropic-safety-approach-responsible-scaling/"><strong>Nick Joseph on whether Anthropic’s AI safety policy is up to the task</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing <a href="mailto:podcast@80000hours.org">podcast@80000hours.org</a>.</p><p>Highlights:</p><ul><li>Rob's intro (00:00:00)</li><li>What Anthropic's responsible scaling policy commits the company to doing (00:00:17)</li><li>Why Nick is a big fan of the RSP approach (00:02:13)</li><li>Are RSPs still valuable if the people using them aren't bought in? (00:05:07)</li><li>Nick's biggest reservations about the RSP approach (00:08:01)</li><li>Should Anthropic's RSP have wider safety buffers? (00:11:17)</li><li>Alternatives to RSPs (00:14:57)</li><li>Should concerned people be willing to take capabilities roles? (00:19:22)</li></ul><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This is a selection of highlights from episode #197 of <a href="https://80000hours.org/podcast/"><em>The 80,000 Hours Podcast</em></a>. These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/nick-joseph-anthropic-safety-approach-responsible-scaling/"><strong>Nick Joseph on whether Anthropic’s AI safety policy is up to the task</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing <a href="mailto:podcast@80000hours.org">podcast@80000hours.org</a>.</p><p>Highlights:</p><ul><li>Rob's intro (00:00:00)</li><li>What Anthropic's responsible scaling policy commits the company to doing (00:00:17)</li><li>Why Nick is a big fan of the RSP approach (00:02:13)</li><li>Are RSPs still valuable if the people using them aren't bought in? (00:05:07)</li><li>Nick's biggest reservations about the RSP approach (00:08:01)</li><li>Should Anthropic's RSP have wider safety buffers? (00:11:17)</li><li>Alternatives to RSPs (00:14:57)</li><li>Should concerned people be willing to take capabilities roles? (00:19:22)</li></ul><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </content:encoded>
      <pubDate>Thu, 05 Sep 2024 14:16:28 +0000</pubDate>
      <author>The 80,000 Hours team</author>
      <enclosure url="https://media.transistor.fm/27797b4c/175537f1.mp3" length="31944366" type="audio/mpeg"/>
      <itunes:author>The 80,000 Hours team</itunes:author>
      <itunes:image href="https://img.transistor.fm/z8EQDKY0lnRRZ0ISCC4MgJ1Kh_804vTpKWABwo_BKWU/rs:fill:0:0:1/w:3000/h:3000/q:60/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9kNWUy/NDc1NDM4MmQ2NTJk/Y2ExMGU4ZjhkNmI3/MDEyZi5qcGVn.jpg"/>
      <itunes:duration>1330</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This is a selection of highlights from episode #197 of <a href="https://80000hours.org/podcast/"><em>The 80,000 Hours Podcast</em></a>. These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/nick-joseph-anthropic-safety-approach-responsible-scaling/"><strong>Nick Joseph on whether Anthropic’s AI safety policy is up to the task</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing <a href="mailto:podcast@80000hours.org">podcast@80000hours.org</a>.</p><p>Highlights:</p><ul><li>Rob's intro (00:00:00)</li><li>What Anthropic's responsible scaling policy commits the company to doing (00:00:17)</li><li>Why Nick is a big fan of the RSP approach (00:02:13)</li><li>Are RSPs still valuable if the people using them aren't bought in? (00:05:07)</li><li>Nick's biggest reservations about the RSP approach (00:08:01)</li><li>Should Anthropic's RSP have wider safety buffers? (00:11:17)</li><li>Alternatives to RSPs (00:14:57)</li><li>Should concerned people be willing to take capabilities roles? (00:19:22)</li></ul><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </itunes:summary>
      <itunes:keywords>effective altruism, 80000 hours, impactful careers, existential risk</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/27797b4c/transcript.txt" type="text/plain"/>
      <podcast:chapters url="https://share.transistor.fm/s/27797b4c/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Highlights: #196 – Jonathan Birch on the edge cases of sentience and why they matter</title>
      <itunes:title>Highlights: #196 – Jonathan Birch on the edge cases of sentience and why they matter</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">93b80705-035f-4166-a895-f3666c3adfa7</guid>
      <link>https://80000hours.org/podcast/episodes/jonathan-birch-edge-sentience-uncertainty/</link>
      <description>
        <![CDATA[<p>This is a selection of highlights from episode #196 of <a href="https://80000hours.org/podcast/"><em>The 80,000 Hours Podcast</em></a>. These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/jonathan-birch-edge-sentience-uncertainty/"><strong>Jonathan Birch on the edge cases of sentience and why they matter</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing <a href="mailto:podcast@80000hours.org">podcast@80000hours.org</a>.</p><p>Chapters:</p><ul><li>Luisa’s intro (00:00:00)</li><li>The history of neonatal surgery without anaesthetic (00:00:23)</li><li>Overconfidence around disorders of consciousness (00:03:17)</li><li>Separating abortion from the issue of foetal sentience (00:07:26)</li><li>The cases for and against neural organoids (00:11:30)</li><li>Artificial sentience arising from whole brain emulations of roundworms and fruit flies (00:15:45)</li><li>Using citizens' assemblies to do policymaking (00:22:00)</li></ul><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This is a selection of highlights from episode #196 of <a href="https://80000hours.org/podcast/"><em>The 80,000 Hours Podcast</em></a>. These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/jonathan-birch-edge-sentience-uncertainty/"><strong>Jonathan Birch on the edge cases of sentience and why they matter</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing <a href="mailto:podcast@80000hours.org">podcast@80000hours.org</a>.</p><p>Chapters:</p><ul><li>Luisa’s intro (00:00:00)</li><li>The history of neonatal surgery without anaesthetic (00:00:23)</li><li>Overconfidence around disorders of consciousness (00:03:17)</li><li>Separating abortion from the issue of foetal sentience (00:07:26)</li><li>The cases for and against neural organoids (00:11:30)</li><li>Artificial sentience arising from whole brain emulations of roundworms and fruit flies (00:15:45)</li><li>Using citizens' assemblies to do policymaking (00:22:00)</li></ul><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </content:encoded>
      <pubDate>Fri, 30 Aug 2024 14:26:22 +0000</pubDate>
      <author>The 80,000 Hours team</author>
      <enclosure url="https://media.transistor.fm/e5d93671/93771d32.mp3" length="18451730" type="audio/mpeg"/>
      <itunes:author>The 80,000 Hours team</itunes:author>
      <itunes:image href="https://img.transistor.fm/nYZ7DIerx9jOJXwuraLFJzWzKymDiGTG7Vy3fp8EXLc/rs:fill:0:0:1/w:3000/h:3000/q:60/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8zZmI1/NGI1NWRkY2Y4YjRh/YzAyNmZkYjYxNjhl/Y2EwOC5qcGc.jpg"/>
      <itunes:duration>1534</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This is a selection of highlights from episode #196 of <a href="https://80000hours.org/podcast/"><em>The 80,000 Hours Podcast</em></a>. These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/jonathan-birch-edge-sentience-uncertainty/"><strong>Jonathan Birch on the edge cases of sentience and why they matter</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing <a href="mailto:podcast@80000hours.org">podcast@80000hours.org</a>.</p><p>Chapters:</p><ul><li>Luisa’s intro (00:00:00)</li><li>The history of neonatal surgery without anaesthetic (00:00:23)</li><li>Overconfidence around disorders of consciousness (00:03:17)</li><li>Separating abortion from the issue of foetal sentience (00:07:26)</li><li>The cases for and against neural organoids (00:11:30)</li><li>Artificial sentience arising from whole brain emulations of roundworms and fruit flies (00:15:45)</li><li>Using citizens' assemblies to do policymaking (00:22:00)</li></ul><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </itunes:summary>
      <itunes:keywords>effective altruism, 80000 hours, impactful careers, existential risk</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/e5d93671/transcript.txt" type="text/plain"/>
      <podcast:chapters url="https://share.transistor.fm/s/e5d93671/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Highlights: #195 – Sella Nevo on who's trying to steal frontier AI models, and what they could do with them</title>
      <itunes:title>Highlights: #195 – Sella Nevo on who's trying to steal frontier AI models, and what they could do with them</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">7ff3f9d2-77bf-4e08-9deb-11ce0e2c3f20</guid>
      <link>https://80000hours.org/podcast/episodes/sella-nevo-securing-ai-model-weights/</link>
      <description>
        <![CDATA[<p>This is a selection of highlights from episode #195 of <a href="https://80000hours.org/podcast/"><em>The 80,000 Hours Podcast</em></a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/sella-nevo-securing-ai-model-weights/"><strong>Sella Nevo on who's trying to steal frontier AI models, and what they could do with them</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing <a href="mailto:podcast@80000hours.org">podcast@80000hours.org</a>.</p><p>Chapters:</p><ul><li>Luisa’s intro (00:00:00)</li><li>Why protect model weights? (00:00:23)</li><li>SolarWinds hack (00:03:51)</li><li>Zero-days (00:08:16)</li><li>Side-channel attacks (00:11:45)</li><li>USB cables (00:15:11)</li></ul><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This is a selection of highlights from episode #195 of <a href="https://80000hours.org/podcast/"><em>The 80,000 Hours Podcast</em></a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/sella-nevo-securing-ai-model-weights/"><strong>Sella Nevo on who's trying to steal frontier AI models, and what they could do with them</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing <a href="mailto:podcast@80000hours.org">podcast@80000hours.org</a>.</p><p>Chapters:</p><ul><li>Luisa’s intro (00:00:00)</li><li>Why protect model weights? (00:00:23)</li><li>SolarWinds hack (00:03:51)</li><li>Zero-days (00:08:16)</li><li>Side-channel attacks (00:11:45)</li><li>USB cables (00:15:11)</li></ul><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </content:encoded>
      <pubDate>Mon, 19 Aug 2024 18:56:28 +0000</pubDate>
      <author>The 80,000 Hours team</author>
      <enclosure url="https://media.transistor.fm/40e27db4/4cce7853.mp3" length="13044127" type="audio/mpeg"/>
      <itunes:author>The 80,000 Hours team</itunes:author>
      <itunes:image href="https://img.transistor.fm/vw11Gyczd_li-h6YBWjFs1S258eWf5L7LKdONYaiNJQ/rs:fill:0:0:1/w:3000/h:3000/q:60/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS84MjQx/M2UwYjBiMWUyNzc2/YWRjNTUwZjdmYTEx/NWQwZS5wbmc.jpg"/>
      <itunes:duration>1083</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This is a selection of highlights from episode #195 of <a href="https://80000hours.org/podcast/"><em>The 80,000 Hours Podcast</em></a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/sella-nevo-securing-ai-model-weights/"><strong>Sella Nevo on who's trying to steal frontier AI models, and what they could do with them</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing <a href="mailto:podcast@80000hours.org">podcast@80000hours.org</a>.</p><p>Chapters:</p><ul><li>Luisa’s intro (00:00:00)</li><li>Why protect model weights? (00:00:23)</li><li>SolarWinds hack (00:03:51)</li><li>Zero-days (00:08:16)</li><li>Side-channel attacks (00:11:45)</li><li>USB cables (00:15:11)</li></ul><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </itunes:summary>
      <itunes:keywords>effective altruism, 80000 hours, impactful careers, existential risk</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/40e27db4/transcript.txt" type="text/plain"/>
      <podcast:chapters url="https://share.transistor.fm/s/40e27db4/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Highlights: #194 – Vitalik Buterin on defensive acceleration and how to regulate AI when you fear government</title>
      <itunes:title>Highlights: #194 – Vitalik Buterin on defensive acceleration and how to regulate AI when you fear government</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">76a6bd71-d3cd-4d5e-9d28-3394461de66e</guid>
      <link>https://80000hours.org/podcast/episodes/vitalik-buterin-techno-optimism</link>
      <description>
        <![CDATA[<p>This is a selection of highlights from episode #194 of <a href="https://80000hours.org/podcast/"><em>The 80,000 Hours Podcast</em></a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/vitalik-buterin-techno-optimism/"><strong>Vitalik Buterin on defensive acceleration and how to regulate AI when you fear government</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing <a href="mailto:podcast@80000hours.org">podcast@80000hours.org</a>.</p><p>Chapters:</p><ul><li>Rob’s intro (00:00:00)</li><li>Vitalik's "d/acc" alternative (00:00:14)</li><li>Biodefence (00:05:31)</li><li>How much do people actually disagree? (00:09:49)</li><li>Distrust of authority is a big deal (00:15:09)</li><li>Info defence and X's Community Notes (00:19:35)</li><li>Quadratic voting and funding (00:26:22)</li><li>Vitalik's philosophy of half-assing everything (00:30:32)</li></ul><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This is a selection of highlights from episode #194 of <a href="https://80000hours.org/podcast/"><em>The 80,000 Hours Podcast</em></a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/vitalik-buterin-techno-optimism/"><strong>Vitalik Buterin on defensive acceleration and how to regulate AI when you fear government</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing <a href="mailto:podcast@80000hours.org">podcast@80000hours.org</a>.</p><p>Chapters:</p><ul><li>Rob’s intro (00:00:00)</li><li>Vitalik's "d/acc" alternative (00:00:14)</li><li>Biodefence (00:05:31)</li><li>How much do people actually disagree? (00:09:49)</li><li>Distrust of authority is a big deal (00:15:09)</li><li>Info defence and X's Community Notes (00:19:35)</li><li>Quadratic voting and funding (00:26:22)</li><li>Vitalik's philosophy of half-assing everything (00:30:32)</li></ul><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </content:encoded>
      <pubDate>Mon, 12 Aug 2024 14:54:47 +0000</pubDate>
      <author>The 80,000 Hours team</author>
      <enclosure url="https://media.transistor.fm/e9dcc456/72a40751.mp3" length="50896352" type="audio/mpeg"/>
      <itunes:author>The 80,000 Hours team</itunes:author>
      <itunes:image href="https://img.transistor.fm/ICJX2O2XUh1EQ0-BJnep51yCqKYCRYtvPbxmYh9QCmA/rs:fill:0:0:1/w:3000/h:3000/q:60/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8xZWU4/YWYyMTBiM2YzMzg1/OGZlOWMyZjNlMmUx/ZTRkMC5wbmc.jpg"/>
      <itunes:duration>2119</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This is a selection of highlights from episode #194 of <a href="https://80000hours.org/podcast/"><em>The 80,000 Hours Podcast</em></a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/vitalik-buterin-techno-optimism/"><strong>Vitalik Buterin on defensive acceleration and how to regulate AI when you fear government</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing <a href="mailto:podcast@80000hours.org">podcast@80000hours.org</a>.</p><p>Chapters:</p><ul><li>Rob’s intro (00:00:00)</li><li>Vitalik's "d/acc" alternative (00:00:14)</li><li>Biodefence (00:05:31)</li><li>How much do people actually disagree? (00:09:49)</li><li>Distrust of authority is a big deal (00:15:09)</li><li>Info defence and X's Community Notes (00:19:35)</li><li>Quadratic voting and funding (00:26:22)</li><li>Vitalik's philosophy of half-assing everything (00:30:32)</li></ul><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </itunes:summary>
      <itunes:keywords>effective altruism, 80000 hours, impactful careers, existential risk</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/e9dcc456/transcript.txt" type="text/plain"/>
      <podcast:chapters url="https://share.transistor.fm/s/e9dcc456/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Highlights: #193 – Sihao Huang on the risk that US–China AI competition leads to war</title>
      <itunes:title>Highlights: #193 – Sihao Huang on the risk that US–China AI competition leads to war</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">62f2aa88-ed49-4f73-8ab3-2f94ee7488c7</guid>
      <link>https://80000hours.org/podcast/episodes/sihao-huang-china-ai-capabilities/</link>
      <description>
        <![CDATA[<p>This is a selection of highlights from episode #193 of <a href="https://80000hours.org/podcast/"><em>The 80,000 Hours Podcast</em></a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/sihao-huang-china-ai-capabilities/"><strong>Sihao Huang on the risk that US–China AI competition leads to war</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing <a href="mailto:podcast@80000hours.org">podcast@80000hours.org</a>.</p><p>Chapters:</p><ul><li>Luisa’s intro (00:00:00)</li><li>How advanced is Chinese AI? (00:00:25)</li><li>Is China catching up to the US and UK? (00:05:14)</li><li>Could China be a source of catastrophic AI risk? (00:07:50)</li><li>AI enabling human rights abuses and undermining democracy (00:13:53)</li><li>China's attempts at indigenising its semiconductor supply chain (00:18:14)</li><li>How the US and UK might coordinate with China (00:20:32)</li></ul><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This is a selection of highlights from episode #193 of <a href="https://80000hours.org/podcast/"><em>The 80,000 Hours Podcast</em></a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/sihao-huang-china-ai-capabilities/"><strong>Sihao Huang on the risk that US–China AI competition leads to war</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing <a href="mailto:podcast@80000hours.org">podcast@80000hours.org</a>.</p><p>Chapters:</p><ul><li>Luisa’s intro (00:00:00)</li><li>How advanced is Chinese AI? (00:00:25)</li><li>Is China catching up to the US and UK? (00:05:14)</li><li>Could China be a source of catastrophic AI risk? (00:07:50)</li><li>AI enabling human rights abuses and undermining democracy (00:13:53)</li><li>China's attempts at indigenising its semiconductor supply chain (00:18:14)</li><li>How the US and UK might coordinate with China (00:20:32)</li></ul><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </content:encoded>
      <pubDate>Wed, 31 Jul 2024 13:43:23 +0000</pubDate>
      <author>The 80,000 Hours team</author>
      <enclosure url="https://media.transistor.fm/713a4d17/27b755b8.mp3" length="17890862" type="audio/mpeg"/>
      <itunes:author>The 80,000 Hours team</itunes:author>
      <itunes:image href="https://img.transistor.fm/s4NRuf-gDPPS27Q75a53QF2KBgPp1gDhPAtJ8vfp0uQ/rs:fill:0:0:1/w:3000/h:3000/q:60/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS80M2Ri/MjVlNTExZDM3OGQ1/NTQ0M2ExODYwNjBl/YWJlNi5qcGc.jpg"/>
      <itunes:duration>1488</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This is a selection of highlights from episode #193 of <a href="https://80000hours.org/podcast/"><em>The 80,000 Hours Podcast</em></a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/sihao-huang-china-ai-capabilities/"><strong>Sihao Huang on the risk that US–China AI competition leads to war</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing <a href="mailto:podcast@80000hours.org">podcast@80000hours.org</a>.</p><p>Chapters:</p><ul><li>Luisa’s intro (00:00:00)</li><li>How advanced is Chinese AI? (00:00:25)</li><li>Is China catching up to the US and UK? (00:05:14)</li><li>Could China be a source of catastrophic AI risk? (00:07:50)</li><li>AI enabling human rights abuses and undermining democracy (00:13:53)</li><li>China's attempts at indigenising its semiconductor supply chain (00:18:14)</li><li>How the US and UK might coordinate with China (00:20:32)</li></ul><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </itunes:summary>
      <itunes:keywords>effective altruism, 80000 hours, impactful careers, existential risk</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/713a4d17/transcript.txt" type="text/plain"/>
      <podcast:chapters url="https://share.transistor.fm/s/713a4d17/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Highlights: #192 – Annie Jacobsen on what would happen if North Korea launched a nuclear weapon at the US</title>
      <itunes:title>Highlights: #192 – Annie Jacobsen on what would happen if North Korea launched a nuclear weapon at the US</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">cf56873e-1269-4ef9-881e-609fc2a982d5</guid>
      <link>https://80000hours.org/podcast/episodes/annie-jacobsen-nuclear-catastrophe-escalation/</link>
      <description>
        <![CDATA[<p>This is a selection of highlights from episode #192 of <a href="https://80000hours.org/podcast/"><em>The 80,000 Hours Podcast</em></a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/annie-jacobsen-nuclear-catastrophe-escalation/"><strong>Annie Jacobsen on what would happen if North Korea launched a nuclear weapon at the US</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing <a href="mailto:podcast@80000hours.org">podcast@80000hours.org</a>.</p><p>Chapters:</p><ul><li>Luisa’s intro (00:00:00)</li><li>The minutes after an incoming nuclear attack is detected (00:00:22)</li><li>Deciding whether to retaliate (00:04:24)</li><li>Russian misperception of US counterattack (00:07:37)</li><li>The nuclear launch plans that would kill millions in neighbouring countries (00:11:38)</li><li>The war games that suggest escalation is inevitable (00:15:31)</li><li>A super-electromagnetic pulse (00:19:12)</li></ul><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This is a selection of highlights from episode #192 of <a href="https://80000hours.org/podcast/"><em>The 80,000 Hours Podcast</em></a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/annie-jacobsen-nuclear-catastrophe-escalation/"><strong>Annie Jacobsen on what would happen if North Korea launched a nuclear weapon at the US</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing <a href="mailto:podcast@80000hours.org">podcast@80000hours.org</a>.</p><p>Chapters:</p><ul><li>Luisa’s intro (00:00:00)</li><li>The minutes after an incoming nuclear attack is detected (00:00:22)</li><li>Deciding whether to retaliate (00:04:24)</li><li>Russian misperception of US counterattack (00:07:37)</li><li>The nuclear launch plans that would kill millions in neighbouring countries (00:11:38)</li><li>The war games that suggest escalation is inevitable (00:15:31)</li><li>A super-electromagnetic pulse (00:19:12)</li></ul><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </content:encoded>
      <pubDate>Thu, 25 Jul 2024 14:54:28 +0000</pubDate>
      <author>The 80,000 Hours team</author>
      <enclosure url="https://media.transistor.fm/96d9b97c/bc8aa2fc.mp3" length="16993462" type="audio/mpeg"/>
      <itunes:author>The 80,000 Hours team</itunes:author>
      <itunes:image href="https://img.transistor.fm/Ap-oSZZ52kWFjiydCHEBbrWwSmJRXDNg63H4BQ5esXo/rs:fill:0:0:1/w:3000/h:3000/q:60/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS84MTNi/OWNlZTI0NTQ3YmE2/NWNmYjExYmE0MTE5/NTBiZi5qcGc.jpg"/>
      <itunes:duration>1413</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This is a selection of highlights from episode #192 of <a href="https://80000hours.org/podcast/"><em>The 80,000 Hours Podcast</em></a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/annie-jacobsen-nuclear-catastrophe-escalation/"><strong>Annie Jacobsen on what would happen if North Korea launched a nuclear weapon at the US</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing <a href="mailto:podcast@80000hours.org">podcast@80000hours.org</a>.</p><p>Chapters:</p><ul><li>Luisa’s intro (00:00:00)</li><li>The minutes after an incoming nuclear attack is detected (00:00:22)</li><li>Deciding whether to retaliate (00:04:24)</li><li>Russian misperception of US counterattack (00:07:37)</li><li>The nuclear launch plans that would kill millions in neighbouring countries (00:11:38)</li><li>The war games that suggest escalation is inevitable (00:15:31)</li><li>A super-electromagnetic pulse (00:19:12)</li></ul><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </itunes:summary>
      <itunes:keywords>effective altruism, 80000 hours, impactful careers, existential risk</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/96d9b97c/transcript.txt" type="text/plain"/>
      <podcast:chapters url="https://share.transistor.fm/s/96d9b97c/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Off the Clock #5: Leaving 80k with Maria Gutierrez Rojas</title>
      <itunes:title>Off the Clock #5: Leaving 80k with Maria Gutierrez Rojas</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">5d3bdd28-ae92-4798-a685-b6e26c7ce561</guid>
      <link>https://share.transistor.fm/s/d2f49a44</link>
      <description>
        <![CDATA[<p>You can check out the video version of this episode on YouTube at <a href="https://youtu.be/AUuEaYltONg"><strong>https://youtu.be/AUuEaYltONg</strong></a></p><p>Matt, Bella, and Cody sit down with Maria Gutierrez Rojas to discuss 80k’s aesthetics, religion (again), bad billionaires, and why it’s hard to be an org that both gives advice and has opinions.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>You can check out the video version of this episode on YouTube at <a href="https://youtu.be/AUuEaYltONg"><strong>https://youtu.be/AUuEaYltONg</strong></a></p><p>Matt, Bella, and Cody sit down with Maria Gutierrez Rojas to discuss 80k’s aesthetics, religion (again), bad billionaires, and why it’s hard to be an org that both gives advice and has opinions.</p>]]>
      </content:encoded>
      <pubDate>Tue, 23 Jul 2024 17:14:03 +0000</pubDate>
      <author>The 80,000 Hours team</author>
      <enclosure url="https://media.transistor.fm/d2f49a44/e779a487.mp3" length="80076308" type="audio/mpeg"/>
      <itunes:author>The 80,000 Hours team</itunes:author>
      <itunes:image href="https://img.transistor.fm/4tj0hWcC5mt24LxVf2AYBHuvDPvF2CgqvMTZALa8gKE/rs:fill:0:0:1/w:3000/h:3000/q:60/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lNWI2/ZTk1NjE3ZDY0Y2Iz/MGI4MzUyMzExNDkz/OGY1MS5qcGc.jpg"/>
      <itunes:duration>5003</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>You can check out the video version of this episode on YouTube at <a href="https://youtu.be/AUuEaYltONg"><strong>https://youtu.be/AUuEaYltONg</strong></a></p><p>Matt, Bella, and Cody sit down with Maria Gutierrez Rojas to discuss 80k’s aesthetics, religion (again), bad billionaires, and why it’s hard to be an org that both gives advice and has opinions.</p>]]>
      </itunes:summary>
      <itunes:keywords>effective altruism, 80000 hours, impactful careers, existential risk</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/d2f49a44/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>Highlights: #191 (Part 2) – Carl Shulman on government and society after AGI</title>
      <itunes:title>Highlights: #191 (Part 2) – Carl Shulman on government and society after AGI</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">39877bfc-e556-4e11-95a1-f5646f1aedd2</guid>
      <link>https://80000hours.org/podcast/episodes/carl-shulman-society-agi/</link>
      <description>
        <![CDATA[<p>This is a selection of highlights from episode #191 of <a href="https://80000hours.org/podcast/"><em>The 80,000 Hours Podcast</em></a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/carl-shulman-society-agi/"><strong>Carl Shulman on government and society after AGI</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing <a href="mailto:podcast@80000hours.org">podcast@80000hours.org</a>.</p><p>Chapters:</p><ul><li>How AI advisors could have saved us from COVID-19 (00:00:05)</li><li>Why Carl doesn't support enforced pauses on AI research (00:06:34)</li><li>Value lock-in (00:12:58)</li><li>How democracies avoid coups (00:17:11)</li><li>Building trust between adversaries about which models you can believe (00:24:00)</li><li>Opportunities for listeners (00:30:11)</li></ul><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This is a selection of highlights from episode #191 of <a href="https://80000hours.org/podcast/"><em>The 80,000 Hours Podcast</em></a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/carl-shulman-society-agi/"><strong>Carl Shulman on government and society after AGI</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing <a href="mailto:podcast@80000hours.org">podcast@80000hours.org</a>.</p><p>Chapters:</p><ul><li>How AI advisors could have saved us from COVID-19 (00:00:05)</li><li>Why Carl doesn't support enforced pauses on AI research (00:06:34)</li><li>Value lock-in (00:12:58)</li><li>How democracies avoid coups (00:17:11)</li><li>Building trust between adversaries about which models you can believe (00:24:00)</li><li>Opportunities for listeners (00:30:11)</li></ul><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </content:encoded>
      <pubDate>Fri, 19 Jul 2024 15:33:45 +0000</pubDate>
      <author>The 80,000 Hours team</author>
      <enclosure url="https://media.transistor.fm/249129a8/d218dbad.mp3" length="47688852" type="audio/mpeg"/>
      <itunes:author>The 80,000 Hours team</itunes:author>
      <itunes:image href="https://img.transistor.fm/4ngkjb29S1QOj2oNs4nwByJrIT0j71mmNTR_kEOV49M/rs:fill:0:0:1/w:3000/h:3000/q:60/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8xNjE1/NzYxZmU4Mzc0YmEz/ZjQyNjRmODYzNTk4/NzcwNi5qcGc.jpg"/>
      <itunes:duration>1986</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This is a selection of highlights from episode #191 of <a href="https://80000hours.org/podcast/"><em>The 80,000 Hours Podcast</em></a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/carl-shulman-society-agi/"><strong>Carl Shulman on government and society after AGI</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing <a href="mailto:podcast@80000hours.org">podcast@80000hours.org</a>.</p><p>Chapters:</p><ul><li>How AI advisors could have saved us from COVID-19 (00:00:05)</li><li>Why Carl doesn't support enforced pauses on AI research (00:06:34)</li><li>Value lock-in (00:12:58)</li><li>How democracies avoid coups (00:17:11)</li><li>Building trust between adversaries about which models you can believe (00:24:00)</li><li>Opportunities for listeners (00:30:11)</li></ul><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </itunes:summary>
      <itunes:keywords>effective altruism, 80000 hours, impactful careers, existential risk</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/249129a8/transcript.txt" type="text/plain"/>
      <podcast:chapters url="https://share.transistor.fm/s/249129a8/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Highlights: #191 (Part 1) – Carl Shulman on the economy and national security after AGI</title>
      <itunes:title>Highlights: #191 (Part 1) – Carl Shulman on the economy and national security after AGI</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">188a3e39-f554-479e-aece-23c771b0fa29</guid>
      <link>https://80000hours.org/podcast/episodes/carl-shulman-economy-agi/</link>
      <description>
        <![CDATA[<p>This is a selection of highlights from episode #191 of <a href="https://80000hours.org/podcast/"><em>The 80,000 Hours Podcast</em></a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/carl-shulman-economy-agi/"><strong>Carl Shulman on the economy and national security after AGI</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing <a href="mailto:podcast@80000hours.org">podcast@80000hours.org</a>.</p><p>Chapters:</p><ul><li>Intro (00:00:00)</li><li>Robot nannies (00:00:23)</li><li>Key transformations after an AI capabilities explosion (00:05:15)</li><li>Objection: Shouldn't we be seeing economic growth rates increasing today? (00:10:28)</li><li>Objection: Declining returns to increases in intelligence? (00:16:09)</li><li>Objection: Could we really see rates of construction go up a hundredfold or a thousandfold? (00:20:58)</li><li>Objection: "This sounds completely whack" (00:26:10)</li><li>Income and wealth distribution (00:30:02)</li></ul><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This is a selection of highlights from episode #191 of <a href="https://80000hours.org/podcast/"><em>The 80,000 Hours Podcast</em></a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/carl-shulman-economy-agi/"><strong>Carl Shulman on the economy and national security after AGI</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing <a href="mailto:podcast@80000hours.org">podcast@80000hours.org</a>.</p><p>Chapters:</p><ul><li>Intro (00:00:00)</li><li>Robot nannies (00:00:23)</li><li>Key transformations after an AI capabilities explosion (00:05:15)</li><li>Objection: Shouldn't we be seeing economic growth rates increasing today? (00:10:28)</li><li>Objection: Declining returns to increases in intelligence? (00:16:09)</li><li>Objection: Could we really see rates of construction go up a hundredfold or a thousandfold? (00:20:58)</li><li>Objection: "This sounds completely whack" (00:26:10)</li><li>Income and wealth distribution (00:30:02)</li></ul><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </content:encoded>
      <pubDate>Thu, 11 Jul 2024 13:30:20 +0000</pubDate>
      <author>The 80,000 Hours team</author>
      <enclosure url="https://media.transistor.fm/bb38ef46/491f8c97.mp3" length="25383629" type="audio/mpeg"/>
      <itunes:author>The 80,000 Hours team</itunes:author>
      <itunes:image href="https://img.transistor.fm/YIEPjVCWFcz__YkXySns9swLcKJUbqhXaGlgQM7tS_M/rs:fill:0:0:1/w:3000/h:3000/q:60/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9hNzk3/YmE1NTYxMjEyNmU5/N2E5NzBmMDIzYzZm/NWVkYi5qcGc.jpg"/>
      <itunes:duration>2112</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This is a selection of highlights from episode #191 of <a href="https://80000hours.org/podcast/"><em>The 80,000 Hours Podcast</em></a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/carl-shulman-economy-agi/"><strong>Carl Shulman on the economy and national security after AGI</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing <a href="mailto:podcast@80000hours.org">podcast@80000hours.org</a>.</p><p>Chapters:</p><ul><li>Intro (00:00:00)</li><li>Robot nannies (00:00:23)</li><li>Key transformations after an AI capabilities explosion (00:05:15)</li><li>Objection: Shouldn't we be seeing economic growth rates increasing today? (00:10:28)</li><li>Objection: Declining returns to increases in intelligence? (00:16:09)</li><li>Objection: Could we really see rates of construction go up a hundredfold or a thousandfold? (00:20:58)</li><li>Objection: "This sounds completely whack" (00:26:10)</li><li>Income and wealth distribution (00:30:02)</li></ul><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </itunes:summary>
      <itunes:keywords>effective altruism, 80000 hours, impactful careers, existential risk</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/bb38ef46/transcript.txt" type="text/plain"/>
      <podcast:chapters url="https://share.transistor.fm/s/bb38ef46/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Highlights: #190 – Eric Schwitzgebel on whether the US is conscious</title>
      <itunes:title>Highlights: #190 – Eric Schwitzgebel on whether the US is conscious</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">254d347b-a526-4546-921d-e8506610b925</guid>
      <link>https://80000hours.org/podcast/episodes/eric-schwitzgebel-world-weird-us-consciousness/</link>
      <description>
        <![CDATA[<p>This is a selection of highlights from episode #190 of <a href="https://80000hours.org/podcast/"><em>The 80,000 Hours Podcast</em></a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/eric-schwitzgebel-world-weird-us-consciousness/"><strong>Eric Schwitzgebel on whether the US is conscious</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing <a href="mailto:podcast@80000hours.org">podcast@80000hours.org</a>.</p><p>Chapters:</p><ul><li>Luisa’s intro (00:00:00)</li><li>Can consciousness be nested? (00:00:18)</li><li>Are our intuitions useless for thinking about these things? (00:05:45)</li><li>Do small differences rule out consciousness? (00:09:43)</li><li>Overlapping consciousnesses (00:13:26)</li><li>Are we dreaming right now? (00:17:21)</li></ul><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This is a selection of highlights from episode #190 of <a href="https://80000hours.org/podcast/"><em>The 80,000 Hours Podcast</em></a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/eric-schwitzgebel-world-weird-us-consciousness/"><strong>Eric Schwitzgebel on whether the US is conscious</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing <a href="mailto:podcast@80000hours.org">podcast@80000hours.org</a>.</p><p>Chapters:</p><ul><li>Luisa’s intro (00:00:00)</li><li>Can consciousness be nested? (00:00:18)</li><li>Are our intuitions useless for thinking about these things? (00:05:45)</li><li>Do small differences rule out consciousness? (00:09:43)</li><li>Overlapping consciousnesses (00:13:26)</li><li>Are we dreaming right now? (00:17:21)</li></ul><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </content:encoded>
      <pubDate>Fri, 21 Jun 2024 14:08:10 +0000</pubDate>
      <author>The 80,000 Hours team</author>
      <enclosure url="https://media.transistor.fm/29f7851a/71af5593.mp3" length="32337649" type="audio/mpeg"/>
      <itunes:author>The 80,000 Hours team</itunes:author>
      <itunes:image href="https://img.transistor.fm/xVhH1n6IehP_-rxyrX3R3H4ApNIBN37PxuqF-cyekcs/rs:fill:0:0:1/w:3000/h:3000/q:60/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS82YWM2/MGIxZDg2ZmRhMmQz/YTJmMjZlMmJhNWVi/NWI2My5qcGc.jpg"/>
      <itunes:duration>1342</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This is a selection of highlights from episode #190 of <a href="https://80000hours.org/podcast/"><em>The 80,000 Hours Podcast</em></a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/eric-schwitzgebel-world-weird-us-consciousness/"><strong>Eric Schwitzgebel on whether the US is conscious</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing <a href="mailto:podcast@80000hours.org">podcast@80000hours.org</a>.</p><p>Chapters:</p><ul><li>Luisa’s intro (00:00:00)</li><li>Can consciousness be nested? (00:00:18)</li><li>Are our intuitions useless for thinking about these things? (00:05:45)</li><li>Do small differences rule out consciousness? (00:09:43)</li><li>Overlapping consciousnesses (00:13:26)</li><li>Are we dreaming right now? (00:17:21)</li></ul><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </itunes:summary>
      <itunes:keywords>effective altruism, 80000 hours, impactful careers, existential risk</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/29f7851a/transcript.txt" type="text/plain"/>
      <podcast:chapters url="https://share.transistor.fm/s/29f7851a/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Highlights: #189 – Rachel Glennerster on how “market shaping” could help solve climate change, pandemics, and other global problems</title>
      <itunes:title>Highlights: #189 – Rachel Glennerster on how “market shaping” could help solve climate change, pandemics, and other global problems</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">f0899a05-0a4b-4b5d-8ebd-78081fc62c6e</guid>
      <link>https://80000hours.org/podcast/episodes/rachel-glennerster-market-shaping-incentives/</link>
      <description>
        <![CDATA[<p>This is a selection of highlights from episode #189 of <a href="https://80000hours.org/podcast/"><em>The 80,000 Hours Podcast</em></a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/rachel-glennerster-market-shaping-incentives/"><strong>Rachel Glennerster on how “market shaping” could help solve climate change, pandemics, and other global problems</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing <a href="mailto:podcast@80000hours.org">podcast@80000hours.org</a>.</p><p>Chapters:</p><ul><li>Luisa's intro (00:00:00)</li><li>What is market shaping? (00:00:25)</li><li>Why some countries didn't have COVID vaccines sooner (00:05:04)</li><li>Designing incentives for pull mechanisms (00:09:12)</li><li>Using pull mechanisms to get a universal COVID vaccine (00:13:31)</li><li>Pull mechanisms to incentivise repurposing of generic drugs (00:18:20)</li><li>Specific interventions versus systemic reform in education (00:23:25)</li></ul><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This is a selection of highlights from episode #189 of <a href="https://80000hours.org/podcast/"><em>The 80,000 Hours Podcast</em></a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/rachel-glennerster-market-shaping-incentives/"><strong>Rachel Glennerster on how “market shaping” could help solve climate change, pandemics, and other global problems</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing <a href="mailto:podcast@80000hours.org">podcast@80000hours.org</a>.</p><p>Chapters:</p><ul><li>Luisa's intro (00:00:00)</li><li>What is market shaping? (00:00:25)</li><li>Why some countries didn't have COVID vaccines sooner (00:05:04)</li><li>Designing incentives for pull mechanisms (00:09:12)</li><li>Using pull mechanisms to get a universal COVID vaccine (00:13:31)</li><li>Pull mechanisms to incentivise repurposing of generic drugs (00:18:20)</li><li>Specific interventions versus systemic reform in education (00:23:25)</li></ul><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </content:encoded>
      <pubDate>Wed, 12 Jun 2024 13:53:08 +0000</pubDate>
      <author>The 80,000 Hours team</author>
      <enclosure url="https://media.transistor.fm/d473cc57/1dcd4120.mp3" length="19069052" type="audio/mpeg"/>
      <itunes:author>The 80,000 Hours team</itunes:author>
      <itunes:image href="https://img.transistor.fm/oqBREQ0kkhgVmZI2HBO-t9pZ_MsD3082EmahLICk9t0/rs:fill:0:0:1/w:3000/h:3000/q:60/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9jZTg3/NDc3MTkzNTdhMzgx/YTgzM2MzYmIzMGMx/NDQzYi5qcGVn.jpg"/>
      <itunes:duration>1587</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This is a selection of highlights from episode #189 of <a href="https://80000hours.org/podcast/"><em>The 80,000 Hours Podcast</em></a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/rachel-glennerster-market-shaping-incentives/"><strong>Rachel Glennerster on how “market shaping” could help solve climate change, pandemics, and other global problems</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing <a href="mailto:podcast@80000hours.org">podcast@80000hours.org</a>.</p><p>Chapters:</p><ul><li>Luisa's intro (00:00:00)</li><li>What is market shaping? (00:00:25)</li><li>Why some countries didn't have COVID vaccines sooner (00:05:04)</li><li>Designing incentives for pull mechanisms (00:09:12)</li><li>Using pull mechanisms to get a universal COVID vaccine (00:13:31)</li><li>Pull mechanisms to incentivise repurposing of generic drugs (00:18:20)</li><li>Specific interventions versus systemic reform in education (00:23:25)</li></ul><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </itunes:summary>
      <itunes:keywords>effective altruism, 80000 hours, impactful careers, existential risk</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/d473cc57/transcript.txt" type="text/plain"/>
      <podcast:chapters url="https://share.transistor.fm/s/d473cc57/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Highlights: #188 – Matt Clancy on whether science is good</title>
      <itunes:title>Highlights: #188 – Matt Clancy on whether science is good</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">cb93af33-a9f3-4fb2-8cb3-cf41331d674b</guid>
      <link>https://80000hours.org/podcast/episodes/matt-clancy-whether-science-is-good/</link>
      <description>
        <![CDATA[<p>This is a selection of highlights from episode #188 of <a href="https://80000hours.org/podcast/"><em>The 80,000 Hours Podcast</em></a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/matt-clancy-whether-science-is-good/"><strong>Matt Clancy on whether science is good</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing <a href="mailto:podcast@80000hours.org">podcast@80000hours.org</a>.</p><p>Chapters:</p><ul><li>Luisa’s intro (00:00:00)</li><li>How could scientific progress be net negative? (00:00:15)</li><li>Non-philosophical reasons to discount the far-future (00:03:42)</li><li>How technology generates huge benefits in our day-to-day lives (00:07:54)</li><li>Can science reduce extinction risk? (00:14:17)</li><li>Are we already too late to delay the time of perils? (00:18:48)</li><li>The omnipresent frictions that might prevent explosive economic growth (00:21:59)</li></ul><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This is a selection of highlights from episode #188 of <a href="https://80000hours.org/podcast/"><em>The 80,000 Hours Podcast</em></a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/matt-clancy-whether-science-is-good/"><strong>Matt Clancy on whether science is good</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing <a href="mailto:podcast@80000hours.org">podcast@80000hours.org</a>.</p><p>Chapters:</p><ul><li>Luisa’s intro (00:00:00)</li><li>How could scientific progress be net negative? (00:00:15)</li><li>Non-philosophical reasons to discount the far-future (00:03:42)</li><li>How technology generates huge benefits in our day-to-day lives (00:07:54)</li><li>Can science reduce extinction risk? (00:14:17)</li><li>Are we already too late to delay the time of perils? (00:18:48)</li><li>The omnipresent frictions that might prevent explosive economic growth (00:21:59)</li></ul><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </content:encoded>
      <pubDate>Thu, 06 Jun 2024 13:44:36 +0000</pubDate>
      <author>The 80,000 Hours team</author>
      <enclosure url="https://media.transistor.fm/ddb483fa/43d08717.mp3" length="38139351" type="audio/mpeg"/>
      <itunes:author>The 80,000 Hours team</itunes:author>
      <itunes:image href="https://img.transistor.fm/Co2dgDKlxyS4jKhqOqxlIxKQOKB25SLN_40wgtfLXmc/rs:fill:0:0:1/w:3000/h:3000/q:60/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9kYzE1/ODc1NDE2YzAxYTgx/NDRjMTgzMzc4NTJm/NjgwMS5qcGVn.jpg"/>
      <itunes:duration>1588</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This is a selection of highlights from episode #188 of <a href="https://80000hours.org/podcast/"><em>The 80,000 Hours Podcast</em></a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/matt-clancy-whether-science-is-good/"><strong>Matt Clancy on whether science is good</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing <a href="mailto:podcast@80000hours.org">podcast@80000hours.org</a>.</p><p>Chapters:</p><ul><li>Luisa’s intro (00:00:00)</li><li>How could scientific progress be net negative? (00:00:15)</li><li>Non-philosophical reasons to discount the far-future (00:03:42)</li><li>How technology generates huge benefits in our day-to-day lives (00:07:54)</li><li>Can science reduce extinction risk? (00:14:17)</li><li>Are we already too late to delay the time of perils? (00:18:48)</li><li>The omnipresent frictions that might prevent explosive economic growth (00:21:59)</li></ul><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </itunes:summary>
      <itunes:keywords>effective altruism, 80000 hours, impactful careers, existential risk</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/ddb483fa/transcript.txt" type="text/plain"/>
      <podcast:chapters url="https://share.transistor.fm/s/ddb483fa/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Off the Clock #4 (fka Actually After Hours): One Boxing with Julian Hazell</title>
      <itunes:title>Off the Clock #4 (fka Actually After Hours): One Boxing with Julian Hazell</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">cd267710-9b07-47fd-87dc-93cce4daabfd</guid>
      <link>https://share.transistor.fm/s/36390b96</link>
      <description>
        <![CDATA[<p>You can check out the video version of this episode on YouTube at <a href="https://www.youtube.com/watch?v=GFEb8ICWJQQ"><strong>https://www.youtube.com/watch?v=GFEb8ICWJQQ</strong></a></p><p>Matt, Bella, and Cody sit down with Julian Hazell to discuss the UK recession, religion, higher education, and whether being an amateur swordfighter should give you the right to vote.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>You can check out the video version of this episode on YouTube at <a href="https://www.youtube.com/watch?v=GFEb8ICWJQQ"><strong>https://www.youtube.com/watch?v=GFEb8ICWJQQ</strong></a></p><p>Matt, Bella, and Cody sit down with Julian Hazell to discuss the UK recession, religion, higher education, and whether being an amateur swordfighter should give you the right to vote.</p>]]>
      </content:encoded>
      <pubDate>Mon, 03 Jun 2024 17:31:07 +0000</pubDate>
      <author>The 80,000 Hours team</author>
      <enclosure url="https://media.transistor.fm/36390b96/93c595b1.mp3" length="109285368" type="audio/mpeg"/>
      <itunes:author>The 80,000 Hours team</itunes:author>
      <itunes:image href="https://img.transistor.fm/GyHq_JVHnhq31tbqrPaGoGwRMfV1Ya9qUeaxBod_ajc/rs:fill:0:0:1/w:3000/h:3000/q:60/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS82NDc2/M2M4ZjdmZmQ1Mjdh/N2E5NjY4Y2E0NTM5/MDlkNC5qcGc.jpg"/>
      <itunes:duration>4552</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>You can check out the video version of this episode on YouTube at <a href="https://www.youtube.com/watch?v=GFEb8ICWJQQ"><strong>https://www.youtube.com/watch?v=GFEb8ICWJQQ</strong></a></p><p>Matt, Bella, and Cody sit down with Julian Hazell to discuss the UK recession, religion, higher education, and whether being an amateur swordfighter should give you the right to vote.</p>]]>
      </itunes:summary>
      <itunes:keywords>effective altruism, 80000 hours, impactful careers, existential risk</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
    </item>
    <item>
      <title>Highlights: #187 – Zach Weinersmith on how researching his book turned him from a space optimist into a “space bastard”</title>
      <itunes:title>Highlights: #187 – Zach Weinersmith on how researching his book turned him from a space optimist into a “space bastard”</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">67d8bbcd-68b7-49ef-85e2-40f7eac78a3c</guid>
      <link>https://80000hours.org/podcast/episodes/zach-weinersmith-space-settlement/</link>
      <description>
        <![CDATA[<p>This is a selection of highlights from episode #187 of <a href="https://80000hours.org/podcast/"><em>The 80,000 Hours Podcast</em></a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/zach-weinersmith-space-settlement/"><strong>Zach Weinersmith on how researching his book turned him from a space optimist into a “space bastard”</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing <a href="mailto:podcast@80000hours.org">podcast@80000hours.org</a>.</p><p>Chapters:</p><ul><li>A potted history of space exploration (00:00:23)</li><li>Why space settlement (probably) won't make us rich (00:06:07)</li><li>What happens to human bodies in space (00:11:43)</li><li>The ethics of space babies (00:16:05)</li><li>Making babies in space (00:18:40)</li><li>A roadmap for settling space (00:22:42)</li></ul><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This is a selection of highlights from episode #187 of <a href="https://80000hours.org/podcast/"><em>The 80,000 Hours Podcast</em></a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/zach-weinersmith-space-settlement/"><strong>Zach Weinersmith on how researching his book turned him from a space optimist into a “space bastard”</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing <a href="mailto:podcast@80000hours.org">podcast@80000hours.org</a>.</p><p>Chapters:</p><ul><li>A potted history of space exploration (00:00:23)</li><li>Why space settlement (probably) won't make us rich (00:06:07)</li><li>What happens to human bodies in space (00:11:43)</li><li>The ethics of space babies (00:16:05)</li><li>Making babies in space (00:18:40)</li><li>A roadmap for settling space (00:22:42)</li></ul><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </content:encoded>
      <pubDate>Tue, 28 May 2024 15:19:20 +0000</pubDate>
      <author>The 80,000 Hours team</author>
      <enclosure url="https://media.transistor.fm/d484d2f8/be02507a.mp3" length="18827857" type="audio/mpeg"/>
      <itunes:author>The 80,000 Hours team</itunes:author>
      <itunes:image href="https://img.transistor.fm/esgThBI55E1BVpB0rZqD88ra6U8ZkK0lBPyqVWm_1W4/rs:fill:0:0:1/w:3000/h:3000/q:60/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS84YWI4/MjFiYTkzYTM5Mzk1/YzI1NjVmZTk5ZWM2/MzU2Zi5qcGVn.jpg"/>
      <itunes:duration>1566</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This is a selection of highlights from episode #187 of <a href="https://80000hours.org/podcast/"><em>The 80,000 Hours Podcast</em></a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/zach-weinersmith-space-settlement/"><strong>Zach Weinersmith on how researching his book turned him from a space optimist into a “space bastard”</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing <a href="mailto:podcast@80000hours.org">podcast@80000hours.org</a>.</p><p>Chapters:</p><ul><li>A potted history of space exploration (00:00:23)</li><li>Why space settlement (probably) won't make us rich (00:06:07)</li><li>What happens to human bodies in space (00:11:43)</li><li>The ethics of space babies (00:16:05)</li><li>Making babies in space (00:18:40)</li><li>A roadmap for settling space (00:22:42)</li></ul><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </itunes:summary>
      <itunes:keywords>effective altruism, 80000 hours, impactful careers, existential risk</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/d484d2f8/transcript.txt" type="text/plain"/>
      <podcast:chapters url="https://share.transistor.fm/s/d484d2f8/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Highlights: #186 – Dean Spears on why babies are born small in Uttar Pradesh, and how to save their lives</title>
      <itunes:title>Highlights: #186 – Dean Spears on why babies are born small in Uttar Pradesh, and how to save their lives</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">4ccb1535-9e9b-4ae4-9457-896f72adf21a</guid>
      <link>https://80000hours.org/podcast/episodes/dean-spears-neonatal-mortality-kangaroo-mother-care/</link>
      <description>
        <![CDATA[<p>This is a selection of highlights from episode #186 of <a href="https://80000hours.org/podcast/"><em>The 80,000 Hours Podcast</em></a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/dean-spears-neonatal-mortality-kangaroo-mother-care/"><strong>Dean Spears on why babies are born small in Uttar Pradesh, and how to save their lives</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing <a href="mailto:podcast@80000hours.org">podcast@80000hours.org</a>.</p><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This is a selection of highlights from episode #186 of <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/dean-spears-neonatal-mortality-kangaroo-mother-care/"><strong>Dean Spears on why babies are born small in Uttar Pradesh, and how to save their lives</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.</p><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </content:encoded>
      <pubDate>Wed, 15 May 2024 23:19:35 +0000</pubDate>
      <author>The 80,000 Hours team</author>
      <enclosure url="https://media.transistor.fm/943f6a9e/d2fa9412.mp3" length="21499570" type="audio/mpeg"/>
      <itunes:author>The 80,000 Hours team</itunes:author>
      <itunes:image href="https://img.transistor.fm/KncEJrHfe2DHgo79Q0IcnUjTOjZxHpY2VQhulvYtYoA/rs:fill:0:0:1/w:3000/h:3000/q:60/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS81ZDEy/NWMwYzY2OThmZWRk/MTBiYTlhOGJhMDEy/ODNlNi5qcGc.jpg"/>
      <itunes:duration>895</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This is a selection of highlights from episode #186 of <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/dean-spears-neonatal-mortality-kangaroo-mother-care/"><strong>Dean Spears on why babies are born small in Uttar Pradesh, and how to save their lives</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.</p><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </itunes:summary>
      <itunes:keywords>effective altruism, 80000 hours, impactful careers, existential risk</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:chapters url="https://share.transistor.fm/s/943f6a9e/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Highlights: #185 – Lewis Bollard on the 7 most promising ways to end factory farming, and whether AI is going to be good or bad for animals</title>
      <itunes:title>Highlights: #185 – Lewis Bollard on the 7 most promising ways to end factory farming, and whether AI is going to be good or bad for animals</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">ce5e260b-88d8-40d6-8072-a2708d0e8e95</guid>
      <link>https://80000hours.org/podcast/episodes/lewis-bollard-factory-farm-advocacy-gains/</link>
      <description>
        <![CDATA[<p>This is a selection of highlights from episode #185 of <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/lewis-bollard-factory-farm-advocacy-gains/"><strong>Lewis Bollard on the 7 most promising ways to end factory farming, and whether AI is going to be good or bad for animals</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.</p><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This is a selection of highlights from episode #185 of <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/lewis-bollard-factory-farm-advocacy-gains/"><strong>Lewis Bollard on the 7 most promising ways to end factory farming, and whether AI is going to be good or bad for animals</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.</p><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </content:encoded>
      <pubDate>Thu, 02 May 2024 18:25:50 +0000</pubDate>
      <author>The 80,000 Hours team</author>
      <enclosure url="https://media.transistor.fm/690048ad/b1b7a1da.mp3" length="16293825" type="audio/mpeg"/>
      <itunes:author>The 80,000 Hours team</itunes:author>
      <itunes:image href="https://img.transistor.fm/SMbTPO9DSIDKN-abg18UWStEgAs7xkYI5DxtC-h8kD0/rs:fill:0:0:1/w:3000/h:3000/q:60/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9kMDJj/ZjQ4OWJhYjIxNjcx/ZDBjMWY1NDkwMjZk/NjZkYS5qcGc.jpg"/>
      <itunes:duration>1356</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This is a selection of highlights from episode #185 of <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/lewis-bollard-factory-farm-advocacy-gains/"><strong>Lewis Bollard on the 7 most promising ways to end factory farming, and whether AI is going to be good or bad for animals</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.</p><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </itunes:summary>
      <itunes:keywords>effective altruism, 80000 hours, impactful careers, existential risk</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:chapters url="https://share.transistor.fm/s/690048ad/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Highlights: #184 – Zvi Mowshowitz on sleeping on sleeper agents, and the biggest AI updates since ChatGPT</title>
      <itunes:title>Highlights: #184 – Zvi Mowshowitz on sleeping on sleeper agents, and the biggest AI updates since ChatGPT</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">f48b151f-8140-4053-b6a7-2f89e03f49d1</guid>
      <link>https://80000hours.org/podcast/episodes/zvi-mowshowitz-sleeper-agents-ai-updates/</link>
      <description>
        <![CDATA[<p>This is a selection of highlights from episode #184 of <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/zvi-mowshowitz-sleeper-agents-ai-updates/"><strong>Zvi Mowshowitz on sleeping on sleeper agents, and the biggest AI updates since ChatGPT</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.</p><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This is a selection of highlights from episode #184 of <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/zvi-mowshowitz-sleeper-agents-ai-updates/"><strong>Zvi Mowshowitz on sleeping on sleeper agents, and the biggest AI updates since ChatGPT</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.</p><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </content:encoded>
      <pubDate>Thu, 25 Apr 2024 19:19:01 +0000</pubDate>
      <author>The 80,000 Hours team</author>
      <enclosure url="https://media.transistor.fm/b9f21554/a60f4300.mp3" length="42552932" type="audio/mpeg"/>
      <itunes:author>The 80,000 Hours team</itunes:author>
      <itunes:image href="https://img.transistor.fm/ZiRfuhA4DniMJf6EHTbQp5AwqVl0kM9pX66p1TX7lxU/rs:fill:0:0:1/w:3000/h:3000/q:60/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS84NzA3/ZTI0OTUyOTdlOWY5/NzViM2I4MmQ1Yzdm/N2E4MC5qcGVn.jpg"/>
      <itunes:duration>1771</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This is a selection of highlights from episode #184 of <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/zvi-mowshowitz-sleeper-agents-ai-updates/"><strong>Zvi Mowshowitz on sleeping on sleeper agents, and the biggest AI updates since ChatGPT</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.</p><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </itunes:summary>
      <itunes:keywords>effective altruism, 80000 hours, impactful careers, existential risk</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:chapters url="https://share.transistor.fm/s/b9f21554/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Actually After Hours #3: Finding the Tail with Dwarkesh Patel</title>
      <itunes:title>Actually After Hours #3: Finding the Tail with Dwarkesh Patel</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">94dabdf8-812e-4ff0-ad6d-a115b79ee6d7</guid>
      <link>https://share.transistor.fm/s/345f258b</link>
      <description>
        <![CDATA[<p>Matt Reardon, Arden Koehler, and Huon Porteous sit down with <a href="https://www.dwarkeshpatel.com/podcast"><strong>Dwarkesh Patel</strong></a> to find out how you become a world-famous (among tech intellectuals) podcast host at 23. We also discuss how 80k would have advised 21-year-old Dwarkesh and 80k strategy more broadly.</p><p>You can check out the video version of this episode on YouTube at <a href="https://youtu.be/H5px6CQTe8o">https://youtu.be/H5px6CQTe8o</a></p><p>Topics covered:</p><ul><li>How did Dwarkesh start landing world-class guests?</li><li>Why is Bryan Caplan such an easy get?</li><li>How does Dwarkesh think about ideological labels?</li><li>Dwarkesh explains his pivot towards AI</li><li>Do intellectuals matter for progress?</li><li>Was Microsoft or the Gates Foundation more impactful?</li><li>Do biographies ever matter more than their subjects?</li><li>How would 80k have advised young Dwarkesh?</li><li>What does motivate people in government and what should motivate people in government?</li><li>Should do-gooders seek power?</li><li>Should 80k advice always aim at the tails?</li><li>Are people just layering their simple political memes onto the AI debate?</li><li>How do you boost people’s agency?</li><li>How do we feel about self-perceived entrepreneurs?</li><li>What’s the tradeoff between having the right initiative and having the right ideas?</li><li>How does 80k’s advice deal with AI timelines?  </li><li>Are 80k users self-selected for not being the highest potential people?</li><li>Should you assume that everyone can make it to the extreme tail?</li><li>In how many areas should 80k have detailed advice?</li><li>What happened to the EA brand?</li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Matt Reardon, Arden Koehler, and Huon Porteous sit down with <a href="https://www.dwarkeshpatel.com/podcast"><strong>Dwarkesh Patel</strong></a> to find out how you become a world-famous (among tech intellectuals) podcast host at 23. We also discuss how 80k would have advised 21-year-old Dwarkesh and 80k strategy more broadly.</p><p>You can check out the video version of this episode on YouTube at <a href="https://youtu.be/H5px6CQTe8o">https://youtu.be/H5px6CQTe8o</a></p><p>Topics covered:</p><ul><li>How did Dwarkesh start landing world-class guests?</li><li>Why is Bryan Caplan such an easy get?</li><li>How does Dwarkesh think about ideological labels?</li><li>Dwarkesh explains his pivot towards AI</li><li>Do intellectuals matter for progress?</li><li>Was Microsoft or the Gates Foundation more impactful?</li><li>Do biographies ever matter more than their subjects?</li><li>How would 80k have advised young Dwarkesh?</li><li>What does motivate people in government and what should motivate people in government?</li><li>Should do-gooders seek power?</li><li>Should 80k advice always aim at the tails?</li><li>Are people just layering their simple political memes onto the AI debate?</li><li>How do you boost people’s agency?</li><li>How do we feel about self-perceived entrepreneurs?</li><li>What’s the tradeoff between having the right initiative and having the right ideas?</li><li>How does 80k’s advice deal with AI timelines?  </li><li>Are 80k users self-selected for not being the highest potential people?</li><li>Should you assume that everyone can make it to the extreme tail?</li><li>In how many areas should 80k have detailed advice?</li><li>What happened to the EA brand?</li></ul>]]>
      </content:encoded>
      <pubDate>Tue, 23 Apr 2024 17:00:31 +0000</pubDate>
      <author>The 80,000 Hours team</author>
      <enclosure url="https://media.transistor.fm/345f258b/89c80552.mp3" length="137476267" type="audio/mpeg"/>
      <itunes:author>The 80,000 Hours team</itunes:author>
      <itunes:image href="https://img.transistor.fm/nS6r6DUMUgsS-Fjy5mej5joumhi906ZngdE9cEYnAJc/rs:fill:0:0:1/w:3000/h:3000/q:60/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8xYTc4/ZWRhMGUwZDQ0NmRi/NjBiYzEwYmU0YmRj/MDYzOC5qcGc.jpg"/>
      <itunes:duration>5727</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Matt Reardon, Arden Koehler, and Huon Porteous sit down with <a href="https://www.dwarkeshpatel.com/podcast"><strong>Dwarkesh Patel</strong></a> to find out how you become a world-famous (among tech intellectuals) podcast host at 23. We also discuss how 80k would have advised 21-year-old Dwarkesh and 80k strategy more broadly.</p><p>You can check out the video version of this episode on YouTube at <a href="https://youtu.be/H5px6CQTe8o">https://youtu.be/H5px6CQTe8o</a></p><p>Topics covered:</p><ul><li>How did Dwarkesh start landing world-class guests?</li><li>Why is Bryan Caplan such an easy get?</li><li>How does Dwarkesh think about ideological labels?</li><li>Dwarkesh explains his pivot towards AI</li><li>Do intellectuals matter for progress?</li><li>Was Microsoft or the Gates Foundation more impactful?</li><li>Do biographies ever matter more than their subjects?</li><li>How would 80k have advised young Dwarkesh?</li><li>What does motivate people in government and what should motivate people in government?</li><li>Should do-gooders seek power?</li><li>Should 80k advice always aim at the tails?</li><li>Are people just layering their simple political memes onto the AI debate?</li><li>How do you boost people’s agency?</li><li>How do we feel about self-perceived entrepreneurs?</li><li>What’s the tradeoff between having the right initiative and having the right ideas?</li><li>How does 80k’s advice deal with AI timelines?  </li><li>Are 80k users self-selected for not being the highest potential people?</li><li>Should you assume that everyone can make it to the extreme tail?</li><li>In how many areas should 80k have detailed advice?</li><li>What happened to the EA brand?</li></ul>]]>
      </itunes:summary>
      <itunes:keywords>effective altruism, 80000 hours, impactful careers, existential risk</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
    </item>
    <item>
      <title>Robert Wright &amp; Rob Wiblin on the truth about effective altruism</title>
      <itunes:title>Robert Wright &amp; Rob Wiblin on the truth about effective altruism</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">0104d92d-25cb-4a35-9084-31cb24b723ca</guid>
      <link>https://share.transistor.fm/s/cf301226</link>
      <description>
        <![CDATA[<p>This is a cross-post of an interview Rob Wiblin did on Robert Wright's Nonzero podcast in January 2024. You can get access to full episodes of that show by subscribing to the <a href="https://nonzero.substack.com/"><strong>Nonzero Newsletter</strong></a>.<br><br>They talk about Sam Bankman-Fried, virtue ethics, the growing influence of longtermism, what role EA played in the OpenAI board drama, the culture of local effective altruism groups, where Rob thinks people get EA most seriously wrong, what Rob fears most about rogue AI, the double-edged sword of AI-empowered governments, and flattening the curve of AI's social disruption.<br><br>And if you enjoy this, you could also check out episode 101 of The 80,000 Hours Podcast: <a href="https://80000hours.org/podcast/episodes/robert-wright-cognitive-empathy/"><strong>Robert Wright on using cognitive empathy to save the world</strong></a>.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This is a cross-post of an interview Rob Wiblin did on Robert Wright's Nonzero podcast in January 2024. You can get access to full episodes of that show by subscribing to the <a href="https://nonzero.substack.com/"><strong>Nonzero Newsletter</strong></a>.<br><br>They talk about Sam Bankman-Fried, virtue ethics, the growing influence of longtermism, what role EA played in the OpenAI board drama, the culture of local effective altruism groups, where Rob thinks people get EA most seriously wrong, what Rob fears most about rogue AI, the double-edged sword of AI-empowered governments, and flattening the curve of AI's social disruption.<br><br>And if you enjoy this, you could also check out episode 101 of The 80,000 Hours Podcast: <a href="https://80000hours.org/podcast/episodes/robert-wright-cognitive-empathy/"><strong>Robert Wright on using cognitive empathy to save the world</strong></a>.</p>]]>
      </content:encoded>
      <pubDate>Thu, 04 Apr 2024 17:57:54 +0000</pubDate>
      <author>The 80,000 Hours team</author>
      <enclosure url="https://media.transistor.fm/cf301226/d04e8966.mp3" length="122917092" type="audio/mpeg"/>
      <itunes:author>The 80,000 Hours team</itunes:author>
      <itunes:image href="https://img.transistor.fm/c91Ah_nm0AStyaHosseBcFi2iRQe4i_FTaXmdXyD7hs/rs:fill:0:0:1/w:3000/h:3000/q:60/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9jNTAy/ODMwZjA2YzgzNTU4/MzBhMTUwYzk1OTdh/YjQyMC5qcGc.jpg"/>
      <itunes:duration>7680</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This is a cross-post of an interview Rob Wiblin did on Robert Wright's Nonzero podcast in January 2024. You can get access to full episodes of that show by subscribing to the <a href="https://nonzero.substack.com/"><strong>Nonzero Newsletter</strong></a>.<br><br>They talk about Sam Bankman-Fried, virtue ethics, the growing influence of longtermism, what role EA played in the OpenAI board drama, the culture of local effective altruism groups, where Rob thinks people get EA most seriously wrong, what Rob fears most about rogue AI, the double-edged sword of AI-empowered governments, and flattening the curve of AI's social disruption.<br><br>And if you enjoy this, you could also check out episode 101 of The 80,000 Hours Podcast: <a href="https://80000hours.org/podcast/episodes/robert-wright-cognitive-empathy/"><strong>Robert Wright on using cognitive empathy to save the world</strong></a>.</p>]]>
      </itunes:summary>
      <itunes:keywords>effective altruism, 80000 hours, impactful careers, existential risk</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:chapters url="https://share.transistor.fm/s/cf301226/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Highlights: #183 – Spencer Greenberg on causation without correlation, money and happiness, lightgassing, hype vs value, and more</title>
      <itunes:title>Highlights: #183 – Spencer Greenberg on causation without correlation, money and happiness, lightgassing, hype vs value, and more</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">91f05279-e987-4419-b00a-0f440822380b</guid>
      <link>https://80000hours.org/podcast/episodes/spencer-greenberg-money-happiness-hype-value/</link>
      <description>
        <![CDATA[<p>This is a selection of highlights from episode #183 of <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/spencer-greenberg-money-happiness-hype-value/"><strong>Spencer Greenberg on causation without correlation, money and happiness, lightgassing, hype vs value, and more</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.</p><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This is a selection of highlights from episode #183 of <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/spencer-greenberg-money-happiness-hype-value/"><strong>Spencer Greenberg on causation without correlation, money and happiness, lightgassing, hype vs value, and more</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.</p><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </content:encoded>
      <pubDate>Fri, 29 Mar 2024 18:56:22 +0000</pubDate>
      <author>The 80,000 Hours team</author>
      <enclosure url="https://media.transistor.fm/ec9498b0/52c903fc.mp3" length="15271203" type="audio/mpeg"/>
      <itunes:author>The 80,000 Hours team</itunes:author>
      <itunes:image href="https://img.transistor.fm/7-VtqYmtFRkStAPgzIzDKOrN82jkLwmQbMzifO4r-T8/rs:fill:0:0:1/w:3000/h:3000/q:60/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lcGlz/b2RlLzE4MTk1MzMv/MTcxMTczODUzMS1h/cnR3b3JrLmpwZw.jpg"/>
      <itunes:duration>1266</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This is a selection of highlights from episode #183 of <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/spencer-greenberg-money-happiness-hype-value/"><strong>Spencer Greenberg on causation without correlation, money and happiness, lightgassing, hype vs value, and more</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.</p><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </itunes:summary>
      <itunes:keywords>effective altruism, 80000 hours, impactful careers, existential risk</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:chapters url="https://share.transistor.fm/s/ec9498b0/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Highlights: #182 – Bob Fischer on comparing the welfare of humans, chickens, pigs, octopuses, bees, and more</title>
      <itunes:title>Highlights: #182 – Bob Fischer on comparing the welfare of humans, chickens, pigs, octopuses, bees, and more</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">47fe3c49-5cf2-4361-9d7a-5f7c84e5852f</guid>
      <link>https://80000hours.org/podcast/episodes/bob-fischer-comparing-animal-welfare-moral-weight/</link>
      <description>
        <![CDATA[<p>This is a selection of highlights from episode #182 of <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/bob-fischer-comparing-animal-welfare-moral-weight/"><strong>Bob Fischer on comparing the welfare of humans, chickens, pigs, octopuses, bees, and more</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.</p><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This is a selection of highlights from episode #182 of <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/bob-fischer-comparing-animal-welfare-moral-weight/"><strong>Bob Fischer on comparing the welfare of humans, chickens, pigs, octopuses, bees, and more</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.</p><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </content:encoded>
      <pubDate>Tue, 26 Mar 2024 19:23:23 +0000</pubDate>
      <author>The 80,000 Hours team</author>
      <enclosure url="https://media.transistor.fm/0a4aa587/8d0bfbff.mp3" length="31047576" type="audio/mpeg"/>
      <itunes:author>The 80,000 Hours team</itunes:author>
      <itunes:image href="https://img.transistor.fm/5QXzDrPmcwizt9JojR1LP0tNlEWmLQcEyhHv1thwPmI/rs:fill:0:0:1/w:3000/h:3000/q:60/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lcGlz/b2RlLzE4MTI5Mjcv/MTcxMTQ4MDkzNC1h/cnR3b3JrLmpwZw.jpg"/>
      <itunes:duration>1939</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This is a selection of highlights from episode #182 of <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/bob-fischer-comparing-animal-welfare-moral-weight/"><strong>Bob Fischer on comparing the welfare of humans, chickens, pigs, octopuses, bees, and more</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.</p><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </itunes:summary>
      <itunes:keywords>effective altruism, 80000 hours, impactful careers, existential risk</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:chapters url="https://share.transistor.fm/s/0a4aa587/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Christian Ruhl on why we're entering a new nuclear age — and how to reduce the risks</title>
      <itunes:title>Christian Ruhl on why we're entering a new nuclear age — and how to reduce the risks</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">f830c8e0-496a-4643-a9dc-145619b40f50</guid>
      <link>https://80000hours.org/after-hours-podcast/episodes/christian-ruhl-nuclear-catastrophic-risks-philanthropy/?utm_campaign=podcast__christian-ruhl&amp;utm_source=80kah&amp;utm_medium=podcast</link>
      <description>
        <![CDATA[<p>"We really, really want to make sure that nuclear war never breaks out. But we also know — from all of the examples of the Cold War, all these close calls — that it very well could, as long as there are nuclear weapons in the world. So if it does, we want to have some ways of preventing that from turning into a civilisation-threatening, cataclysmic kind of war. And those kinds of interventions — war limitation, intrawar escalation management, civil defence — those are kind of the seatbelts and airbags of the nuclear world. So to borrow a phrase from one of my colleagues, right-of-boom is a class of interventions for when “<a href="https://www.founderspledge.com/research/when-shit-hits-the-fan">shit hits the fan</a>.” —Christian Ruhl</p><p><br>In this episode of <em>80k After Hours</em>, Luisa Rodriguez and Christian Ruhl discuss underrated best bets to <a href="https://80000hours.org/problem-profiles/civilisation-resilience/">avert civilisational collapse</a> from global catastrophic risks — things like <a href="https://80000hours.org/problem-profiles/great-power-conflict/">great power war</a>, frontier military technologies, and <a href="https://80000hours.org/problem-profiles/nuclear-security/">nuclear winter</a>.</p><p><a href="https://80k.info/cr"><strong>Links to learn more, summary, and full transcript.</strong></a></p><p>They cover:</p><ul><li>How the geopolitical situation has changed in recent years into a “three-body problem” between the US, Russia, and China.</li><li>How adding AI-enabled technologies into the mix makes things even more unstable and unpredictable.</li><li>Why Christian recommends many philanthropists focus on “right-of-boom” interventions — those that mitigate the damage <em>after </em>a catastrophe — over traditional preventative measures.</li><li>Concrete things policymakers should be considering to reduce the devastating effects of unthinkable tragedies.</li><li>And on a more personal note, Christian’s experience 
of having a stutter.</li></ul><p><strong>Who this episode is for:</strong></p><ul><li>People interested in the most cost-effective ways to prevent nuclear war, such as:<ul><li>Deescalating after accidental nuclear use.</li><li>Civil defence and war termination.</li><li>Mitigating nuclear winter.</li></ul></li></ul><p><strong>Who this episode isn’t for:</strong></p><ul><li>People interested in the least cost-effective ways to prevent nuclear war, such as:<ul><li>Coating every nuclear weapon on Earth in solid gold so they’re no longer functional.</li><li>Creating a TV show called <em>The Real Housewives of Nuclear Winter</em> about the personal and professional lives of women in Beverly Hills after a nuclear holocaust.</li><li>A multibillion dollar programme to invent a laser beam that could write permanent messages on the Moon, and using it just once to spell out #nonukesnovember.</li></ul></li></ul><p><strong>Chapters:</strong></p><ul><li>The three-body problem (00:04:11)</li><li>Effect of AI (00:07:58)</li><li>What we have going for us, and not (00:13:32)</li><li>Right-of-boom interventions (00:17:50)</li><li>Deescalating after accidental nuclear use (00:24:23)</li><li>Civil defence and war termination (00:30:40)</li><li>Mitigating nuclear winter (00:37:07)</li><li>Planning for a postwar political environment (00:40:19)</li><li>Experience of having a stutter (00:53:52)</li><li>Christian’s archaeological excavation in Guatemala (01:09:51)</li></ul><p><br><em>Producer: Keiran Harris<br>Audio Engineering Lead: Ben Cordell<br>Technical editing: Ben Cordell and Milo McGuire<br>Content editing: Katy Moore, Luisa Rodriguez, and Keiran Harris<br>Transcriptions: Katy Moore</em></p><p>“<a href="https://soundcloud.com/jasonweinberger/gershwin-rhapsody-in-blue"><em>Gershwin – Rhapsody in Blue, original 1924 version</em></a><em>” by Jason Weinberger is licensed under creative commons</em></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>"We really, really want to make sure that nuclear war never breaks out. But we also know — from all of the examples of the Cold War, all these close calls — that it very well could, as long as there are nuclear weapons in the world. So if it does, we want to have some ways of preventing that from turning into a civilisation-threatening, cataclysmic kind of war. And those kinds of interventions — war limitation, intrawar escalation management, civil defence — those are kind of the seatbelts and airbags of the nuclear world. So to borrow a phrase from one of my colleagues, right-of-boom is a class of interventions for when “<a href="https://www.founderspledge.com/research/when-shit-hits-the-fan">shit hits the fan</a>.”" —Christian Ruhl</p><p><br>In this episode of <em>80k After Hours</em>, Luisa Rodriguez and Christian Ruhl discuss underrated best bets to <a href="https://80000hours.org/problem-profiles/civilisation-resilience/">avert civilisational collapse</a> from global catastrophic risks — things like <a href="https://80000hours.org/problem-profiles/great-power-conflict/">great power war</a>, frontier military technologies, and <a href="https://80000hours.org/problem-profiles/nuclear-security/">nuclear winter</a>.</p><p><a href="https://80k.info/cr"><strong>Links to learn more, summary, and full transcript.</strong></a></p><p>They cover:</p><ul><li>How the geopolitical situation has changed in recent years into a “three-body problem” between the US, Russia, and China.</li><li>How adding AI-enabled technologies into the mix makes things even more unstable and unpredictable.</li><li>Why Christian recommends many philanthropists focus on “right-of-boom” interventions — those that mitigate the damage <em>after </em>a catastrophe — over traditional preventative measures.</li><li>Concrete things policymakers should be considering to reduce the devastating effects of unthinkable tragedies.</li><li>And on a more personal note, Christian’s experience 
of having a stutter.</li></ul><p><strong>Who this episode is for:</strong></p><ul><li>People interested in the most cost-effective ways to prevent nuclear war, such as:<ul><li>Deescalating after accidental nuclear use.</li><li>Civil defence and war termination.</li><li>Mitigating nuclear winter.</li></ul></li></ul><p><strong>Who this episode isn’t for:</strong></p><ul><li>People interested in the least cost-effective ways to prevent nuclear war, such as:<ul><li>Coating every nuclear weapon on Earth in solid gold so they’re no longer functional.</li><li>Creating a TV show called <em>The Real Housewives of Nuclear Winter</em> about the personal and professional lives of women in Beverly Hills after a nuclear holocaust.</li><li>A multibillion-dollar programme to invent a laser beam that could write permanent messages on the Moon, and using it just once to spell out #nonukesnovember.</li></ul></li></ul><p><strong>Chapters:</strong></p><ul><li>The three-body problem (00:04:11)</li><li>Effect of AI (00:07:58)</li><li>What we have going for us, and not (00:13:32)</li><li>Right-of-boom interventions (00:17:50)</li><li>Deescalating after accidental nuclear use (00:24:23)</li><li>Civil defence and war termination (00:30:40)</li><li>Mitigating nuclear winter (00:37:07)</li><li>Planning for a postwar political environment (00:40:19)</li><li>Experience of having a stutter (00:53:52)</li><li>Christian’s archaeological excavation in Guatemala (01:09:51)</li></ul><p><br><em>Producer: Keiran Harris<br>Audio Engineering Lead: Ben Cordell<br>Technical editing: Ben Cordell and Milo McGuire<br>Content editing: Katy Moore, Luisa Rodriguez, and Keiran Harris<br>Transcriptions: Katy Moore</em></p><p>“<a href="https://soundcloud.com/jasonweinberger/gershwin-rhapsody-in-blue"><em>Gershwin – Rhapsody in Blue, original 1924 version</em></a><em>” by Jason Weinberger is licensed under Creative Commons</em></p>]]>
      </content:encoded>
      <pubDate>Thu, 21 Mar 2024 20:06:53 +0000</pubDate>
      <author>The 80,000 Hours team</author>
      <enclosure url="https://media.transistor.fm/5985ba39/76871313.mp3" length="69708215" type="audio/mpeg"/>
      <itunes:author>The 80,000 Hours team</itunes:author>
      <itunes:image href="https://img.transistor.fm/vmMhkjkMMgPIB65qDKqrIhoAqLL6P7N4EjHjjM-NNFM/rs:fill:0:0:1/w:3000/h:3000/q:60/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lcGlz/b2RlLzE4MDE0ODIv/MTcxMTAzMDkwNy1h/cnR3b3JrLmpwZw.jpg"/>
      <itunes:duration>4354</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>"We really, really want to make sure that nuclear war never breaks out. But we also know — from all of the examples of the Cold War, all these close calls — that it very well could, as long as there are nuclear weapons in the world. So if it does, we want to have some ways of preventing that from turning into a civilisation-threatening, cataclysmic kind of war. And those kinds of interventions — war limitation, intrawar escalation management, civil defence — those are kind of the seatbelts and airbags of the nuclear world. So to borrow a phrase from one of my colleagues, right-of-boom is a class of interventions for when “<a href="https://www.founderspledge.com/research/when-shit-hits-the-fan">shit hits the fan</a>.”" —Christian Ruhl</p><p><br>In this episode of <em>80k After Hours</em>, Luisa Rodriguez and Christian Ruhl discuss underrated best bets to <a href="https://80000hours.org/problem-profiles/civilisation-resilience/">avert civilisational collapse</a> from global catastrophic risks — things like <a href="https://80000hours.org/problem-profiles/great-power-conflict/">great power war</a>, frontier military technologies, and <a href="https://80000hours.org/problem-profiles/nuclear-security/">nuclear winter</a>.</p><p><a href="https://80k.info/cr"><strong>Links to learn more, summary, and full transcript.</strong></a></p><p>They cover:</p><ul><li>How the geopolitical situation has changed in recent years into a “three-body problem” between the US, Russia, and China.</li><li>How adding AI-enabled technologies into the mix makes things even more unstable and unpredictable.</li><li>Why Christian recommends many philanthropists focus on “right-of-boom” interventions — those that mitigate the damage <em>after </em>a catastrophe — over traditional preventative measures.</li><li>Concrete things policymakers should be considering to reduce the devastating effects of unthinkable tragedies.</li><li>And on a more personal note, Christian’s experience 
of having a stutter.</li></ul><p><strong>Who this episode is for:</strong></p><ul><li>People interested in the most cost-effective ways to prevent nuclear war, such as:<ul><li>Deescalating after accidental nuclear use.</li><li>Civil defence and war termination.</li><li>Mitigating nuclear winter.</li></ul></li></ul><p><strong>Who this episode isn’t for:</strong></p><ul><li>People interested in the least cost-effective ways to prevent nuclear war, such as:<ul><li>Coating every nuclear weapon on Earth in solid gold so they’re no longer functional.</li><li>Creating a TV show called <em>The Real Housewives of Nuclear Winter</em> about the personal and professional lives of women in Beverly Hills after a nuclear holocaust.</li><li>A multibillion-dollar programme to invent a laser beam that could write permanent messages on the Moon, and using it just once to spell out #nonukesnovember.</li></ul></li></ul><p><strong>Chapters:</strong></p><ul><li>The three-body problem (00:04:11)</li><li>Effect of AI (00:07:58)</li><li>What we have going for us, and not (00:13:32)</li><li>Right-of-boom interventions (00:17:50)</li><li>Deescalating after accidental nuclear use (00:24:23)</li><li>Civil defence and war termination (00:30:40)</li><li>Mitigating nuclear winter (00:37:07)</li><li>Planning for a postwar political environment (00:40:19)</li><li>Experience of having a stutter (00:53:52)</li><li>Christian’s archaeological excavation in Guatemala (01:09:51)</li></ul><p><br><em>Producer: Keiran Harris<br>Audio Engineering Lead: Ben Cordell<br>Technical editing: Ben Cordell and Milo McGuire<br>Content editing: Katy Moore, Luisa Rodriguez, and Keiran Harris<br>Transcriptions: Katy Moore</em></p><p>“<a href="https://soundcloud.com/jasonweinberger/gershwin-rhapsody-in-blue"><em>Gershwin – Rhapsody in Blue, original 1924 version</em></a><em>” by Jason Weinberger is licensed under Creative Commons</em></p>]]>
      </itunes:summary>
      <itunes:keywords>effective altruism, 80000 hours, impactful careers, existential risk</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/5985ba39/transcript.txt" type="text/plain"/>
      <podcast:chapters url="https://share.transistor.fm/s/5985ba39/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Highlights: #181 – Laura Deming on the science that could keep us healthy in our 80s and beyond</title>
      <itunes:title>Highlights: #181 – Laura Deming on the science that could keep us healthy in our 80s and beyond</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">80e205f8-3a19-43e1-a957-0482a62934a9</guid>
      <link>https://80000hours.org/podcast/episodes/laura-deming-ending-ageing/</link>
      <description>
        <![CDATA[<p>This is a selection of highlights from episode #181 of <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/laura-deming-ending-ageing/"><strong>Laura Deming on the science that could keep us healthy in our 80s and beyond</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.</p><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This is a selection of highlights from episode #181 of <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/laura-deming-ending-ageing/"><strong>Laura Deming on the science that could keep us healthy in our 80s and beyond</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.</p><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </content:encoded>
      <pubDate>Wed, 20 Mar 2024 18:08:32 +0000</pubDate>
      <author>The 80,000 Hours team</author>
      <enclosure url="https://media.transistor.fm/53ff51ae/75442689.mp3" length="15322901" type="audio/mpeg"/>
      <itunes:author>The 80,000 Hours team</itunes:author>
      <itunes:image href="https://img.transistor.fm/mhYHPDA5322TGa66_igDvv1H80BfIlYnQ23smILsYZ4/rs:fill:0:0:1/w:3000/h:3000/q:60/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lcGlz/b2RlLzE3OTk1Njcv/MTcxMDk1ODA0Ny1h/cnR3b3JrLmpwZw.jpg"/>
      <itunes:duration>955</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This is a selection of highlights from episode #181 of <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/laura-deming-ending-ageing/"><strong>Laura Deming on the science that could keep us healthy in our 80s and beyond</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.</p><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </itunes:summary>
      <itunes:keywords>effective altruism, 80000 hours, impactful careers, existential risk</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:chapters url="https://share.transistor.fm/s/53ff51ae/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Actually After Hours #2: Coming to America with Joel Becker</title>
      <itunes:title>Actually After Hours #2: Coming to America with Joel Becker</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">f0b9ff83-8a0f-4135-9562-6897ec123ed7</guid>
      <link>https://share.transistor.fm/s/13ed330a</link>
      <description>
        <![CDATA[<p>You can check out the video version of this episode on YouTube at <a href="https://www.youtube.com/watch?v=GFEb8ICWJQQ">https://www.youtube.com/watch?v=GFEb8ICWJQQ</a></p><p>In this episode of our new podcast project, Matt Reardon, Bella Forristal, and Arden Koehler sit down with Joel Becker to find out what’s great about America, what was not so great about FTX, and invite you to port back to the time when Arden might have been CEO.</p><p>You can learn more about Joel's current projects at <a href="https://joel-becker.com/">https://joel-becker.com/</a> and follow him on Twitter at <a href="https://twitter.com/joel_bkr">https://twitter.com/joel_bkr</a></p><p>Here's where to find episode #100 of <em>The 80,000 Hours Podcast</em> on dealing with anxiety, depression, and imposter syndrome: <a href="https://80000hours.org/podcast/episodes/depression-anxiety-imposter-syndrome/">https://80000hours.org/podcast/episodes/depression-anxiety-imposter-syndrome/</a></p><p>Matt also beseeches you to listen to Joe Carlsmith already: <a href="https://joecarlsmith.substack.com/archive">https://joecarlsmith.substack.com/archive</a></p><p>You can also find Joel’s best friend and Matt’s former flatmate Mr. Mushu at <a href="https://www.instagram.com/its_mr.mushu/">https://www.instagram.com/its_mr.mushu/</a></p><p>Further topics include:</p><ul><li>All possible reasons militate in favour of moving to America</li><li>American politics being justified in its savagery</li><li>Joel’s review of <a href="https://80000hours.org/podcast/episodes/depression-anxiety-imposter-syndrome/">our famous podcast on depression and anxiety</a></li><li>The FTX fellows and visitors programme in the Bahamas</li><li>The [vampiric?] 
energy of America</li><li>Is sports net negative?</li><li>Living in filth and eschewing the typical mind fallacy</li><li>How much should we be working on our meta-preferences?</li><li>Arden on being CEO of 80,000 Hours</li><li>Ruminating on the powers of non-aphantastics</li><li>Why do kids want to draw?</li><li>Are people sleeping on crayons?</li><li>Our favourite foundational EA materials</li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>You can check out the video version of this episode on YouTube at <a href="https://www.youtube.com/watch?v=GFEb8ICWJQQ">https://www.youtube.com/watch?v=GFEb8ICWJQQ</a></p><p>In this episode of our new podcast project, Matt Reardon, Bella Forristal, and Arden Koehler sit down with Joel Becker to find out what’s great about America, what was not so great about FTX, and invite you to port back to the time when Arden might have been CEO.</p><p>You can learn more about Joel's current projects at <a href="https://joel-becker.com/">https://joel-becker.com/</a> and follow him on Twitter at <a href="https://twitter.com/joel_bkr">https://twitter.com/joel_bkr</a></p><p>Here's where to find episode #100 of <em>The 80,000 Hours Podcast</em> on dealing with anxiety, depression, and imposter syndrome: <a href="https://80000hours.org/podcast/episodes/depression-anxiety-imposter-syndrome/">https://80000hours.org/podcast/episodes/depression-anxiety-imposter-syndrome/</a></p><p>Matt also beseeches you to listen to Joe Carlsmith already: <a href="https://joecarlsmith.substack.com/archive">https://joecarlsmith.substack.com/archive</a></p><p>You can also find Joel’s best friend and Matt’s former flatmate Mr. Mushu at <a href="https://www.instagram.com/its_mr.mushu/">https://www.instagram.com/its_mr.mushu/</a></p><p>Further topics include:</p><ul><li>All possible reasons militate in favour of moving to America</li><li>American politics being justified in its savagery</li><li>Joel’s review of <a href="https://80000hours.org/podcast/episodes/depression-anxiety-imposter-syndrome/">our famous podcast on depression and anxiety</a></li><li>The FTX fellows and visitors programme in the Bahamas</li><li>The [vampiric?] 
energy of America</li><li>Is sports net negative?</li><li>Living in filth and eschewing the typical mind fallacy</li><li>How much should we be working on our meta-preferences?</li><li>Arden on being CEO of 80,000 Hours</li><li>Ruminating on the powers of non-aphantastics</li><li>Why do kids want to draw?</li><li>Are people sleeping on crayons?</li><li>Our favourite foundational EA materials</li></ul>]]>
      </content:encoded>
      <pubDate>Mon, 18 Mar 2024 17:04:49 +0000</pubDate>
      <author>The 80,000 Hours team</author>
      <enclosure url="https://media.transistor.fm/13ed330a/f3afab78.mp3" length="87173603" type="audio/mpeg"/>
      <itunes:author>The 80,000 Hours team</itunes:author>
      <itunes:image href="https://img.transistor.fm/a_ySjzvagbB-6LClvo4xqXA-zv-noo_JzRH5s5hrY5Q/rs:fill:0:0:1/w:3000/h:3000/q:60/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lcGlz/b2RlLzE3OTUyNTUv/MTcxMDc3NTI2My1h/cnR3b3JrLmpwZw.jpg"/>
      <itunes:duration>5446</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>You can check out the video version of this episode on YouTube at <a href="https://www.youtube.com/watch?v=GFEb8ICWJQQ">https://www.youtube.com/watch?v=GFEb8ICWJQQ</a></p><p>In this episode of our new podcast project, Matt Reardon, Bella Forristal, and Arden Koehler sit down with Joel Becker to find out what’s great about America, what was not so great about FTX, and invite you to port back to the time when Arden might have been CEO.</p><p>You can learn more about Joel's current projects at <a href="https://joel-becker.com/">https://joel-becker.com/</a> and follow him on Twitter at <a href="https://twitter.com/joel_bkr">https://twitter.com/joel_bkr</a></p><p>Here's where to find episode #100 of <em>The 80,000 Hours Podcast</em> on dealing with anxiety, depression, and imposter syndrome: <a href="https://80000hours.org/podcast/episodes/depression-anxiety-imposter-syndrome/">https://80000hours.org/podcast/episodes/depression-anxiety-imposter-syndrome/</a></p><p>Matt also beseeches you to listen to Joe Carlsmith already: <a href="https://joecarlsmith.substack.com/archive">https://joecarlsmith.substack.com/archive</a></p><p>You can also find Joel’s best friend and Matt’s former flatmate Mr. Mushu at <a href="https://www.instagram.com/its_mr.mushu/">https://www.instagram.com/its_mr.mushu/</a></p><p>Further topics include:</p><ul><li>All possible reasons militate in favour of moving to America</li><li>American politics being justified in its savagery</li><li>Joel’s review of <a href="https://80000hours.org/podcast/episodes/depression-anxiety-imposter-syndrome/">our famous podcast on depression and anxiety</a></li><li>The FTX fellows and visitors programme in the Bahamas</li><li>The [vampiric?] 
energy of America</li><li>Is sports net negative?</li><li>Living in filth and eschewing the typical mind fallacy</li><li>How much should we be working on our meta-preferences?</li><li>Arden on being CEO of 80,000 Hours</li><li>Ruminating on the powers of non-aphantastics</li><li>Why do kids want to draw?</li><li>Are people sleeping on crayons?</li><li>Our favourite foundational EA materials</li></ul>]]>
      </itunes:summary>
      <itunes:keywords>effective altruism, 80000 hours, impactful careers, existential risk</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
    </item>
    <item>
      <title>Highlights: #180 – Hugo Mercier on why gullibility and misinformation are overrated</title>
      <itunes:title>Highlights: #180 – Hugo Mercier on why gullibility and misinformation are overrated</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">1e73fea6-4a2b-44b9-9da9-2f5ed578969a</guid>
      <link>https://80000hours.org/podcast/episodes/hugo-mercier-misinformation-mass-persuasion/</link>
      <description>
        <![CDATA[<p>This is a selection of highlights from episode #180 of <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/hugo-mercier-misinformation-mass-persuasion/"><strong>Hugo Mercier on why gullibility and misinformation are overrated</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.</p><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This is a selection of highlights from episode #180 of <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/hugo-mercier-misinformation-mass-persuasion/"><strong>Hugo Mercier on why gullibility and misinformation are overrated</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.</p><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </content:encoded>
      <pubDate>Mon, 11 Mar 2024 19:18:12 +0000</pubDate>
      <author>The 80,000 Hours team</author>
      <enclosure url="https://media.transistor.fm/79a1f3ce/db403607.mp3" length="24248873" type="audio/mpeg"/>
      <itunes:author>The 80,000 Hours team</itunes:author>
      <itunes:image href="https://img.transistor.fm/lDDba3bHQtd86r1NzO0lIM2GA_DmDUc9Z2iaHsxEuKQ/rs:fill:0:0:1/w:3000/h:3000/q:60/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lcGlz/b2RlLzE3ODUyMTcv/MTcxMDE4NDY5Mi1h/cnR3b3JrLmpwZw.jpg"/>
      <itunes:duration>1512</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This is a selection of highlights from episode #180 of <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/hugo-mercier-misinformation-mass-persuasion/"><strong>Hugo Mercier on why gullibility and misinformation are overrated</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.</p><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </itunes:summary>
      <itunes:keywords>effective altruism, 80000 hours, impactful careers, existential risk</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:chapters url="https://share.transistor.fm/s/79a1f3ce/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Highlights: #179 – Randy Nesse on why evolution left us so vulnerable to depression and anxiety</title>
      <itunes:title>Highlights: #179 – Randy Nesse on why evolution left us so vulnerable to depression and anxiety</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">fabc99a1-1e98-41aa-bb9b-ab5ac89ea072</guid>
      <link>https://80000hours.org/podcast/episodes/randy-nesse-evolutionary-medicine-psychiatry/</link>
      <description>
        <![CDATA[<p>This is a selection of highlights from episode #179 of <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/randy-nesse-evolutionary-medicine-psychiatry/"><strong>Randy Nesse on why evolution left us so vulnerable to depression and anxiety</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.</p><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This is a selection of highlights from episode #179 of <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/randy-nesse-evolutionary-medicine-psychiatry/"><strong>Randy Nesse on why evolution left us so vulnerable to depression and anxiety</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.</p><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </content:encoded>
      <pubDate>Mon, 26 Feb 2024 20:27:01 +0000</pubDate>
      <author>The 80,000 Hours team</author>
      <enclosure url="https://media.transistor.fm/387d00ce/63830007.mp3" length="22305203" type="audio/mpeg"/>
      <itunes:author>The 80,000 Hours team</itunes:author>
      <itunes:image href="https://img.transistor.fm/GuGcpZYIUd7kQ62qqwrKd9VL2sey-O6LbrYrQWTWH7U/rs:fill:0:0:1/w:3000/h:3000/q:60/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lcGlz/b2RlLzE3NTYyODcv/MTcwODk3OTE4Ni1h/cnR3b3JrLmpwZw.jpg"/>
      <itunes:duration>1392</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This is a selection of highlights from episode #179 of <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/randy-nesse-evolutionary-medicine-psychiatry/"><strong>Randy Nesse on why evolution left us so vulnerable to depression and anxiety</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.</p><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </itunes:summary>
      <itunes:keywords>effective altruism, 80000 hours, impactful careers, existential risk</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:chapters url="https://share.transistor.fm/s/387d00ce/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Actually After Hours #1: Bean Counting with Chana Messinger</title>
      <itunes:title>Actually After Hours #1: Bean Counting with Chana Messinger</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">0b3012ad-40f9-4f06-a571-41e4f6692f44</guid>
      <link>https://share.transistor.fm/s/d7b9bc44</link>
      <description>
        <![CDATA[<p>Matt, Bella, and Cody sit down with Chana Messinger to find out what she does, how to insulate ourselves from harsh realities, and whether pub trivia is a complete waste of time.</p><p>You can find Chana’s long-defunct blog at: <a href="https://themerelyreal.wordpress.com/">https://themerelyreal.wordpress.com/</a></p><p><em>Update February 20, 2024:</em> The 'steelman' bounty has been claimed!</p><p>You can also find a full video of this episode <a href="https://www.youtube.com/watch?v=zUTz5VFKo88">here</a>.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Matt, Bella, and Cody sit down with Chana Messinger to find out what she does, how to insulate ourselves from harsh realities, and whether pub trivia is a complete waste of time.</p><p>You can find Chana’s long-defunct blog at: <a href="https://themerelyreal.wordpress.com/">https://themerelyreal.wordpress.com/</a></p><p><em>Update February 20, 2024:</em> The 'steelman' bounty has been claimed!</p><p>You can also find a full video of this episode <a href="https://www.youtube.com/watch?v=zUTz5VFKo88">here</a>.</p>]]>
      </content:encoded>
      <pubDate>Mon, 19 Feb 2024 07:52:16 +0000</pubDate>
      <author>The 80,000 Hours team</author>
      <enclosure url="https://media.transistor.fm/d7b9bc44/d94d814d.mp3" length="195443357" type="audio/mpeg"/>
      <itunes:author>The 80,000 Hours team</itunes:author>
      <itunes:image href="https://img.transistor.fm/vMfYF9A-clGHYlIQJfzLBTDCePfYbbskW-Cc7c3sVN8/rs:fill:0:0:1/w:3000/h:3000/q:60/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lcGlz/b2RlLzE3NDI0NTIv/MTcwODMyOTEzNi1h/cnR3b3JrLmpwZw.jpg"/>
      <itunes:duration>4885</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Matt, Bella, and Cody sit down with Chana Messinger to find out what she does, how to insulate ourselves from harsh realities, and whether pub trivia is a complete waste of time.</p><p>You can find Chana’s long-defunct blog at: <a href="https://themerelyreal.wordpress.com/">https://themerelyreal.wordpress.com/</a></p><p><em>Update February 20, 2024:</em> The 'steelman' bounty has been claimed!</p><p>You can also find a full video of this episode <a href="https://www.youtube.com/watch?v=zUTz5VFKo88">here</a>.</p>]]>
      </itunes:summary>
      <itunes:keywords>effective altruism, 80000 hours, impactful careers, existential risk</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
    </item>
    <item>
      <title>Highlights: #178 – Emily Oster on what the evidence actually says about pregnancy and parenting</title>
      <itunes:title>Highlights: #178 – Emily Oster on what the evidence actually says about pregnancy and parenting</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">6870f0ac-8b72-472e-a07b-c13f7ff3607e</guid>
      <link>https://80000hours.org/podcast/episodes/emily-oster-pregnancy-parenting-careers/</link>
      <description>
        <![CDATA[<p>This is a selection of highlights from episode #178 of <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/emily-oster-pregnancy-parenting-careers/"><strong>Emily Oster on what the evidence actually says about pregnancy and parenting</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.</p><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This is a selection of highlights from episode #178 of <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/emily-oster-pregnancy-parenting-careers/"><strong>Emily Oster on what the evidence actually says about pregnancy and parenting</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.</p><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </content:encoded>
      <pubDate>Thu, 15 Feb 2024 20:26:31 +0000</pubDate>
      <author>The 80,000 Hours team</author>
      <enclosure url="https://media.transistor.fm/2d4c1d47/5d55e858.mp3" length="22949343" type="audio/mpeg"/>
      <itunes:author>The 80,000 Hours team</itunes:author>
      <itunes:image href="https://img.transistor.fm/COEZQLCU-mPAmwe5OjAdVl1ntIjfeSpvNZlyY_-IonY/rs:fill:0:0:1/w:3000/h:3000/q:60/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lcGlz/b2RlLzE3Mzg2MDQv/MTcwODAyODQyNC1h/cnR3b3JrLmpwZw.jpg"/>
      <itunes:duration>1431</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This is a selection of highlights from episode #178 of <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/emily-oster-pregnancy-parenting-careers/"><strong>Emily Oster on what the evidence actually says about pregnancy and parenting</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.</p><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </itunes:summary>
      <itunes:keywords>effective altruism, 80000 hours, impactful careers, existential risk</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:chapters url="https://share.transistor.fm/s/2d4c1d47/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Highlights: #177 – Nathan Labenz on recent AI breakthroughs and navigating the growing rift between AI safety and accelerationist camps</title>
      <itunes:title>Highlights: #177 – Nathan Labenz on recent AI breakthroughs and navigating the growing rift between AI safety and accelerationist camps</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">346f3a10-2724-4569-9aff-593a5a51683a</guid>
      <link>https://80000hours.org/podcast/episodes/nathan-labenz-ai-breakthroughs-controversies/</link>
      <description>
        <![CDATA[<p>This is a selection of highlights from episode #177 of <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/nathan-labenz-ai-breakthroughs-controversies/"><strong>Nathan Labenz on recent AI breakthroughs and navigating the growing rift between AI safety and accelerationist camps</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.</p><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This is a selection of highlights from episode #177 of <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/nathan-labenz-ai-breakthroughs-controversies/"><strong>Nathan Labenz on recent AI breakthroughs and navigating the growing rift between AI safety and accelerationist camps</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.</p><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </content:encoded>
      <pubDate>Wed, 07 Feb 2024 20:29:18 +0000</pubDate>
      <author>The 80,000 Hours team</author>
      <enclosure url="https://media.transistor.fm/b412a4be/3e4562eb.mp3" length="19578864" type="audio/mpeg"/>
      <itunes:author>The 80,000 Hours team</itunes:author>
      <itunes:image href="https://img.transistor.fm/Sv8-MELl7PUs6JZoKFX68ItwqYvz7n6NTbC8gBpSqlc/rs:fill:0:0:1/w:3000/h:3000/q:60/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lcGlz/b2RlLzE3MjI1MjMv/MTcwNzMzNzc1OC1h/cnR3b3JrLmpwZw.jpg"/>
      <itunes:duration>1626</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This is a selection of highlights from episode #177 of <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/nathan-labenz-ai-breakthroughs-controversies/"><strong>Nathan Labenz on recent AI breakthroughs and navigating the growing rift between AI safety and accelerationist camps</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.</p><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </itunes:summary>
      <itunes:keywords>effective altruism, 80000 hours, impactful careers, existential risk</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:chapters url="https://share.transistor.fm/s/b412a4be/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Highlights: #146 – Robert Long on why large language models like GPT (probably) aren’t conscious</title>
      <itunes:title>Highlights: #146 – Robert Long on why large language models like GPT (probably) aren’t conscious</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">a8de7c72-4e5f-404a-bbfb-33893cc662be</guid>
      <link>https://80000hours.org/podcast/episodes/robert-long-artificial-sentience/</link>
      <description>
        <![CDATA[<p>This is a selection of highlights from episode #146 of <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/robert-long-artificial-sentience/"><strong>Robert Long on why large language models like GPT (probably) aren’t conscious</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.</p><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This is a selection of highlights from episode #146 of <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/robert-long-artificial-sentience/"><strong>Robert Long on why large language models like GPT (probably) aren’t conscious</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.</p><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </content:encoded>
      <pubDate>Thu, 25 Jan 2024 19:45:20 +0000</pubDate>
      <author>The 80,000 Hours team</author>
      <enclosure url="https://media.transistor.fm/bda4700a/e1c8a884.mp3" length="14938463" type="audio/mpeg"/>
      <itunes:author>The 80,000 Hours team</itunes:author>
      <itunes:image href="https://img.transistor.fm/xBGyBE6n9CLHiSRHUjhHkQRmlR9RoE8c1MjeOZlkFEE/rs:fill:0:0:1/w:3000/h:3000/q:60/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lcGlz/b2RlLzE3MDQyMzUv/MTcwNjIxMTkyMC1h/cnR3b3JrLmpwZw.jpg"/>
      <itunes:duration>1241</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This is a selection of highlights from episode #146 of <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/robert-long-artificial-sentience/"><strong>Robert Long on why large language models like GPT (probably) aren’t conscious</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.</p><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </itunes:summary>
      <itunes:keywords>effective altruism, 80000 hours, impactful careers, existential risk</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:chapters url="https://share.transistor.fm/s/bda4700a/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Highlights: #176 – Nathan Labenz on the final push for AGI, understanding OpenAI’s leadership drama, and red-teaming frontier models</title>
      <itunes:title>Highlights: #176 – Nathan Labenz on the final push for AGI, understanding OpenAI’s leadership drama, and red-teaming frontier models</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">aaf8e831-ac54-43f9-8730-5263edd3eb33</guid>
      <link>https://80000hours.org/podcast/episodes/nathan-labenz-openai-red-team-safety/</link>
      <description>
        <![CDATA[<p>This is a selection of highlights from episode #176 of <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/nathan-labenz-openai-red-team-safety/"><strong>Nathan Labenz on the final push for AGI, understanding OpenAI’s leadership drama, and red-teaming frontier models</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.</p><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This is a selection of highlights from episode #176 of <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/nathan-labenz-openai-red-team-safety/"><strong>Nathan Labenz on the final push for AGI, understanding OpenAI’s leadership drama, and red-teaming frontier models</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.</p><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </content:encoded>
      <pubDate>Mon, 15 Jan 2024 20:24:00 +0000</pubDate>
      <author>The 80,000 Hours team</author>
      <enclosure url="https://media.transistor.fm/2b55bbcb/022b129b.mp3" length="24348339" type="audio/mpeg"/>
      <itunes:author>The 80,000 Hours team</itunes:author>
      <itunes:image href="https://img.transistor.fm/zIQRgeQjF1cK67E0W31yldlvFjnDJdhPtPFThMyg1GQ/rs:fill:0:0:1/w:3000/h:3000/q:60/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lcGlz/b2RlLzE2OTA5MTUv/MTcwNTM1MDEwMS1h/cnR3b3JrLmpwZw.jpg"/>
      <itunes:duration>2026</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This is a selection of highlights from episode #176 of <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/nathan-labenz-openai-red-team-safety/"><strong>Nathan Labenz on the final push for AGI, understanding OpenAI’s leadership drama, and red-teaming frontier models</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.</p><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </itunes:summary>
      <itunes:keywords>effective altruism, 80000 hours, impactful careers, existential risk</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:chapters url="https://share.transistor.fm/s/2b55bbcb/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Highlights: #175 – Lucia Coulter on preventing lead poisoning for $1.66 per child</title>
      <itunes:title>Highlights: #175 – Lucia Coulter on preventing lead poisoning for $1.66 per child</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">e9a5f040-0718-4761-89fb-b5d6305e19d9</guid>
      <link>https://80000hours.org/podcast/episodes/lucia-coulter-lead-exposure-elimination-project/</link>
      <description>
        <![CDATA[<p>This is a selection of highlights from episode #175 of <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/lucia-coulter-lead-exposure-elimination-project/"><strong>Lucia Coulter on preventing lead poisoning for $1.66 per child</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.</p><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This is a selection of highlights from episode #175 of <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/lucia-coulter-lead-exposure-elimination-project/"><strong>Lucia Coulter on preventing lead poisoning for $1.66 per child</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.</p><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </content:encoded>
      <pubDate>Tue, 09 Jan 2024 20:06:44 +0000</pubDate>
      <author>The 80,000 Hours team</author>
      <enclosure url="https://media.transistor.fm/207071b6/a0b11aa1.mp3" length="14532538" type="audio/mpeg"/>
      <itunes:author>The 80,000 Hours team</itunes:author>
      <itunes:image href="https://img.transistor.fm/JQPZPaDIoA_S7kCykMPw1_xnQnMU6gtrkrTUPTvgwuU/rs:fill:0:0:1/w:3000/h:3000/q:60/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lcGlz/b2RlLzE2ODA5Nzkv/MTcwNDgzMDc3NS1h/cnR3b3JrLmpwZw.jpg"/>
      <itunes:duration>1208</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This is a selection of highlights from episode #175 of <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/lucia-coulter-lead-exposure-elimination-project/"><strong>Lucia Coulter on preventing lead poisoning for $1.66 per child</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.</p><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </itunes:summary>
      <itunes:keywords>effective altruism, 80000 hours, impactful careers, existential risk</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:chapters url="https://share.transistor.fm/s/207071b6/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Highlights: #174 – Nita Farahany on the neurotechnology already being used to convict criminals and manipulate workers</title>
      <itunes:title>Highlights: #174 – Nita Farahany on the neurotechnology already being used to convict criminals and manipulate workers</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">8f4495a6-3e4f-46cc-95a2-8d2308333911</guid>
      <link>https://80000hours.org/podcast/episodes/nita-farahany-neurotechnology/</link>
      <description>
        <![CDATA[<p>This is a selection of highlights from episode #174 of <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/nita-farahany-neurotechnology/"><strong>Nita Farahany on the neurotechnology already being used to convict criminals and manipulate workers</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.</p><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This is a selection of highlights from episode #174 of <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/nita-farahany-neurotechnology/"><strong>Nita Farahany on the neurotechnology already being used to convict criminals and manipulate workers</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.</p><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </content:encoded>
      <pubDate>Wed, 03 Jan 2024 21:26:11 +0000</pubDate>
      <author>The 80,000 Hours team</author>
      <enclosure url="https://media.transistor.fm/5386387f/8239147c.mp3" length="24125861" type="audio/mpeg"/>
      <itunes:author>The 80,000 Hours team</itunes:author>
      <itunes:image href="https://img.transistor.fm/RhrJ6QI2S_bTW1UDuaNHKgMV_Kx3amkUb-t9WMH0vWk/rs:fill:0:0:1/w:3000/h:3000/q:60/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lcGlz/b2RlLzE2NzE4OTMv/MTcwNDMxNzE0MC1h/cnR3b3JrLmpwZw.jpg"/>
      <itunes:duration>1505</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This is a selection of highlights from episode #174 of <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/nita-farahany-neurotechnology/"><strong>Nita Farahany on the neurotechnology already being used to convict criminals and manipulate workers</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.</p><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </itunes:summary>
      <itunes:keywords>effective altruism, 80000 hours, impactful careers, existential risk</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/5386387f/transcript.txt" type="text/plain"/>
      <podcast:chapters url="https://share.transistor.fm/s/5386387f/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Highlights: #173 – Jeff Sebo on digital minds, and how to avoid sleepwalking into a major moral catastrophe</title>
      <itunes:title>Highlights: #173 – Jeff Sebo on digital minds, and how to avoid sleepwalking into a major moral catastrophe</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">00d1946b-1e2f-4751-82bf-253580530e11</guid>
      <link>https://80000hours.org/podcast/episodes/jeff-sebo-ethics-digital-minds/</link>
      <description>
        <![CDATA[<p>This is a selection of highlights from episode #173 of <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/jeff-sebo-ethics-digital-minds/"><strong>Jeff Sebo on digital minds, and how to avoid sleepwalking into a major moral catastrophe</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.</p><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This is a selection of highlights from episode #173 of <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/jeff-sebo-ethics-digital-minds/"><strong>Jeff Sebo on digital minds, and how to avoid sleepwalking into a major moral catastrophe</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.</p><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </content:encoded>
      <pubDate>Thu, 14 Dec 2023 18:04:14 +0000</pubDate>
      <author>The 80,000 Hours team</author>
      <enclosure url="https://media.transistor.fm/511ec70f/7854c2dd.mp3" length="22472925" type="audio/mpeg"/>
      <itunes:author>The 80,000 Hours team</itunes:author>
      <itunes:image href="https://img.transistor.fm/UB3R7mvjeVpLvQDT3MnXD2D5sOVe_zDZ2YuI8FmFkAo/rs:fill:0:0:1/w:3000/h:3000/q:60/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lcGlz/b2RlLzE2NDU3OTcv/MTcwMjU3NzAwMi1h/cnR3b3JrLmpwZw.jpg"/>
      <itunes:duration>1870</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This is a selection of highlights from episode #173 of <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/jeff-sebo-ethics-digital-minds/"><strong>Jeff Sebo on digital minds, and how to avoid sleepwalking into a major moral catastrophe</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.</p><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </itunes:summary>
      <itunes:keywords>effective altruism, 80000 hours, impactful careers, existential risk</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:chapters url="https://share.transistor.fm/s/511ec70f/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Career review: AI safety technical research</title>
      <itunes:title>Career review: AI safety technical research</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">92c3dc74-05fa-409c-b8c4-b58692e0b3aa</guid>
      <link>https://80000hours.org/career-reviews/ai-safety-researcher/?utm_campaign=podcast__80kah&amp;utm_source=80kah&amp;utm_medium=podcast</link>
      <description>
        <![CDATA[<p>In this episode of 80k After Hours, Benjamin Hilton reads his AI safety technical research career review.</p><p><a href="https://80000hours.org/career-reviews/ai-safety-researcher/?utm_campaign=podcast__80kah&amp;utm_source=80kah&amp;utm_medium=podcast"><strong>Here’s the original piece if you’d like to learn more</strong>.</a></p><p>You might also want to check out:</p><ul><li>The <a href="https://forum.effectivealtruism.org/posts/pbiGHk6AjRxdBPoD8/ai-safety-starter-pack">AI safety starter pack</a></li><li>Our <a href="https://80000hours.org/problem-profiles/positively-shaping-artificial-intelligence/">problem profile on AI risk</a></li><li><a href="https://80000hours.org/podcast/on-artificial-intelligence/"><em>The 80,000 Hours Podcast on Artificial Intelligence</em></a> (a collection of 10 key AI episodes from our podcast)</li></ul><p><em>Edited by Simon Monsour.</em></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode of 80k After Hours, Benjamin Hilton reads his AI safety technical research career review.</p><p><a href="https://80000hours.org/career-reviews/ai-safety-researcher/?utm_campaign=podcast__80kah&amp;utm_source=80kah&amp;utm_medium=podcast"><strong>Here’s the original piece if you’d like to learn more</strong>.</a></p><p>You might also want to check out:</p><ul><li>The <a href="https://forum.effectivealtruism.org/posts/pbiGHk6AjRxdBPoD8/ai-safety-starter-pack">AI safety starter pack</a></li><li>Our <a href="https://80000hours.org/problem-profiles/positively-shaping-artificial-intelligence/">problem profile on AI risk</a></li><li><a href="https://80000hours.org/podcast/on-artificial-intelligence/"><em>The 80,000 Hours Podcast on Artificial Intelligence</em></a> (a collection of 10 key AI episodes from our podcast)</li></ul><p><em>Edited by Simon Monsour.</em></p>]]>
      </content:encoded>
      <pubDate>Tue, 12 Dec 2023 15:25:46 +0000</pubDate>
      <author>The 80,000 Hours team</author>
      <enclosure url="https://media.transistor.fm/84757b29/b4ecd9bc.mp3" length="49370644" type="audio/mpeg"/>
      <itunes:author>The 80,000 Hours team</itunes:author>
      <itunes:image href="https://img.transistor.fm/Em0laayxzmAgrxnKeGn-v1vrUBw9r8he5SoBmFnGtu8/rs:fill:0:0:1/w:3000/h:3000/q:60/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lcGlz/b2RlLzE2Mzc4MjYv/MTcwMjA5NDMzMi1h/cnR3b3JrLmpwZw.jpg"/>
      <itunes:duration>3084</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode of 80k After Hours, Benjamin Hilton reads his AI safety technical research career review.</p><p><a href="https://80000hours.org/career-reviews/ai-safety-researcher/?utm_campaign=podcast__80kah&amp;utm_source=80kah&amp;utm_medium=podcast"><strong>Here’s the original piece if you’d like to learn more</strong>.</a></p><p>You might also want to check out:</p><ul><li>The <a href="https://forum.effectivealtruism.org/posts/pbiGHk6AjRxdBPoD8/ai-safety-starter-pack">AI safety starter pack</a></li><li>Our <a href="https://80000hours.org/problem-profiles/positively-shaping-artificial-intelligence/">problem profile on AI risk</a></li><li><a href="https://80000hours.org/podcast/on-artificial-intelligence/"><em>The 80,000 Hours Podcast on Artificial Intelligence</em></a> (a collection of 10 key AI episodes from our podcast)</li></ul><p><em>Edited by Simon Monsour.</em></p>]]>
      </itunes:summary>
      <itunes:keywords>effective altruism, 80000 hours, impactful careers, existential risk</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:chapters url="https://share.transistor.fm/s/84757b29/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Highlights: #147 – Spencer Greenberg on stopping valueless papers from getting into top journals</title>
      <itunes:title>Highlights: #147 – Spencer Greenberg on stopping valueless papers from getting into top journals</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">e0414029-3384-487f-a855-766243c757de</guid>
      <link>https://80000hours.org/podcast/episodes/spencer-greenberg-stopping-valueless-papers/</link>
      <description>
        <![CDATA[<p>This is a selection of highlights from episode #147 of <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/spencer-greenberg-stopping-valueless-papers/"><strong>Spencer Greenberg on stopping valueless papers from getting into top journals</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.</p><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This is a selection of highlights from episode #147 of <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/spencer-greenberg-stopping-valueless-papers/"><strong>Spencer Greenberg on stopping valueless papers from getting into top journals</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.</p><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </content:encoded>
      <pubDate>Thu, 07 Dec 2023 01:04:13 +0000</pubDate>
      <author>The 80,000 Hours team</author>
      <enclosure url="https://media.transistor.fm/1e67f2fe/f990db73.mp3" length="18367651" type="audio/mpeg"/>
      <itunes:author>The 80,000 Hours team</itunes:author>
      <itunes:image href="https://img.transistor.fm/rCVCK_skuYkR_sYsP8xT_-jYB00jYzJ9Nq8hyXAFmmM/rs:fill:0:0:1/w:3000/h:3000/q:60/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lcGlz/b2RlLzE2MzQyMDcv/MTcwMTkxMTA1My1h/cnR3b3JrLmpwZw.jpg"/>
      <itunes:duration>1145</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This is a selection of highlights from episode #147 of <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/spencer-greenberg-stopping-valueless-papers/"><strong>Spencer Greenberg on stopping valueless papers from getting into top journals</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.</p><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </itunes:summary>
      <itunes:keywords>effective altruism, 80000 hours, impactful careers, existential risk</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:chapters url="https://share.transistor.fm/s/1e67f2fe/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Benjamin Todd on the history of 80,000 Hours</title>
      <itunes:title>Benjamin Todd on the history of 80,000 Hours</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">7866af73-0930-4baf-a6da-7f8e18e2402f</guid>
      <link>https://80000hours.org/after-hours-podcast/episodes/benjamin-todd-history-80k/?utm_campaign=podcast__benjamin-todd&amp;utm_source=80kah&amp;utm_medium=podcast</link>
      <description>
        <![CDATA[<p>"The very first office we had was just a balcony in an Oxford College dining hall. It was totally open to the dining hall, so every lunch and dinner time it would be super noisy because it'd be like 200 people all eating below us. And then I think we just had a bit where we just didn't have an office, so we worked out of the canteen in the library for at least three months or something. And then it was only after that we moved into this tiny, tiny room at the back of an estate agent off in St Clement’s in Oxford. One of our early donors came and we gave him a tour, and when he came into the office, his first reaction was, 'Is this legal?'" — Benjamin Todd</p><p>In this episode of <em>80k After Hours</em> — recorded in June 2022 — Rob Wiblin and Benjamin Todd discuss the history of 80,000 Hours.</p><p><a href="https://80k.info/bt23"><strong>Links to learn more.</strong></a></p><p>Chapters:<br>• Cold open (00:00:00)<br>• Rob's intro (00:00:44)<br>• Ben's origin story (00:04:07)<br>• The birth of 80k (00:12:23)<br>• The early vision for 80k (00:30:12)<br>• The general vibe in the EA community back then (00:37:35)<br>• How 80k evolved (00:48:46)<br>• Trips to Thailand and China (01:13:00)<br>• Setting up several programmes (01:17:48)<br>• Moving to California (01:24:54)<br>• 80k strengths (01:33:49)<br>• Why Ben left the CEO position (01:39:03)<br>• The future of 80,000 Hours (01:42:06)<br>• Rob's outro (01:45:09)</p><p><strong>Who this episode is for:</strong></p><ul><li>People who work on or plan to work on promoting important ideas in a way that's similar to 80,000 Hours</li><li>People who work at organisations similar to 80,000 Hours</li><li>People who work at 80,000 Hours</li></ul><p><strong>Who this episode isn’t for:</strong></p><ul><li>People who, if asked if they’d like to join a dinner at 80,000 Hours where the team reminisce on the good old days, would say, “Sorry, can’t make it — I’m washing my hair that night”</li></ul><p><em>Producer: Keiran Harris<br>Audio mastering: Ryan Kessler and Ben Cordell</em></p><p>"<a href="https://soundcloud.com/jasonweinberger/gershwin-rhapsody-in-blue"><em>Gershwin - Rhapsody in Blue, original 1924 version</em></a><em>" by Jason Weinberger is licensed under Creative Commons</em></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>"The very first office we had was just a balcony in an Oxford College dining hall. It was totally open to the dining hall, so every lunch and dinner time it would be super noisy because it'd be like 200 people all eating below us. And then I think we just had a bit where we just didn't have an office, so we worked out of the canteen in the library for at least three months or something. And then it was only after that we moved into this tiny, tiny room at the back of an estate agent off in St Clement’s in Oxford. One of our early donors came and we gave him a tour, and when he came into the office, his first reaction was, 'Is this legal?'" — Benjamin Todd</p><p>In this episode of <em>80k After Hours</em> — recorded in June 2022 — Rob Wiblin and Benjamin Todd discuss the history of 80,000 Hours.</p><p><a href="https://80k.info/bt23"><strong>Links to learn more.</strong></a></p><p>Chapters:<br>• Cold open (00:00:00)<br>• Rob's intro (00:00:44)<br>• Ben's origin story (00:04:07)<br>• The birth of 80k (00:12:23)<br>• The early vision for 80k (00:30:12)<br>• The general vibe in the EA community back then (00:37:35)<br>• How 80k evolved (00:48:46)<br>• Trips to Thailand and China (01:13:00)<br>• Setting up several programmes (01:17:48)<br>• Moving to California (01:24:54)<br>• 80k strengths (01:33:49)<br>• Why Ben left the CEO position (01:39:03)<br>• The future of 80,000 Hours (01:42:06)<br>• Rob's outro (01:45:09)</p><p><strong>Who this episode is for:</strong></p><ul><li>People who work on or plan to work on promoting important ideas in a way that's similar to 80,000 Hours</li><li>People who work at organisations similar to 80,000 Hours</li><li>People who work at 80,000 Hours</li></ul><p><strong>Who this episode isn’t for:</strong></p><ul><li>People who, if asked if they’d like to join a dinner at 80,000 Hours where the team reminisce on the good old days, would say, “Sorry, can’t make it — I’m washing my hair that night”</li></ul><p><em>Producer: Keiran Harris<br>Audio mastering: Ryan Kessler and Ben Cordell</em></p><p>"<a href="https://soundcloud.com/jasonweinberger/gershwin-rhapsody-in-blue"><em>Gershwin - Rhapsody in Blue, original 1924 version</em></a><em>" by Jason Weinberger is licensed under Creative Commons</em></p>]]>
      </content:encoded>
      <pubDate>Fri, 01 Dec 2023 21:22:37 +0000</pubDate>
      <author>The 80,000 Hours team</author>
      <enclosure url="https://media.transistor.fm/bd4a6037/e25cea83.mp3" length="106501430" type="audio/mpeg"/>
      <itunes:author>The 80,000 Hours team</itunes:author>
      <itunes:image href="https://img.transistor.fm/JJP91_Zv13n-K7ENvOeLIwa_i8E8_MzeRVU6LsIXVAk/rs:fill:0:0:1/w:3000/h:3000/q:60/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lcGlz/b2RlLzE2MjAwNTUv/MTcwMTM3NTMwMi1h/cnR3b3JrLmpwZw.jpg"/>
      <itunes:duration>6654</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>"The very first office we had was just a balcony in an Oxford College dining hall. It was totally open to the dining hall, so every lunch and dinner time it would be super noisy because it'd be like 200 people all eating below us. And then I think we just had a bit where we just didn't have an office, so we worked out of the canteen in the library for at least three months or something. And then it was only after that we moved into this tiny, tiny room at the back of an estate agent off in St Clement’s in Oxford. One of our early donors came and we gave him a tour, and when he came into the office, his first reaction was, 'Is this legal?'" — Benjamin Todd</p><p>In this episode of <em>80k After Hours</em> — recorded in June 2022 — Rob Wiblin and Benjamin Todd discuss the history of 80,000 Hours.</p><p><a href="https://80k.info/bt23"><strong>Links to learn more.</strong></a></p><p>Chapters:<br>• Cold open (00:00:00)<br>• Rob's intro (00:00:44)<br>• Ben's origin story (00:04:07)<br>• The birth of 80k (00:12:23)<br>• The early vision for 80k (00:30:12)<br>• The general vibe in the EA community back then (00:37:35)<br>• How 80k evolved (00:48:46)<br>• Trips to Thailand and China (01:13:00)<br>• Setting up several programmes (01:17:48)<br>• Moving to California (01:24:54)<br>• 80k strengths (01:33:49)<br>• Why Ben left the CEO position (01:39:03)<br>• The future of 80,000 Hours (01:42:06)<br>• Rob's outro (01:45:09)</p><p><strong>Who this episode is for:</strong></p><ul><li>People who work on or plan to work on promoting important ideas in a way that's similar to 80,000 Hours</li><li>People who work at organisations similar to 80,000 Hours</li><li>People who work at 80,000 Hours</li></ul><p><strong>Who this episode isn’t for:</strong></p><ul><li>People who, if asked if they’d like to join a dinner at 80,000 Hours where the team reminisce on the good old days, would say, “Sorry, can’t make it — I’m washing my hair that night”</li></ul><p><em>Producer: Keiran Harris<br>Audio mastering: Ryan Kessler and Ben Cordell</em></p><p>"<a href="https://soundcloud.com/jasonweinberger/gershwin-rhapsody-in-blue"><em>Gershwin - Rhapsody in Blue, original 1924 version</em></a><em>" by Jason Weinberger is licensed under Creative Commons</em></p>]]>
      </itunes:summary>
      <itunes:keywords>effective altruism, 80000 hours, impactful careers, existential risk</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/bd4a6037/transcript.txt" type="text/plain"/>
      <podcast:chapters url="https://share.transistor.fm/s/bd4a6037/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Highlights: #172 – Bryan Caplan on why you should stop reading the news</title>
      <itunes:title>Highlights: #172 – Bryan Caplan on why you should stop reading the news</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">48d52837-9e16-426c-849e-0e7ebf776eb8</guid>
      <link>https://80000hours.org/podcast/episodes/bryan-caplan-stop-reading-the-news/</link>
      <description>
        <![CDATA[<p>This is a selection of highlights from episode #172 of <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/bryan-caplan-stop-reading-the-news/"><strong>Bryan Caplan on why you should stop reading the news</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.</p><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This is a selection of highlights from episode #172 of <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/bryan-caplan-stop-reading-the-news/"><strong>Bryan Caplan on why you should stop reading the news</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.</p><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </content:encoded>
      <pubDate>Thu, 30 Nov 2023 23:33:00 +0000</pubDate>
      <author>The 80,000 Hours team</author>
      <enclosure url="https://media.transistor.fm/067b42e8/03f70a6c.mp3" length="25672491" type="audio/mpeg"/>
      <itunes:author>The 80,000 Hours team</itunes:author>
      <itunes:image href="https://img.transistor.fm/cHtYRUFTxt_8Q568ezoXJNlGQEbFYLm-hDSW8ZDjauI/rs:fill:0:0:1/w:3000/h:3000/q:60/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lcGlz/b2RlLzE2MjAyOTgv/MTcwMTM4NzEzMi1h/cnR3b3JrLmpwZw.jpg"/>
      <itunes:duration>1602</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This is a selection of highlights from episode #172 of <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/bryan-caplan-stop-reading-the-news/"><strong>Bryan Caplan on why you should stop reading the news</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.</p><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </itunes:summary>
      <itunes:keywords>effective altruism, 80000 hours, impactful careers, existential risk</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/067b42e8/transcript.txt" type="text/plain"/>
      <podcast:chapters url="https://share.transistor.fm/s/067b42e8/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Highlights: #148 – Johannes Ackva on unfashionable climate interventions that work, and fashionable ones that don’t</title>
      <itunes:title>Highlights: #148 – Johannes Ackva on unfashionable climate interventions that work, and fashionable ones that don’t</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">b8ff2314-7efd-4000-9fbf-881ec38336d9</guid>
      <link>https://80000hours.org/podcast/episodes/johannes-ackva-unfashionable-climate-interventions/</link>
      <description>
        <![CDATA[<p>This is a selection of highlights from episode #148 of <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/johannes-ackva-unfashionable-climate-interventions/"><strong>Johannes Ackva on unfashionable climate interventions that work, and fashionable ones that don’t</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.</p><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This is a selection of highlights from episode #148 of <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/johannes-ackva-unfashionable-climate-interventions/"><strong>Johannes Ackva on unfashionable climate interventions that work, and fashionable ones that don’t</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.</p><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </content:encoded>
      <pubDate>Mon, 27 Nov 2023 19:20:36 +0000</pubDate>
      <author>The 80,000 Hours team</author>
      <enclosure url="https://media.transistor.fm/b708ca6a/c9c01b5e.mp3" length="12789248" type="audio/mpeg"/>
      <itunes:author>The 80,000 Hours team</itunes:author>
      <itunes:image href="https://img.transistor.fm/Gm1mWcjHaCpPHJpcXcRVcwXHC1b95XrVjlNqEWhiPSQ/rs:fill:0:0:1/w:3000/h:3000/q:60/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lcGlz/b2RlLzE2MTQxNDgv/MTcwMTExMjc3OS1h/cnR3b3JrLmpwZw.jpg"/>
      <itunes:duration>1061</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This is a selection of highlights from episode #148 of <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/johannes-ackva-unfashionable-climate-interventions/"><strong>Johannes Ackva on unfashionable climate interventions that work, and fashionable ones that don’t</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.</p><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </itunes:summary>
      <itunes:keywords>effective altruism, 80000 hours, impactful careers, existential risk</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:chapters url="https://share.transistor.fm/s/b708ca6a/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Highlights: #171 – Alison Young on how top labs have jeopardised public health with repeated biosafety failures</title>
      <itunes:title>Highlights: #171 – Alison Young on how top labs have jeopardised public health with repeated biosafety failures</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">b0969a1b-bb26-4743-a6fe-596668b0dfd4</guid>
      <link>https://80000hours.org/podcast/episodes/alison-young-biosafety-lab-leaks/</link>
      <description>
        <![CDATA[<p>This is a selection of highlights from episode #171 of <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/alison-young-biosafety-lab-leaks/"><strong>Alison Young on how top labs have jeopardised public health with repeated biosafety failures</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.</p><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This is a selection of highlights from episode #171 of <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/alison-young-biosafety-lab-leaks/"><strong>Alison Young on how top labs have jeopardised public health with repeated biosafety failures</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.</p><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </content:encoded>
      <pubDate>Tue, 21 Nov 2023 18:58:56 +0000</pubDate>
      <author>The 80,000 Hours team</author>
      <enclosure url="https://media.transistor.fm/2f8dcd8e/67dd5554.mp3" length="22005907" type="audio/mpeg"/>
      <itunes:author>The 80,000 Hours team</itunes:author>
      <itunes:image href="https://img.transistor.fm/3DO-UnSJ8TVMzCSTR-FEmA0ZfheAfw1gOJFh5ox5pkQ/rs:fill:0:0:1/w:3000/h:3000/q:60/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lcGlz/b2RlLzE2MDc4Mjcv/MTcwMDU5MjA2NC1h/cnR3b3JrLmpwZw.jpg"/>
      <itunes:duration>1374</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This is a selection of highlights from episode #171 of <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/alison-young-biosafety-lab-leaks/"><strong>Alison Young on how top labs have jeopardised public health with repeated biosafety failures</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.</p><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </itunes:summary>
      <itunes:keywords>effective altruism, 80000 hours, impactful careers, existential risk</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:chapters url="https://share.transistor.fm/s/2f8dcd8e/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Highlights: #170 – Santosh Harish on how air pollution is responsible for ~12% of global deaths — and how to get that number down</title>
      <itunes:title>Highlights: #170 – Santosh Harish on how air pollution is responsible for ~12% of global deaths — and how to get that number down</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">1e288c37-e3b0-49b0-9cc1-64aaf8609702</guid>
      <link>https://80000hours.org/podcast/episodes/santosh-harish-air-pollution/</link>
      <description>
        <![CDATA[<p>This is a selection of highlights from episode #170 of <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/santosh-harish-air-pollution/"><strong>Santosh Harish on how air pollution is responsible for ~12% of global deaths — and how to get that number down</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.</p><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This is a selection of highlights from episode #170 of <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/santosh-harish-air-pollution/"><strong>Santosh Harish on how air pollution is responsible for ~12% of global deaths — and how to get that number down</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.</p><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </content:encoded>
      <pubDate>Tue, 14 Nov 2023 20:05:54 +0000</pubDate>
      <author>The 80,000 Hours team</author>
      <enclosure url="https://media.transistor.fm/96560a29/5b68c915.mp3" length="20429093" type="audio/mpeg"/>
      <itunes:author>The 80,000 Hours team</itunes:author>
      <itunes:image href="https://img.transistor.fm/P76Ezx2tblb4X3KQltvzcjHLegurdovrrmLd8idxIcg/rs:fill:0:0:1/w:3000/h:3000/q:60/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lcGlz/b2RlLzE1OTk3Nzcv/MTY5OTk5MjI4Mi1h/cnR3b3JrLmpwZw.jpg"/>
      <itunes:duration>1699</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This is a selection of highlights from episode #170 of <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/santosh-harish-air-pollution/"><strong>Santosh Harish on how air pollution is responsible for ~12% of global deaths — and how to get that number down</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.</p><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </itunes:summary>
      <itunes:keywords>effective altruism, 80000 hours, impactful careers, existential risk</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:chapters url="https://share.transistor.fm/s/96560a29/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Highlights: #169 – Paul Niehaus on whether cash transfers cause economic growth, and keeping theft to acceptable levels</title>
      <itunes:title>Highlights: #169 – Paul Niehaus on whether cash transfers cause economic growth, and keeping theft to acceptable levels</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">9e3a24c9-bcf9-4fbf-a649-f24eed9dbffb</guid>
      <link>https://80000hours.org/podcast/episodes/paul-niehaus-cash-transfers/</link>
      <description>
        <![CDATA[<p>This is a selection of highlights from episode #169 of <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/paul-niehaus-cash-transfers/"><strong>Paul Niehaus on whether cash transfers cause economic growth, and keeping theft to acceptable levels</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.</p><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This is a selection of highlights from episode #169 of <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/paul-niehaus-cash-transfers/"><strong>Paul Niehaus on whether cash transfers cause economic growth, and keeping theft to acceptable levels</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.</p><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </content:encoded>
      <pubDate>Fri, 10 Nov 2023 19:21:41 +0000</pubDate>
      <author>The 80,000 Hours team</author>
      <enclosure url="https://media.transistor.fm/6e1d1ebb/d2296b37.mp3" length="12608937" type="audio/mpeg"/>
      <itunes:author>The 80,000 Hours team</itunes:author>
      <itunes:image href="https://img.transistor.fm/nJGtbGYgfyk35ERxFSSnDK8r6B6Kegov0y9T9oteiGs/rs:fill:0:0:1/w:3000/h:3000/q:60/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lcGlz/b2RlLzE1OTM2ODEv/MTY5OTY0NDAwNi1h/cnR3b3JrLmpwZw.jpg"/>
      <itunes:duration>1047</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This is a selection of highlights from episode #169 of <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/paul-niehaus-cash-transfers/"><strong>Paul Niehaus on whether cash transfers cause economic growth, and keeping theft to acceptable levels</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.</p><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </itunes:summary>
      <itunes:keywords>effective altruism, 80000 hours, impactful careers, existential risk</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:chapters url="https://share.transistor.fm/s/6e1d1ebb/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Highlights: #168 – Ian Morris on whether deep history says we’re heading for an intelligence explosion</title>
      <itunes:title>Highlights: #168 – Ian Morris on whether deep history says we’re heading for an intelligence explosion</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">ef4ec790-c09f-4e81-923b-d2605c9f8ae5</guid>
      <link>https://80000hours.org/podcast/episodes/ian-morris-deep-history-intelligence-explosion/</link>
      <description>
        <![CDATA[<p>This is a selection of highlights from episode #168 of <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/ian-morris-deep-history-intelligence-explosion/"><strong>Ian Morris on whether deep history says we’re heading for an intelligence explosion</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.</p><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This is a selection of highlights from episode #168 of <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/ian-morris-deep-history-intelligence-explosion/"><strong>Ian Morris on whether deep history says we’re heading for an intelligence explosion</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.</p><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </content:encoded>
      <pubDate>Wed, 08 Nov 2023 23:58:31 +0000</pubDate>
      <author>The 80,000 Hours team</author>
      <enclosure url="https://media.transistor.fm/e3332918/f1c0cd70.mp3" length="26953709" type="audio/mpeg"/>
      <itunes:author>The 80,000 Hours team</itunes:author>
      <itunes:image href="https://img.transistor.fm/14yBa2BXcooDDYC2Gky61SVXfgQUzWh6O3tW15_Touw/rs:fill:0:0:1/w:3000/h:3000/q:60/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lcGlz/b2RlLzE1OTA4MzQv/MTY5OTQ4Nzg3Mi1h/cnR3b3JrLmpwZw.jpg"/>
      <itunes:duration>1682</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This is a selection of highlights from episode #168 of <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/ian-morris-deep-history-intelligence-explosion/"><strong>Ian Morris on whether deep history says we’re heading for an intelligence explosion</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.</p><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </itunes:summary>
      <itunes:keywords>effective altruism, 80000 hours, impactful careers, existential risk</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:chapters url="https://share.transistor.fm/s/e3332918/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Highlights: #167 – Seren Kell on the research gaps holding back alternative proteins from mass adoption</title>
      <itunes:title>Highlights: #167 – Seren Kell on the research gaps holding back alternative proteins from mass adoption</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">ad861357-f3a0-4eac-a043-dfb2c2839be3</guid>
      <link>https://80000hours.org/podcast/episodes/seren-kell-alternative-proteins/</link>
      <description>
        <![CDATA[<p>This is a selection of highlights from episode #167 of <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/seren-kell-alternative-proteins/"><strong>Seren Kell on the research gaps holding back alternative proteins from mass adoption</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.</p><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This is a selection of highlights from episode #167 of <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/seren-kell-alternative-proteins/"><strong>Seren Kell on the research gaps holding back alternative proteins from mass adoption</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.</p><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </content:encoded>
      <pubDate>Mon, 06 Nov 2023 20:58:29 +0000</pubDate>
      <author>The 80,000 Hours team</author>
      <enclosure url="https://media.transistor.fm/6b988c81/46e0984f.mp3" length="20274500" type="audio/mpeg"/>
      <itunes:author>The 80,000 Hours team</itunes:author>
      <itunes:image href="https://img.transistor.fm/oSrefPZqC_hqncNP92HYwi2EpteYwn1gSRZqZUZ29GY/rs:fill:0:0:1/w:3000/h:3000/q:60/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lcGlz/b2RlLzE1ODM3NjUv/MTY5OTMwNDIyMi1h/cnR3b3JrLmpwZw.jpg"/>
      <itunes:duration>1265</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This is a selection of highlights from episode #167 of <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/seren-kell-alternative-proteins/"><strong>Seren Kell on the research gaps holding back alternative proteins from mass adoption</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.</p><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </itunes:summary>
      <itunes:keywords>effective altruism, 80000 hours, impactful careers, existential risk</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:chapters url="https://share.transistor.fm/s/6b988c81/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Highlights: #166 – Tantum Collins on what he’s learned as an AI policy insider at the White House, DeepMind and elsewhere</title>
      <itunes:title>Highlights: #166 – Tantum Collins on what he’s learned as an AI policy insider at the White House, DeepMind and elsewhere</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">1409e3da-f71e-4afd-a7f2-13931a59bc60</guid>
      <link>https://80000hours.org/podcast/episodes/tantum-collins-ai-policy-insider/</link>
      <description>
        <![CDATA[<p>This is a selection of highlights from episode #166 of <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/tantum-collins-ai-policy-insider/"><strong>Tantum Collins on what he’s learned as an AI policy insider at the White House, DeepMind and elsewhere</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.</p><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This is a selection of highlights from episode #166 of <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/tantum-collins-ai-policy-insider/"><strong>Tantum Collins on what he’s learned as an AI policy insider at the White House, DeepMind and elsewhere</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.</p><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </content:encoded>
      <pubDate>Sat, 04 Nov 2023 00:45:15 +0000</pubDate>
      <author>The 80,000 Hours team</author>
      <enclosure url="https://media.transistor.fm/91beb3f0/0fa7ca28.mp3" length="23080224" type="audio/mpeg"/>
      <itunes:author>The 80,000 Hours team</itunes:author>
      <itunes:image href="https://img.transistor.fm/FjJh9XIWvcvqUttAvvsTeIE8DEX4rh4vLvl4RIgRxSk/rs:fill:0:0:1/w:3000/h:3000/q:60/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lcGlz/b2RlLzE1ODE0Mzkv/MTY5OTA1ODYxOC1h/cnR3b3JrLmpwZw.jpg"/>
      <itunes:duration>1440</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This is a selection of highlights from episode #166 of <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/tantum-collins-ai-policy-insider/"><strong>Tantum Collins on what he’s learned as an AI policy insider at the White House, DeepMind and elsewhere</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.</p><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </itunes:summary>
      <itunes:keywords>effective altruism, 80000 hours, impactful careers, existential risk</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:chapters url="https://share.transistor.fm/s/91beb3f0/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Highlights: #165 – Anders Sandberg on war in space, whether civilisations age, and the best things possible in our universe</title>
      <itunes:title>Highlights: #165 – Anders Sandberg on war in space, whether civilisations age, and the best things possible in our universe</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">4e3177c0-92f2-4318-bdb5-a6b31a8ad67e</guid>
      <link>https://80000hours.org/podcast/episodes/anders-sandberg-best-things-possible-in-our-universe/</link>
      <description>
        <![CDATA[<p>This is a selection of highlights from episode #165 of <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/anders-sandberg-best-things-possible-in-our-universe/"><strong>Anders Sandberg on war in space, whether civilisations age, and the best things possible in our universe</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.</p><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This is a selection of highlights from episode #165 of <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/anders-sandberg-best-things-possible-in-our-universe/"><strong>Anders Sandberg on war in space, whether civilisations age, and the best things possible in our universe</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.</p><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </content:encoded>
      <pubDate>Wed, 01 Nov 2023 19:25:22 +0000</pubDate>
      <author>The 80,000 Hours team</author>
      <enclosure url="https://media.transistor.fm/c6f80f86/01cd964e.mp3" length="20590779" type="audio/mpeg"/>
      <itunes:author>The 80,000 Hours team</itunes:author>
      <itunes:image href="https://img.transistor.fm/XXqpiHRcnz7UGvfIAyO6QBxRmmAMDUmIGashzNX4fK8/rs:fill:0:0:1/w:3000/h:3000/q:60/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lcGlz/b2RlLzE1NzY3NjMv/MTY5ODg2NjU4My1h/cnR3b3JrLmpwZw.jpg"/>
      <itunes:duration>1711</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This is a selection of highlights from episode #165 of <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/anders-sandberg-best-things-possible-in-our-universe/"><strong>Anders Sandberg on war in space, whether civilisations age, and the best things possible in our universe</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.</p><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </itunes:summary>
      <itunes:keywords>effective altruism, 80000 hours, impactful careers, existential risk</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/c6f80f86/transcript.txt" type="text/plain"/>
      <podcast:chapters url="https://share.transistor.fm/s/c6f80f86/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Highlights: #164 – Kevin Esvelt on cults that want to kill everyone, stealth vs wildfire pandemics, and how he felt inventing gene drives</title>
      <itunes:title>Highlights: #164 – Kevin Esvelt on cults that want to kill everyone, stealth vs wildfire pandemics, and how he felt inventing gene drives</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">ae1e3900-2045-4b6d-ab38-d95003fa48a0</guid>
      <link>https://80000hours.org/podcast/episodes/kevin-esvelt-stealth-wildfire-pandemics/</link>
      <description>
        <![CDATA[<p>This is a selection of highlights from episode #164 of <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/kevin-esvelt-stealth-wildfire-pandemics/"><strong>Kevin Esvelt on cults that want to kill everyone, stealth vs wildfire pandemics, and how he felt inventing gene drives</strong></a></p><p><br>And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.</p><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This is a selection of highlights from episode #164 of <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/kevin-esvelt-stealth-wildfire-pandemics/"><strong>Kevin Esvelt on cults that want to kill everyone, stealth vs wildfire pandemics, and how he felt inventing gene drives</strong></a></p><p><br>And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.</p><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </content:encoded>
      <pubDate>Mon, 30 Oct 2023 22:36:32 +0000</pubDate>
      <author>The 80,000 Hours team</author>
      <enclosure url="https://media.transistor.fm/e40de9f4/f1409dcf.mp3" length="19644291" type="audio/mpeg"/>
      <itunes:author>The 80,000 Hours team</itunes:author>
      <itunes:image href="https://img.transistor.fm/M6uqXwIw0vU1jYFwtjJC3xGLEyyCs62pCbp_I_7FJXk/rs:fill:0:0:1/w:3000/h:3000/q:60/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lcGlz/b2RlLzE1NzQxMzAv/MTY5ODcwNTI4OS1h/cnR3b3JrLmpwZw.jpg"/>
      <itunes:duration>1634</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This is a selection of highlights from episode #164 of <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/kevin-esvelt-stealth-wildfire-pandemics/"><strong>Kevin Esvelt on cults that want to kill everyone, stealth vs wildfire pandemics, and how he felt inventing gene drives</strong></a></p><p><br>And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.</p><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </itunes:summary>
      <itunes:keywords>effective altruism, 80000 hours, impactful careers, existential risk</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:chapters url="https://share.transistor.fm/s/e40de9f4/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Highlights: #163 – Toby Ord on the perils of maximising the good that you do</title>
      <itunes:title>Highlights: #163 – Toby Ord on the perils of maximising the good that you do</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">4ee16229-e41e-4e16-818a-e27e1403855c</guid>
      <link>https://80000hours.org/podcast/episodes/toby-ord-perils-of-maximising-good/</link>
      <description>
        <![CDATA[<p>This is a selection of highlights from episode #163 of <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/toby-ord-perils-of-maximising-good/"><strong>Toby Ord on the perils of maximising the good that you do</strong></a></p><p><br>And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.</p><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This is a selection of highlights from episode #163 of <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/toby-ord-perils-of-maximising-good/"><strong>Toby Ord on the perils of maximising the good that you do</strong></a></p><p><br>And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.</p><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </content:encoded>
      <pubDate>Wed, 25 Oct 2023 22:06:16 +0000</pubDate>
      <author>The 80,000 Hours team</author>
      <enclosure url="https://media.transistor.fm/1f7d841b/aab38d69.mp3" length="19752394" type="audio/mpeg"/>
      <itunes:author>The 80,000 Hours team</itunes:author>
      <itunes:image href="https://img.transistor.fm/NTWFE9_L--SuZzi7wIKStlkRthswxqc98V3BX3RzBfg/rs:fill:0:0:1/w:3000/h:3000/q:60/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lcGlz/b2RlLzE1NjcyMjEv/MTY5ODI3MTU3Ni1h/cnR3b3JrLmpwZw.jpg"/>
      <itunes:duration>1643</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This is a selection of highlights from episode #163 of <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/toby-ord-perils-of-maximising-good/"><strong>Toby Ord on the perils of maximising the good that you do</strong></a></p><p><br>And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.</p><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </itunes:summary>
      <itunes:keywords>effective altruism, 80000 hours, impactful careers, existential risk</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/1f7d841b/transcript.txt" type="text/plain"/>
      <podcast:chapters url="https://share.transistor.fm/s/1f7d841b/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Highlights: #162 – Mustafa Suleyman on getting Washington and Silicon Valley to tame AI</title>
      <itunes:title>Highlights: #162 – Mustafa Suleyman on getting Washington and Silicon Valley to tame AI</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">bd0549e7-6bce-4fe7-b669-62ecee624a67</guid>
      <link>https://80000hours.org/podcast/episodes/mustafa-suleyman-getting-washington-and-silicon-valley-to-tame-ai/</link>
      <description>
        <![CDATA[<p>This is a selection of highlights from episode #162 of <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/mustafa-suleyman-getting-washington-and-silicon-valley-to-tame-ai/"><strong>Mustafa Suleyman on getting Washington and Silicon Valley to tame AI</strong></a></p><p><br>And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.</p><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This is a selection of highlights from episode #162 of <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/mustafa-suleyman-getting-washington-and-silicon-valley-to-tame-ai/"><strong>Mustafa Suleyman on getting Washington and Silicon Valley to tame AI</strong></a></p><p><br>And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.</p><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </content:encoded>
      <pubDate>Tue, 24 Oct 2023 01:50:38 +0000</pubDate>
      <author>The 80,000 Hours team</author>
      <enclosure url="https://media.transistor.fm/cb12b0d7/5bd052a2.mp3" length="9864940" type="audio/mpeg"/>
      <itunes:author>The 80,000 Hours team</itunes:author>
      <itunes:image href="https://img.transistor.fm/JuzDtAyvWvV3jseP3_y-R7atEJA6B5CtKozwgr2ySls/rs:fill:0:0:1/w:3000/h:3000/q:60/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lcGlz/b2RlLzE1NjExNDMv/MTY5ODExMjE0OS1h/cnR3b3JrLmpwZw.jpg"/>
      <itunes:duration>817</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This is a selection of highlights from episode #162 of <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/mustafa-suleyman-getting-washington-and-silicon-valley-to-tame-ai/"><strong>Mustafa Suleyman on getting Washington and Silicon Valley to tame AI</strong></a></p><p><br>And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.</p><p><em>Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong</em></p>]]>
      </itunes:summary>
      <itunes:keywords>effective altruism, 80000 hours, impactful careers, existential risk</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:chapters url="https://share.transistor.fm/s/cb12b0d7/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Highlights: #161 – Michael Webb on whether AI will soon cause job loss, lower incomes, and higher inequality — or the opposite</title>
      <itunes:title>Highlights: #161 – Michael Webb on whether AI will soon cause job loss, lower incomes, and higher inequality — or the opposite</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">03be7955-cdd0-4eba-932a-fbf45596e40c</guid>
      <link>https://80000hours.org/podcast/episodes/michael-webb-ai-jobs-labour-market/</link>
      <description>
        <![CDATA[<p>This is a selection of highlights from episode #161 of <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/michael-webb-ai-jobs-labour-market/"><strong>Michael Webb on whether AI will soon cause job loss, lower incomes, and higher inequality — or the opposite</strong></a></p><p><br>And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.</p><p><em>Highlights put together by Simon Monsour and Milo McGuire</em></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This is a selection of highlights from episode #161 of <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/michael-webb-ai-jobs-labour-market/"><strong>Michael Webb on whether AI will soon cause job loss, lower incomes, and higher inequality — or the opposite</strong></a></p><p><br>And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.</p><p><em>Highlights put together by Simon Monsour and Milo McGuire</em></p>]]>
      </content:encoded>
      <pubDate>Thu, 19 Oct 2023 17:48:51 +0000</pubDate>
      <author>The 80,000 Hours team</author>
      <enclosure url="https://media.transistor.fm/796e4bbd/fcd83d0e.mp3" length="22825880" type="audio/mpeg"/>
      <itunes:author>The 80,000 Hours team</itunes:author>
      <itunes:image href="https://img.transistor.fm/KFPktQvFFviOYpy6p8Cx-YvC_Qg50DMHTDwAg6hLZr4/rs:fill:0:0:1/w:3000/h:3000/q:60/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lcGlz/b2RlLzE1NTQ1MDcv/MTY5NzczNzY1MS1h/cnR3b3JrLmpwZw.jpg"/>
      <itunes:duration>1899</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This is a selection of highlights from episode #161 of <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/michael-webb-ai-jobs-labour-market/"><strong>Michael Webb on whether AI will soon cause job loss, lower incomes, and higher inequality — or the opposite</strong></a></p><p><br>And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.</p><p><em>Highlights put together by Simon Monsour and Milo McGuire</em></p>]]>
      </itunes:summary>
      <itunes:keywords>effective altruism, 80000 hours, impactful careers, existential risk</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:chapters url="https://share.transistor.fm/s/796e4bbd/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Highlights: #160 – Hannah Ritchie on why it makes sense to be optimistic about the environment</title>
      <itunes:title>Highlights: #160 – Hannah Ritchie on why it makes sense to be optimistic about the environment</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">5a008c1b-c176-4370-8f74-fdd2bd37b292</guid>
      <link>https://80000hours.org/podcast/episodes/hannah-ritchie-environmental-optimism/</link>
      <description>
        <![CDATA[<p>This is a selection of highlights from episode #160 of <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/hannah-ritchie-environmental-optimism/"><strong>Hannah Ritchie on why it makes sense to be optimistic about the environment</strong></a></p><p><br>And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.</p><p><em>Highlights put together by Simon Monsour and Milo McGuire</em></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This is a selection of highlights from episode #160 of <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/hannah-ritchie-environmental-optimism/"><strong>Hannah Ritchie on why it makes sense to be optimistic about the environment</strong></a></p><p><br>And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.</p><p><em>Highlights put together by Simon Monsour and Milo McGuire</em></p>]]>
      </content:encoded>
      <pubDate>Tue, 17 Oct 2023 20:40:00 +0000</pubDate>
      <author>The 80,000 Hours team</author>
      <enclosure url="https://media.transistor.fm/a6db4aee/3b309a5d.mp3" length="13777273" type="audio/mpeg"/>
      <itunes:author>The 80,000 Hours team</itunes:author>
      <itunes:image href="https://img.transistor.fm/zw7ZaD_ekE5JwAP2ekCBXteZvPyeCMEXcCj23FlRN7s/rs:fill:0:0:1/w:3000/h:3000/q:60/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lcGlz/b2RlLzE1NTA0NDMv/MTY5NzU3NTEzMS1h/cnR3b3JrLmpwZw.jpg"/>
      <itunes:duration>1145</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This is a selection of highlights from episode #160 of <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/hannah-ritchie-environmental-optimism/"><strong>Hannah Ritchie on why it makes sense to be optimistic about the environment</strong></a></p><p><br>And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.</p><p><em>Highlights put together by Simon Monsour and Milo McGuire</em></p>]]>
      </itunes:summary>
      <itunes:keywords>effective altruism, 80000 hours, impactful careers, existential risk</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/a6db4aee/transcript.txt" type="text/plain"/>
      <podcast:chapters url="https://share.transistor.fm/s/a6db4aee/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Highlights: #159 – Jan Leike on OpenAI’s massive push to make superintelligence safe in 4 years or less</title>
      <itunes:title>Highlights: #159 – Jan Leike on OpenAI’s massive push to make superintelligence safe in 4 years or less</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">f88a77d3-a946-43f9-8ecf-b44d4db38d1a</guid>
      <link>https://80000hours.org/podcast/episodes/jan-leike-superalignment/</link>
      <description>
        <![CDATA[<p>This is a selection of highlights from episode #159 of <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/jan-leike-superalignment/"><strong>Jan Leike on OpenAI’s massive push to make superintelligence safe in 4 years or less</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.</p><p><em>Highlights put together by Simon Monsour and Milo McGuire</em></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This is a selection of highlights from episode #159 of <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/jan-leike-superalignment/"><strong>Jan Leike on OpenAI’s massive push to make superintelligence safe in 4 years or less</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.</p><p><em>Highlights put together by Simon Monsour and Milo McGuire</em></p>]]>
      </content:encoded>
      <pubDate>Mon, 09 Oct 2023 21:27:18 +0000</pubDate>
      <author>The 80,000 Hours team</author>
      <enclosure url="https://media.transistor.fm/2430b582/9a7d06f8.mp3" length="21159573" type="audio/mpeg"/>
      <itunes:author>The 80,000 Hours team</itunes:author>
      <itunes:image href="https://img.transistor.fm/S-VYanLmaRcyq3iLFefgLefk14YRegfv_w018D3VpHE/rs:fill:0:0:1/w:3000/h:3000/q:60/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lcGlz/b2RlLzE1Mzk5MjAv/MTY5Njg4NjA4MC1h/cnR3b3JrLmpwZw.jpg"/>
      <itunes:duration>1757</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This is a selection of highlights from episode #159 of <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/jan-leike-superalignment/"><strong>Jan Leike on OpenAI’s massive push to make superintelligence safe in 4 years or less</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.</p><p><em>Highlights put together by Simon Monsour and Milo McGuire</em></p>]]>
      </itunes:summary>
      <itunes:keywords>effective altruism, 80000 hours, impactful careers, existential risk</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:chapters url="https://share.transistor.fm/s/2430b582/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Highlights: #158 – Holden Karnofsky on how AIs might take over even if they’re no smarter than humans, and his four-part playbook for AI risk</title>
      <itunes:title>Highlights: #158 – Holden Karnofsky on how AIs might take over even if they’re no smarter than humans, and his four-part playbook for AI risk</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">36a334dc-446f-4bcc-9d79-25e221ddc472</guid>
      <link>https://80000hours.org/podcast/episodes/holden-karnofsky-how-ai-could-take-over-the-world/</link>
      <description>
        <![CDATA[<p>This is a selection of highlights from episode #158 of <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/holden-karnofsky-how-ai-could-take-over-the-world/"><strong>Holden Karnofsky on how AIs might take over even if they’re no smarter than humans, and his four-part playbook for AI risk</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.</p><p><em>Highlights put together by Simon Monsour and Milo McGuire</em></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This is a selection of highlights from episode #158 of <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/holden-karnofsky-how-ai-could-take-over-the-world/"><strong>Holden Karnofsky on how AIs might take over even if they’re no smarter than humans, and his four-part playbook for AI risk</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.</p><p><em>Highlights put together by Simon Monsour and Milo McGuire</em></p>]]>
      </content:encoded>
      <pubDate>Fri, 06 Oct 2023 20:45:45 +0000</pubDate>
      <author>The 80,000 Hours team</author>
      <enclosure url="https://media.transistor.fm/2f005dfd/c9dfb9bc.mp3" length="17331648" type="audio/mpeg"/>
      <itunes:author>The 80,000 Hours team</itunes:author>
      <itunes:image href="https://img.transistor.fm/wkm6kjqrlLERXgonEYGjlscFQzIzGaPQc_lFqHhJebo/rs:fill:0:0:1/w:3000/h:3000/q:60/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lcGlz/b2RlLzE1MzY4OTkv/MTY5NjYyNTEwMS1h/cnR3b3JrLmpwZw.jpg"/>
      <itunes:duration>1439</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This is a selection of highlights from episode #158 of <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/holden-karnofsky-how-ai-could-take-over-the-world/"><strong>Holden Karnofsky on how AIs might take over even if they’re no smarter than humans, and his four-part playbook for AI risk</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.</p><p><em>Highlights put together by Simon Monsour and Milo McGuire</em></p>]]>
      </itunes:summary>
      <itunes:keywords>effective altruism, 80000 hours, impactful careers, existential risk</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:chapters url="https://share.transistor.fm/s/2f005dfd/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Highlights: #157 – Ezra Klein on existential risk from AI and what DC could do about it</title>
      <itunes:title>Highlights: #157 – Ezra Klein on existential risk from AI and what DC could do about it</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">76a5be28-02fb-40bd-a49d-09915ef326d2</guid>
      <link>https://80000hours.org/podcast/episodes/ezra-klein-ai-and-dc/</link>
      <description>
        <![CDATA[<p>This is a selection of highlights from episode #157 of <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/ezra-klein-ai-and-dc/"><strong>Ezra Klein on existential risk from AI and what DC could do about it</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.</p><p><em>Highlights put together by Simon Monsour and Milo McGuire</em></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This is a selection of highlights from episode #157 of <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/ezra-klein-ai-and-dc/"><strong>Ezra Klein on existential risk from AI and what DC could do about it</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.</p><p><em>Highlights put together by Simon Monsour and Milo McGuire</em></p>]]>
      </content:encoded>
      <pubDate>Tue, 26 Sep 2023 19:35:54 +0000</pubDate>
      <author>The 80,000 Hours team</author>
      <enclosure url="https://media.transistor.fm/d87ebc11/e5a63255.mp3" length="14331850" type="audio/mpeg"/>
      <itunes:author>The 80,000 Hours team</itunes:author>
      <itunes:image href="https://img.transistor.fm/nE8FVM2D8qpL2rlN0CQnyposqHshyvCks6OgKxG-jYY/rs:fill:0:0:1/w:3000/h:3000/q:60/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lcGlz/b2RlLzE1MTk1MDIv/MTY5NTY3NzAyNC1h/cnR3b3JrLmpwZw.jpg"/>
      <itunes:duration>1191</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This is a selection of highlights from episode #157 of <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/ezra-klein-ai-and-dc/"><strong>Ezra Klein on existential risk from AI and what DC could do about it</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.</p><p><em>Highlights put together by Simon Monsour and Milo McGuire</em></p>]]>
      </itunes:summary>
      <itunes:keywords>effective altruism, 80000 hours, impactful careers, existential risk</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/d87ebc11/transcript.txt" type="text/plain"/>
      <podcast:chapters url="https://share.transistor.fm/s/d87ebc11/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Highlights: #156 – Markus Anderljung on how to regulate cutting-edge AI models</title>
      <itunes:title>Highlights: #156 – Markus Anderljung on how to regulate cutting-edge AI models</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">bf5572d1-67ef-4128-a2c5-52152f44e3b5</guid>
      <link>https://80000hours.org/podcast/episodes/markus-anderljung-regulating-cutting-edge-ai/</link>
      <description>
        <![CDATA[<p>This is a selection of highlights from episode #156 of <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/markus-anderljung-regulating-cutting-edge-ai/"><strong>Markus Anderljung on how to regulate cutting-edge AI models</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.</p><p><em>Highlights put together by Simon Monsour and Milo McGuire</em></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This is a selection of highlights from episode #156 of <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/markus-anderljung-regulating-cutting-edge-ai/"><strong>Markus Anderljung on how to regulate cutting-edge AI models</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.</p><p><em>Highlights put together by Simon Monsour and Milo McGuire</em></p>]]>
      </content:encoded>
      <pubDate>Fri, 22 Sep 2023 19:06:08 +0000</pubDate>
      <author>The 80,000 Hours team</author>
      <enclosure url="https://media.transistor.fm/da534d94/f8c4eb03.mp3" length="16224878" type="audio/mpeg"/>
      <itunes:author>The 80,000 Hours team</itunes:author>
      <itunes:image href="https://img.transistor.fm/LbTyKY8pTxBWyWbJB_UrFAckCtMdw5UAw_3_9phoNng/rs:fill:0:0:1/w:3000/h:3000/q:60/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lcGlz/b2RlLzE1MTYyNDAv/MTY5NTQwOTQ4OC1h/cnR3b3JrLmpwZw.jpg"/>
      <itunes:duration>1348</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This is a selection of highlights from episode #156 of <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/markus-anderljung-regulating-cutting-edge-ai/"><strong>Markus Anderljung on how to regulate cutting-edge AI models</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.</p><p><em>Highlights put together by Simon Monsour and Milo McGuire</em></p>]]>
      </itunes:summary>
      <itunes:keywords>effective altruism, 80000 hours, impactful careers, existential risk</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:chapters url="https://share.transistor.fm/s/da534d94/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Highlights: #155 – Lennart Heim on the compute governance era and what has to come after</title>
      <itunes:title>Highlights: #155 – Lennart Heim on the compute governance era and what has to come after</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">42e822a1-9977-4a15-b428-d2db0a785af6</guid>
      <link>https://80000hours.org/podcast/episodes/lennart-heim-compute-governance/</link>
      <description>
        <![CDATA[<p>This is a selection of highlights from episode #155 of <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/lennart-heim-compute-governance/"><strong>Lennart Heim on the compute governance era and what has to come after</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.</p><p><em>Highlights put together by Simon Monsour and Milo McGuire</em></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This is a selection of highlights from episode #155 of <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/lennart-heim-compute-governance/"><strong>Lennart Heim on the compute governance era and what has to come after</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.</p><p><em>Highlights put together by Simon Monsour and Milo McGuire</em></p>]]>
      </content:encoded>
      <pubDate>Fri, 15 Sep 2023 19:20:51 +0000</pubDate>
      <author>The 80,000 Hours team</author>
      <enclosure url="https://media.transistor.fm/c81b99d7/dd03ef2e.mp3" length="15566847" type="audio/mpeg"/>
      <itunes:author>The 80,000 Hours team</itunes:author>
      <itunes:image href="https://img.transistor.fm/xYO6Nsp-o7wpB5qe3aiOL1Ql4FikFctYHlLDEK32rmU/rs:fill:0:0:1/w:3000/h:3000/q:60/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lcGlz/b2RlLzE1MDU5OTYv/MTY5NDgwNTU3NS1h/cnR3b3JrLmpwZw.jpg"/>
      <itunes:duration>1293</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This is a selection of highlights from episode #155 of <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/lennart-heim-compute-governance/"><strong>Lennart Heim on the compute governance era and what has to come after</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.</p><p><em>Highlights put together by Simon Monsour and Milo McGuire</em></p>]]>
      </itunes:summary>
      <itunes:keywords>effective altruism, 80000 hours, impactful careers, existential risk</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:chapters url="https://share.transistor.fm/s/c81b99d7/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Highlights: #154 – Rohin Shah on DeepMind and trying to fairly hear out both AI doomers and doubters</title>
      <itunes:title>Highlights: #154 – Rohin Shah on DeepMind and trying to fairly hear out both AI doomers and doubters</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">dbb426d3-303e-49cc-8ce1-71305475cc11</guid>
      <link>https://80000hours.org/podcast/episodes/rohin-shah-deepmind-doomers-and-doubters/</link>
      <description>
        <![CDATA[<p>This is a selection of highlights from episode #154 of <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/rohin-shah-deepmind-doomers-and-doubters/"><strong>Rohin Shah on DeepMind and trying to fairly hear out both AI doomers and doubters</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.</p><p><em>Highlights put together by Simon Monsour and Milo McGuire</em></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This is a selection of highlights from episode #154 of <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/rohin-shah-deepmind-doomers-and-doubters/"><strong>Rohin Shah on DeepMind and trying to fairly hear out both AI doomers and doubters</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.</p><p><em>Highlights put together by Simon Monsour and Milo McGuire</em></p>]]>
      </content:encoded>
      <pubDate>Tue, 12 Sep 2023 00:45:29 +0000</pubDate>
      <author>The 80,000 Hours team</author>
      <enclosure url="https://media.transistor.fm/f7a8545d/11048518.mp3" length="16503212" type="audio/mpeg"/>
      <itunes:author>The 80,000 Hours team</itunes:author>
      <itunes:image href="https://img.transistor.fm/Hvl28bNwr3Cp93v_VGR1yyM8iurmyZZ_a7Ej8EC-NQY/rs:fill:0:0:1/w:3000/h:3000/q:60/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lcGlz/b2RlLzE1MDA3ODEv/MTY5NDQ3OTQ3My1h/cnR3b3JrLmpwZw.jpg"/>
      <itunes:duration>1365</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This is a selection of highlights from episode #154 of <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/rohin-shah-deepmind-doomers-and-doubters/"><strong>Rohin Shah on DeepMind and trying to fairly hear out both AI doomers and doubters</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.</p><p><em>Highlights put together by Simon Monsour and Milo McGuire</em></p>]]>
      </itunes:summary>
      <itunes:keywords>effective altruism, 80000 hours, impactful careers, existential risk</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:chapters url="https://share.transistor.fm/s/f7a8545d/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Alex Lawsen on avoiding 10 mistakes people make when pursuing a high-impact career</title>
      <itunes:title>Alex Lawsen on avoiding 10 mistakes people make when pursuing a high-impact career</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">cd23fd78-a344-4392-8fb9-6423a5b1c385</guid>
      <link>https://80000hours.org/after-hours-podcast/episodes/alex-lawsen-10-career-mistakes/?utm_campaign=podcast__alex-lawsen-2&amp;utm_source=80kah&amp;utm_medium=podcast</link>
      <description>
        <![CDATA[<p>In this episode of <em>80k After Hours</em>, Luisa Rodriguez and Alex Lawsen discuss common mistakes people make when trying to do good with their careers, and advice on how to avoid them.</p><p><a href="https://80k.info/al23"><strong>Links to learn more, highlights and full transcript.</strong></a></p><p>They cover:</p><ul><li>Taking 80,000 Hours’ rankings too seriously</li><li>Not trying hard enough to fail</li><li>Feeling like you need to optimise for having the most impact *now*</li><li>Feeling like you need to work directly on AI immediately</li><li>Not taking a role because you think you’ll be replaceable</li><li>Constantly considering other career options</li><li>Overthinking or over-optimising career choices</li><li>Being unwilling to think things through for yourself</li><li>Ignoring conventional career wisdom</li><li>Doing community work even if you’re not suited to it</li></ul><p><br><strong>Who this episode is for:</strong></p><ul><li>People who want to pursue a high-impact career</li><li>People wondering how much AI progress should change their plans</li><li>People who take 80,000 Hours’ career advice seriously</li></ul><p><br><strong>Who this episode isn’t for:</strong></p><ul><li>People not taking 80k's career advice seriously enough</li><li>People who've never made any career mistakes</li><li>People who don't want to hear Alex say "I said a bunch of stuff, maybe some of it's true" every time he's on the podcast</li></ul><p><strong>Get this episode by subscribing to our more experimental podcast on the world’s most pressing problems and how to solve them: type 80k After Hours into your podcasting app.</strong></p><p><em>Producer and editor: Keiran Harris<br>Audio Engineering Lead: Ben Cordell<br>Technical editing: Milo McGuire and Dominic Armstrong<br>Additional content editing: Luisa Rodriguez and Katy Moore<br>Transcriptions: Katy Moore</em></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode of <em>80k After Hours</em>, Luisa Rodriguez and Alex Lawsen discuss common mistakes people make when trying to do good with their careers, and advice on how to avoid them.</p><p><a href="https://80k.info/al23"><strong>Links to learn more, highlights and full transcript.</strong></a></p><p>They cover:</p><ul><li>Taking 80,000 Hours’ rankings too seriously</li><li>Not trying hard enough to fail</li><li>Feeling like you need to optimise for having the most impact *now*</li><li>Feeling like you need to work directly on AI immediately</li><li>Not taking a role because you think you’ll be replaceable</li><li>Constantly considering other career options</li><li>Overthinking or over-optimising career choices</li><li>Being unwilling to think things through for yourself</li><li>Ignoring conventional career wisdom</li><li>Doing community work even if you’re not suited to it</li></ul><p><br><strong>Who this episode is for:</strong></p><ul><li>People who want to pursue a high-impact career</li><li>People wondering how much AI progress should change their plans</li><li>People who take 80,000 Hours’ career advice seriously</li></ul><p><br><strong>Who this episode isn’t for:</strong></p><ul><li>People not taking 80k's career advice seriously enough</li><li>People who've never made any career mistakes</li><li>People who don't want to hear Alex say "I said a bunch of stuff, maybe some of it's true" every time he's on the podcast</li></ul><p><strong>Get this episode by subscribing to our more experimental podcast on the world’s most pressing problems and how to solve them: type 80k After Hours into your podcasting app.</strong></p><p><em>Producer and editor: Keiran Harris<br>Audio Engineering Lead: Ben Cordell<br>Technical editing: Milo McGuire and Dominic Armstrong<br>Additional content editing: Luisa Rodriguez and Katy Moore<br>Transcriptions: Katy Moore</em></p>]]>
      </content:encoded>
      <pubDate>Wed, 06 Sep 2023 23:27:30 +0000</pubDate>
      <author>The 80,000 Hours team</author>
      <enclosure url="https://media.transistor.fm/f7e27390/e8c38acf.mp3" length="66072778" type="audio/mpeg"/>
      <itunes:author>The 80,000 Hours team</itunes:author>
      <itunes:image href="https://img.transistor.fm/I5aVo84YPU6c_fju-ok8RHV-PeVewFPJIQRBlJwJVjU/rs:fill:0:0:1/w:3000/h:3000/q:60/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lcGlz/b2RlLzE0OTI5MjAv/MTY5NDA0MTk3OC1h/cnR3b3JrLmpwZw.jpg"/>
      <itunes:duration>8256</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode of <em>80k After Hours</em>, Luisa Rodriguez and Alex Lawsen discuss common mistakes people make when trying to do good with their careers, and advice on how to avoid them.</p><p><a href="https://80k.info/al23"><strong>Links to learn more, highlights and full transcript.</strong></a></p><p>They cover:</p><ul><li>Taking 80,000 Hours’ rankings too seriously</li><li>Not trying hard enough to fail</li><li>Feeling like you need to optimise for having the most impact *now*</li><li>Feeling like you need to work directly on AI immediately</li><li>Not taking a role because you think you’ll be replaceable</li><li>Constantly considering other career options</li><li>Overthinking or over-optimising career choices</li><li>Being unwilling to think things through for yourself</li><li>Ignoring conventional career wisdom</li><li>Doing community work even if you’re not suited to it</li></ul><p><br><strong>Who this episode is for:</strong></p><ul><li>People who want to pursue a high-impact career</li><li>People wondering how much AI progress should change their plans</li><li>People who take 80,000 Hours’ career advice seriously</li></ul><p><br><strong>Who this episode isn’t for:</strong></p><ul><li>People not taking 80k's career advice seriously enough</li><li>People who've never made any career mistakes</li><li>People who don't want to hear Alex say "I said a bunch of stuff, maybe some of it's true" every time he's on the podcast</li></ul><p><strong>Get this episode by subscribing to our more experimental podcast on the world’s most pressing problems and how to solve them: type 80k After Hours into your podcasting app.</strong></p><p><em>Producer and editor: Keiran Harris<br>Audio Engineering Lead: Ben Cordell<br>Technical editing: Milo McGuire and Dominic Armstrong<br>Additional content editing: Luisa Rodriguez and Katy Moore<br>Transcriptions: Katy Moore</em></p>]]>
      </itunes:summary>
      <itunes:keywords>effective altruism, 80000 hours, impactful careers, existential risk</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:chapters url="https://share.transistor.fm/s/f7e27390/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Highlights: #153 – Elie Hassenfeld on two big picture critiques of GiveWell’s approach, and six lessons from their recent work</title>
      <itunes:title>Highlights: #153 – Elie Hassenfeld on two big picture critiques of GiveWell’s approach, and six lessons from their recent work</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">bf49e9d3-9144-4edc-9f46-8dffd2fb40fb</guid>
      <link>https://80000hours.org/podcast/episodes/elie-hassenfeld-givewell-critiques-and-lessons/</link>
      <description>
        <![CDATA[<p>This is a selection of highlights from episode #153 of <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/elie-hassenfeld-givewell-critiques-and-lessons/"><strong>Elie Hassenfeld on two big picture critiques of GiveWell’s approach, and six lessons from their recent work</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.</p><p>Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript.</p><p><em>Highlights put together by Simon Monsour and Milo McGuire</em></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This is a selection of highlights from episode #153 of <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/elie-hassenfeld-givewell-critiques-and-lessons/"><strong>Elie Hassenfeld on two big picture critiques of GiveWell’s approach, and six lessons from their recent work</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.</p><p>Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript.</p><p><em>Highlights put together by Simon Monsour and Milo McGuire</em></p>]]>
      </content:encoded>
      <pubDate>Mon, 28 Aug 2023 19:30:14 +0000</pubDate>
      <author>The 80,000 Hours team</author>
      <enclosure url="https://media.transistor.fm/b3e40dd2/b0633fc5.mp3" length="13770637" type="audio/mpeg"/>
      <itunes:author>The 80,000 Hours team</itunes:author>
      <itunes:image href="https://img.transistor.fm/yK8xpljiZTvwYuu9yZqk7TV3GeuiJ7lI5pjo45M7ApM/rs:fill:0:0:1/w:3000/h:3000/q:60/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lcGlz/b2RlLzE0ODAwNTQv/MTY5MzI1MDkzOC1h/cnR3b3JrLmpwZw.jpg"/>
      <itunes:duration>1144</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This is a selection of highlights from episode #153 of <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/elie-hassenfeld-givewell-critiques-and-lessons/"><strong>Elie Hassenfeld on two big picture critiques of GiveWell’s approach, and six lessons from their recent work</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.</p><p>Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript.</p><p><em>Highlights put together by Simon Monsour and Milo McGuire</em></p>]]>
      </itunes:summary>
      <itunes:keywords>effective altruism, 80000 hours, impactful careers, existential risk</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:chapters url="https://share.transistor.fm/s/b3e40dd2/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Highlights: #152 – Joe Carlsmith on navigating serious philosophical confusion</title>
      <itunes:title>Highlights: #152 – Joe Carlsmith on navigating serious philosophical confusion</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">b71a7fff-1143-486f-8595-2d27e26bdc49</guid>
      <link>https://80000hours.org/podcast/episodes/joe-carlsmith-navigating-serious-philosophical-confusion/</link>
      <description>
        <![CDATA[<p>This is a selection of highlights from episode #152 of <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/joe-carlsmith-navigating-serious-philosophical-confusion/"><strong>Joe Carlsmith on navigating serious philosophical confusion</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.</p><p>Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript.</p><p><em>Highlights put together by Simon Monsour and Milo McGuire</em></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This is a selection of highlights from episode #152 of <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/joe-carlsmith-navigating-serious-philosophical-confusion/"><strong>Joe Carlsmith on navigating serious philosophical confusion</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.</p><p>Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript.</p><p><em>Highlights put together by Simon Monsour and Milo McGuire</em></p>]]>
      </content:encoded>
      <pubDate>Tue, 08 Aug 2023 19:52:23 +0000</pubDate>
      <author>The 80,000 Hours team</author>
      <enclosure url="https://media.transistor.fm/5822b85a/2db99c4c.mp3" length="18677110" type="audio/mpeg"/>
      <itunes:author>The 80,000 Hours team</itunes:author>
      <itunes:image href="https://img.transistor.fm/7cBTrDAxNxE9OUElmiMKchkLFXIQE70jidCzpkYUJk8/rs:fill:0:0:1/w:3000/h:3000/q:60/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lcGlz/b2RlLzE0NTQyNzYv/MTY5MTUyNDM0My1h/cnR3b3JrLmpwZw.jpg"/>
      <itunes:duration>777</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This is a selection of highlights from episode #152 of <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/joe-carlsmith-navigating-serious-philosophical-confusion/"><strong>Joe Carlsmith on navigating serious philosophical confusion</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.</p><p>Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript.</p><p><em>Highlights put together by Simon Monsour and Milo McGuire</em></p>]]>
      </itunes:summary>
      <itunes:keywords>effective altruism, 80000 hours, impactful careers, existential risk</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:chapters url="https://share.transistor.fm/s/5822b85a/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Highlights: #151 – Ajeya Cotra on accidentally teaching AI models to deceive us</title>
      <itunes:title>Highlights: #151 – Ajeya Cotra on accidentally teaching AI models to deceive us</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">3630a964-38ea-4b3b-9f39-9e5a5e718292</guid>
      <link>https://80000hours.org/podcast/episodes/ajeya-cotra-accidentally-teaching-ai-to-deceive-us/</link>
      <description>
        <![CDATA[<p>This is a selection of highlights from episode #151 of <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/ajeya-cotra-accidentally-teaching-ai-to-deceive-us/"><strong>Ajeya Cotra on accidentally teaching AI models to deceive us</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.</p><p>Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript.</p><p><em>Highlights put together by Simon Monsour and Milo McGuire</em></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This is a selection of highlights from episode #151 of <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/ajeya-cotra-accidentally-teaching-ai-to-deceive-us/"><strong>Ajeya Cotra on accidentally teaching AI models to deceive us</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.</p><p>Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript.</p><p><em>Highlights put together by Simon Monsour and Milo McGuire</em></p>]]>
      </content:encoded>
      <pubDate>Wed, 02 Aug 2023 19:07:32 +0000</pubDate>
      <author>The 80,000 Hours team</author>
      <enclosure url="https://media.transistor.fm/1216d2b8/6464d84f.mp3" length="18357338" type="audio/mpeg"/>
      <itunes:author>The 80,000 Hours team</itunes:author>
      <itunes:image href="https://img.transistor.fm/eJnyFpXGZfpcv5IFU-_vCmquAU_kZYzaACDAaW_K5iU/rs:fill:0:0:1/w:3000/h:3000/q:60/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lcGlz/b2RlLzE0NDM5Nzkv/MTY5MTAwMzE4NS1h/cnR3b3JrLmpwZw.jpg"/>
      <itunes:duration>1525</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This is a selection of highlights from episode #151 of <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80000hours.org/podcast/episodes/ajeya-cotra-accidentally-teaching-ai-to-deceive-us/"><strong>Ajeya Cotra on accidentally teaching AI models to deceive us</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.</p><p>Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript.</p><p><em>Highlights put together by Simon Monsour and Milo McGuire</em></p>]]>
      </itunes:summary>
      <itunes:keywords>effective altruism, 80000 hours, impactful careers, existential risk</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/1216d2b8/transcript.txt" type="text/plain"/>
      <podcast:chapters url="https://share.transistor.fm/s/1216d2b8/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Highlights: #150 – Tom Davidson on how quickly AI could transform the world</title>
      <itunes:title>Highlights: #150 – Tom Davidson on how quickly AI could transform the world</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">f6c7222b-6ab0-49a1-8eae-38fd744678c5</guid>
      <link>https://80000hours.org/podcast/episodes/tom-davidson-how-quickly-ai-could-transform-the-world/?utm_campaign=podcast__tom-davidson-highlights&amp;utm_source=80kah&amp;utm_medium=podcast</link>
      <description>
        <![CDATA[<p>This is a selection of highlights from episode #150 of <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80k.info/tdhl"><strong>Tom Davidson on how quickly AI could transform the world</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.</p><p>Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the <a href="https://80k.info/tdhl"><strong>transcript</strong></a>.</p><p><em>Highlights put together by Milo McGuire and Simon Monsour</em></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This is a selection of highlights from episode #150 of <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80k.info/tdhl"><strong>Tom Davidson on how quickly AI could transform the world</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.</p><p>Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the <a href="https://80k.info/tdhl"><strong>transcript</strong></a>.</p><p><em>Highlights put together by Milo McGuire and Simon Monsour</em></p>]]>
      </content:encoded>
      <pubDate>Tue, 25 Jul 2023 21:12:26 +0000</pubDate>
      <author>The 80,000 Hours team</author>
      <enclosure url="https://media.transistor.fm/1fb35a98/d5446fd3.mp3" length="14504044" type="audio/mpeg"/>
      <itunes:author>The 80,000 Hours team</itunes:author>
      <itunes:image href="https://img.transistor.fm/WLXFTJ2a2Gh-mi5T-EDZMTyzfMyCD_Ufbb0ycdSydBc/rs:fill:0:0:1/w:3000/h:3000/q:60/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lcGlz/b2RlLzE0MzIzOTYv/MTY5MDMxOTA0Ny1h/cnR3b3JrLmpwZw.jpg"/>
      <itunes:duration>1801</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This is a selection of highlights from episode #150 of <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80k.info/tdhl"><strong>Tom Davidson on how quickly AI could transform the world</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.</p><p>Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the <a href="https://80k.info/tdhl"><strong>transcript</strong></a>.</p><p><em>Highlights put together by Milo McGuire and Simon Monsour</em></p>]]>
      </itunes:summary>
      <itunes:keywords>effective altruism, 80000 hours, impactful careers, existential risk</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/1fb35a98/transcript.txt" type="text/plain"/>
      <podcast:chapters url="https://share.transistor.fm/s/1fb35a98/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Hannah Boettcher on the mental health challenges that come with trying to have a big impact</title>
      <itunes:title>Hannah Boettcher on the mental health challenges that come with trying to have a big impact</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">47632030-cb51-4c7a-8bc3-cee0a02f05a2</guid>
      <link>https://80000hours.org/after-hours-podcast/episodes/hannah-boettcher-mental-health-challenges/?utm_campaign=podcast__hannah-boettcher&amp;utm_source=80kah&amp;utm_medium=podcast</link>
      <description>
        <![CDATA[<p>In this episode of <em>80k After Hours</em>, Luisa Rodriguez and Hannah Boettcher discuss various approaches to therapy, and how to use them in practice — focusing specifically on people trying to have a big impact.</p><p><a href="https://80k.info/hb"><strong>Links to learn more, highlights and full transcript.</strong></a></p><p>They cover:</p><ul><li>The effectiveness of therapy, and tips for finding a therapist</li><li>Moral demandingness</li><li>Internal family systems-style therapy</li><li>Motivation and burnout</li><li>Exposure therapy</li><li>Grappling with world problems and x-risk</li><li>Perfectionism and imposter syndrome</li><li>And the risk of over-intellectualising</li></ul><p><br><strong>Who this episode is for:</strong></p><ul><li>High-impact-focused people who struggle with moral demandingness, perfectionism, or imposter syndrome</li><li>People who feel anxious thinking about the end of the world</li><li>80,000 Hours Podcast hosts with the initials LR</li></ul><p><br><strong>Who this episode isn’t for:</strong></p><ul><li>People who aren’t focused on having a big impact</li><li>People who don’t struggle with any mental health issues</li><li>Founders of Scientology with the initials LRH</li></ul><p><strong>Get this episode by subscribing to our more experimental podcast on the world’s most pressing problems and how to solve them: type 80k After Hours into your podcasting app.</strong></p><p><em>Producer: Keiran Harris<br>Audio Engineering Lead: Ben Cordell<br>Technical editing: Dominic Armstrong<br>Content editing: Katy Moore, Luisa Rodriguez, and Keiran Harris<br>Transcriptions: Katy Moore</em></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode of <em>80k After Hours</em>, Luisa Rodriguez and Hannah Boettcher discuss various approaches to therapy, and how to use them in practice — focusing specifically on people trying to have a big impact.</p><p><a href="https://80k.info/hb"><strong>Links to learn more, highlights and full transcript.</strong></a></p><p>They cover:</p><ul><li>The effectiveness of therapy, and tips for finding a therapist</li><li>Moral demandingness</li><li>Internal family systems-style therapy</li><li>Motivation and burnout</li><li>Exposure therapy</li><li>Grappling with world problems and x-risk</li><li>Perfectionism and imposter syndrome</li><li>And the risk of over-intellectualising</li></ul><p><br><strong>Who this episode is for:</strong></p><ul><li>High-impact-focused people who struggle with moral demandingness, perfectionism, or imposter syndrome</li><li>People who feel anxious thinking about the end of the world</li><li>80,000 Hours Podcast hosts with the initials LR</li></ul><p><br><strong>Who this episode isn’t for:</strong></p><ul><li>People who aren’t focused on having a big impact</li><li>People who don’t struggle with any mental health issues</li><li>Founders of Scientology with the initials LRH</li></ul><p><strong>Get this episode by subscribing to our more experimental podcast on the world’s most pressing problems and how to solve them: type 80k After Hours into your podcasting app.</strong></p><p><em>Producer: Keiran Harris<br>Audio Engineering Lead: Ben Cordell<br>Technical editing: Dominic Armstrong<br>Content editing: Katy Moore, Luisa Rodriguez, and Keiran Harris<br>Transcriptions: Katy Moore</em></p>]]>
      </content:encoded>
      <pubDate>Wed, 19 Jul 2023 17:35:49 +0000</pubDate>
      <author>The 80,000 Hours team</author>
      <enclosure url="https://media.transistor.fm/fbbb45ac/cfeb1f36.mp3" length="61360150" type="audio/mpeg"/>
      <itunes:author>The 80,000 Hours team</itunes:author>
      <itunes:image href="https://img.transistor.fm/KkJqiQLwzX5eT6V4gWeytvknfryFA-s95cmA3LJvT6g/rs:fill:0:0:1/w:3000/h:3000/q:60/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lcGlz/b2RlLzE0MjU0MTIv/MTY4OTc4Nzc2NC1h/cnR3b3JrLmpwZw.jpg"/>
      <itunes:duration>7664</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode of <em>80k After Hours</em>, Luisa Rodriguez and Hannah Boettcher discuss various approaches to therapy, and how to use them in practice — focusing specifically on people trying to have a big impact.</p><p><a href="https://80k.info/hb"><strong>Links to learn more, highlights and full transcript.</strong></a></p><p>They cover:</p><ul><li>The effectiveness of therapy, and tips for finding a therapist</li><li>Moral demandingness</li><li>Internal family systems-style therapy</li><li>Motivation and burnout</li><li>Exposure therapy</li><li>Grappling with world problems and x-risk</li><li>Perfectionism and imposter syndrome</li><li>And the risk of over-intellectualising</li></ul><p><br><strong>Who this episode is for:</strong></p><ul><li>High-impact-focused people who struggle with moral demandingness, perfectionism, or imposter syndrome</li><li>People who feel anxious thinking about the end of the world</li><li>80,000 Hours Podcast hosts with the initials LR</li></ul><p><br><strong>Who this episode isn’t for:</strong></p><ul><li>People who aren’t focused on having a big impact</li><li>People who don’t struggle with any mental health issues</li><li>Founders of Scientology with the initials LRH</li></ul><p><strong>Get this episode by subscribing to our more experimental podcast on the world’s most pressing problems and how to solve them: type 80k After Hours into your podcasting app.</strong></p><p><em>Producer: Keiran Harris<br>Audio Engineering Lead: Ben Cordell<br>Technical editing: Dominic Armstrong<br>Content editing: Katy Moore, Luisa Rodriguez, and Keiran Harris<br>Transcriptions: Katy Moore</em></p>]]>
      </itunes:summary>
      <itunes:keywords>effective altruism, 80000 hours, impactful careers, existential risk</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/fbbb45ac/transcript.txt" type="text/plain"/>
      <podcast:chapters url="https://share.transistor.fm/s/fbbb45ac/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Highlights: #149 – Tim LeBon on how altruistic perfectionism is self-defeating</title>
      <itunes:title>Highlights: #149 – Tim LeBon on how altruistic perfectionism is self-defeating</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">cb49b763-ae41-4304-a23c-cb1a4319190e</guid>
      <link>https://80000hours.org/podcast/episodes/tim-lebon-self-defeating-altruistic-perfectionism/?utm_campaign=podcast__tim-lebon-highlights&amp;utm_source=80kah&amp;utm_medium=podcast</link>
      <description>
        <![CDATA[<p>This is a selection of highlights from episode #149 of <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80k.info/tlh"><strong>Tim LeBon on how altruistic perfectionism is self-defeating</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.</p><p>Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the <a href="https://80k.info/tlh"><strong>transcript</strong></a>.</p><p><em>Highlights put together by Simon Monsour</em></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>This is a selection of highlights from episode #149 of <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80k.info/tlh"><strong>Tim LeBon on how altruistic perfectionism is self-defeating</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.</p><p>Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the <a href="https://80k.info/tlh"><strong>transcript</strong></a>.</p><p><em>Highlights put together by Simon Monsour</em></p>]]>
      </content:encoded>
      <pubDate>Thu, 13 Jul 2023 00:55:13 +0000</pubDate>
      <author>The 80,000 Hours team</author>
      <enclosure url="https://media.transistor.fm/1605af90/e5bf8f8f.mp3" length="10182794" type="audio/mpeg"/>
      <itunes:author>The 80,000 Hours team</itunes:author>
      <itunes:image href="https://img.transistor.fm/Ak8kX6ajK3vFa-5pvcnNnBcs76YhXwEo48wtlp_D8rA/rs:fill:0:0:1/w:3000/h:3000/q:60/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lcGlz/b2RlLzE0MTc4NjAv/MTY4OTIwOTcxMy1h/cnR3b3JrLmpwZw.jpg"/>
      <itunes:duration>1267</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>This is a selection of highlights from episode #149 of <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:</p><p><a href="https://80k.info/tlh"><strong>Tim LeBon on how altruistic perfectionism is self-defeating</strong></a></p><p>And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.</p><p>Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the <a href="https://80k.info/tlh"><strong>transcript</strong></a>.</p><p><em>Highlights put together by Simon Monsour</em></p>]]>
      </itunes:summary>
      <itunes:keywords>effective altruism, 80000 hours, impactful careers, existential risk</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:chapters url="https://share.transistor.fm/s/1605af90/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Operations management in high-impact organisations (Article)</title>
      <itunes:title>Operations management in high-impact organisations (Article)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">d9a5da34-3ed7-47b1-af16-dd323e807e74</guid>
      <link>https://80000hours.org/articles/operations-management/?utm_campaign=podcast__80kah&amp;utm_source=80kah&amp;utm_medium=podcast</link>
      <description>
        <![CDATA[<p>In this episode of 80k After Hours, Perrin Walker reads Benjamin Todd and Roman Duda's career review on operations management.</p><p><a href="https://80000hours.org/articles/operations-management/?utm_campaign=podcast__80kah&amp;utm_source=80kah&amp;utm_medium=podcast"><strong>Here’s the original piece if you’d like to learn more</strong>.</a></p><p>You might also want to check out our 80,000 Hours Podcast episodes with <a href="https://80000hours.org/podcast/episodes/tara-mac-aulay-operations-mindset/">Tara Mac Aulay</a> and <a href="https://80000hours.org/podcast/episodes/tanya-singh-operations-bottleneck/">Tanya Singh</a>.</p><p><strong>Get this episode by subscribing to our more experimental podcast on the world’s most pressing problems and how to solve them: type 80k After Hours into your podcasting app.</strong></p><p><em>Editing and narration: Perrin Walker<br>Audio proofing: Katy Moore</em></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode of 80k After Hours, Perrin Walker reads Benjamin Todd and Roman Duda's career review on operations management.</p><p><a href="https://80000hours.org/articles/operations-management/?utm_campaign=podcast__80kah&amp;utm_source=80kah&amp;utm_medium=podcast"><strong>Here’s the original piece if you’d like to learn more</strong>.</a></p><p>You might also want to check out our 80,000 Hours Podcast episodes with <a href="https://80000hours.org/podcast/episodes/tara-mac-aulay-operations-mindset/">Tara Mac Aulay</a> and <a href="https://80000hours.org/podcast/episodes/tanya-singh-operations-bottleneck/">Tanya Singh</a>.</p><p><strong>Get this episode by subscribing to our more experimental podcast on the world’s most pressing problems and how to solve them: type 80k After Hours into your podcasting app.</strong></p><p><em>Editing and narration: Perrin Walker<br>Audio proofing: Katy Moore</em></p>]]>
      </content:encoded>
      <pubDate>Thu, 01 Jun 2023 18:53:37 +0000</pubDate>
      <author>The 80,000 Hours team</author>
      <enclosure url="https://media.transistor.fm/5d91da5a/efbc19d2.mp3" length="19798517" type="audio/mpeg"/>
      <itunes:author>The 80,000 Hours team</itunes:author>
      <itunes:image href="https://img.transistor.fm/AOo3v2tnwGrm96ij90GM0pjq1sWsac2XyE3hODWePIA/rs:fill:0:0:1/w:3000/h:3000/q:60/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lcGlz/b2RlLzEzNjQ4ODUv/MTY4NTY0NTQ5Mi1h/cnR3b3JrLmpwZw.jpg"/>
      <itunes:duration>2469</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode of 80k After Hours, Perrin Walker reads Benjamin Todd and Roman Duda's career review on operations management.</p><p><a href="https://80000hours.org/articles/operations-management/?utm_campaign=podcast__80kah&amp;utm_source=80kah&amp;utm_medium=podcast"><strong>Here’s the original piece if you’d like to learn more</strong>.</a></p><p>You might also want to check out our 80,000 Hours Podcast episodes with <a href="https://80000hours.org/podcast/episodes/tara-mac-aulay-operations-mindset/">Tara Mac Aulay</a> and <a href="https://80000hours.org/podcast/episodes/tanya-singh-operations-bottleneck/">Tanya Singh</a>.</p><p><strong>Get this episode by subscribing to our more experimental podcast on the world’s most pressing problems and how to solve them: type 80k After Hours into your podcasting app.</strong></p><p><em>Editing and narration: Perrin Walker<br>Audio proofing: Katy Moore</em></p>]]>
      </itunes:summary>
      <itunes:keywords>effective altruism, 80000 hours, impactful careers, existential risk</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:chapters url="https://share.transistor.fm/s/5d91da5a/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>What is social impact? A definition (Article)</title>
      <itunes:title>What is social impact? A definition (Article)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">0c24c961-237b-4823-a2f7-790052d49ec4</guid>
      <link>https://80000hours.org/articles/what-is-social-impact-definition/?utm_campaign=podcast__80kah&amp;utm_source=80kah&amp;utm_medium=podcast</link>
      <description>
        <![CDATA[<p>In this episode of 80k After Hours, Perrin Walker reads Benjamin Todd's article on social impact.</p><p><a href="https://80000hours.org/articles/what-is-social-impact-definition/?utm_campaign=podcast__80kah&amp;utm_source=80kah&amp;utm_medium=podcast"><strong>Here’s the original piece if you’d like to learn more</strong>.</a></p><p>You might also want to check out our article on <a href="https://80000hours.org/articles/harmful-career/">why not to take a harmful career even if you think it’ll do more good</a>.</p><p><strong>Get this episode by subscribing to our more experimental podcast on the world’s most pressing problems and how to solve them: type 80k After Hours into your podcasting app.</strong></p><p><em>Editing and narration: Perrin Walker<br>Audio proofing: Katy Moore</em></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode of 80k After Hours, Perrin Walker reads Benjamin Todd's article on social impact.</p><p><a href="https://80000hours.org/articles/what-is-social-impact-definition/?utm_campaign=podcast__80kah&amp;utm_source=80kah&amp;utm_medium=podcast"><strong>Here’s the original piece if you’d like to learn more</strong>.</a></p><p>You might also want to check out our article on <a href="https://80000hours.org/articles/harmful-career/">why not to take a harmful career even if you think it’ll do more good</a>.</p><p><strong>Get this episode by subscribing to our more experimental podcast on the world’s most pressing problems and how to solve them: type 80k After Hours into your podcasting app.</strong></p><p><em>Editing and narration: Perrin Walker<br>Audio proofing: Katy Moore</em></p>]]>
      </content:encoded>
      <pubDate>Wed, 17 May 2023 20:35:16 +0000</pubDate>
      <author>The 80,000 Hours team</author>
      <enclosure url="https://media.transistor.fm/9f6823fe/90faf84a.mp3" length="12534440" type="audio/mpeg"/>
      <itunes:author>The 80,000 Hours team</itunes:author>
      <itunes:image href="https://img.transistor.fm/iu8NiMOonRMogJoOCVO2OBHDayny0oUFFheWgvSs7C0/rs:fill:0:0:1/w:3000/h:3000/q:60/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lcGlz/b2RlLzEzNDIwNjgv/MTY4NDM1NTcxNi1h/cnR3b3JrLmpwZw.jpg"/>
      <itunes:duration>1559</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode of 80k After Hours, Perrin Walker reads Benjamin Todd's article on social impact.</p><p><a href="https://80000hours.org/articles/what-is-social-impact-definition/?utm_campaign=podcast__80kah&amp;utm_source=80kah&amp;utm_medium=podcast"><strong>Here’s the original piece if you’d like to learn more</strong>.</a></p><p>You might also want to check out our article on <a href="https://80000hours.org/articles/harmful-career/">why not to take a harmful career even if you think it’ll do more good.</a></p><p><br></p><p><strong>Get this episode by subscribing to our more experimental podcast on the world’s most pressing problems and how to solve them: type 80k After Hours into your podcasting app.</strong></p><p><em>Editing and narration: Perrin Walker<br>Audio proofing: Katy Moore</em></p>]]>
      </itunes:summary>
      <itunes:keywords>effective altruism, 80000 hours, impactful careers, existential risk</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:chapters url="https://share.transistor.fm/s/9f6823fe/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>How to make tough career decisions (Article)</title>
      <itunes:title>How to make tough career decisions (Article)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">efdba789-6c8c-436d-a7ce-5f93b2f26aaa</guid>
      <link>https://share.transistor.fm/s/c5868a20</link>
      <description>
        <![CDATA[<p>In this episode of 80k After Hours, Perrin Walker reads Benjamin Todd's article on how to make tough career decisions.</p><p><a href="https://80000hours.org/career-decision/article/?utm_campaign=podcast__80kah&amp;utm_source=80kah&amp;utm_medium=podcast"><strong>Here’s the original piece if you’d like to learn more</strong>.</a></p><p>You might also want to check out our review of <a href="https://80000hours.org/career-guide/how-to-be-successful/">how to become happier and more productive in any job</a> (including how to network).</p><p><br></p><p><strong>Get this episode by subscribing to our more experimental podcast on the world’s most pressing problems and how to solve them: type 80k After Hours into your podcasting app.</strong></p><p><em>Editing and narration: Perrin Walker<br>Audio proofing: Katy Moore</em></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode of 80k After Hours, Perrin Walker reads Benjamin Todd's article on how to make tough career decisions.</p><p><a href="https://80000hours.org/career-decision/article/?utm_campaign=podcast__80kah&amp;utm_source=80kah&amp;utm_medium=podcast"><strong>Here’s the original piece if you’d like to learn more</strong>.</a></p><p>You might also want to check out our review of <a href="https://80000hours.org/career-guide/how-to-be-successful/">how to become happier and more productive in any job</a> (including how to network).</p><p><br></p><p><strong>Get this episode by subscribing to our more experimental podcast on the world’s most pressing problems and how to solve them: type 80k After Hours into your podcasting app.</strong></p><p><em>Editing and narration: Perrin Walker<br>Audio proofing: Katy Moore</em></p>]]>
      </content:encoded>
      <pubDate>Wed, 03 May 2023 22:34:51 +0000</pubDate>
      <author>The 80,000 Hours team</author>
      <enclosure url="https://media.transistor.fm/c5868a20/7725a641.mp3" length="10794854" type="audio/mpeg"/>
      <itunes:author>The 80,000 Hours team</itunes:author>
      <itunes:image href="https://img.transistor.fm/DeRqZeEvofs0_LWUzjZndcoIUIA4iAcMa6vu7kaWTZ4/rs:fill:0:0:1/w:3000/h:3000/q:60/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lcGlz/b2RlLzEzMTk2Nzcv/MTY4MzE1MzI5MS1h/cnR3b3JrLmpwZw.jpg"/>
      <itunes:duration>1336</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode of 80k After Hours, Perrin Walker reads Benjamin Todd's article on how to make tough career decisions.</p><p><a href="https://80000hours.org/career-decision/article/?utm_campaign=podcast__80kah&amp;utm_source=80kah&amp;utm_medium=podcast"><strong>Here’s the original piece if you’d like to learn more</strong>.</a></p><p>You might also want to check out our review of <a href="https://80000hours.org/career-guide/how-to-be-successful/">how to become happier and more productive in any job</a> (including how to network).</p><p><br></p><p><strong>Get this episode by subscribing to our more experimental podcast on the world’s most pressing problems and how to solve them: type 80k After Hours into your podcasting app.</strong></p><p><em>Editing and narration: Perrin Walker<br>Audio proofing: Katy Moore</em></p>]]>
      </itunes:summary>
      <itunes:keywords>effective altruism, 80000 hours, impactful careers, existential risk</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:chapters url="https://share.transistor.fm/s/c5868a20/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Luisa and Keiran on free will, and the consequences of never feeling enduring guilt or shame</title>
      <itunes:title>Luisa and Keiran on free will, and the consequences of never feeling enduring guilt or shame</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">21a0fe04-59e6-488b-b495-00e73d0113c3</guid>
      <link>https://80000hours.org/after-hours-podcast/episodes/luisa-keiran-free-will-guilt-shame/?utm_campaign=podcast__luisa-keiran&amp;utm_source=80kah&amp;utm_medium=podcast</link>
      <description>
        <![CDATA[<p>In this episode of <em>80k After Hours</em>, Luisa Rodriguez and Keiran Harris chat about the consequences of letting go of enduring guilt, shame, anger, and pride.</p><p><a href="https://80k.link/LKFW"><strong>Links to learn more, highlights and full transcript.</strong></a></p><p>They cover:</p><ul><li>Keiran’s views on free will, and how he came to hold them</li><li>What it’s like not experiencing sustained guilt, shame, and anger</li><li>Whether Luisa would become a worse person if she felt less guilt and shame, specifically whether she’d work fewer hours, or donate less money, or become a worse friend</li><li>Whether giving up guilt and shame also means giving up pride</li><li>The implications for love</li><li>The neurological condition ‘Jerk Syndrome’</li><li>And some practical advice on feeling less guilt, shame, and anger</li></ul><p><br><strong>Who this episode is for:</strong></p><ul><li>People sympathetic to the idea that free will is an illusion</li><li>People who experience tons of guilt, shame, or anger</li><li>People worried about what would happen if they stopped feeling tons of guilt, shame, or anger</li></ul><p><br><strong>Who this episode isn’t for:</strong></p><ul><li>People strongly in favour of retributive justice</li><li>Philosophers who can’t stand random non-philosophers talking about philosophy</li><li>Non-philosophers who can’t stand random non-philosophers talking about philosophy</li></ul><p><br><strong>Get this episode by subscribing to our more experimental podcast on the world’s most pressing problems and how to solve them: type 80k After Hours into your podcasting app.</strong></p><p><em>Producer: Keiran Harris<br>Audio mastering: Milo McGuire<br>Transcriptions: Katy Moore</em></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode of <em>80k After Hours</em>, Luisa Rodriguez and Keiran Harris chat about the consequences of letting go of enduring guilt, shame, anger, and pride.</p><p><a href="https://80k.link/LKFW"><strong>Links to learn more, highlights and full transcript.</strong></a></p><p>They cover:</p><ul><li>Keiran’s views on free will, and how he came to hold them</li><li>What it’s like not experiencing sustained guilt, shame, and anger</li><li>Whether Luisa would become a worse person if she felt less guilt and shame, specifically whether she’d work fewer hours, or donate less money, or become a worse friend</li><li>Whether giving up guilt and shame also means giving up pride</li><li>The implications for love</li><li>The neurological condition ‘Jerk Syndrome’</li><li>And some practical advice on feeling less guilt, shame, and anger</li></ul><p><br><strong>Who this episode is for:</strong></p><ul><li>People sympathetic to the idea that free will is an illusion</li><li>People who experience tons of guilt, shame, or anger</li><li>People worried about what would happen if they stopped feeling tons of guilt, shame, or anger</li></ul><p><br><strong>Who this episode isn’t for:</strong></p><ul><li>People strongly in favour of retributive justice</li><li>Philosophers who can’t stand random non-philosophers talking about philosophy</li><li>Non-philosophers who can’t stand random non-philosophers talking about philosophy</li></ul><p><br><strong>Get this episode by subscribing to our more experimental podcast on the world’s most pressing problems and how to solve them: type 80k After Hours into your podcasting app.</strong></p><p><em>Producer: Keiran Harris<br>Audio mastering: Milo McGuire<br>Transcriptions: Katy Moore</em></p>]]>
      </content:encoded>
      <pubDate>Sat, 22 Apr 2023 01:47:04 +0000</pubDate>
      <author>The 80,000 Hours team</author>
      <enclosure url="https://media.transistor.fm/e480cca9/c6cdc400.mp3" length="47075950" type="audio/mpeg"/>
      <itunes:author>The 80,000 Hours team</itunes:author>
      <itunes:image href="https://img.transistor.fm/R-SnsqucG3WY2aGBa1O5V82VUAJgPED8IN46M6czNYI/rs:fill:0:0:1/w:3000/h:3000/q:60/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lcGlz/b2RlLzEzMDI4OTMv/MTY4MjEyNzc5Ni1h/cnR3b3JrLmpwZw.jpg"/>
      <itunes:duration>5878</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode of <em>80k After Hours</em>, Luisa Rodriguez and Keiran Harris chat about the consequences of letting go of enduring guilt, shame, anger, and pride.</p><p><a href="https://80k.link/LKFW"><strong>Links to learn more, highlights and full transcript.</strong></a></p><p>They cover:</p><ul><li>Keiran’s views on free will, and how he came to hold them</li><li>What it’s like not experiencing sustained guilt, shame, and anger</li><li>Whether Luisa would become a worse person if she felt less guilt and shame, specifically whether she’d work fewer hours, or donate less money, or become a worse friend</li><li>Whether giving up guilt and shame also means giving up pride</li><li>The implications for love</li><li>The neurological condition ‘Jerk Syndrome’</li><li>And some practical advice on feeling less guilt, shame, and anger</li></ul><p><br><strong>Who this episode is for:</strong></p><ul><li>People sympathetic to the idea that free will is an illusion</li><li>People who experience tons of guilt, shame, or anger</li><li>People worried about what would happen if they stopped feeling tons of guilt, shame, or anger</li></ul><p><br><strong>Who this episode isn’t for:</strong></p><ul><li>People strongly in favour of retributive justice</li><li>Philosophers who can’t stand random non-philosophers talking about philosophy</li><li>Non-philosophers who can’t stand random non-philosophers talking about philosophy</li></ul><p><br><strong>Get this episode by subscribing to our more experimental podcast on the world’s most pressing problems and how to solve them: type 80k After Hours into your podcasting app.</strong></p><p><em>Producer: Keiran Harris<br>Audio mastering: Milo McGuire<br>Transcriptions: Katy Moore</em></p>]]>
      </itunes:summary>
      <itunes:keywords>effective altruism, 80000 hours, impactful careers, existential risk</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:chapters url="https://share.transistor.fm/s/e480cca9/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Career review: Journalism (Article)</title>
      <itunes:title>Career review: Journalism (Article)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">46e23992-1e1b-4f18-9f44-05f3eb33ecc0</guid>
      <link>https://share.transistor.fm/s/e9bb523e</link>
      <description>
        <![CDATA[<p>In this episode of 80k After Hours, Perrin Walker reads Cody Fenwick's journalism career review.</p><p><a href="https://80000hours.org/career-reviews/journalism/?utm_campaign=podcast__80kah&amp;utm_source=80kah&amp;utm_medium=podcast"><strong>Here’s the original piece if you’d like to learn more</strong>.</a></p><p>You might also want to check out historian Daniel Strieff's <a href="https://web.archive.org/web/20221013004001/https://gijn.org/2020/01/02/the-15-most-influential-journalism-stories-in-us-history/">compilation of more than 20 highly influential pieces of journalism in US history</a>.</p><p><br></p><p><strong>Get this episode by subscribing to our more experimental podcast on the world’s most pressing problems and how to solve them: type 80k After Hours into your podcasting app.</strong></p><p><em>Editing and narration: Perrin Walker<br>Audio proofing: Katy Moore</em></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode of 80k After Hours, Perrin Walker reads Cody Fenwick's journalism career review.</p><p><a href="https://80000hours.org/career-reviews/journalism/?utm_campaign=podcast__80kah&amp;utm_source=80kah&amp;utm_medium=podcast"><strong>Here’s the original piece if you’d like to learn more</strong>.</a></p><p>You might also want to check out historian Daniel Strieff's <a href="https://web.archive.org/web/20221013004001/https://gijn.org/2020/01/02/the-15-most-influential-journalism-stories-in-us-history/">compilation of more than 20 highly influential pieces of journalism in US history</a>.</p><p><br></p><p><strong>Get this episode by subscribing to our more experimental podcast on the world’s most pressing problems and how to solve them: type 80k After Hours into your podcasting app.</strong></p><p><em>Editing and narration: Perrin Walker<br>Audio proofing: Katy Moore</em></p>]]>
      </content:encoded>
      <pubDate>Fri, 24 Mar 2023 22:21:06 +0000</pubDate>
      <author>The 80,000 Hours team</author>
      <enclosure url="https://media.transistor.fm/e9bb523e/18d8015f.mp3" length="15762751" type="audio/mpeg"/>
      <itunes:author>The 80,000 Hours team</itunes:author>
      <itunes:image href="https://img.transistor.fm/KLA9sGgDuUgCyNsdew8R5hTFO5USU0KkAd7oNlF4LQI/rs:fill:0:0:1/w:3000/h:3000/q:60/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lcGlz/b2RlLzEyNjM0OTAv/MTY3OTY5NjM0Mi1h/cnR3b3JrLmpwZw.jpg"/>
      <itunes:duration>1963</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode of 80k After Hours, Perrin Walker reads Cody Fenwick's journalism career review.</p><p><a href="https://80000hours.org/career-reviews/journalism/?utm_campaign=podcast__80kah&amp;utm_source=80kah&amp;utm_medium=podcast"><strong>Here’s the original piece if you’d like to learn more</strong>.</a></p><p>You might also want to check out historian Daniel Strieff's <a href="https://web.archive.org/web/20221013004001/https://gijn.org/2020/01/02/the-15-most-influential-journalism-stories-in-us-history/">compilation of more than 20 highly influential pieces of journalism in US history</a>.</p><p><br></p><p><strong>Get this episode by subscribing to our more experimental podcast on the world’s most pressing problems and how to solve them: type 80k After Hours into your podcasting app.</strong></p><p><em>Editing and narration: Perrin Walker<br>Audio proofing: Katy Moore</em></p>]]>
      </itunes:summary>
      <itunes:keywords>effective altruism, 80000 hours, impactful careers, existential risk</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:chapters url="https://share.transistor.fm/s/e9bb523e/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Luisa and Robert Long on how to make independent research more fun</title>
      <itunes:title>Luisa and Robert Long on how to make independent research more fun</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">8bde300b-686f-4c21-b62e-4fb83adf5d69</guid>
      <link>https://80000hours.org/after-hours-podcast/episodes/luisa-rob-long-independent-research/?utm_campaign=podcast__luisa-rob-long&amp;utm_source=80kah&amp;utm_medium=podcast</link>
      <description>
        <![CDATA[<p>In this episode of <em>80k After Hours</em>, Luisa Rodriguez and Robert Long have an honest conversation about the challenges of independent research.</p><p><a href="https://80k.link/LRL"><strong>Links to learn more, highlights and full transcript.</strong></a></p><p>They cover:</p><ul><li>Assigning probabilities when you’re really uncertain</li><li>Struggles around self-belief and imposter syndrome</li><li>The importance of sharing work even when it feels terrible</li><li>Balancing impact and fun in a job</li><li>And some mistakes researchers often make</li></ul><p><br><strong>Who this episode is for:</strong></p><ul><li>People pursuing independent research</li><li>People who struggle with self-belief</li><li>People who feel a pull towards pursuing a career they don’t actually want</li></ul><p><br><strong>Who this episode isn’t for:</strong></p><ul><li>People convinced that their research is perfect</li><li>Angus Hübelschmidt — the president and sole member of the Rob Wiblin Fan Club who refuses to listen to another Rob speak</li></ul><p><br>You can find their longer conversation on <a href="https://80k.link/RL"><strong>why large language models like GPT (probably) aren't conscious</strong></a> over on the original <em>80,000 Hours Podcast</em> feed.</p><p><br><strong>Get this episode by subscribing to our more experimental podcast on the world’s most pressing problems and how to solve them: type 80k After Hours into your podcasting app.</strong></p><p><em>Producer: Keiran Harris<br>Audio mastering: Ben Cordell and Milo McGuire<br>Transcriptions: Katy Moore</em></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode of <em>80k After Hours</em>, Luisa Rodriguez and Robert Long have an honest conversation about the challenges of independent research.</p><p><a href="https://80k.link/LRL"><strong>Links to learn more, highlights and full transcript.</strong></a></p><p>They cover:</p><ul><li>Assigning probabilities when you’re really uncertain</li><li>Struggles around self-belief and imposter syndrome</li><li>The importance of sharing work even when it feels terrible</li><li>Balancing impact and fun in a job</li><li>And some mistakes researchers often make</li></ul><p><br><strong>Who this episode is for:</strong></p><ul><li>People pursuing independent research</li><li>People who struggle with self-belief</li><li>People who feel a pull towards pursuing a career they don’t actually want</li></ul><p><br><strong>Who this episode isn’t for:</strong></p><ul><li>People convinced that their research is perfect</li><li>Angus Hübelschmidt — the president and sole member of the Rob Wiblin Fan Club who refuses to listen to another Rob speak</li></ul><p><br>You can find their longer conversation on <a href="https://80k.link/RL"><strong>why large language models like GPT (probably) aren't conscious</strong></a> over on the original <em>80,000 Hours Podcast</em> feed.</p><p><br><strong>Get this episode by subscribing to our more experimental podcast on the world’s most pressing problems and how to solve them: type 80k After Hours into your podcasting app.</strong></p><p><em>Producer: Keiran Harris<br>Audio mastering: Ben Cordell and Milo McGuire<br>Transcriptions: Katy Moore</em></p>]]>
      </content:encoded>
      <pubDate>Tue, 14 Mar 2023 05:44:09 +0000</pubDate>
      <author>The 80,000 Hours team</author>
      <enclosure url="https://media.transistor.fm/f9dcbcf9/e3fc73a6.mp3" length="21105386" type="audio/mpeg"/>
      <itunes:author>The 80,000 Hours team</itunes:author>
      <itunes:image href="https://img.transistor.fm/eMy4Tl4rAKmGJs0vvQjZxlysKR9IvEIRelbUqbfesq4/rs:fill:0:0:1/w:3000/h:3000/q:60/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lcGlz/b2RlLzEyNDUxNjcv/MTY3ODc3MTM0Mi1h/cnR3b3JrLmpwZw.jpg"/>
      <itunes:duration>2633</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode of <em>80k After Hours</em>, Luisa Rodriguez and Robert Long have an honest conversation about the challenges of independent research.</p><p><a href="https://80k.link/LRL"><strong>Links to learn more, highlights and full transcript.</strong></a></p><p>They cover:</p><ul><li>Assigning probabilities when you’re really uncertain</li><li>Struggles around self-belief and imposter syndrome</li><li>The importance of sharing work even when it feels terrible</li><li>Balancing impact and fun in a job</li><li>And some mistakes researchers often make</li></ul><p><br><strong>Who this episode is for:</strong></p><ul><li>People pursuing independent research</li><li>People who struggle with self-belief</li><li>People who feel a pull towards pursuing a career they don’t actually want</li></ul><p><br><strong>Who this episode isn’t for:</strong></p><ul><li>People convinced that their research is perfect</li><li>Angus Hübelschmidt — the president and sole member of the Rob Wiblin Fan Club who refuses to listen to another Rob speak</li></ul><p><br>You can find their longer conversation on <a href="https://80k.link/RL"><strong>why large language models like GPT (probably) aren't conscious</strong></a> over on the original <em>80,000 Hours Podcast</em> feed.</p><p><br><strong>Get this episode by subscribing to our more experimental podcast on the world’s most pressing problems and how to solve them: type 80k After Hours into your podcasting app.</strong></p><p><em>Producer: Keiran Harris<br>Audio mastering: Ben Cordell and Milo McGuire<br>Transcriptions: Katy Moore</em></p>]]>
      </itunes:summary>
      <itunes:keywords>effective altruism, 80000 hours, impactful careers, existential risk</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:chapters url="https://share.transistor.fm/s/f9dcbcf9/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>China-related AI safety and governance paths (Article)</title>
      <itunes:title>China-related AI safety and governance paths (Article)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">dfa55574-03e5-4961-8cb6-ff663206b39f</guid>
      <link>https://share.transistor.fm/s/c8081ccd</link>
      <description>
        <![CDATA[<p>In this episode of 80k After Hours, Perrin Walker reads our career review of China-related AI safety and governance paths.</p><p><a href="https://80000hours.org/career-reviews/china-related-ai-safety-and-governance-paths/?utm_campaign=podcast__80kah&amp;utm_source=80kah&amp;utm_medium=podcast"><strong>Here’s the original piece if you’d like to learn more</strong>.</a></p><p>You might also want to check out Benjamin Todd and Brian Tse's article on <a href="https://80000hours.org/career-reviews/china-specialist/?utm_campaign=podcast__80kah&amp;utm_source=80kah&amp;utm_medium=podcast">Improving China-Western coordination on global catastrophic risks</a>.</p><p><br></p><p><strong>Get this episode by subscribing to our more experimental podcast on the world’s most pressing problems and how to solve them: type 80k After Hours into your podcasting app.</strong></p><p><em>Editing and narration: Perrin Walker<br>Audio proofing: Katy Moore</em></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode of 80k After Hours, Perrin Walker reads our career review of China-related AI safety and governance paths.</p><p><a href="https://80000hours.org/career-reviews/china-related-ai-safety-and-governance-paths/?utm_campaign=podcast__80kah&amp;utm_source=80kah&amp;utm_medium=podcast"><strong>Here’s the original piece if you’d like to learn more</strong>.</a></p><p>You might also want to check out Benjamin Todd and Brian Tse's article on <a href="https://80000hours.org/career-reviews/china-specialist/?utm_campaign=podcast__80kah&amp;utm_source=80kah&amp;utm_medium=podcast">Improving China-Western coordination on global catastrophic risks</a>.</p><p><br></p><p><strong>Get this episode by subscribing to our more experimental podcast on the world’s most pressing problems and how to solve them: type 80k After Hours into your podcasting app.</strong></p><p><em>Editing and narration: Perrin Walker<br>Audio proofing: Katy Moore</em></p>]]>
      </content:encoded>
      <pubDate>Thu, 23 Feb 2023 22:25:01 +0000</pubDate>
      <author>The 80,000 Hours team</author>
      <enclosure url="https://media.transistor.fm/c8081ccd/add868fc.mp3" length="22987448" type="audio/mpeg"/>
      <itunes:author>The 80,000 Hours team</itunes:author>
      <itunes:image href="https://img.transistor.fm/dgnWzI0BS1FjVxvEuBo0NH-ZbHE2xbzUQLCHTfPTFRQ/rs:fill:0:0:1/w:3000/h:3000/q:60/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lcGlz/b2RlLzEyMTk3NjAv/MTY3NzE5MTEwMS1h/cnR3b3JrLmpwZw.jpg"/>
      <itunes:duration>2862</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode of 80k After Hours, Perrin Walker reads our career review of China-related AI safety and governance paths.</p><p><a href="https://80000hours.org/career-reviews/china-related-ai-safety-and-governance-paths/?utm_campaign=podcast__80kah&amp;utm_source=80kah&amp;utm_medium=podcast"><strong>Here’s the original piece if you’d like to learn more</strong>.</a></p><p>You might also want to check out Benjamin Todd and Brian Tse's article on <a href="https://80000hours.org/career-reviews/china-specialist/?utm_campaign=podcast__80kah&amp;utm_source=80kah&amp;utm_medium=podcast">Improving China-Western coordination on global catastrophic risks</a>.</p><p><br></p><p><strong>Get this episode by subscribing to our more experimental podcast on the world’s most pressing problems and how to solve them: type 80k After Hours into your podcasting app.</strong></p><p><em>Editing and narration: Perrin Walker<br>Audio proofing: Katy Moore</em></p>]]>
      </itunes:summary>
      <itunes:keywords>effective altruism, 80000 hours, impactful careers, existential risk</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:chapters url="https://share.transistor.fm/s/c8081ccd/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Anonymous advice: If you want to reduce AI risk, should you take roles that advance AI capabilities? (Article)</title>
      <itunes:title>Anonymous advice: If you want to reduce AI risk, should you take roles that advance AI capabilities? (Article)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">70656993-b17e-476b-8053-c8da0b77b4b7</guid>
      <link>https://share.transistor.fm/s/2b8e4832</link>
      <description>
        <![CDATA[<p>In this episode of 80k After Hours, Perrin Walker reads 11 anonymous responses to the question: "if you want to help prevent an AI-related catastrophe, should you be open to roles that also advance AI capabilities, or steer clear of them?"</p><p><a href="https://80000hours.org/articles/ai-capabilities/?utm_campaign=podcast__80kah&amp;utm_source=80kah&amp;utm_medium=podcast"><strong>Here’s the original piece if you’d like to learn more</strong>.</a></p><p>You might also want to check out our <a href="https://80000hours.org/articles/anonymous-answers/?utm_campaign=podcast__80kah&amp;utm_source=80kah&amp;utm_medium=podcast">original anonymous answers series</a>, the highlights of which <a href="https://open.spotify.com/episode/1J5BUzj0Gm52L52f37wEf1">were released on the original 80,000 Hours Podcast feed</a>.</p><p><br></p><p><strong>Get this episode by subscribing to our more experimental podcast on the world’s most pressing problems and how to solve them: type 80k After Hours into your podcasting app.</strong></p><p><em>Editing and narration: Perrin Walker<br>Audio proofing: Katy Moore</em></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode of 80k After Hours, Perrin Walker reads 11 anonymous responses to the question: "if you want to help prevent an AI-related catastrophe, should you be open to roles that also advance AI capabilities, or steer clear of them?"</p><p><a href="https://80000hours.org/articles/ai-capabilities/?utm_campaign=podcast__80kah&amp;utm_source=80kah&amp;utm_medium=podcast"><strong>Here’s the original piece if you’d like to learn more</strong>.</a></p><p>You might also want to check out our <a href="https://80000hours.org/articles/anonymous-answers/?utm_campaign=podcast__80kah&amp;utm_source=80kah&amp;utm_medium=podcast">original anonymous answers series</a>, the highlights of which <a href="https://open.spotify.com/episode/1J5BUzj0Gm52L52f37wEf1">were released on the original 80,000 Hours Podcast feed</a>.</p><p><br></p><p><strong>Get this episode by subscribing to our more experimental podcast on the world’s most pressing problems and how to solve them: type 80k After Hours into your podcasting app.</strong></p><p><em>Editing and narration: Perrin Walker<br>Audio proofing: Katy Moore</em></p>]]>
      </content:encoded>
      <pubDate>Fri, 20 Jan 2023 21:33:48 +0000</pubDate>
      <author>The 80,000 Hours team</author>
      <enclosure url="https://media.transistor.fm/2b8e4832/0aa223ac.mp3" length="14639886" type="audio/mpeg"/>
      <itunes:author>The 80,000 Hours team</itunes:author>
      <itunes:duration>1824</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode of 80k After Hours, Perrin Walker reads 11 anonymous responses to the question: "if you want to help prevent an AI-related catastrophe, should you be open to roles that also advance AI capabilities, or steer clear of them?"</p><p><a href="https://80000hours.org/articles/ai-capabilities/?utm_campaign=podcast__80kah&amp;utm_source=80kah&amp;utm_medium=podcast"><strong>Here’s the original piece if you’d like to learn more</strong>.</a></p><p>You might also want to check out our <a href="https://80000hours.org/articles/anonymous-answers/?utm_campaign=podcast__80kah&amp;utm_source=80kah&amp;utm_medium=podcast">original anonymous answers series</a>, the highlights of which <a href="https://open.spotify.com/episode/1J5BUzj0Gm52L52f37wEf1">were released on the original 80,000 Hours Podcast feed</a>.</p><p><br></p><p><strong>Get this episode by subscribing to our more experimental podcast on the world’s most pressing problems and how to solve them: type 80k After Hours into your podcasting app.</strong></p><p><em>Editing and narration: Perrin Walker<br>Audio proofing: Katy Moore</em></p>]]>
      </itunes:summary>
      <itunes:keywords>effective altruism, 80000 hours, impactful careers, existential risk</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:chapters url="https://share.transistor.fm/s/2b8e4832/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Is climate change the greatest threat facing humanity today? (Article)</title>
      <itunes:title>Is climate change the greatest threat facing humanity today? (Article)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">55537623-7bf7-4c84-9ba4-51f23fd42b8e</guid>
      <link>https://share.transistor.fm/s/d961162d</link>
      <description>
        <![CDATA[<p>In this episode of 80k After Hours, Perrin Walker reads Benjamin Hilton's problem profile on climate change.</p><p><a href="https://80000hours.org/problem-profiles/climate-change/?utm_campaign=podcast__80kah&amp;utm_source=80kah&amp;utm_medium=podcast"><strong>Here’s the original piece if you’d like to learn more</strong>.</a></p><p>You might also want to check out <a href="https://www.carbonbrief.org/">CarbonBrief</a>, which has a range of excellent content and updates on climate change.</p><p><br></p><p><strong>Get this episode by subscribing to our more experimental podcast on the world’s most pressing problems and how to solve them: type 80k After Hours into your podcasting app.</strong></p><p><em>Editing and narration: Perrin Walker<br>Audio proofing: Katy Moore</em></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode of 80k After Hours, Perrin Walker reads Benjamin Hilton's problem profile on climate change.</p><p><a href="https://80000hours.org/problem-profiles/climate-change/?utm_campaign=podcast__80kah&amp;utm_source=80kah&amp;utm_medium=podcast"><strong>Here’s the original piece if you’d like to learn more</strong>.</a></p><p>You might also want to check out <a href="https://www.carbonbrief.org/">CarbonBrief</a>, which has a range of excellent content and updates on climate change.</p><p><br></p><p><strong>Get this episode by subscribing to our more experimental podcast on the world’s most pressing problems and how to solve them: type 80k After Hours into your podcasting app.</strong></p><p><em>Editing and narration: Perrin Walker<br>Audio proofing: Katy Moore</em></p>]]>
      </content:encoded>
      <pubDate>Wed, 11 Jan 2023 22:06:50 +0000</pubDate>
      <author>The 80,000 Hours team</author>
      <enclosure url="https://media.transistor.fm/d961162d/9d840ec0.mp3" length="30076438" type="audio/mpeg"/>
      <itunes:author>The 80,000 Hours team</itunes:author>
      <itunes:image href="https://img.transistor.fm/0ssvTT9M2Vtoy3jWunf3FDAHhLsmHUX7JfIQtD5BLCk/rs:fill:0:0:1/w:3000/h:3000/q:60/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lcGlz/b2RlLzExNjQ3NjIv/MTY3MzQ3NDY4Mi1h/cnR3b3JrLmpwZw.jpg"/>
      <itunes:duration>3740</itunes:duration>
      <itunes:summary>In this episode of 80k After Hours, Perrin Walker reads Benjamin Hilton's problem profile on climate change.</itunes:summary>
      <itunes:subtitle>In this episode of 80k After Hours, Perrin Walker reads Benjamin Hilton's problem profile on climate change.</itunes:subtitle>
      <itunes:keywords>effective altruism, 80000 hours, impactful careers, existential risk</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:chapters url="https://share.transistor.fm/s/d961162d/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Marcus Davis on Rethink Priorities</title>
      <itunes:title>Marcus Davis on Rethink Priorities</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">b4ab206e-ce34-4710-bb73-f435323f1209</guid>
      <link>https://80000hours.org/after-hours-podcast/episodes/marcus-davis-rethink-priorities/?utm_campaign=podcast__marcus-davis&amp;utm_source=80kah&amp;utm_medium=podcast</link>
      <description>
<![CDATA[<p>In this episode of 80k After Hours, Rob Wiblin interviews Marcus Davis about <a href="https://rethinkpriorities.org/">Rethink Priorities</a>.</p><p><a href="https://80000hours.org/after-hours-podcast/episodes/marcus-davis-rethink-priorities/?utm_campaign=podcast__marcus-davis&amp;utm_source=80kah&amp;utm_medium=podcast"><strong>Links to learn more, highlights and full transcript.</strong></a><strong></strong></p><p>They cover:</p><ul><li>Interventions to help wild animals</li><li>Aquatic noise</li><li>Rethink Priorities strategy</li><li>Mistakes that RP has made since it was founded</li><li>Careers in global priorities research</li><li>And the most surprising thing Marcus has learned at RP</li></ul><p><br><strong>Who this episode is for:</strong></p><ul><li>People who want to learn about Rethink Priorities</li><li>People interested in a career in global priorities research</li><li>People open to novel ways to help wild animals<p></p></li></ul><p><strong>Who this episode isn’t for:</strong></p><ul><li>People who think global priorities research sounds boring</li><li>People who want to host very loud concerts under the sea<p></p></li></ul><p><em>Click </em><a href="https://forum.effectivealtruism.org/posts/rxojcFfpN88YNwGop/rethink-priorities-leadership-statement-on-the-ftx-situation"><em>here</em></a><em> to read Rethink Priorities’ Leadership Statement on the FTX situation.</em></p><p><br><strong>Get this episode by subscribing to our more experimental podcast on the world’s most pressing problems and how to solve them: type ‘80k After Hours’ into your podcasting app. Or read the transcript below.</strong></p><p><br><em>Producer: Keiran Harris<br>Audio mastering: Milo McGuire and Ben Cordell<br>Transcriptions: Katy Moore</em></p>]]>
      </description>
      <content:encoded>
<![CDATA[<p>In this episode of 80k After Hours, Rob Wiblin interviews Marcus Davis about <a href="https://rethinkpriorities.org/">Rethink Priorities</a>.</p><p><a href="https://80000hours.org/after-hours-podcast/episodes/marcus-davis-rethink-priorities/?utm_campaign=podcast__marcus-davis&amp;utm_source=80kah&amp;utm_medium=podcast"><strong>Links to learn more, highlights and full transcript.</strong></a><strong></strong></p><p>They cover:</p><ul><li>Interventions to help wild animals</li><li>Aquatic noise</li><li>Rethink Priorities strategy</li><li>Mistakes that RP has made since it was founded</li><li>Careers in global priorities research</li><li>And the most surprising thing Marcus has learned at RP</li></ul><p><br><strong>Who this episode is for:</strong></p><ul><li>People who want to learn about Rethink Priorities</li><li>People interested in a career in global priorities research</li><li>People open to novel ways to help wild animals<p></p></li></ul><p><strong>Who this episode isn’t for:</strong></p><ul><li>People who think global priorities research sounds boring</li><li>People who want to host very loud concerts under the sea<p></p></li></ul><p><em>Click </em><a href="https://forum.effectivealtruism.org/posts/rxojcFfpN88YNwGop/rethink-priorities-leadership-statement-on-the-ftx-situation"><em>here</em></a><em> to read Rethink Priorities’ Leadership Statement on the FTX situation.</em></p><p><br><strong>Get this episode by subscribing to our more experimental podcast on the world’s most pressing problems and how to solve them: type ‘80k After Hours’ into your podcasting app. Or read the transcript below.</strong></p><p><br><em>Producer: Keiran Harris<br>Audio mastering: Milo McGuire and Ben Cordell<br>Transcriptions: Katy Moore</em></p>]]>
      </content:encoded>
      <pubDate>Mon, 12 Dec 2022 23:02:52 +0000</pubDate>
      <author>The 80,000 Hours team</author>
      <enclosure url="https://media.transistor.fm/497a68d6/7e98d097.mp3" length="29296934" type="audio/mpeg"/>
      <itunes:author>The 80,000 Hours team</itunes:author>
      <itunes:image href="https://img.transistor.fm/zLrqpyYLLos3UEqRubkpQgk0Vi7Plqs4OqxXRfbqvEU/rs:fill:0:0:1/w:3000/h:3000/q:60/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lcGlz/b2RlLzExMzQwODMv/MTY3MDg4Mzk5Ny1h/cnR3b3JrLmpwZw.jpg"/>
      <itunes:duration>3654</itunes:duration>
      <itunes:summary>
<![CDATA[<p>In this episode of 80k After Hours, Rob Wiblin interviews Marcus Davis about <a href="https://rethinkpriorities.org/">Rethink Priorities</a>.</p><p><a href="https://80000hours.org/after-hours-podcast/episodes/marcus-davis-rethink-priorities/?utm_campaign=podcast__marcus-davis&amp;utm_source=80kah&amp;utm_medium=podcast"><strong>Links to learn more, highlights and full transcript.</strong></a><strong></strong></p><p>They cover:</p><ul><li>Interventions to help wild animals</li><li>Aquatic noise</li><li>Rethink Priorities strategy</li><li>Mistakes that RP has made since it was founded</li><li>Careers in global priorities research</li><li>And the most surprising thing Marcus has learned at RP</li></ul><p><br><strong>Who this episode is for:</strong></p><ul><li>People who want to learn about Rethink Priorities</li><li>People interested in a career in global priorities research</li><li>People open to novel ways to help wild animals<p></p></li></ul><p><strong>Who this episode isn’t for:</strong></p><ul><li>People who think global priorities research sounds boring</li><li>People who want to host very loud concerts under the sea<p></p></li></ul><p><em>Click </em><a href="https://forum.effectivealtruism.org/posts/rxojcFfpN88YNwGop/rethink-priorities-leadership-statement-on-the-ftx-situation"><em>here</em></a><em> to read Rethink Priorities’ Leadership Statement on the FTX situation.</em></p><p><br><strong>Get this episode by subscribing to our more experimental podcast on the world’s most pressing problems and how to solve them: type ‘80k After Hours’ into your podcasting app. Or read the transcript below.</strong></p><p><br><em>Producer: Keiran Harris<br>Audio mastering: Milo McGuire and Ben Cordell<br>Transcriptions: Katy Moore</em></p>]]>
      </itunes:summary>
      <itunes:keywords>effective altruism, 80000 hours, impactful careers, existential risk</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:chapters url="https://share.transistor.fm/s/497a68d6/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Preventing catastrophic pandemics (Article)</title>
      <itunes:title>Preventing catastrophic pandemics (Article)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">64b3972d-5455-4c5a-9f3c-e01e15b13ae0</guid>
      <link>https://80000hours.org/problem-profiles/preventing-catastrophic-pandemics/?utm_campaign=podcast__80kah&amp;utm_source=80kah&amp;utm_medium=podcast</link>
      <description>
        <![CDATA[<p>In this episode of 80k After Hours, Perrin Walker reads Arden Koehler and Benjamin Hilton's problem profile on preventing catastrophic pandemics.</p><p><a href="https://80000hours.org/problem-profiles/preventing-catastrophic-pandemics/?utm_campaign=podcast__80kah&amp;utm_source=80kah&amp;utm_medium=podcast"><strong>Here’s the original piece if you’d like to learn more</strong>.</a></p><p>You might also want to check out our full profile on <a href="https://80000hours.org/problem-profiles/preventing-catastrophic-pandemics/full-report/">Reducing global catastrophic biological risks</a>, by Gregory Lewis.</p><p><br></p><p><strong>Get this episode by subscribing to our more experimental podcast on the world’s most pressing problems and how to solve them: type 80k After Hours into your podcasting app.</strong></p><p><em>Editing and narration: Perrin Walker<br>Audio proofing: Katy Moore</em></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode of 80k After Hours, Perrin Walker reads Arden Koehler and Benjamin Hilton's problem profile on preventing catastrophic pandemics.</p><p><a href="https://80000hours.org/problem-profiles/preventing-catastrophic-pandemics/?utm_campaign=podcast__80kah&amp;utm_source=80kah&amp;utm_medium=podcast"><strong>Here’s the original piece if you’d like to learn more</strong>.</a></p><p>You might also want to check out our full profile on <a href="https://80000hours.org/problem-profiles/preventing-catastrophic-pandemics/full-report/">Reducing global catastrophic biological risks</a>, by Gregory Lewis.</p><p><br></p><p><strong>Get this episode by subscribing to our more experimental podcast on the world’s most pressing problems and how to solve them: type 80k After Hours into your podcasting app.</strong></p><p><em>Editing and narration: Perrin Walker<br>Audio proofing: Katy Moore</em></p>]]>
      </content:encoded>
      <pubDate>Tue, 25 Oct 2022 17:17:17 +0000</pubDate>
      <author>The 80,000 Hours team</author>
      <enclosure url="https://media.transistor.fm/61be2168/5b1a7243.mp3" length="12213286" type="audio/mpeg"/>
      <itunes:author>The 80,000 Hours team</itunes:author>
      <itunes:image href="https://img.transistor.fm/00vPnZeAA72L4lH74aSKlVSHpPcYYpE_VvwQ0jbLkUY/rs:fill:0:0:1/w:3000/h:3000/q:60/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lcGlz/b2RlLzEwNzYwNjMv/MTY2NjcxODIzNy1h/cnR3b3JrLmpwZw.jpg"/>
      <itunes:duration>1519</itunes:duration>
      <itunes:summary>In this episode of 80k After Hours, Perrin Walker reads Arden Koehler and Benjamin Hilton's problem profile on preventing catastrophic pandemics.</itunes:summary>
      <itunes:subtitle>In this episode of 80k After Hours, Perrin Walker reads Arden Koehler and Benjamin Hilton's problem profile on preventing catastrophic pandemics.</itunes:subtitle>
      <itunes:keywords>effective altruism, 80000 hours, impactful careers, existential risk</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:chapters url="https://share.transistor.fm/s/61be2168/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Kuhan Jeyapragasan on effective altruism university groups</title>
      <itunes:title>Kuhan Jeyapragasan on effective altruism university groups</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">e566643f-f594-4968-8a2a-c524156600ef</guid>
      <link>https://80000hours.org/after-hours-podcast/episodes/kuhan-jeyapragasan-effective-altruism-university-groups/?utm_campaign=podcast__kuhan-jeyapragasan&amp;utm_source=80kah&amp;utm_medium=podcast</link>
      <description>
        <![CDATA[<p>In this episode of <em>80k After Hours</em>, Rob Wiblin interviews Kuhan Jeyapragasan about effective altruism university groups.</p><p>From 2015 to 2020, Kuhan did an undergrad and then a master's in maths and computer science at Stanford — and did a lot to organise and improve the EA group on campus.</p><p><a href="https://80000hours.org/after-hours-podcast/episodes/kuhan-jeyapragasan-effective-altruism-university-groups/?utm_campaign=podcast__kuhan-jeyapragasan&amp;utm_source=80kah&amp;utm_medium=podcast"><strong>Links to learn more, highlights and full transcript.</strong></a><strong></strong></p><p>Rob and Kuhan cover:</p><ul><li>The challenges of making a group appealing and accepting of everyone</li><li>The concrete things Kuhan did to grow the successful Stanford EA group</li><li>Whether local groups are turning off some people who should be interested in effective altruism, and what they could do differently</li><li>Lessons Kuhan learned from Stanford EA</li><li>The Stanford Existential Risks Initiative (SERI)</li></ul><p><br><strong>Who this episode is for:</strong></p><ul><li>People already involved in EA university groups</li><li>People interested in getting involved in EA university groups</li></ul><p><br></p><p><strong>Who this episode isn’t for:</strong></p><ul><li>People who’ve never heard of ‘effective altruism groups’</li><li>People who’ve never heard of ‘effective altruism’</li><li>People who’ve never heard of ‘university’<p></p></li></ul><p><strong>Get this episode by subscribing to our more experimental podcast on the world’s most pressing problems and how to solve them: type ‘80k After Hours’ into your podcasting app. Or read the transcript below.</strong></p><p><br><em>Producer: Keiran Harris<br>Audio mastering: Ryan Kessler<br>Transcriptions: Katy Moore</em></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode of <em>80k After Hours</em>, Rob Wiblin interviews Kuhan Jeyapragasan about effective altruism university groups.</p><p>From 2015 to 2020, Kuhan did an undergrad and then a master's in maths and computer science at Stanford — and did a lot to organise and improve the EA group on campus.</p><p><a href="https://80000hours.org/after-hours-podcast/episodes/kuhan-jeyapragasan-effective-altruism-university-groups/?utm_campaign=podcast__kuhan-jeyapragasan&amp;utm_source=80kah&amp;utm_medium=podcast"><strong>Links to learn more, highlights and full transcript.</strong></a><strong></strong></p><p>Rob and Kuhan cover:</p><ul><li>The challenges of making a group appealing and accepting of everyone</li><li>The concrete things Kuhan did to grow the successful Stanford EA group</li><li>Whether local groups are turning off some people who should be interested in effective altruism, and what they could do differently</li><li>Lessons Kuhan learned from Stanford EA</li><li>The Stanford Existential Risks Initiative (SERI)</li></ul><p><br><strong>Who this episode is for:</strong></p><ul><li>People already involved in EA university groups</li><li>People interested in getting involved in EA university groups</li></ul><p><br></p><p><strong>Who this episode isn’t for:</strong></p><ul><li>People who’ve never heard of ‘effective altruism groups’</li><li>People who’ve never heard of ‘effective altruism’</li><li>People who’ve never heard of ‘university’<p></p></li></ul><p><strong>Get this episode by subscribing to our more experimental podcast on the world’s most pressing problems and how to solve them: type ‘80k After Hours’ into your podcasting app. Or read the transcript below.</strong></p><p><br><em>Producer: Keiran Harris<br>Audio mastering: Ryan Kessler<br>Transcriptions: Katy Moore</em></p>]]>
      </content:encoded>
      <pubDate>Wed, 21 Sep 2022 21:02:24 +0000</pubDate>
      <author>The 80,000 Hours team</author>
      <enclosure url="https://media.transistor.fm/302f9952/50f3e5e5.mp3" length="29013022" type="audio/mpeg"/>
      <itunes:author>The 80,000 Hours team</itunes:author>
      <itunes:image href="https://img.transistor.fm/hCGRZSEDr1YNKijvaw1f0NNPpaqyNoHQSMqXmMlJlTk/rs:fill:0:0:1/w:3000/h:3000/q:60/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lcGlz/b2RlLzEwMzMxOTcv/MTY2Mzc5MzQ4MS1h/cnR3b3JrLmpwZw.jpg"/>
      <itunes:duration>3619</itunes:duration>
      <itunes:summary>In this episode of 80k After Hours, Rob Wiblin interviews Kuhan Jeyapragasan about effective altruism university groups.</itunes:summary>
      <itunes:subtitle>In this episode of 80k After Hours, Rob Wiblin interviews Kuhan Jeyapragasan about effective altruism university groups.</itunes:subtitle>
      <itunes:keywords>effective altruism, 80000 hours, impactful careers, existential risk</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:chapters url="https://share.transistor.fm/s/302f9952/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Career review: Founder of new projects tackling top problems (Article)</title>
      <itunes:title>Career review: Founder of new projects tackling top problems (Article)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">740282d9-2852-4205-b4d8-07162a4be466</guid>
      <link>https://80000hours.org/career-reviews/founder-impactful-organisations/?utm_campaign=podcast__80kah&amp;utm_source=80kah&amp;utm_medium=podcast</link>
      <description>
        <![CDATA[<p>In this episode of 80k After Hours, Habiba Islam reads Benjamin Todd's career review on founding new projects tackling top problems.</p><p><a href="https://80000hours.org/career-reviews/founder-impactful-organisations/?utm_campaign=podcast__80kah&amp;utm_source=80kah&amp;utm_medium=podcast"><strong>Here’s the original piece if you’d like to learn more</strong></a>.</p><p>You might also want to check out these relevant pieces:</p><ul><li><a href="https://80000hours.org/2021/11/growth-of-effective-altruism/">The growth of effective altruism: what does it mean for our priorities and level of ambition?</a> — Benjamin Todd</li><li><a href="https://80000hours.org/2020/09/good-judgement/">Notes on good judgement and how to develop it</a> — Benjamin Todd</li><li><a href="https://forum.effectivealtruism.org/posts/z56YFpphrQDTSPLqi/what-we-learned-from-a-year-incubating-longtermist">What we learned from a year incubating longtermist entrepreneurship</a> — Rebecca Kagan, Jade Leung, Ben Clifford</li></ul><p><strong>Get this episode by subscribing to our more experimental podcast on the world’s most pressing problems and how to solve them: type 80k After Hours into your podcasting app.</strong></p><p><em>Producer: Keiran Harris<br>Audio mastering: Beppe Rådvik and Ben Cordell</em></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode of 80k After Hours, Habiba Islam reads Benjamin Todd's career review on founding new projects tackling top problems.</p><p><a href="https://80000hours.org/career-reviews/founder-impactful-organisations/?utm_campaign=podcast__80kah&amp;utm_source=80kah&amp;utm_medium=podcast"><strong>Here’s the original piece if you’d like to learn more</strong></a>.</p><p>You might also want to check out these relevant pieces:</p><ul><li><a href="https://80000hours.org/2021/11/growth-of-effective-altruism/">The growth of effective altruism: what does it mean for our priorities and level of ambition?</a> — Benjamin Todd</li><li><a href="https://80000hours.org/2020/09/good-judgement/">Notes on good judgement and how to develop it</a> — Benjamin Todd</li><li><a href="https://forum.effectivealtruism.org/posts/z56YFpphrQDTSPLqi/what-we-learned-from-a-year-incubating-longtermist">What we learned from a year incubating longtermist entrepreneurship</a> — Rebecca Kagan, Jade Leung, Ben Clifford</li></ul><p><strong>Get this episode by subscribing to our more experimental podcast on the world’s most pressing problems and how to solve them: type 80k After Hours into your podcasting app.</strong></p><p><em>Producer: Keiran Harris<br>Audio mastering: Beppe Rådvik and Ben Cordell</em></p>]]>
      </content:encoded>
      <pubDate>Mon, 12 Sep 2022 18:30:36 +0000</pubDate>
      <author>The 80,000 Hours team</author>
      <enclosure url="https://media.transistor.fm/126f590a/b45fe462.mp3" length="7949300" type="audio/mpeg"/>
      <itunes:author>The 80,000 Hours team</itunes:author>
      <itunes:image href="https://img.transistor.fm/Hb3jGusXDAp8RhnH_nQqEfqx_BYXC_jX3r1AEcd89CM/rs:fill:0:0:1/w:3000/h:3000/q:60/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lcGlz/b2RlLzEwMjE1MzYv/MTY2MzAwNzA0MS1h/cnR3b3JrLmpwZw.jpg"/>
      <itunes:duration>988</itunes:duration>
      <itunes:summary>In this episode of 80k After Hours, Habiba Islam reads Benjamin Todd's career review on founding new projects tackling top problems.</itunes:summary>
      <itunes:subtitle>In this episode of 80k After Hours, Habiba Islam reads Benjamin Todd's career review on founding new projects tackling top problems.</itunes:subtitle>
      <itunes:keywords>effective altruism, 80000 hours, impactful careers, existential risk</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:chapters url="https://share.transistor.fm/s/126f590a/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Andrés Jiménez Zorrilla on the Shrimp Welfare Project</title>
      <itunes:title>Andrés Jiménez Zorrilla on the Shrimp Welfare Project</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">97d51ca0-d492-44af-b9e3-44495ec04d58</guid>
      <link>https://80000hours.org/after-hours-podcast/episodes/andres-jimenez-zorrilla-shrimp-welfare-project/?utm_campaign=podcast__andres-zorrilla&amp;utm_source=80kah&amp;utm_medium=podcast</link>
      <description>
        <![CDATA[<p>In this episode of 80k After Hours, Rob Wiblin interviews Andrés Jiménez Zorrilla about the <a href="https://www.shrimpwelfareproject.org/">Shrimp Welfare Project</a>, which he cofounded in 2021. It's the first project in the world focused on shrimp welfare specifically and now has six full-time staff.</p><p><a href="https://80000hours.org/after-hours-podcast/episodes/andres-jimenez-zorrilla-shrimp-welfare-project/?utm_campaign=podcast__andres-zorrilla&amp;utm_source=80kah&amp;utm_medium=podcast"><strong>Links to learn more, highlights and full transcript.</strong></a><strong></strong></p><p>They cover:</p><ul><li>The evidence for shrimp sentience</li><li>How farmers and the public feel about shrimp</li><li>The scale of the problem</li><li>What shrimp farming looks like</li><li>The killing process, and other welfare issues</li><li>Shrimp Welfare Project’s strategy</li><li>History of shrimp welfare work</li><li>What it’s like working in India and Vietnam</li><li>How to help</li></ul><p><br><strong>Who this episode is for:</strong></p><ul><li>People who care about animal welfare</li><li>People interested in new and unusual problems</li><li>People open to shrimp sentience<p></p></li></ul><p><strong>Who this episode isn’t for:</strong></p><ul><li>People who think shrimp couldn’t possibly be sentient</li><li>People who got called ‘shrimp’ a lot in high school and get anxious when they hear the word over and over again<p></p></li></ul><p><br><strong>Get this episode by subscribing to our more experimental podcast on the world’s most pressing problems and how to solve them: type ‘80k After Hours’ into your podcasting app. Or read the transcript below.</strong></p><p><br><em>Producer: Keiran Harris<br>Audio mastering: Ben Cordell and Ryan Kessler<br>Transcriptions: Katy Moore</em></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode of 80k After Hours, Rob Wiblin interviews Andrés Jiménez Zorrilla about the <a href="https://www.shrimpwelfareproject.org/">Shrimp Welfare Project</a>, which he cofounded in 2021. It's the first project in the world focused on shrimp welfare specifically and now has six full-time staff.</p><p><a href="https://80000hours.org/after-hours-podcast/episodes/andres-jimenez-zorrilla-shrimp-welfare-project/?utm_campaign=podcast__andres-zorrilla&amp;utm_source=80kah&amp;utm_medium=podcast"><strong>Links to learn more, highlights and full transcript.</strong></a><strong></strong></p><p>They cover:</p><ul><li>The evidence for shrimp sentience</li><li>How farmers and the public feel about shrimp</li><li>The scale of the problem</li><li>What shrimp farming looks like</li><li>The killing process, and other welfare issues</li><li>Shrimp Welfare Project’s strategy</li><li>History of shrimp welfare work</li><li>What it’s like working in India and Vietnam</li><li>How to help</li></ul><p><br><strong>Who this episode is for:</strong></p><ul><li>People who care about animal welfare</li><li>People interested in new and unusual problems</li><li>People open to shrimp sentience<p></p></li></ul><p><strong>Who this episode isn’t for:</strong></p><ul><li>People who think shrimp couldn’t possibly be sentient</li><li>People who got called ‘shrimp’ a lot in high school and get anxious when they hear the word over and over again<p></p></li></ul><p><br><strong>Get this episode by subscribing to our more experimental podcast on the world’s most pressing problems and how to solve them: type ‘80k After Hours’ into your podcasting app. Or read the transcript below.</strong></p><p><br><em>Producer: Keiran Harris<br>Audio mastering: Ben Cordell and Ryan Kessler<br>Transcriptions: Katy Moore</em></p>]]>
      </content:encoded>
      <pubDate>Mon, 05 Sep 2022 20:59:10 +0000</pubDate>
      <author>The 80,000 Hours team</author>
      <enclosure url="https://media.transistor.fm/f6dca81d/5ba26e8e.mp3" length="35830879" type="audio/mpeg"/>
      <itunes:author>The 80,000 Hours team</itunes:author>
      <itunes:image href="https://img.transistor.fm/y_91SJovZsrepta8RSd_X7VP1uwOv6gv607SYYictBg/rs:fill:0:0:1/w:3000/h:3000/q:60/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lcGlz/b2RlLzEwMTQyNzcv/MTY2MjQxMTQwOS1h/cnR3b3JrLmpwZw.jpg"/>
      <itunes:duration>4475</itunes:duration>
      <itunes:summary>In this episode of 80k After Hours, Rob Wiblin interviews Andrés Jiménez Zorrilla about the Shrimp Welfare Project, which he cofounded in 2021.</itunes:summary>
      <itunes:subtitle>In this episode of 80k After Hours, Rob Wiblin interviews Andrés Jiménez Zorrilla about the Shrimp Welfare Project, which he cofounded in 2021.</itunes:subtitle>
      <itunes:keywords>effective altruism, 80000 hours, impactful careers, existential risk</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:chapters url="https://share.transistor.fm/s/f6dca81d/chapters.json" type="application/json+chapters"/>
    </item>
    <item>
      <title>Space governance (Article)</title>
      <itunes:title>Space governance (Article)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">512c9479-4539-40de-9891-c8c0451434f3</guid>
      <link>https://80000hours.org/problem-profiles/space-governance/?utm_campaign=podcast__80kah&amp;utm_source=80kah&amp;utm_medium=podcast</link>
      <description>
        <![CDATA[<p>In this episode of 80k After Hours, Fin Moorhouse reads his problem profile on space governance.</p><p><a href="https://80000hours.org/problem-profiles/space-governance/?utm_campaign=podcast__80kah&amp;utm_source=80kah&amp;utm_medium=podcast"><strong>Here’s the original piece if you’d like to learn more</strong></a>.</p><p>If you want to hear more from Fin, you should check out his podcast <a href="https://www.hearthisidea.com/">Hear This Idea</a>, which showcases new thinking in philosophy, the social sciences, and <a href="https://www.effectivealtruism.org/">effective altruism</a>. And you can see a bunch of other things he's up to on his <a href="https://www.finmoorhouse.com/">website</a>.</p><p>You might also want to check out these relevant pieces:</p><ul><li><a href="https://forum.effectivealtruism.org/posts/QkRq6aRA84vv4xsu9/space-governance-is-important-tractable-and-neglected">Space governance is important, tractable and neglected</a> — Tobias Baumann</li><li><a href="https://forum.effectivealtruism.org/posts/xxcroGWRieSQjCw2N/an-informal-review-of-space-exploration">An Informal Review of Space Exploration</a></li><li><a href="https://globalprioritiesinstitute.org/how-cost-effective-are-efforts-to-detect-near-earth-objects-toby-newberry-future-of-humanity-institute-university-of-oxford/">How cost-effective are efforts to detect near-Earth-objects?</a> — Toby Newberry</li></ul><p><strong>Get this episode by subscribing to our more experimental podcast on the world’s most pressing problems and how to solve them: type 80k After Hours into your podcasting app.</strong></p><p><em>Producer: Keiran Harris<br>Audio mastering: Beppe Rådvik and Ben Cordell</em></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode of 80k After Hours, Fin Moorhouse reads his problem profile on space governance.</p><p><a href="https://80000hours.org/problem-profiles/space-governance/?utm_campaign=podcast__80kah&amp;utm_source=80kah&amp;utm_medium=podcast"><strong>Here’s the original piece if you’d like to learn more</strong></a>.</p><p>If you want to hear more from Fin, you should check out his podcast <a href="https://www.hearthisidea.com/">Hear This Idea</a>, which showcases new thinking in philosophy, the social sciences, and <a href="https://www.effectivealtruism.org/">effective altruism</a>. And you can see a bunch of other things he's up to on his <a href="https://www.finmoorhouse.com/">website</a>.</p><p>You might also want to check out these relevant pieces:</p><ul><li><a href="https://forum.effectivealtruism.org/posts/QkRq6aRA84vv4xsu9/space-governance-is-important-tractable-and-neglected">Space governance is important, tractable and neglected</a> — Tobias Baumann</li><li><a href="https://forum.effectivealtruism.org/posts/xxcroGWRieSQjCw2N/an-informal-review-of-space-exploration">An Informal Review of Space Exploration</a></li><li><a href="https://globalprioritiesinstitute.org/how-cost-effective-are-efforts-to-detect-near-earth-objects-toby-newberry-future-of-humanity-institute-university-of-oxford/">How cost-effective are efforts to detect near-Earth-objects?</a> — Toby Newberry</li></ul><p><strong>Get this episode by subscribing to our more experimental podcast on the world’s most pressing problems and how to solve them: type 80k After Hours into your podcasting app.</strong></p><p><em>Producer: Keiran Harris<br>Audio mastering: Beppe Rådvik and Ben Cordell</em></p>]]>
      </content:encoded>
      <pubDate>Thu, 30 Jun 2022 14:08:05 +0000</pubDate>
      <author>The 80,000 Hours team</author>
      <enclosure url="https://media.transistor.fm/aeb9876d/6a522940.mp3" length="22486367" type="audio/mpeg"/>
      <itunes:author>The 80,000 Hours team</itunes:author>
      <itunes:image href="https://img.transistor.fm/i44VEbXpDOTmUEFcZ6eDFo0gbnKrHUmOXlTOKRwqEPE/rs:fill:0:0:1/w:3000/h:3000/q:60/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lcGlz/b2RlLzkzNDYzMC8x/NjU2NTk4MDg1LWFy/dHdvcmsuanBn.jpg"/>
      <itunes:duration>2810</itunes:duration>
      <itunes:summary>In this episode of 80k After Hours, Fin Moorhouse reads his problem profile on space governance.</itunes:summary>
      <itunes:subtitle>In this episode of 80k After Hours, Fin Moorhouse reads his problem profile on space governance.</itunes:subtitle>
      <itunes:keywords>effective altruism, 80000 hours, impactful careers, existential risk</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
    </item>
    <item>
      <title>Clay Graubard and Robert de Neufville on forecasting the war in Ukraine</title>
      <itunes:title>Clay Graubard and Robert de Neufville on forecasting the war in Ukraine</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">2955c84c-de74-46ac-9e0b-ea3cb3f3b7af</guid>
      <link>https://80000hours.org/after-hours-podcast/episodes/clay-graubard-robert-de-neufville-forecasting-ukraine/?utm_campaign=podcast__clay-robert&amp;utm_source=80kah&amp;utm_medium=podcast</link>
      <description>
        <![CDATA[<p>In this episode of 80k After Hours, Rob Wiblin interviews Clay Graubard and Robert de Neufville about forecasting the war between Russia and Ukraine.</p><p><a href="https://80000hours.org/after-hours-podcast/episodes/clay-graubard-robert-de-neufville-forecasting-ukraine/?utm_campaign=podcast__clay-robert&amp;utm_source=80kah&amp;utm_medium=podcast"><strong>Links to learn more, highlights, and full transcript.</strong></a></p><p>They cover:</p><ul><li>Their early predictions for the war</li><li>The performance of the Russian military</li><li>The risk of nuclear weapons being used</li><li>The most interesting remaining topics on Russia and Ukraine</li><li>General lessons we can take from the war</li><li>The evolution of the forecasting space</li><li>What Robert and Clay were reading back in February</li><li>Forecasters vs. subject matter experts</li><li>Ways to get involved with the forecasting community</li><li>Impressive past predictions</li><li>And more</li></ul><p><br><strong>Who this episode is for:</strong></p><ul><li>People interested in forecasting</li><li>People interested in the war in Ukraine</li><li>People who prefer to know how likely they are to die in a nuclear war</li></ul><p><strong>Who this episode isn’t for:</strong></p><ul><li>People who’d hate it if a friend said they were 65% likely to come out for drinks</li><li>People who’d prefer if their death from nuclear war was a total surprise</li></ul><p><br><strong>Get this episode by subscribing to our more experimental podcast on the world’s most pressing problems and how to solve them: type ‘80k After Hours’ into your podcasting app. Or read the transcript below.</strong></p><p><br><em>Producer: Keiran Harris<br>Audio mastering: Ben Cordell<br>Transcriptions: Katy Moore</em></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode of 80k After Hours, Rob Wiblin interviews Clay Graubard and Robert de Neufville about forecasting the war between Russia and Ukraine.</p><p><a href="https://80000hours.org/after-hours-podcast/episodes/clay-graubard-robert-de-neufville-forecasting-ukraine/?utm_campaign=podcast__clay-robert&amp;utm_source=80kah&amp;utm_medium=podcast"><strong>Links to learn more, highlights, and full transcript.</strong></a></p><p>They cover:</p><ul><li>Their early predictions for the war</li><li>The performance of the Russian military</li><li>The risk of nuclear weapons being used</li><li>The most interesting remaining topics on Russia and Ukraine</li><li>General lessons we can take from the war</li><li>The evolution of the forecasting space</li><li>What Robert and Clay were reading back in February</li><li>Forecasters vs. subject matter experts</li><li>Ways to get involved with the forecasting community</li><li>Impressive past predictions</li><li>And more</li></ul><p><br><strong>Who this episode is for:</strong></p><ul><li>People interested in forecasting</li><li>People interested in the war in Ukraine</li><li>People who prefer to know how likely they are to die in a nuclear war</li></ul><p><strong>Who this episode isn’t for:</strong></p><ul><li>People who’d hate it if a friend said they were 65% likely to come out for drinks</li><li>People who’d prefer if their death from nuclear war was a total surprise</li></ul><p><br><strong>Get this episode by subscribing to our more experimental podcast on the world’s most pressing problems and how to solve them: type ‘80k After Hours’ into your podcasting app. Or read the transcript below.</strong></p><p><br><em>Producer: Keiran Harris<br>Audio mastering: Ben Cordell<br>Transcriptions: Katy Moore</em></p>]]>
      </content:encoded>
      <pubDate>Wed, 25 May 2022 22:03:49 +0000</pubDate>
      <author>The 80,000 Hours team</author>
      <enclosure url="https://media.transistor.fm/c70ff4c5/798de385.mp3" length="57466566" type="audio/mpeg"/>
      <itunes:author>The 80,000 Hours team</itunes:author>
      <itunes:image href="https://img.transistor.fm/ILl4QSkdA2vpm2Q_PQUQnDLvDOvAwVY8Gln8PdexA88/rs:fill:0:0:1/w:3000/h:3000/q:60/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lcGlz/b2RlLzg5OTg3NS8x/NjUzNDk4MzI4LWFy/dHdvcmsuanBn.jpg"/>
      <itunes:duration>7175</itunes:duration>
      <itunes:summary>In this episode of 80k After Hours, Rob Wiblin interviews Clay Graubard and Robert de Neufville about forecasting the war between Russia and Ukraine.</itunes:summary>
      <itunes:subtitle>In this episode of 80k After Hours, Rob Wiblin interviews Clay Graubard and Robert de Neufville about forecasting the war between Russia and Ukraine.</itunes:subtitle>
      <itunes:keywords>effective altruism, 80000 hours, impactful careers, existential risk</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
    </item>
    <item>
      <title>Michelle and Habiba on what they’d tell their younger selves, and the impact of the 1-1 team</title>
      <itunes:title>Michelle and Habiba on what they’d tell their younger selves, and the impact of the 1-1 team</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">7f8ad303-b69d-4c4d-92fd-044abd268b94</guid>
      <link>https://80000hours.org/after-hours-podcast/episodes/michelle-habiba-advice-for-younger-selves/?utm_campaign=podcast__michelle-habiba&amp;utm_source=80kah&amp;utm_medium=podcast</link>
      <description>
        <![CDATA[<p>In this episode of 80k After Hours, Rob continues to interview his 80,000 Hours colleagues Michelle Hutchinson and Habiba Islam about the 1-1 team.</p><p><a href="https://80000hours.org/podcast/episodes/80k-after-hours-michelle-habiba-advice-for-younger-selves/?utm_campaign=podcast__michelle-habiba&amp;utm_source=80kah&amp;utm_medium=podcast"><strong>Links to learn more, highlights, and full transcript.</strong></a></p><p>This is the second of a two-part interview. You can find the <a href="https://80000hours.org/podcast/episodes/michelle-hutchinson-habiba-islam-themes-from-careers-advising"><strong>first part</strong></a> on the original 80,000 Hours Podcast feed.</p><p>In this part, they cover:</p><ul><li>Whether just encouraging someone young to aspire to more than they currently do is one of the most impactful ways to spend half an hour</li><li>How much impact the one-on-one team has, the biggest challenges they face as a group, and different paths they could have gone down</li><li>Whether giving general advice is a doomed enterprise</li><li>And more</li></ul><p><br><strong>Who this episode is for:</strong></p><ul><li>Young people interested in 80,000 Hours</li><li>People curious about the inner workings of the 1-1 team</li><li>People who left the first part wanting more</li></ul><p><br><strong>Who this episode isn’t for:</strong></p><ul><li>People who left the first part wanting less</li><li>People who like up-to-date movie recommendations</li></ul><p><em>Want to get free one-on-one advice from our team? We're here to help.</em></p><p><em>We’ve helped thousands of people formulate their plans and put them in touch with mentors.</em></p><p><em>We've expanded our ability to deliver one-on-one meetings, so we're keen to help more people than ever before. </em><strong><em>If you're a regular listener to the show, we're especially likely to want to speak with you.</em></strong></p><p><a href="https://80000hours.org/speak-with-us/?int_campaign=podcast__michelle-habiba"><strong><em>Learn about and apply for advising.</em></strong></a></p><p><br><strong>Get this episode by subscribing to our more experimental podcast on the world’s most pressing problems and how to solve them: type 80k After Hours into your podcasting app.</strong></p><p><em>Producer: Keiran Harris<br>Audio mastering: Ben Cordell<br>Transcriptions: Katy Moore</em></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode of 80k After Hours, Rob continues to interview his 80,000 Hours colleagues Michelle Hutchinson and Habiba Islam about the 1-1 team.</p><p><a href="https://80000hours.org/podcast/episodes/80k-after-hours-michelle-habiba-advice-for-younger-selves/?utm_campaign=podcast__michelle-habiba&amp;utm_source=80kah&amp;utm_medium=podcast"><strong>Links to learn more, highlights, and full transcript.</strong></a></p><p>This is the second of a two-part interview. You can find the <a href="https://80000hours.org/podcast/episodes/michelle-hutchinson-habiba-islam-themes-from-careers-advising"><strong>first part</strong></a> on the original 80,000 Hours Podcast feed.</p><p>In this part, they cover:</p><ul><li>Whether just encouraging someone young to aspire to more than they currently do is one of the most impactful ways to spend half an hour</li><li>How much impact the one-on-one team has, the biggest challenges they face as a group, and different paths they could have gone down</li><li>Whether giving general advice is a doomed enterprise</li><li>And more</li></ul><p><br><strong>Who this episode is for:</strong></p><ul><li>Young people interested in 80,000 Hours</li><li>People curious about the inner workings of the 1-1 team</li><li>People who left the first part wanting more</li></ul><p><br><strong>Who this episode isn’t for:</strong></p><ul><li>People who left the first part wanting less</li><li>People who like up-to-date movie recommendations</li></ul><p><em>Want to get free one-on-one advice from our team? We're here to help.</em></p><p><em>We’ve helped thousands of people formulate their plans and put them in touch with mentors.</em></p><p><em>We've expanded our ability to deliver one-on-one meetings, so we're keen to help more people than ever before. </em><strong><em>If you're a regular listener to the show, we're especially likely to want to speak with you.</em></strong></p><p><a href="https://80000hours.org/speak-with-us/?int_campaign=podcast__michelle-habiba"><strong><em>Learn about and apply for advising.</em></strong></a></p><p><br><strong>Get this episode by subscribing to our more experimental podcast on the world’s most pressing problems and how to solve them: type 80k After Hours into your podcasting app.</strong></p><p><em>Producer: Keiran Harris<br>Audio mastering: Ben Cordell<br>Transcriptions: Katy Moore</em></p>]]>
      </content:encoded>
      <pubDate>Wed, 09 Mar 2022 17:42:43 +0000</pubDate>
      <author>The 80,000 Hours team</author>
      <enclosure url="https://media.transistor.fm/8aae4b08/c782b592.mp3" length="31154840" type="audio/mpeg"/>
      <itunes:author>The 80,000 Hours team</itunes:author>
      <itunes:image href="https://img.transistor.fm/FP6f1W_laq0MEYXoYq8-D1NtdNoD7DvTyEHC4xkxhzs/rs:fill:0:0:1/w:3000/h:3000/q:60/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lcGlz/b2RlLzgyNzE0MS8x/NjQ2ODQ3MjkxLWFy/dHdvcmsuanBn.jpg"/>
      <itunes:duration>3894</itunes:duration>
      <itunes:summary>In this episode of 80k After Hours, Rob continues to interview his 80,000 Hours colleagues Michelle Hutchinson and Habiba Islam about the 1-1 team.</itunes:summary>
      <itunes:subtitle>In this episode of 80k After Hours, Rob continues to interview his 80,000 Hours colleagues Michelle Hutchinson and Habiba Islam about the 1-1 team.</itunes:subtitle>
      <itunes:keywords>effective altruism, 80000 hours, impactful careers, existential risk</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
    </item>
    <item>
      <title>Alex Lawsen on his advice for students</title>
      <itunes:title>Alex Lawsen on his advice for students</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">43c8529d-f9c4-4890-8588-fa6808883572</guid>
      <link>https://80000hours.org/podcast/episodes/80k-after-hours-alex-lawsen-advice-for-students/?utm_campaign=podcast__Alex-Lawsen&amp;utm_source=80kah&amp;utm_medium=podcast</link>
      <description>
        <![CDATA[<p>In this episode of 80k After Hours, Keiran Harris interviews 80,000 Hours advisor (and former high school teacher) Alex Lawsen about his advice for students.</p><p><a href="https://80000hours.org/podcast/episodes/80k-after-hours-alex-lawsen-advice-for-students/?utm_campaign=podcast__Alex-Lawsen&amp;utm_source=80kah&amp;utm_medium=podcast"><strong>Links to learn more, highlights, and full transcript.</strong></a></p><p>They cover:</p><ul><li>When half-assing something is a good idea</li><li>When you should actually learn things vs. just trying to seem smart</li><li>Why you should shift your focus over the academic year</li><li>Novel tips for preparing for exams</li><li>What to do if you struggle with motivation</li><li>What to do when you have bad teachers</li><li>How students should think about exploring and experimenting</li><li>Bad approaches to learning</li><li>How to think about personal goals</li><li>When to start thinking about your career seriously</li><li>And more</li></ul><p><strong>Who this episode is for:</strong></p><ul><li>Students, parents, and teachers</li><li>People who know a student, parent, or teacher</li><li>People with an interest in improving education</li></ul><p><strong>Who this episode isn’t for:</strong></p><ul><li>People with no interest in improving education</li><li>People who get mad when an 80,000 Hours podcast doesn’t feature Rob Wiblin</li></ul><p><em>Our 1-1 team can talk to you about career options, make introductions in your chosen fields, and help you work out next steps on a free careers call. </em><a href="https://80000hours.org/speak-with-us/?utm_source=Podcast&amp;utm_campaign=2022-podcast-alex-advice&amp;utm_medium=podcast"><strong><em>Apply now</em></strong><em>.</em></a></p><p><strong>Get this episode by subscribing to our more experimental podcast on the world’s most pressing problems and how to solve them: type 80k After Hours into your podcasting app.</strong></p><p><em>Producer: Keiran Harris<br>Audio mastering: Ryan Kessler<br>Transcriptions: Katy Moore</em></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode of 80k After Hours, Keiran Harris interviews 80,000 Hours advisor (and former high school teacher) Alex Lawsen about his advice for students.</p><p><a href="https://80000hours.org/podcast/episodes/80k-after-hours-alex-lawsen-advice-for-students/?utm_campaign=podcast__Alex-Lawsen&amp;utm_source=80kah&amp;utm_medium=podcast"><strong>Links to learn more, highlights, and full transcript.</strong></a></p><p>They cover:</p><ul><li>When half-assing something is a good idea</li><li>When you should actually learn things vs. just trying to seem smart</li><li>Why you should shift your focus over the academic year</li><li>Novel tips for preparing for exams</li><li>What to do if you struggle with motivation</li><li>What to do when you have bad teachers</li><li>How students should think about exploring and experimenting</li><li>Bad approaches to learning</li><li>How to think about personal goals</li><li>When to start thinking about your career seriously</li><li>And more</li></ul><p><strong>Who this episode is for:</strong></p><ul><li>Students, parents, and teachers</li><li>People who know a student, parent, or teacher</li><li>People with an interest in improving education</li></ul><p><strong>Who this episode isn’t for:</strong></p><ul><li>People with no interest in improving education</li><li>People who get mad when an 80,000 Hours podcast doesn’t feature Rob Wiblin</li></ul><p><em>Our 1-1 team can talk to you about career options, make introductions in your chosen fields, and help you work out next steps on a free careers call. </em><a href="https://80000hours.org/speak-with-us/?utm_source=Podcast&amp;utm_campaign=2022-podcast-alex-advice&amp;utm_medium=podcast"><strong><em>Apply now</em></strong><em>.</em></a></p><p><strong>Get this episode by subscribing to our more experimental podcast on the world’s most pressing problems and how to solve them: type 80k After Hours into your podcasting app.</strong></p><p><em>Producer: Keiran Harris<br>Audio mastering: Ryan Kessler<br>Transcriptions: Katy Moore</em></p>]]>
      </content:encoded>
      <pubDate>Mon, 28 Feb 2022 22:16:26 +0000</pubDate>
      <author>The 80,000 Hours team</author>
      <enclosure url="https://media.transistor.fm/485d9b24/0ef2f697.mp3" length="69492905" type="audio/mpeg"/>
      <itunes:author>The 80,000 Hours team</itunes:author>
      <itunes:image href="https://img.transistor.fm/oIALvlbv5T_DyS2R9tMgdBvAdT98MGoDzwPYuIAldjI/rs:fill:0:0:1/w:3000/h:3000/q:60/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lcGlz/b2RlLzgxODYxMy8x/NjQ2MDcyODAxLWFy/dHdvcmsuanBn.jpg"/>
      <itunes:duration>8686</itunes:duration>
      <itunes:summary>In this episode of 80k After Hours, Keiran Harris interviews 80,000 Hours advisor (and former high school teacher) Alex Lawsen about his advice for students.</itunes:summary>
      <itunes:subtitle>In this episode of 80k After Hours, Keiran Harris interviews 80,000 Hours advisor (and former high school teacher) Alex Lawsen about his advice for students.</itunes:subtitle>
      <itunes:keywords>effective altruism, 80000 hours, impactful careers, existential risk</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
    </item>
    <item>
      <title>Rob and Keiran on the philosophy of The 80,000 Hours Podcast</title>
      <itunes:title>Rob and Keiran on the philosophy of The 80,000 Hours Podcast</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">e82fc16f-1caf-4943-8f13-3c9fef34240e</guid>
      <link>https://80000hours.org/podcast/episodes/80k-after-hours-philosophy-of-the-80000-hours-podcast/?utm_campaign=podcast__Rob-Keiran&amp;utm_source=80kah&amp;utm_medium=podcast</link>
      <description>
        <![CDATA[<p>In this episode of 80k After Hours, Rob Wiblin and Keiran Harris are interviewed by Kearney Capuano and Aaron Bergman of the new podcast ‘All Good’ about what goes on behind the scenes at The 80,000 Hours Podcast.</p><p><a href="https://80000hours.org/podcast/episodes/80k-after-hours-philosophy-of-the-80000-hours-podcast/?utm_campaign=podcast__Rob-Keiran&amp;utm_source=80kah&amp;utm_medium=podcast"><strong>Links to learn more, highlights, and full transcript.</strong></a></p><p>We cover:</p><ul><li>The history and philosophy of The 80,000 Hours Podcast</li><li>The nuts and bolts of how we make the show</li><li>Rob’s bad habits as an interviewer</li><li>Topics we try to avoid</li><li>Critiques of the show</li><li>The pros and cons of podcasting vs. other mediums</li><li>Our position in the effective altruism community</li><li>Whether there’s an optimism bias in the EA community</li><li>Unifying themes of Rob and Keiran’s careers</li><li>Advice for other podcasters</li><li>And more</li></ul><p><strong>Who this episode is for:</strong></p><ul><li>Fans of The 80,000 Hours Podcast</li><li>New podcasters</li><li>Two 80,000 Hours employees who love the sound of their own voices</li></ul><p><strong>Who this episode isn’t for:</strong></p><ul><li>People who’ve never heard of The 80,000 Hours Podcast</li><li>People who only want to learn about more important topics</li><li>People who hate podcasts</li></ul><p><strong>Get this episode by subscribing to our more experimental podcast on the world’s most pressing problems and how to solve them: type 80k After Hours into your podcasting app.</strong></p><p><em>Producer: Keiran Harris<br>Audio mastering: Ben Cordell<br>Transcriptions: Katy Moore</em></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode of 80k After Hours, Rob Wiblin and Keiran Harris are interviewed by Kearney Capuano and Aaron Bergman of the new podcast ‘All Good’ about what goes on behind the scenes at The 80,000 Hours Podcast.</p><p><a href="https://80000hours.org/podcast/episodes/80k-after-hours-philosophy-of-the-80000-hours-podcast/?utm_campaign=podcast__Rob-Keiran&amp;utm_source=80kah&amp;utm_medium=podcast"><strong>Links to learn more, highlights, and full transcript.</strong></a></p><p>We cover:</p><ul><li>The history and philosophy of The 80,000 Hours Podcast</li><li>The nuts and bolts of how we make the show</li><li>Rob’s bad habits as an interviewer</li><li>Topics we try to avoid</li><li>Critiques of the show</li><li>The pros and cons of podcasting vs. other mediums</li><li>Our position in the effective altruism community</li><li>Whether there’s an optimism bias in the EA community</li><li>Unifying themes of Rob and Keiran’s careers</li><li>Advice for other podcasters</li><li>And more</li></ul><p><strong>Who this episode is for:</strong></p><ul><li>Fans of The 80,000 Hours Podcast</li><li>New podcasters</li><li>Two 80,000 Hours employees who love the sound of their own voices</li></ul><p><strong>Who this episode isn’t for:</strong></p><ul><li>People who’ve never heard of The 80,000 Hours Podcast</li><li>People who only want to learn about more important topics</li><li>People who hate podcasts</li></ul><p><strong>Get this episode by subscribing to our more experimental podcast on the world’s most pressing problems and how to solve them: type 80k After Hours into your podcasting app.</strong></p><p><em>Producer: Keiran Harris<br>Audio mastering: Ben Cordell<br>Transcriptions: Katy Moore</em></p>]]>
      </content:encoded>
      <pubDate>Mon, 28 Feb 2022 22:14:52 +0000</pubDate>
      <author>The 80,000 Hours team</author>
      <enclosure url="https://media.transistor.fm/f69947e4/12c8351d.mp3" length="54503028" type="audio/mpeg"/>
      <itunes:author>The 80,000 Hours team</itunes:author>
      <itunes:image href="https://img.transistor.fm/TpBzbIyS3jpnpgtuq8_W_IH-pgvbhFf6VPJj1FbyvNA/rs:fill:0:0:1/w:3000/h:3000/q:60/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9lcGlz/b2RlLzgxODYwOC8x/NjQ2MDcyNTYwLWFy/dHdvcmsuanBn.jpg"/>
      <itunes:duration>6812</itunes:duration>
      <itunes:summary>In this episode of 80k After Hours, Rob Wiblin and Keiran Harris are interviewed by Kearney Capuano and Aaron Bergman of the new podcast ‘All Good’ about what goes on behind the scenes at The 80,000 Hours Podcast.</itunes:summary>
      <itunes:subtitle>In this episode of 80k After Hours, Rob Wiblin and Keiran Harris are interviewed by Kearney Capuano and Aaron Bergman of the new podcast ‘All Good’ about what goes on behind the scenes at The 80,000 Hours Podcast.</itunes:subtitle>
      <itunes:keywords>effective altruism, 80000 hours, impactful careers, existential risk</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
    </item>
    <item>
      <title>Introducing 80k After Hours</title>
      <itunes:title>Introducing 80k After Hours</itunes:title>
      <itunes:episodeType>trailer</itunes:episodeType>
      <guid isPermaLink="false">031e42db-2080-4a05-965c-fcc7d1f92016</guid>
      <link>https://80000hours.org/after-hours-podcast/?utm_campaign=podcast__introducing-80kah&amp;utm_source=80kah&amp;utm_medium=podcast</link>
      <description>
        <![CDATA[<p><em>80k After Hours</em> is a podcast by the team that brings you <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>Here Rob Wiblin and Keiran Harris briefly explain what to expect from <a href="https://80000hours.org/after-hours-podcast/?utm_campaign=podcast__introducing-80kah&amp;utm_source=80kah&amp;utm_medium=podcast">the new show</a>.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p><em>80k After Hours</em> is a podcast by the team that brings you <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>Here Rob Wiblin and Keiran Harris briefly explain what to expect from <a href="https://80000hours.org/after-hours-podcast/?utm_campaign=podcast__introducing-80kah&amp;utm_source=80kah&amp;utm_medium=podcast">the new show</a>.</p>]]>
      </content:encoded>
      <pubDate>Thu, 24 Feb 2022 23:47:37 +0000</pubDate>
      <author>The 80,000 Hours team</author>
      <enclosure url="https://media.transistor.fm/724ead45/5028e787.mp3" length="6535866" type="audio/mpeg"/>
      <itunes:author>The 80,000 Hours team</itunes:author>
      <itunes:duration>810</itunes:duration>
      <itunes:summary>
        <![CDATA[<p><em>80k After Hours</em> is a podcast by the team that brings you <a href="https://80000hours.org/podcast/">The 80,000 Hours Podcast</a>.</p><p>Here Rob Wiblin and Keiran Harris briefly explain what to expect from <a href="https://80000hours.org/after-hours-podcast/?utm_campaign=podcast__introducing-80kah&amp;utm_source=80kah&amp;utm_medium=podcast">the new show</a>.</p>]]>
      </itunes:summary>
      <itunes:keywords>effective altruism, 80000 hours, impactful careers, existential risk</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
    </item>
  </channel>
</rss>
