<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet href="/stylesheet.xsl" type="text/xsl"?>
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:sy="http://purl.org/rss/1.0/modules/syndication/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:podcast="https://podcastindex.org/namespace/1.0">
  <channel>
    <atom:link rel="self" type="application/rss+xml" href="https://feeds.transistor.fm/human-centered-security" title="MP3 Audio"/>
    <atom:link rel="hub" href="https://pubsubhubbub.appspot.com/"/>
    <podcast:podping usesPodping="true"/>
    <title>Human-Centered Security</title>
    <generator>Transistor (https://transistor.fm)</generator>
    <itunes:new-feed-url>https://feeds.transistor.fm/human-centered-security</itunes:new-feed-url>
    <description>Cybersecurity is complex. Its user experience doesn’t have to be. Heidi Trost interviews information security experts about how we can make it easier for people—and their organizations—to stay secure.</description>
    <copyright>2020 Voice+Code</copyright>
    <podcast:guid>c0f6fd1b-815a-508d-aae6-244e2ce6de68</podcast:guid>
    <podcast:locked owner="sales@voiceandcode.com">no</podcast:locked>
    <language>en</language>
    <pubDate>Mon, 25 Aug 2025 05:00:11 -0400</pubDate>
    <lastBuildDate>Tue, 02 Dec 2025 17:03:20 -0500</lastBuildDate>
    <image>
      <url>https://img.transistor.fm/nSBo9KzIYbI2tf2VwAzZdkD2MTG5Ilh8jzkwLvzWkfw/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9zaG93/LzE2NzkxLzE2MDc0/NDYzMzctYXJ0d29y/ay5qcGc.jpg</url>
      <title>Human-Centered Security</title>
    </image>
    <itunes:category text="Technology"/>
    <itunes:category text="Business"/>
    <itunes:type>episodic</itunes:type>
    <itunes:author>Voice+Code</itunes:author>
    <itunes:image href="https://img.transistor.fm/nSBo9KzIYbI2tf2VwAzZdkD2MTG5Ilh8jzkwLvzWkfw/rs:fill:0:0:1/w:1400/h:1400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9zaG93/LzE2NzkxLzE2MDc0/NDYzMzctYXJ0d29y/ay5qcGc.jpg"/>
    <itunes:summary>Cybersecurity is complex. Its user experience doesn’t have to be. Heidi Trost interviews information security experts about how we can make it easier for people—and their organizations—to stay secure.</itunes:summary>
    <itunes:subtitle>Cybersecurity is complex.</itunes:subtitle>
    <itunes:keywords>cybersecurity, information security, ux, user experience</itunes:keywords>
    <itunes:owner>
      <itunes:name>Heidi Trost</itunes:name>
    </itunes:owner>
    <itunes:complete>No</itunes:complete>
    <itunes:explicit>Yes</itunes:explicit>
    <item>
      <title>No Threat Intel Team? No Problem. Let’s Pretend You Do! with Mike Kosak</title>
      <itunes:episode>59</itunes:episode>
      <podcast:episode>59</podcast:episode>
      <itunes:title>No Threat Intel Team? No Problem. Let’s Pretend You Do! with Mike Kosak</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">987a2d52-cc14-4512-8501-70f40b24366f</guid>
      <link>https://share.transistor.fm/s/2247b841</link>
      <description>
        <![CDATA[<p>In this episode, Mike Kosak explains what threat intelligence really is (Mike’s former boss said you have to “rub some thinking on it”), how to define priority intelligence requirements (PIRs), how to threat model, where to find threat intel, and how to keep it actionable with tight feedback loops—not panic.</p><p>Key takeaways:</p><ul><li><strong>Threat intel ≠ data.</strong> It’s analyzed info focused “<strong>walls-out</strong>” (what’s outside your org), then shared clearly so people can act.</li><li><strong>Start with PIRs.</strong> Ask: <em>What are we protecting? What is most valuable to our company? What might threat actors want? How do they operate? What do we need to know to defend?</em> Do this with a broad set of stakeholders, not just the security team.</li><li><strong>Communicate clearly and with context.</strong> Intelligence is only valuable if it’s shared in a way others can understand and act on. Avoid overwhelming people with raw data or inducing panic — provide <em>actionable insights</em> that are right-sized for the audience. <ul><li>Mike’s advice: “As a threat intelligence analyst, if you’re doing your job right, when somebody hears from you they know they need to act on it. You don’t want to be the chicken little where you make everybody freak out about everything.”</li></ul></li><li><strong>Start small and iterate.</strong> Even if you’re a one-person team, you can make a big impact. Use free resources (like MITRE ATT&amp;CK, open-source feeds, or even vendor reports), summarize what’s relevant, and push that out. Then refine based on feedback—treat it as a continuous cycle, not a one-and-done project. <ul><li>Mike admits, “I always say it’s like painting the Golden Gate Bridge. As soon as you get done, you gotta start back at the other end. That’s basically what it is.”</li></ul></li></ul><p>Mike Kosak is the Senior Principal Intelligence Analyst at LastPass. Mike references a series of articles he wrote, including “Setting Up a Threat Intelligence Program From Scratch.” <a href="https://blog.lastpass.com/posts/setting-up-a-threat-intelligence-program-from-scratch-in-plain-language">https://blog.lastpass.com/posts/setting-up-a-threat-intelligence-program-from-scratch-in-plain-language</a></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode, Mike Kosak explains what threat intelligence really is (Mike’s former boss said you have to “rub some thinking on it”), how to define priority intelligence requirements (PIRs), how to threat model, where to find threat intel, and how to keep it actionable with tight feedback loops—not panic.</p><p>Key takeaways:</p><ul><li><strong>Threat intel ≠ data.</strong> It’s analyzed info focused “<strong>walls-out</strong>” (what’s outside your org), then shared clearly so people can act.</li><li><strong>Start with PIRs.</strong> Ask: <em>What are we protecting? What is most valuable to our company? What might threat actors want? How do they operate? What do we need to know to defend?</em> Do this with a broad set of stakeholders, not just the security team.</li><li><strong>Communicate clearly and with context.</strong> Intelligence is only valuable if it’s shared in a way others can understand and act on. Avoid overwhelming people with raw data or inducing panic — provide <em>actionable insights</em> that are right-sized for the audience. <ul><li>Mike’s advice: “As a threat intelligence analyst, if you’re doing your job right, when somebody hears from you they know they need to act on it. You don’t want to be the chicken little where you make everybody freak out about everything.”</li></ul></li><li><strong>Start small and iterate.</strong> Even if you’re a one-person team, you can make a big impact. Use free resources (like MITRE ATT&amp;CK, open-source feeds, or even vendor reports), summarize what’s relevant, and push that out. Then refine based on feedback—treat it as a continuous cycle, not a one-and-done project. <ul><li>Mike admits, “I always say it’s like painting the Golden Gate Bridge. As soon as you get done, you gotta start back at the other end. That’s basically what it is.”</li></ul></li></ul><p>Mike Kosak is the Senior Principal Intelligence Analyst at LastPass. Mike references a series of articles he wrote, including “Setting Up a Threat Intelligence Program From Scratch.” <a href="https://blog.lastpass.com/posts/setting-up-a-threat-intelligence-program-from-scratch-in-plain-language">https://blog.lastpass.com/posts/setting-up-a-threat-intelligence-program-from-scratch-in-plain-language</a></p>]]>
      </content:encoded>
      <pubDate>Mon, 25 Aug 2025 05:00:00 -0400</pubDate>
      <author>Voice+Code</author>
      <enclosure url="https://media.transistor.fm/2247b841/fcbefd79.mp3" length="48318102" type="audio/mpeg"/>
      <itunes:author>Voice+Code</itunes:author>
      <itunes:duration>3019</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode, Mike Kosak explains what threat intelligence really is (Mike’s former boss said you have to “rub some thinking on it”), how to define priority intelligence requirements (PIRs), how to threat model, where to find threat intel, and how to keep it actionable with tight feedback loops—not panic.</p><p>Key takeaways:</p><ul><li><strong>Threat intel ≠ data.</strong> It’s analyzed info focused “<strong>walls-out</strong>” (what’s outside your org), then shared clearly so people can act.</li><li><strong>Start with PIRs.</strong> Ask: <em>What are we protecting? What is most valuable to our company? What might threat actors want? How do they operate? What do we need to know to defend?</em> Do this with a broad set of stakeholders, not just the security team.</li><li><strong>Communicate clearly and with context.</strong> Intelligence is only valuable if it’s shared in a way others can understand and act on. Avoid overwhelming people with raw data or inducing panic — provide <em>actionable insights</em> that are right-sized for the audience. <ul><li>Mike’s advice: “As a threat intelligence analyst, if you’re doing your job right, when somebody hears from you they know they need to act on it. You don’t want to be the chicken little where you make everybody freak out about everything.”</li></ul></li><li><strong>Start small and iterate.</strong> Even if you’re a one-person team, you can make a big impact. Use free resources (like MITRE ATT&amp;CK, open-source feeds, or even vendor reports), summarize what’s relevant, and push that out. Then refine based on feedback—treat it as a continuous cycle, not a one-and-done project. <ul><li>Mike admits, “I always say it’s like painting the Golden Gate Bridge. As soon as you get done, you gotta start back at the other end. That’s basically what it is.”</li></ul></li></ul><p>Mike Kosak is the Senior Principal Intelligence Analyst at LastPass. Mike references a series of articles he wrote, including “Setting Up a Threat Intelligence Program From Scratch.” <a href="https://blog.lastpass.com/posts/setting-up-a-threat-intelligence-program-from-scratch-in-plain-language">https://blog.lastpass.com/posts/setting-up-a-threat-intelligence-program-from-scratch-in-plain-language</a></p>]]>
      </itunes:summary>
      <itunes:keywords>cybersecurity, information security, ux, user experience</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
    </item>
    <item>
      <title>We Regret to Inform You: Your Phishing Training Did Nothing with Ariana Mirian</title>
      <itunes:episode>58</itunes:episode>
      <podcast:episode>58</podcast:episode>
      <itunes:title>We Regret to Inform You: Your Phishing Training Did Nothing with Ariana Mirian</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">4ef7f2c2-e44d-41e8-80fd-821db95c5b11</guid>
      <link>https://share.transistor.fm/s/de7eeadb</link>
      <description>
        <![CDATA[<p>You click on a link in an email—as one does. Suddenly you see a message from your organization, “You’ve been phished! Now you need some training!” What do you do next? If you’re like most busy humans, you skip it and move on.</p><p><br></p><p>Researcher Ariana Mirian (and co-authors Grant Ho, Elisa Luo, Khang Tong, Euyhyun Lee, Lin Liu, Christopher A. Longhurst, Christian Dameff, Stefan Savage, Geoffrey M. Voelker) uncovered similar results in their study <a href="https://www.computer.org/csdl/proceedings-article/sp/2025/223600a076/21B7RjYyG9q">“Understanding the Efficacy of Phishing Training in Practice.”</a> The solution? Ariana suggests focusing on a more effective fix: designing safer systems.</p><p>In the episode we talk about:</p><ul><li>Annual cybersecurity awareness training doesn’t reduce the likelihood of clicking on phishing links, even if completed recently. Employees who finished training recently show similar phishing failure rates to those who completed it months ago. The study notes, “Employees who recently completed such training, which has significant focus on social engineering and phishing defenses, have similar phishing failure rates compared to other employees who completed awareness training many months ago.”</li><li>Phishing simulations combined with training (where companies send out fake phishing emails to employees and, for those who click on the links, lead those employees through training) had little impact on whether participants would click phishing links in the future. </li><li>Ariana was hopeful about interactive training but found that too few participants engaged with it to draw meaningful conclusions. </li><li>The type of phishing lure (e.g., password reset vs. vacation policy change) influenced whether users clicked. Ariana warned that certain lures could artificially lower click rates.</li><li>Ultimately, Ariana suggests focusing on designing safer systems—where the burden is taken off the end users. 
She recommends two-factor authentication, using phishing-resistant hardware keys (like YubiKeys), and blocking phishing emails before they reach users.</li></ul><p><br></p><p>This quote from the study stood out to me: “Our results suggest that organizations like ours should not expect training, as commonly deployed today, to substantially protect against phishing attacks—the magnitude of protection afforded is simply too small and employees remain susceptible even after repeated training.”</p><p><br></p><p>This highlights the need for safer system design, especially for critical services like email, which—and this is important—inherently relies on users clicking links.</p><p><br></p><p>Ariana Mirian is a senior security researcher at Censys. She completed her PhD at UC San Diego and co-authored the paper, “Understanding the Efficacy of Phishing Training in Practice.”</p><p><br></p><p>G. Ho et al., <a href="https://www.computer.org/csdl/proceedings-article/sp/2025/223600a076/21B7RjYyG9q">"Understanding the Efficacy of Phishing Training in Practice,"</a> in <em>2025 IEEE Symposium on Security and Privacy</em> (SP), San Francisco, CA, 2025, pp. 37-54, doi: 10.1109/SP61157.2025.00076.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>You click on a link in an email—as one does. Suddenly you see a message from your organization, “You’ve been phished! Now you need some training!” What do you do next? If you’re like most busy humans, you skip it and move on.</p><p><br></p><p>Researcher Ariana Mirian (and co-authors Grant Ho, Elisa Luo, Khang Tong, Euyhyun Lee, Lin Liu, Christopher A. Longhurst, Christian Dameff, Stefan Savage, Geoffrey M. Voelker) uncovered similar results in their study <a href="https://www.computer.org/csdl/proceedings-article/sp/2025/223600a076/21B7RjYyG9q">“Understanding the Efficacy of Phishing Training in Practice.”</a> The solution? Ariana suggests focusing on a more effective fix: designing safer systems.</p><p>In the episode we talk about:</p><ul><li>Annual cybersecurity awareness training doesn’t reduce the likelihood of clicking on phishing links, even if completed recently. Employees who finished training recently show similar phishing failure rates to those who completed it months ago. The study notes, “Employees who recently completed such training, which has significant focus on social engineering and phishing defenses, have similar phishing failure rates compared to other employees who completed awareness training many months ago.”</li><li>Phishing simulations combined with training (where companies send out fake phishing emails to employees and, for those who click on the links, lead those employees through training) had little impact on whether participants would click phishing links in the future. </li><li>Ariana was hopeful about interactive training but found that too few participants engaged with it to draw meaningful conclusions. </li><li>The type of phishing lure (e.g., password reset vs. vacation policy change) influenced whether users clicked. Ariana warned that certain lures could artificially lower click rates.</li><li>Ultimately, Ariana suggests focusing on designing safer systems—where the burden is taken off the end users. 
She recommends two-factor authentication, using phishing-resistant hardware keys (like YubiKeys), and blocking phishing emails before they reach users.</li></ul><p><br></p><p>This quote from the study stood out to me: “Our results suggest that organizations like ours should not expect training, as commonly deployed today, to substantially protect against phishing attacks—the magnitude of protection afforded is simply too small and employees remain susceptible even after repeated training.”</p><p><br></p><p>This highlights the need for safer system design, especially for critical services like email, which—and this is important—inherently relies on users clicking links.</p><p><br></p><p>Ariana Mirian is a senior security researcher at Censys. She completed her PhD at UC San Diego and co-authored the paper, “Understanding the Efficacy of Phishing Training in Practice.”</p><p><br></p><p>G. Ho et al., <a href="https://www.computer.org/csdl/proceedings-article/sp/2025/223600a076/21B7RjYyG9q">"Understanding the Efficacy of Phishing Training in Practice,"</a> in <em>2025 IEEE Symposium on Security and Privacy</em> (SP), San Francisco, CA, 2025, pp. 37-54, doi: 10.1109/SP61157.2025.00076.</p>]]>
      </content:encoded>
      <pubDate>Wed, 16 Jul 2025 05:00:00 -0400</pubDate>
      <author>Voice+Code</author>
      <enclosure url="https://media.transistor.fm/de7eeadb/993c1bb3.mp3" length="45015740" type="audio/mpeg"/>
      <itunes:author>Voice+Code</itunes:author>
      <itunes:duration>2812</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>You click on a link in an email—as one does. Suddenly you see a message from your organization, “You’ve been phished! Now you need some training!” What do you do next? If you’re like most busy humans, you skip it and move on.</p><p><br></p><p>Researcher Ariana Mirian (and co-authors Grant Ho, Elisa Luo, Khang Tong, Euyhyun Lee, Lin Liu, Christopher A. Longhurst, Christian Dameff, Stefan Savage, Geoffrey M. Voelker) uncovered similar results in their study <a href="https://www.computer.org/csdl/proceedings-article/sp/2025/223600a076/21B7RjYyG9q">“Understanding the Efficacy of Phishing Training in Practice.”</a> The solution? Ariana suggests focusing on a more effective fix: designing safer systems.</p><p>In the episode we talk about:</p><ul><li>Annual cybersecurity awareness training doesn’t reduce the likelihood of clicking on phishing links, even if completed recently. Employees who finished training recently show similar phishing failure rates to those who completed it months ago. The study notes, “Employees who recently completed such training, which has significant focus on social engineering and phishing defenses, have similar phishing failure rates compared to other employees who completed awareness training many months ago.”</li><li>Phishing simulations combined with training (where companies send out fake phishing emails to employees and, for those who click on the links, lead those employees through training) had little impact on whether participants would click phishing links in the future. </li><li>Ariana was hopeful about interactive training but found that too few participants engaged with it to draw meaningful conclusions. </li><li>The type of phishing lure (e.g., password reset vs. vacation policy change) influenced whether users clicked. Ariana warned that certain lures could artificially lower click rates.</li><li>Ultimately, Ariana suggests focusing on designing safer systems—where the burden is taken off the end users. 
She recommends two-factor authentication, using phishing-resistant hardware keys (like YubiKeys), and blocking phishing emails before they reach users.</li></ul><p><br></p><p>This quote from the study stood out to me: “Our results suggest that organizations like ours should not expect training, as commonly deployed today, to substantially protect against phishing attacks—the magnitude of protection afforded is simply too small and employees remain susceptible even after repeated training.”</p><p><br></p><p>This highlights the need for safer system design, especially for critical services like email, which—and this is important—inherently relies on users clicking links.</p><p><br></p><p>Ariana Mirian is a senior security researcher at Censys. She completed her PhD at UC San Diego and co-authored the paper, “Understanding the Efficacy of Phishing Training in Practice.”</p><p><br></p><p>G. Ho et al., <a href="https://www.computer.org/csdl/proceedings-article/sp/2025/223600a076/21B7RjYyG9q">"Understanding the Efficacy of Phishing Training in Practice,"</a> in <em>2025 IEEE Symposium on Security and Privacy</em> (SP), San Francisco, CA, 2025, pp. 37-54, doi: 10.1109/SP61157.2025.00076.</p>]]>
      </itunes:summary>
      <itunes:keywords>cybersecurity, information security, ux, user experience</itunes:keywords>
      <itunes:explicit>Yes</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/de7eeadb/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>Trust Me Maybe: Building Trust in Human-AI Partnerships in Security</title>
      <itunes:episode>57</itunes:episode>
      <podcast:episode>57</podcast:episode>
      <itunes:title>Trust Me Maybe: Building Trust in Human-AI Partnerships in Security</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">feb98593-8426-4a9c-9d01-fde733264de9</guid>
      <link>https://share.transistor.fm/s/906ce70d</link>
      <description>
        <![CDATA[<p>In this episode, I speak with three guests from diverse backgrounds who share a common goal: building trust in human-AI partnerships in security. We originally came together for a panel at the Institute of Electrical and Electronics Engineers (IEEE) <em>Conference on AI</em> in May 2025, and this episode recaps that discussion.</p><p><br></p><p>Key takeaways:</p><ul><li><strong>Security practitioners tend to be natural-born skeptics </strong>(can you blame them?!). They struggle to trust and adopt AI-powered security products, especially in higher-risk scenarios with overly simplified decision-making processes.</li><li><strong>AI can be a tool for threat actors and a threat vector itself</strong>, and its non-deterministic nature makes it unpredictable and vulnerable to manipulation.</li><li><strong>All AI models are biased, but not all bias is negative. </strong>Recognized and carefully managed bias can provide actionable insights. Purposefully biased (opinionated) models should be transparent.</li><li><strong>Clearer standards and expectations are needed for “human-in-the-loop” and human oversight.</strong> What does the human actually do, are they qualified, and do they have the right experience and information?</li><li><strong>What happens when today’s graduates are tomorrow’s security practitioners? </strong>On one end of the spectrum we have a lot of skepticism, on the other end not enough. We talk about over-reliance on AI, de-skilling, and loss of situational awareness.</li></ul><p><br></p><p><strong>Dr. Margaret Cunningham </strong>is the Technical Director, Security &amp; AI Strategy at Darktrace. Margaret was formerly Principal Product Manager at Forcepoint and Senior Staff Behavioral Engineer at Robinhood.</p><p><br></p><p><strong>Dr. Divya Ramjee</strong> is an Assistant Professor at Rochester Institute of Technology (RIT). She also leads RIT’s Technology and Policy Lab, analyzing security, AI policy, and privacy challenges. She previously held senior roles across various US government agencies.</p><p><br></p><p><strong>Dr. Matthew Canham</strong> is the Executive Director of the Cognitive Security Institute. He is a former FBI Supervisory Special Agent with over twenty years of research in cognitive security.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode, I speak with three guests from diverse backgrounds who share a common goal: building trust in human-AI partnerships in security. We originally came together for a panel at the Institute of Electrical and Electronics Engineers (IEEE) <em>Conference on AI</em> in May 2025, and this episode recaps that discussion.</p><p><br></p><p>Key takeaways:</p><ul><li><strong>Security practitioners tend to be natural-born skeptics </strong>(can you blame them?!). They struggle to trust and adopt AI-powered security products, especially in higher-risk scenarios with overly simplified decision-making processes.</li><li><strong>AI can be a tool for threat actors and a threat vector itself</strong>, and its non-deterministic nature makes it unpredictable and vulnerable to manipulation.</li><li><strong>All AI models are biased, but not all bias is negative. </strong>Recognized and carefully managed bias can provide actionable insights. Purposefully biased (opinionated) models should be transparent.</li><li><strong>Clearer standards and expectations are needed for “human-in-the-loop” and human oversight.</strong> What does the human actually do, are they qualified, and do they have the right experience and information?</li><li><strong>What happens when today’s graduates are tomorrow’s security practitioners? </strong>On one end of the spectrum we have a lot of skepticism, on the other end not enough. We talk about over-reliance on AI, de-skilling, and loss of situational awareness.</li></ul><p><br></p><p><strong>Dr. Margaret Cunningham </strong>is the Technical Director, Security &amp; AI Strategy at Darktrace. Margaret was formerly Principal Product Manager at Forcepoint and Senior Staff Behavioral Engineer at Robinhood.</p><p><br></p><p><strong>Dr. Divya Ramjee</strong> is an Assistant Professor at Rochester Institute of Technology (RIT). She also leads RIT’s Technology and Policy Lab, analyzing security, AI policy, and privacy challenges. She previously held senior roles across various US government agencies.</p><p><br></p><p><strong>Dr. Matthew Canham</strong> is the Executive Director of the Cognitive Security Institute. He is a former FBI Supervisory Special Agent with over twenty years of research in cognitive security.</p>]]>
      </content:encoded>
      <pubDate>Mon, 30 Jun 2025 05:00:00 -0400</pubDate>
      <author>Voice+Code</author>
      <enclosure url="https://media.transistor.fm/906ce70d/48d7f3dc.mp3" length="42230449" type="audio/mpeg"/>
      <itunes:author>Voice+Code</itunes:author>
      <itunes:duration>2638</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode, I speak with three guests from diverse backgrounds who share a common goal: building trust in human-AI partnerships in security. We originally came together for a panel at the Institute of Electrical and Electronics Engineers (IEEE) <em>Conference on AI</em> in May 2025, and this episode recaps that discussion.</p><p><br></p><p>Key takeaways:</p><ul><li><strong>Security practitioners tend to be natural-born skeptics </strong>(can you blame them?!). They struggle to trust and adopt AI-powered security products, especially in higher-risk scenarios with overly simplified decision-making processes.</li><li><strong>AI can be a tool for threat actors and a threat vector itself</strong>, and its non-deterministic nature makes it unpredictable and vulnerable to manipulation.</li><li><strong>All AI models are biased, but not all bias is negative. </strong>Recognized and carefully managed bias can provide actionable insights. Purposefully biased (opinionated) models should be transparent.</li><li><strong>Clearer standards and expectations are needed for “human-in-the-loop” and human oversight.</strong> What does the human actually do, are they qualified, and do they have the right experience and information?</li><li><strong>What happens when today’s graduates are tomorrow’s security practitioners? </strong>On one end of the spectrum we have a lot of skepticism, on the other end not enough. We talk about over-reliance on AI, de-skilling, and loss of situational awareness.</li></ul><p><br></p><p><strong>Dr. Margaret Cunningham </strong>is the Technical Director, Security &amp; AI Strategy at Darktrace. Margaret was formerly Principal Product Manager at Forcepoint and Senior Staff Behavioral Engineer at Robinhood.</p><p><br></p><p><strong>Dr. Divya Ramjee</strong> is an Assistant Professor at Rochester Institute of Technology (RIT). She also leads RIT’s Technology and Policy Lab, analyzing security, AI policy, and privacy challenges. She previously held senior roles across various US government agencies.</p><p><br></p><p><strong>Dr. Matthew Canham</strong> is the Executive Director of the Cognitive Security Institute. He is a former FBI Supervisory Special Agent with over twenty years of research in cognitive security.</p>]]>
      </itunes:summary>
      <itunes:keywords>cybersecurity, information security, ux, user experience</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
    </item>
    <item>
      <title>XDR, EDR, SIEM, SOAR…Snooze: Cybersecurity Marketing Real Talk with Gianna Whitver</title>
      <itunes:episode>56</itunes:episode>
      <podcast:episode>56</podcast:episode>
      <itunes:title>XDR, EDR, SIEM, SOAR…Snooze: Cybersecurity Marketing Real Talk with Gianna Whitver</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">3ccdac27-abb2-47df-abb7-964dde6de213</guid>
      <link>https://share.transistor.fm/s/8beb8286</link>
      <description>
        <![CDATA[<p>You’re a founder with a great cybersecurity product—but no one knows or cares. Or you’re a marketer drowning in jargon (hey, customers hate acronyms, too), trying to figure out what works and what doesn’t. Gianna Whitver, co-founder of the Cybersecurity Marketing Society, breaks down what the cybersecurity industry is getting wrong—and right—about marketing.</p><p><br></p><p><strong>In this episode, we talk about:</strong></p><ul><li>Cyber marketing is hard (but you knew that already). It requires deep product knowledge, empathy for stressed buyers, and clear, no-FUD messaging.</li><li>Building authentic, value-driven communities leads to stronger cybersecurity marketing impact.</li><li>Don’t copy the marketing strategies of big enterprises. Instead, focus on clarity, founder stories, and product-market fit.</li><li>Founder-led marketing works. Early-stage founders can break through noise by sharing personal stories.</li><li>Think twice before listening to the advice of “influencer” marketers. This advice is often overly generic. Or, you’re following the advice of marketers marketing to marketers (try saying that ten times fast). In other words, their advice is probably not going to apply to cybersecurity.</li></ul><p>Gianna Whitver is the co-founder and CEO of the <a href="https://www.cybersecuritymarketingsociety.com/">Cybersecurity Marketing Society</a>, a community for marketers in cybersecurity to connect and share insights. She is also co-host of the <em>Breaking Through in Cybersecurity Marketing</em> podcast and founder of LeaseHoney, a place for beekeepers to find land.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>You’re a founder with a great cybersecurity product—but no one knows or cares. Or you’re a marketer drowning in jargon (hey, customers hate acronyms, too), trying to figure out what works and what doesn’t. Gianna Whitver, co-founder of the Cybersecurity Marketing Society, breaks down what the cybersecurity industry is getting wrong—and right—about marketing.</p><p><br></p><p><strong>In this episode, we talk about:</strong></p><ul><li>Cyber marketing is hard (but you knew that already). It requires deep product knowledge, empathy for stressed buyers, and clear, no-FUD messaging.</li><li>Building authentic, value-driven communities leads to stronger cybersecurity marketing impact.</li><li>Don’t copy the marketing strategies of big enterprises. Instead, focus on clarity, founder stories, and product-market fit.</li><li>Founder-led marketing works. Early-stage founders can break through noise by sharing personal stories.</li><li>Think twice before listening to the advice of “influencer” marketers. This advice is often overly generic. Or, you’re following the advice of marketers marketing to marketers (try saying that ten times fast). In other words, their advice is probably not going to apply to cybersecurity.</li></ul><p>Gianna Whitver is the co-founder and CEO of the <a href="https://www.cybersecuritymarketingsociety.com/">Cybersecurity Marketing Society</a>, a community for marketers in cybersecurity to connect and share insights. She is also co-host of the <em>Breaking Through in Cybersecurity Marketing</em> podcast and founder of LeaseHoney, a place for beekeepers to find land.</p>]]>
      </content:encoded>
      <pubDate>Thu, 29 May 2025 05:00:00 -0400</pubDate>
      <author>Voice+Code</author>
      <enclosure url="https://media.transistor.fm/8beb8286/50660b12.mp3" length="32801128" type="audio/mpeg"/>
      <itunes:author>Voice+Code</itunes:author>
      <itunes:duration>2049</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>You're a founder with a great cybersecurity product—but no one knows or cares. Or you're a marketer drowning in jargon (hey, customers hate acronyms, too), trying to figure out what works and what doesn’t. Gianna Whitver, co-founder of the Cybersecurity Marketing Society, breaks down what the cybersecurity industry is getting wrong—and right—about marketing.</p><p><br></p><p><strong>In this episode, we talk about:</strong></p><ul><li>Cyber marketing is hard (but you knew that already). It requires deep product knowledge, empathy for stressed buyers, and clear, no-FUD messaging.</li><li>Building authentic, value-driven communities leads to stronger cybersecurity marketing impact.</li><li>Don’t copy the marketing strategies of big enterprises. Instead, focus on clarity, founder stories, and product-market fit.</li><li>Founder-led marketing works. Early-stage founders can break through noise by sharing personal stories.</li><li>Think twice before listening to the advice of “influencer” marketers. Their advice is often overly generic. Or you’re following the advice of marketers marketing to marketers (try saying that ten times fast). In other words, their advice probably won’t apply to cybersecurity.</li></ul><p>Gianna Whitver is the co-founder and CEO of the <a href="https://www.cybersecuritymarketingsociety.com/">Cybersecurity Marketing Society</a>, a community for marketers in cybersecurity to connect and share insights. She is also co-host of the <em>Breaking Through in Cybersecurity Marketing</em> podcast and founder of LeaseHoney, a place for beekeepers to find land.</p>]]>
      </itunes:summary>
      <itunes:keywords>cybersecurity, information security, ux, user experience</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/8beb8286/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>Here Comes the Sludge with Kelly Shortridge and Josiah Dykstra</title>
      <itunes:episode>55</itunes:episode>
      <podcast:episode>55</podcast:episode>
      <itunes:title>Here Comes the Sludge with Kelly Shortridge and Josiah Dykstra</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">aef61ce1-7033-4356-a3a4-9fd50a96a5c6</guid>
      <link>https://share.transistor.fm/s/da4f9da9</link>
      <description>
        <![CDATA[<p>Users, threat actors, and the system design all influence—and are influenced by—one another. To design safer systems, we first need to understand the players who operate within those systems. Kelly Shortridge and Josiah Dykstra exemplify this human-centered approach in their work. In this episode, we talk about:</p><ul><li>The vital role of human factors in cyber-resilience—how Josiah and Kelly apply a behavioral-economics mindset every day to design safer, more adaptable systems.</li><li>Key cognitive biases that undermine incident response (like action bias and opportunity costs) and simple heuristics to counter them.</li><li>The “sludge” strategy: deliberately introducing friction into attacker workflows to increase time, effort, and financial costs—as Kelly says, “disrupt their economics.”</li><li>Why moving from a security culture of shame and blame to one of open learning and continuous improvement is essential for true cybersecurity resilience.</li></ul><p>Kelly Shortridge is VP, Security Products at Fastly and formerly VP of Product Management and Product Strategy at Capsule8. She is the author of <em>Security Chaos Engineering: Sustaining Resilience in Software and Systems</em>.</p><p><br></p><p>Josiah Dykstra is the owner of Designer Security, a human-centered security advocate, a cybersecurity researcher, and former Director of Strategic Initiatives at Trail of Bits. He also worked at the NSA as Technical Director, Critical Networks and Systems. Josiah is the author of <em>Cybersecurity Myths and Misconceptions: Avoiding the Hazards and Pitfalls that Derail Us</em>.</p><p><br></p><p>During this episode, we reference:</p><p><br></p><p>Josiah Dykstra, Kelly Shortridge, Jamie Met, and Douglas Hough, “Sludge for Good: Slowing and Imposing Costs on Cyber Attackers,” <em>arXiv preprint arXiv:2211.16626</em> (2022).</p><p><br></p><p>Josiah Dykstra, Kelly Shortridge, Jamie Met, and Douglas Hough, “Opportunity Cost of Action Bias in Cybersecurity Incident Response,” <em>Proceedings of the Human Factors and Ergonomics Society Annual Meeting</em> 66, no. 1 (2022): 1116–1120.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Users, threat actors, and the system design all influence—and are influenced by—one another. To design safer systems, we first need to understand the players who operate within those systems. Kelly Shortridge and Josiah Dykstra exemplify this human-centered approach in their work. In this episode, we talk about:</p><ul><li>The vital role of human factors in cyber-resilience—how Josiah and Kelly apply a behavioral-economics mindset every day to design safer, more adaptable systems.</li><li>Key cognitive biases that undermine incident response (like action bias and opportunity costs) and simple heuristics to counter them.</li><li>The “sludge” strategy: deliberately introducing friction into attacker workflows to increase time, effort, and financial costs—as Kelly says, “disrupt their economics.”</li><li>Why moving from a security culture of shame and blame to one of open learning and continuous improvement is essential for true cybersecurity resilience.</li></ul><p>Kelly Shortridge is VP, Security Products at Fastly and formerly VP of Product Management and Product Strategy at Capsule8. She is the author of <em>Security Chaos Engineering: Sustaining Resilience in Software and Systems</em>.</p><p><br></p><p>Josiah Dykstra is the owner of Designer Security, a human-centered security advocate, a cybersecurity researcher, and former Director of Strategic Initiatives at Trail of Bits. He also worked at the NSA as Technical Director, Critical Networks and Systems. Josiah is the author of <em>Cybersecurity Myths and Misconceptions: Avoiding the Hazards and Pitfalls that Derail Us</em>.</p><p><br></p><p>During this episode, we reference:</p><p><br></p><p>Josiah Dykstra, Kelly Shortridge, Jamie Met, and Douglas Hough, “Sludge for Good: Slowing and Imposing Costs on Cyber Attackers,” <em>arXiv preprint arXiv:2211.16626</em> (2022).</p><p><br></p><p>Josiah Dykstra, Kelly Shortridge, Jamie Met, and Douglas Hough, “Opportunity Cost of Action Bias in Cybersecurity Incident Response,” <em>Proceedings of the Human Factors and Ergonomics Society Annual Meeting</em> 66, no. 1 (2022): 1116–1120.</p>]]>
      </content:encoded>
      <pubDate>Thu, 15 May 2025 05:00:00 -0400</pubDate>
      <author>Voice+Code</author>
      <enclosure url="https://media.transistor.fm/da4f9da9/518b1257.mp3" length="41664110" type="audio/mpeg"/>
      <itunes:author>Voice+Code</itunes:author>
      <itunes:duration>2603</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Users, threat actors, and the system design all influence—and are influenced by—one another. To design safer systems, we first need to understand the players who operate within those systems. Kelly Shortridge and Josiah Dykstra exemplify this human-centered approach in their work. In this episode, we talk about:</p><ul><li>The vital role of human factors in cyber-resilience—how Josiah and Kelly apply a behavioral-economics mindset every day to design safer, more adaptable systems.</li><li>Key cognitive biases that undermine incident response (like action bias and opportunity costs) and simple heuristics to counter them.</li><li>The “sludge” strategy: deliberately introducing friction into attacker workflows to increase time, effort, and financial costs—as Kelly says, “disrupt their economics.”</li><li>Why moving from a security culture of shame and blame to one of open learning and continuous improvement is essential for true cybersecurity resilience.</li></ul><p>Kelly Shortridge is VP, Security Products at Fastly and formerly VP of Product Management and Product Strategy at Capsule8. She is the author of <em>Security Chaos Engineering: Sustaining Resilience in Software and Systems</em>.</p><p><br></p><p>Josiah Dykstra is the owner of Designer Security, a human-centered security advocate, a cybersecurity researcher, and former Director of Strategic Initiatives at Trail of Bits. He also worked at the NSA as Technical Director, Critical Networks and Systems. Josiah is the author of <em>Cybersecurity Myths and Misconceptions: Avoiding the Hazards and Pitfalls that Derail Us</em>.</p><p><br></p><p>During this episode, we reference:</p><p><br></p><p>Josiah Dykstra, Kelly Shortridge, Jamie Met, and Douglas Hough, “Sludge for Good: Slowing and Imposing Costs on Cyber Attackers,” <em>arXiv preprint arXiv:2211.16626</em> (2022).</p><p><br></p><p>Josiah Dykstra, Kelly Shortridge, Jamie Met, and Douglas Hough, “Opportunity Cost of Action Bias in Cybersecurity Incident Response,” <em>Proceedings of the Human Factors and Ergonomics Society Annual Meeting</em> 66, no. 1 (2022): 1116–1120.</p>]]>
      </itunes:summary>
      <itunes:keywords>cybersecurity, information security, ux, user experience</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
    </item>
    <item>
      <title>Human-Centered Security in the Wild: Jordan Girman and Mike Kosak on Security and Product Team Collaboration at LastPass</title>
      <itunes:episode>54</itunes:episode>
      <podcast:episode>54</podcast:episode>
      <itunes:title>Human-Centered Security in the Wild: Jordan Girman and Mike Kosak on Security and Product Team Collaboration at LastPass</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">55dcfb70-4705-4380-8c26-1da3bcde8a45</guid>
      <link>https://share.transistor.fm/s/c62c06f6</link>
      <description>
        <![CDATA[<p>Imagine a world where product teams collaborate with security teams. Where product designers can shadow their security peers. A place where security team members believe communication is one of the most important skill sets they have. These are key attributes of human-centered security—the type of dynamics Jordan Girman and Mike Kosak are fostering at LastPass.</p><p><br></p><p>In this episode, we talk about:</p><ul><li>What cross-disciplinary collaboration looks like at LastPass (for example, a product designer is shadowing the security team).</li><li>A set of principles for designing for usable security and privacy.</li><li>Why intentional friction might be counterintuitive to designers but, used carefully, is critical to designing for security.</li><li>When it comes to improving security outcomes, the words you use matter. Mike explains how the LastPass Threat Intelligence team thinks about communicating what they learn to a variety of audiences.</li><li>How to build a threat intelligence program within your organization—even if you have limited resources.</li></ul><p>Jordan Girman is the VP of User Experience at <a href="https://www.lastpass.com">LastPass</a>. Mike Kosak is the Senior Principal Intelligence Analyst at LastPass. Mike references a series of articles he wrote, including <a href="https://blog.lastpass.com/posts/setting-up-a-threat-intelligence-program-from-scratch-in-plain-language">“Setting Up a Threat Intelligence Program From Scratch.”</a></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Imagine a world where product teams collaborate with security teams. Where product designers can shadow their security peers. A place where security team members believe communication is one of the most important skill sets they have. These are key attributes of human-centered security—the type of dynamics Jordan Girman and Mike Kosak are fostering at LastPass.</p><p><br></p><p>In this episode, we talk about:</p><ul><li>What cross-disciplinary collaboration looks like at LastPass (for example, a product designer is shadowing the security team).</li><li>A set of principles for designing for usable security and privacy.</li><li>Why intentional friction might be counterintuitive to designers but, used carefully, is critical to designing for security.</li><li>When it comes to improving security outcomes, the words you use matter. Mike explains how the LastPass Threat Intelligence team thinks about communicating what they learn to a variety of audiences.</li><li>How to build a threat intelligence program within your organization—even if you have limited resources.</li></ul><p>Jordan Girman is the VP of User Experience at <a href="https://www.lastpass.com">LastPass</a>. Mike Kosak is the Senior Principal Intelligence Analyst at LastPass. Mike references a series of articles he wrote, including <a href="https://blog.lastpass.com/posts/setting-up-a-threat-intelligence-program-from-scratch-in-plain-language">“Setting Up a Threat Intelligence Program From Scratch.”</a></p>]]>
      </content:encoded>
      <pubDate>Mon, 07 Apr 2025 05:00:00 -0400</pubDate>
      <author>Voice+Code</author>
      <enclosure url="https://media.transistor.fm/c62c06f6/8636e4fc.mp3" length="38490322" type="audio/mpeg"/>
      <itunes:author>Voice+Code</itunes:author>
      <itunes:duration>2404</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Imagine a world where product teams collaborate with security teams. Where product designers can shadow their security peers. A place where security team members believe communication is one of the most important skill sets they have. These are key attributes of human-centered security—the type of dynamics Jordan Girman and Mike Kosak are fostering at LastPass.</p><p><br></p><p>In this episode, we talk about:</p><ul><li>What cross-disciplinary collaboration looks like at LastPass (for example, a product designer is shadowing the security team).</li><li>A set of principles for designing for usable security and privacy.</li><li>Why intentional friction might be counterintuitive to designers but, used carefully, is critical to designing for security.</li><li>When it comes to improving security outcomes, the words you use matter. Mike explains how the LastPass Threat Intelligence team thinks about communicating what they learn to a variety of audiences.</li><li>How to build a threat intelligence program within your organization—even if you have limited resources.</li></ul><p>Jordan Girman is the VP of User Experience at <a href="https://www.lastpass.com">LastPass</a>. Mike Kosak is the Senior Principal Intelligence Analyst at LastPass. Mike references a series of articles he wrote, including <a href="https://blog.lastpass.com/posts/setting-up-a-threat-intelligence-program-from-scratch-in-plain-language">“Setting Up a Threat Intelligence Program From Scratch.”</a></p>]]>
      </itunes:summary>
      <itunes:keywords>cybersecurity, information security, ux, user experience</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/c62c06f6/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>Dear Security Vendors, Here’s What Security Teams Want You to Know with Paul Robinson</title>
      <itunes:episode>53</itunes:episode>
      <podcast:episode>53</podcast:episode>
      <itunes:title>Dear Security Vendors, Here’s What Security Teams Want You to Know with Paul Robinson</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">7e2e5859-2dc6-4459-b19d-7af559f5c05b</guid>
      <link>https://share.transistor.fm/s/7a719395</link>
      <description>
        <![CDATA[<p>Where are security tools failing security teams? What are security teams looking for when they visit a security vendor marketing website? Paul Robinson, security expert and founder of Tempus Network, says, “Over-promising and under-delivering is a major factor in these tools. The tool can look great in a demo—proof of concepts are great, but often the security vendor is just putting their best foot forward. It's not really the reality of the situation.”</p><p><br></p><p>Paul’s advice for how security vendors can do better:</p><ul><li>Start by admitting security isn’t just a switch you flip—it’s a journey.</li><li>Security teams aren’t fooled by glitz and glamour on your marketing website. They want to see how you addressed real problems.</li><li>Incredible customer service can make a small, scrappy cybersecurity product stand out from larger, slower-moving vendors.</li><li>Cybersecurity vendors need to get onboarding right (it’s a make-or-break aspect of the user experience). There are more variables than you think—not only technology but also getting buy-in from employees, leadership, and other stakeholders.</li><li>Think about the user experience not only of the person using the security product, but also of the people at the organization who will be impacted by it.</li></ul><p>Looking for a cybersecurity-related movie that is just a tad too plausible? Paul recommends <em>Leave the World Behind</em> on Netflix.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Where are security tools failing security teams? What are security teams looking for when they visit a security vendor marketing website? Paul Robinson, security expert and founder of Tempus Network, says, “Over-promising and under-delivering is a major factor in these tools. The tool can look great in a demo—proof of concepts are great, but often the security vendor is just putting their best foot forward. It's not really the reality of the situation.”</p><p><br></p><p>Paul’s advice for how security vendors can do better:</p><ul><li>Start by admitting security isn’t just a switch you flip—it’s a journey.</li><li>Security teams aren’t fooled by glitz and glamour on your marketing website. They want to see how you addressed real problems.</li><li>Incredible customer service can make a small, scrappy cybersecurity product stand out from larger, slower-moving vendors.</li><li>Cybersecurity vendors need to get onboarding right (it’s a make-or-break aspect of the user experience). There are more variables than you think—not only technology but also getting buy-in from employees, leadership, and other stakeholders.</li><li>Think about the user experience not only of the person using the security product, but also of the people at the organization who will be impacted by it.</li></ul><p>Looking for a cybersecurity-related movie that is just a tad too plausible? Paul recommends <em>Leave the World Behind</em> on Netflix.</p>]]>
      </content:encoded>
      <pubDate>Wed, 19 Feb 2025 11:00:00 -0500</pubDate>
      <author>Voice+Code</author>
      <enclosure url="https://media.transistor.fm/7a719395/ee7dc01b.mp3" length="35229724" type="audio/mpeg"/>
      <itunes:author>Voice+Code</itunes:author>
      <itunes:duration>2201</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Where are security tools failing security teams? What are security teams looking for when they visit a security vendor marketing website? Paul Robinson, security expert and founder of Tempus Network, says, “Over-promising and under-delivering is a major factor in these tools. The tool can look great in a demo—proof of concepts are great, but often the security vendor is just putting their best foot forward. It's not really the reality of the situation.”</p><p><br></p><p>Paul’s advice for how security vendors can do better:</p><ul><li>Start by admitting security isn’t just a switch you flip—it’s a journey.</li><li>Security teams aren’t fooled by glitz and glamour on your marketing website. They want to see how you addressed real problems.</li><li>Incredible customer service can make a small, scrappy cybersecurity product stand out from larger, slower-moving vendors.</li><li>Cybersecurity vendors need to get onboarding right (it’s a make-or-break aspect of the user experience). There are more variables than you think—not only technology but also getting buy-in from employees, leadership, and other stakeholders.</li><li>Think about the user experience not only of the person using the security product, but also of the people at the organization who will be impacted by it.</li></ul><p>Looking for a cybersecurity-related movie that is just a tad too plausible? Paul recommends <em>Leave the World Behind</em> on Netflix.</p>]]>
      </itunes:summary>
      <itunes:keywords>cybersecurity, information security, ux, user experience</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
    </item>
    <item>
      <title>From Tools to Teammates: (Dis)Trust in AI for Cybersecurity with Neele Roch</title>
      <itunes:episode>52</itunes:episode>
      <podcast:episode>52</podcast:episode>
      <itunes:title>From Tools to Teammates: (Dis)Trust in AI for Cybersecurity with Neele Roch</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">86efa901-d4f7-4c26-bc65-dfd359628412</guid>
      <link>https://share.transistor.fm/s/f0f94431</link>
      <description>
        <![CDATA[<p>When we collaborate with people, we build trust over time. In many ways, this relationship building is similar to how we work with tools that leverage AI.</p><p><br></p><p>As usable security and privacy researcher Neele Roch found, “on the one hand, when you ask the [security] experts directly, they are very rational and they explain that AI is a tool. AI is based on algorithms and it's mathematical. And while that is true, when you ask them about how they're building trust or how they're granting autonomy and how that changes over time, they have this really strong anthropomorphization of AI. They describe the trust building relationship as if it were, for example, a new employee.”</p><p><br></p><p>Neele is a doctoral student at the Professorship for Security, Privacy and Society at ETH Zurich. Neele (and co-authors Hannah Sievers, Lorin Schöni, and Verena Zimmermann) recently published a paper, <a href="https://www.usenix.org/conference/soups2024/presentation/roch"><em>“Navigating Autonomy: Unveiling Security Experts’ Perspectives on Augmented Intelligence in Cybersecurity,”</em></a> presented at the 2024 Symposium on Usable Privacy and Security.</p><p><br></p><p>In this episode, we talk to Neele about:</p><ul><li>How security experts’ risk–benefit assessments drive the level of AI autonomy they’re comfortable with.</li><li>How experts initially view AI: the tension between AI-as-tool vs. AI-as-“teammate.”</li><li>The importance of <em>recalibrating trust</em> after AI errors—and how good system design can help users recover from errors without losing trust in the system.</li><li>Ensuring AI-driven cybersecurity tools provide just the right amount of transparency and control.</li><li>Why enabling security practitioners to identify, correct, and learn from AI errors is critical for sustained engagement.</li></ul><p><br></p><p>Roch, Neele, Hannah Sievers, Lorin Schöni, and Verena Zimmermann. “Navigating Autonomy: Unveiling Security Experts’ Perspectives on Augmented Intelligence in Cybersecurity.” In <em>Twentieth Symposium on Usable Privacy and Security (SOUPS 2024)</em>, pp. 41–60. 2024.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>When we collaborate with people, we build trust over time. In many ways, this relationship building is similar to how we work with tools that leverage AI.</p><p><br></p><p>As usable security and privacy researcher Neele Roch found, “on the one hand, when you ask the [security] experts directly, they are very rational and they explain that AI is a tool. AI is based on algorithms and it's mathematical. And while that is true, when you ask them about how they're building trust or how they're granting autonomy and how that changes over time, they have this really strong anthropomorphization of AI. They describe the trust building relationship as if it were, for example, a new employee.”</p><p><br></p><p>Neele is a doctoral student at the Professorship for Security, Privacy and Society at ETH Zurich. Neele (and co-authors Hannah Sievers, Lorin Schöni, and Verena Zimmermann) recently published a paper, <a href="https://www.usenix.org/conference/soups2024/presentation/roch"><em>“Navigating Autonomy: Unveiling Security Experts’ Perspectives on Augmented Intelligence in Cybersecurity,”</em></a> presented at the 2024 Symposium on Usable Privacy and Security.</p><p><br></p><p>In this episode, we talk to Neele about:</p><ul><li>How security experts’ risk–benefit assessments drive the level of AI autonomy they’re comfortable with.</li><li>How experts initially view AI: the tension between AI-as-tool vs. AI-as-“teammate.”</li><li>The importance of <em>recalibrating trust</em> after AI errors—and how good system design can help users recover from errors without losing trust in the system.</li><li>Ensuring AI-driven cybersecurity tools provide just the right amount of transparency and control.</li><li>Why enabling security practitioners to identify, correct, and learn from AI errors is critical for sustained engagement.</li></ul><p><br></p><p>Roch, Neele, Hannah Sievers, Lorin Schöni, and Verena Zimmermann. “Navigating Autonomy: Unveiling Security Experts’ Perspectives on Augmented Intelligence in Cybersecurity.” In <em>Twentieth Symposium on Usable Privacy and Security (SOUPS 2024)</em>, pp. 41–60. 2024.</p>]]>
      </content:encoded>
      <pubDate>Thu, 02 Jan 2025 05:00:00 -0500</pubDate>
      <author>Voice+Code</author>
      <enclosure url="https://media.transistor.fm/f0f94431/e7907661.mp3" length="35330879" type="audio/mpeg"/>
      <itunes:author>Voice+Code</itunes:author>
      <itunes:duration>2207</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>When we collaborate with people, we build trust over time. In many ways, this relationship building is similar to how we work with tools that leverage AI.</p><p><br></p><p>As usable security and privacy researcher Neele Roch found, “on the one hand, when you ask the [security] experts directly, they are very rational and they explain that AI is a tool. AI is based on algorithms and it's mathematical. And while that is true, when you ask them about how they're building trust or how they're granting autonomy and how that changes over time, they have this really strong anthropomorphization of AI. They describe the trust building relationship as if it were, for example, a new employee.”</p><p><br></p><p>Neele is a doctoral student at the Professorship for Security, Privacy and Society at ETH Zurich. Neele (and co-authors Hannah Sievers, Lorin Schöni, and Verena Zimmermann) recently published a paper, <a href="https://www.usenix.org/conference/soups2024/presentation/roch"><em>“Navigating Autonomy: Unveiling Security Experts’ Perspectives on Augmented Intelligence in Cybersecurity,”</em></a> presented at the 2024 Symposium on Usable Privacy and Security.</p><p><br></p><p>In this episode, we talk to Neele about:</p><ul><li>How security experts’ risk–benefit assessments drive the level of AI autonomy they’re comfortable with.</li><li>How experts initially view AI: the tension between AI-as-tool vs. AI-as-“teammate.”</li><li>The importance of <em>recalibrating trust</em> after AI errors—and how good system design can help users recover from errors without losing trust in the system.</li><li>Ensuring AI-driven cybersecurity tools provide just the right amount of transparency and control.</li><li>Why enabling security practitioners to identify, correct, and learn from AI errors is critical for sustained engagement.</li></ul><p><br></p><p>Roch, Neele, Hannah Sievers, Lorin Schöni, and Verena Zimmermann. “Navigating Autonomy: Unveiling Security Experts’ Perspectives on Augmented Intelligence in Cybersecurity.” In <em>Twentieth Symposium on Usable Privacy and Security (SOUPS 2024)</em>, pp. 41–60. 2024.</p>]]>
      </itunes:summary>
      <itunes:keywords>cybersecurity, information security, ux, user experience</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/f0f94431/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>Introducing Human-Centered Security: The Book</title>
      <itunes:episode>51</itunes:episode>
      <podcast:episode>51</podcast:episode>
      <itunes:title>Introducing Human-Centered Security: The Book</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">691881ac-8f3f-404d-9f98-b257414eb9f7</guid>
      <link>https://share.transistor.fm/s/4651c13b</link>
      <description>
        <![CDATA[<p>In this episode, Heidi gets a taste of her own medicine and is interviewed by co-host John Robertson about her newly released book <em>Human-Centered Security: How to Design Systems That Are Both Safe and Usable</em>. We talk about:</p><ul><li>Why Heidi’s experience as a UX researcher prompted her to write <em>Human-Centered Security</em>.</li><li>Places in the user journey where security impacts users the most.</li><li>Why cross-disciplinary collaboration is important—find your security UX allies (people in security, legal, privacy, engineering, and product management, to name a few).</li><li>Practical security UX tips like secure by default, guiding the user along the safe path, and being really careful about the words you use.</li><li>Technical users—IT admins, engineers, security analysts—are users, too, and why it’s so important to thoughtfully design the security user experience for them. (Spoiler: they help keep the rest of us safe!)</li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode, Heidi gets a taste of her own medicine and is interviewed by co-host John Robertson about her newly released book <em>Human-Centered Security: How to Design Systems That Are Both Safe and Usable</em>. We talk about:</p><ul><li>Why Heidi’s experience as a UX researcher prompted her to write <em>Human-Centered Security</em>.</li><li>Places in the user journey where security impacts users the most.</li><li>Why cross-disciplinary collaboration is important—find your security UX allies (people in security, legal, privacy, engineering, and product management, to name a few).</li><li>Practical security UX tips like secure by default, guiding the user along the safe path, and being really careful about the words you use.</li><li>Technical users—IT admins, engineers, security analysts—are users, too, and why it’s so important to thoughtfully design the security user experience for them. (Spoiler: they help keep the rest of us safe!)</li></ul>]]>
      </content:encoded>
      <pubDate>Wed, 11 Dec 2024 10:00:00 -0500</pubDate>
      <author>Voice+Code</author>
      <enclosure url="https://media.transistor.fm/4651c13b/37e3c098.mp3" length="30864089" type="audio/mpeg"/>
      <itunes:author>Voice+Code</itunes:author>
      <itunes:duration>1928</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode, Heidi gets a taste of her own medicine and is interviewed by co-host John Robertson about her newly released book <em>Human-Centered Security: How to Design Systems That Are Both Safe and Usable</em>. We talk about:</p><ul><li>Why Heidi’s experience as a UX researcher prompted her to write <em>Human-Centered Security</em>.</li><li>Places in the user journey where security impacts users the most.</li><li>Why cross-disciplinary collaboration is important—find your security UX allies (people in security, legal, privacy, engineering, and product management, to name a few).</li><li>Practical security UX tips like secure by default, guiding the user along the safe path, and being really careful about the words you use.</li><li>Technical users—IT admins, engineers, security analysts—are users, too, and why it’s so important to thoughtfully design the security user experience for them. (Spoiler: they help keep the rest of us safe!)</li></ul>]]>
      </itunes:summary>
      <itunes:keywords>cybersecurity, information security, ux, user experience</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/4651c13b/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>Threat Actors Leverage Behavioral Science; Security Teams Should, Too with Matt Wallaert</title>
      <itunes:episode>50</itunes:episode>
      <podcast:episode>50</podcast:episode>
      <itunes:title>Threat Actors Leverage Behavioral Science; Security Teams Should, Too with Matt Wallaert</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">db4a1446-3aed-4913-a88f-df215f9c44dc</guid>
      <link>https://share.transistor.fm/s/d139fa93</link>
      <description>
        <![CDATA[<p>The cybersecurity industry often fixates on “behavior change,” expecting users to take on unrealistic tasks instead of designing safer, smarter systems.</p><p>Matt Wallaert (founder of BeSci.io and author of <em>Start at the End: How to Build Products that Create Change</em>) explains that behavioral science isn’t about forcing behavior change. Instead, it’s about understanding people so <strong>a thoughtfully designed system can influence more secure outcomes.</strong></p><p>Whether you’re a UX designer, a security engineer, or a CISO, you influence security behaviors. Here’s how you can move towards more secure outcomes:</p><ul><li><strong>Stay Ahead of Threat Actors</strong>: Cybercriminals use behavioral science to their advantage. People designing the security user experience must not only catch up but outpace them.</li><li><strong>Define Clear Outcomes</strong>: Don’t just say “we want users to be secure.” Know exactly what behaviors you want and why. Vague goals lead to vague results. (As Matt explains, saying things like “I want people to be more secure” isn’t helpful. In fact, many people don’t know what “more secure” means in the context of their product or organization.)</li><li><strong>Ask Better Questions</strong>: Use tools like the “sufficiency test.” For example, sure, it might be nice if users created complex passwords—but users don’t necessarily have to be the ones doing it. Why can’t the system create a complex password for them (as password managers do)?</li><li><strong>Understand Promoting and Inhibiting Pressures</strong>: These concepts will help you design systems that are more resilient because they are built with people in mind. There are reasons people do and do not do things—when you understand why, you can develop systems that are more effective in encouraging the behaviors you want.</li><li><strong>Security practitioners:</strong> Tired of being perceived as the “department of no”? Matt explains how behavioral science can help you better collaborate with cross-disciplinary teams.</li></ul><p>Bonus: UX designers, after this episode you may never create another persona.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>The cybersecurity industry often fixates on “behavior change,” expecting users to take on unrealistic tasks instead of designing safer, smarter systems.</p><p>Matt Wallaert (founder of BeSci.io and author of <em>Start at the End: How to Build Products that Create Change</em>) explains that behavioral science isn’t about forcing behavior change. Instead, it’s about understanding people so <strong>a thoughtfully designed system can influence more secure outcomes.</strong></p><p>Whether you’re a UX designer, a security engineer, or a CISO, you influence security behaviors. Here’s how you can move towards more secure outcomes:</p><ul><li><strong>Stay Ahead of Threat Actors</strong>: Cybercriminals use behavioral science to their advantage. People designing the security user experience must not only catch up but outpace them.</li><li><strong>Define Clear Outcomes</strong>: Don’t just say “we want users to be secure.” Know exactly what behaviors you want and why. Vague goals lead to vague results. (As Matt explains, saying things like “I want people to be more secure” isn’t helpful. In fact, many people don’t know what “more secure” means in the context of their product or organization.)</li><li><strong>Ask Better Questions</strong>: Use tools like the “sufficiency test.” For example, sure, it might be nice if users created complex passwords—but users don’t necessarily have to be the ones doing it. Why can’t the system create a complex password for them (as password managers do)?</li><li><strong>Understand Promoting and Inhibiting Pressures</strong>: These concepts will help you design systems that are more resilient because they are built with people in mind. There are reasons people do and do not do things—when you understand why, you can develop systems that are more effective in encouraging the behaviors you want.</li><li><strong>Security practitioners:</strong> Tired of being perceived as the “department of no”? Matt explains how behavioral science can help you better collaborate with cross-disciplinary teams.</li></ul><p>Bonus: UX designers, after this episode you may never create another persona.</p>]]>
      </content:encoded>
      <pubDate>Thu, 05 Dec 2024 13:00:00 -0500</pubDate>
      <author>Voice+Code</author>
      <enclosure url="https://media.transistor.fm/d139fa93/568db7be.mp3" length="37789341" type="audio/mpeg"/>
      <itunes:author>Voice+Code</itunes:author>
      <itunes:duration>2361</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>The cybersecurity industry often fixates on “behavior change,” expecting users to take on unrealistic tasks instead of designing safer, smarter systems.</p><p>Matt Wallaert (founder of BeSci.io and author of <em>Start at the End: How to Build Products that Create Change</em>) explains that behavioral science isn’t about forcing behavior change. Instead, it’s about understanding people so <strong>a thoughtfully designed system can influence more secure outcomes.</strong></p><p>Whether you’re a UX designer, a security engineer, or a CISO, you influence security behaviors. Here’s how you can move towards more secure outcomes:</p><ul><li><strong>Stay Ahead of Threat Actors</strong>: Cybercriminals use behavioral science to their advantage. People designing the security user experience must not only catch up but outpace them.</li><li><strong>Define Clear Outcomes</strong>: Don’t just say “we want users to be secure.” Know exactly what behaviors you want and why. Vague goals lead to vague results. (As Matt explains, saying things like “I want people to be more secure” isn’t helpful. In fact, many people don’t know what “more secure” means in the context of their product or organization.)</li><li><strong>Ask Better Questions</strong>: Use tools like the “sufficiency test.” For example, sure, it might be nice if users created complex passwords—but users don’t necessarily have to be the ones doing it. Why can’t the system create a complex password for them (as password managers do)?</li><li><strong>Understand Promoting and Inhibiting Pressures</strong>: These concepts will help you design systems that are more resilient because they are built with people in mind. There are reasons people do and do not do things—when you understand why, you can develop systems that are more effective in encouraging the behaviors you want.</li><li><strong>Security practitioners:</strong> Tired of being perceived as the “department of no”? Matt explains how behavioral science can help you better collaborate with cross-disciplinary teams.</li></ul><p>Bonus: UX designers, after this episode you may never create another persona.</p>]]>
      </itunes:summary>
      <itunes:keywords>cybersecurity, information security, ux, user experience</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
      <podcast:transcript url="https://share.transistor.fm/s/d139fa93/transcript.txt" type="text/plain"/>
    </item>
    <item>
      <title>Tech &amp; Law: The Power of Understanding Both With Justine Phillips</title>
      <itunes:episode>49</itunes:episode>
      <podcast:episode>49</podcast:episode>
      <itunes:title>Tech &amp; Law: The Power of Understanding Both With Justine Phillips</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">891ee0b7-6456-4821-9c4b-18f0c0d63c03</guid>
      <link>https://share.transistor.fm/s/e4b7d9a1</link>
      <description>
        <![CDATA[<p>“Technical people need to better understand the laws and regulations and lawyers need to better understand the technology and processes in place. When that happens, when those worlds come together, that’s where you can meaningfully make things happen.” -Justine Phillips, Partner at Baker McKenzie</p><p><br></p><p>In this episode, we talk about:</p><ul><li>Essential questions product teams should ask legal experts when integrating AI into new products and features.</li><li>In particular, why it’s important for designers and engineers to question the source of the data they are using for AI-powered products and features.</li><li>The need to anticipate international security and privacy regulations, which are constantly changing, including emerging regulations that could impact companies developing IoT devices.</li></ul><p><br></p><p>Justine Phillips is a Partner at Baker McKenzie, where she is co-chair of data+cyber for the Americas. She is the author of <em>Data Privacy Program Guide: How to Build a Privacy Program That Inspires Trust</em>.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>“Technical people need to better understand the laws and regulations and lawyers need to better understand the technology and processes in place. When that happens, when those worlds come together, that’s where you can meaningfully make things happen.” -Justine Phillips, Partner at Baker McKenzie</p><p><br></p><p>In this episode, we talk about:</p><ul><li>Essential questions product teams should ask legal experts when integrating AI into new products and features.</li><li>In particular, why it’s important for designers and engineers to question the source of the data they are using for AI-powered products and features.</li><li>The need to anticipate international security and privacy regulations, which are constantly changing, including emerging regulations that could impact companies developing IoT devices.</li></ul><p><br></p><p>Justine Phillips is a Partner at Baker McKenzie, where she is co-chair of data+cyber for the Americas. She is the author of <em>Data Privacy Program Guide: How to Build a Privacy Program That Inspires Trust</em>.</p>]]>
      </content:encoded>
      <pubDate>Thu, 14 Nov 2024 05:00:00 -0500</pubDate>
      <author>Voice+Code</author>
      <enclosure url="https://media.transistor.fm/e4b7d9a1/6af457de.mp3" length="43610966" type="audio/mpeg"/>
      <itunes:author>Voice+Code</itunes:author>
      <itunes:duration>2724</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>“Technical people need to better understand the laws and regulations and lawyers need to better understand the technology and processes in place. When that happens, when those worlds come together, that’s where you can meaningfully make things happen.” -Justine Phillips, Partner at Baker McKenzie</p><p><br></p><p>In this episode, we talk about:</p><ul><li>Essential questions product teams should ask legal experts when integrating AI into new products and features.</li><li>In particular, why it’s important for designers and engineers to question the source of the data they are using for AI-powered products and features.</li><li>The need to anticipate international security and privacy regulations, which are constantly changing, including emerging regulations that could impact companies developing IoT devices.</li></ul><p><br></p><p>Justine Phillips is a Partner at Baker McKenzie, where she is co-chair of data+cyber for the Americas. She is the author of <em>Data Privacy Program Guide: How to Build a Privacy Program That Inspires Trust</em>.</p>]]>
      </itunes:summary>
      <itunes:keywords>cybersecurity, information security, ux, user experience</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
    </item>
    <item>
      <title>Complexity Undermines Security With Bill Bonney, Gary Hayslip, and Matt Stamper</title>
      <itunes:episode>48</itunes:episode>
      <podcast:episode>48</podcast:episode>
      <itunes:title>Complexity Undermines Security With Bill Bonney, Gary Hayslip, and Matt Stamper</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">c3d47c1c-805f-41c2-869d-fb1b15428947</guid>
      <link>https://share.transistor.fm/s/48f84290</link>
      <description>
        <![CDATA[<p>What do CISOs have to say about the security tools their teams use?</p><p>“When we introduce a level of complexity in the system, it undermines security. Every moment wasted trying to use a tool effectively benefits the adversary.” - Matt Stamper</p><p><br>In this episode, we talk to cybersecurity leaders Bill Bonney, Gary Hayslip, and Matt Stamper about:</p><ul><li>The ever-evolving role of the CISO and what CISOs care about most.</li><li>What product teams designing security software need to understand:<ul><li>Security tools need to operate across varied ecosystems (which means your product team needs to understand those ecosystems).</li><li>Complexity is the enemy of security. Yes, UX matters.</li><li>Context-switching wastes security teams’ time. Instead, security tools need to present the right information at the right time.</li><li>Why CISOs are excited to leverage AI in security tools—and what concerns them the most.</li></ul></li></ul><p>Bill Bonney, Gary Hayslip, and Matt Stamper are seasoned CISOs and cybersecurity leaders. They are co-founders of the CISO Desk Reference Guide—a series of books covering topics such as security policy, third-party risk, privacy, and incident response—which provides actionable insights for security leaders.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>What do CISOs have to say about the security tools their teams use?</p><p>“When we introduce a level of complexity in the system, it undermines security. Every moment wasted trying to use a tool effectively benefits the adversary.” - Matt Stamper</p><p><br>In this episode, we talk to cybersecurity leaders Bill Bonney, Gary Hayslip, and Matt Stamper about:</p><ul><li>The ever-evolving role of the CISO and what CISOs care about most.</li><li>What product teams designing security software need to understand:<ul><li>Security tools need to operate across varied ecosystems (which means your product team needs to understand those ecosystems).</li><li>Complexity is the enemy of security. Yes, UX matters.</li><li>Context-switching wastes security teams’ time. Instead, security tools need to present the right information at the right time.</li><li>Why CISOs are excited to leverage AI in security tools—and what concerns them the most.</li></ul></li></ul><p>Bill Bonney, Gary Hayslip, and Matt Stamper are seasoned CISOs and cybersecurity leaders. They are co-founders of the CISO Desk Reference Guide—a series of books covering topics such as security policy, third-party risk, privacy, and incident response—which provides actionable insights for security leaders.</p>]]>
      </content:encoded>
      <pubDate>Wed, 30 Oct 2024 05:00:00 -0400</pubDate>
      <author>Voice+Code</author>
      <enclosure url="https://media.transistor.fm/48f84290/7a5ab3a7.mp3" length="45322941" type="audio/mpeg"/>
      <itunes:author>Voice+Code</itunes:author>
      <itunes:duration>2831</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>What do CISOs have to say about the security tools their teams use?</p><p>“When we introduce a level of complexity in the system, it undermines security. Every moment wasted trying to use a tool effectively benefits the adversary.” - Matt Stamper</p><p><br>In this episode, we talk to cybersecurity leaders Bill Bonney, Gary Hayslip, and Matt Stamper about:</p><ul><li>The ever-evolving role of the CISO and what CISOs care about most.</li><li>What product teams designing security software need to understand:<ul><li>Security tools need to operate across varied ecosystems (which means your product team needs to understand those ecosystems).</li><li>Complexity is the enemy of security. Yes, UX matters.</li><li>Context-switching wastes security teams’ time. Instead, security tools need to present the right information at the right time.</li><li>Why CISOs are excited to leverage AI in security tools—and what concerns them the most.</li></ul></li></ul><p>Bill Bonney, Gary Hayslip, and Matt Stamper are seasoned CISOs and cybersecurity leaders. They are co-founders of the CISO Desk Reference Guide—a series of books covering topics such as security policy, third-party risk, privacy, and incident response—which provides actionable insights for security leaders.</p>]]>
      </itunes:summary>
      <itunes:keywords>cybersecurity, information security, ux, user experience</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
    </item>
    <item>
      <title>Security Tools Don’t Get a Free Pass When It Comes to Human-Centered Design with Jaron Mink</title>
      <itunes:episode>47</itunes:episode>
      <podcast:episode>47</podcast:episode>
      <itunes:title>Security Tools Don’t Get a Free Pass When It Comes to Human-Centered Design with Jaron Mink</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">c2e64624-92a6-467b-9dc7-6f87f535fe1a</guid>
      <link>https://share.transistor.fm/s/4cb5a6d7</link>
      <description>
        <![CDATA[<p>In this episode, we talk about:</p><ul><li>Security tools don’t get a free pass when it comes to involving end users as part of the design process.</li><li>People studying and building ML-based security tools make a lot of assumptions. Instead of wasting time on assumptions, why not learn from security practitioners directly?</li><li>Businesses (and academia) are investing a great deal in building ML-based security tools. But are those tools actually useful? Are they introducing problems you didn’t anticipate? And even if they are useful, how do you know security practitioners will adopt them?</li><li>Why are adversarial machine learning defenses outlined in academic research not being put into practice? Jaron outlines three significant roadblocks: First, there are barriers to developers being aware of these defenses in the first place. Second, developers need to understand how the threats impact their systems. And third, they need to know how to effectively implement the defenses (and, importantly, be incentivized to do so).</li></ul><p>Jaron Mink is an Assistant Professor in the School of Computing and Augmented Intelligence at Arizona State University focused on the intersection of usable security, machine learning, and system security.</p><p><br></p><p><strong>In this episode, we highlight two of Jaron’s papers:</strong></p><ul><li><em>“Everybody’s Got ML, Tell Me What Else Do You Have”: Practitioners’ Perception of ML-Based Security Tools and Explanations</em></li><li><em>“Security is not my field, I’m a stats guy”: A Qualitative Root Cause Analysis of Barriers to Adversarial Machine Learning Defenses in Industry</em></li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode, we talk about:</p><ul><li>Security tools don’t get a free pass when it comes to involving end users as part of the design process.</li><li>People studying and building ML-based security tools make a lot of assumptions. Instead of wasting time on assumptions, why not learn from security practitioners directly?</li><li>Businesses (and academia) are investing a great deal in building ML-based security tools. But are those tools actually useful? Are they introducing problems you didn’t anticipate? And even if they are useful, how do you know security practitioners will adopt them?</li><li>Why are adversarial machine learning defenses outlined in academic research not being put into practice? Jaron outlines three significant roadblocks: First, there are barriers to developers being aware of these defenses in the first place. Second, developers need to understand how the threats impact their systems. And third, they need to know how to effectively implement the defenses (and, importantly, be incentivized to do so).</li></ul><p>Jaron Mink is an Assistant Professor in the School of Computing and Augmented Intelligence at Arizona State University focused on the intersection of usable security, machine learning, and system security.</p><p><br></p><p><strong>In this episode, we highlight two of Jaron’s papers:</strong></p><ul><li><em>“Everybody’s Got ML, Tell Me What Else Do You Have”: Practitioners’ Perception of ML-Based Security Tools and Explanations</em></li><li><em>“Security is not my field, I’m a stats guy”: A Qualitative Root Cause Analysis of Barriers to Adversarial Machine Learning Defenses in Industry</em></li></ul>]]>
      </content:encoded>
      <pubDate>Wed, 23 Oct 2024 05:00:00 -0400</pubDate>
      <author>Voice+Code</author>
      <enclosure url="https://media.transistor.fm/4cb5a6d7/359269b0.mp3" length="41777916" type="audio/mpeg"/>
      <itunes:author>Voice+Code</itunes:author>
      <itunes:duration>2610</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode, we talk about:</p><ul><li>Security tools don’t get a free pass when it comes to involving end users as part of the design process.</li><li>People studying and building ML-based security tools make a lot of assumptions. Instead of wasting time on assumptions, why not learn from security practitioners directly?</li><li>Businesses (and academia) are investing a great deal in building ML-based security tools. But are those tools actually useful? Are they introducing problems you didn’t anticipate? And even if they are useful, how do you know security practitioners will adopt them?</li><li>Why are adversarial machine learning defenses outlined in academic research not being put into practice? Jaron outlines three significant roadblocks: First, there are barriers to developers being aware of these defenses in the first place. Second, developers need to understand how the threats impact their systems. And third, they need to know how to effectively implement the defenses (and, importantly, be incentivized to do so).</li></ul><p>Jaron Mink is an Assistant Professor in the School of Computing and Augmented Intelligence at Arizona State University focused on the intersection of usable security, machine learning, and system security.</p><p><br></p><p><strong>In this episode, we highlight two of Jaron’s papers:</strong></p><ul><li><em>“Everybody’s Got ML, Tell Me What Else Do You Have”: Practitioners’ Perception of ML-Based Security Tools and Explanations</em></li><li><em>“Security is not my field, I’m a stats guy”: A Qualitative Root Cause Analysis of Barriers to Adversarial Machine Learning Defenses in Industry</em></li></ul>]]>
      </itunes:summary>
      <itunes:keywords>cybersecurity, information security, ux, user experience</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
    </item>
    <item>
      <title>Leverage UX Research to Improve the Security User Experience with Serge Egelman</title>
      <itunes:episode>46</itunes:episode>
      <podcast:episode>46</podcast:episode>
      <itunes:title>Leverage UX Research to Improve the Security User Experience with Serge Egelman</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">3e3a2172-1631-4f67-b3f6-79b33044b229</guid>
      <link>https://share.transistor.fm/s/a06cc2b7</link>
      <description>
        <![CDATA[<p>In this episode, we talk about:</p><ul><li>The role misaligned incentives play in security behaviors.</li><li>How Serge and his team approach security-focused UX research.</li><li>Looking upstream at the security decisions made by software engineers and, in turn, the situations they are often placed in due to resource constraints and competing priorities at their organizations.</li><li>Learning from other industries with highly skilled professionals (shout-out to the humble checklist!).</li><li>Why regulations and policy changes will likely place greater liability on the organizations shipping software.</li></ul><p><br></p><p>Serge Egelman is the Founder and Chief Scientist at AppCensus and Research Director at the International Computer Science Institute (ICSI). He’s written countless research papers on usable security and privacy. Most recently, his research centers on improving the user experience for users who are responsible for safeguarding their customers’ data (such as software engineers).</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode, we talk about:</p><ul><li>The role misaligned incentives play in security behaviors.</li><li>How Serge and his team approach security-focused UX research.</li><li>Looking upstream at the security decisions made by software engineers and, in turn, the situations they are often placed in due to resource constraints and competing priorities at their organizations.</li><li>Learning from other industries with highly skilled professionals (shout-out to the humble checklist!).</li><li>Why regulations and policy changes will likely place greater liability on the organizations shipping software.</li></ul><p><br></p><p>Serge Egelman is the Founder and Chief Scientist at AppCensus and Research Director at the International Computer Science Institute (ICSI). He’s written countless research papers on usable security and privacy. Most recently, his research centers on improving the user experience for users who are responsible for safeguarding their customers’ data (such as software engineers).</p>]]>
      </content:encoded>
      <pubDate>Wed, 02 Oct 2024 05:00:00 -0400</pubDate>
      <author>Voice+Code</author>
      <enclosure url="https://media.transistor.fm/a06cc2b7/03dda024.mp3" length="30291457" type="audio/mpeg"/>
      <itunes:author>Voice+Code</itunes:author>
      <itunes:duration>1892</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode, we talk about:</p><ul><li>The role misaligned incentives play in security behaviors.</li><li>How Serge and his team approach security-focused UX research.</li><li>Looking upstream at the security decisions made by software engineers and, in turn, the situations they are often placed in due to resource constraints and competing priorities at their organizations.</li><li>Learning from other industries with highly skilled professionals (shout-out to the humble checklist!).</li><li>Why regulations and policy changes will likely place greater liability on the organizations shipping software.</li></ul><p><br></p><p>Serge Egelman is the Founder and Chief Scientist at AppCensus and Research Director at the International Computer Science Institute (ICSI). He’s written countless research papers on usable security and privacy. Most recently, his research centers on improving the user experience for users who are responsible for safeguarding their customers’ data (such as software engineers).</p>]]>
      </itunes:summary>
      <itunes:keywords>cybersecurity, information security, ux, user experience</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
    </item>
    <item>
      <title>Help Security Analysts Tell the Story Behind the Threats with Shante Perrin</title>
      <itunes:episode>45</itunes:episode>
      <podcast:episode>45</podcast:episode>
      <itunes:title>Help Security Analysts Tell the Story Behind the Threats with Shante Perrin</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">07b481c9-98d8-4962-a19d-95cee1c21d51</guid>
      <link>https://share.transistor.fm/s/c2abaaac</link>
      <description>
        <![CDATA[<p>Shante Perrin, a cybersecurity leader, and her team use cybersecurity software not only to detect and respond to cybersecurity threats but also, as Shante describes, to help paint a picture for their customers:</p><p><br></p><p><em>“We like to build a timeline of events to build that picture, create that story so we can deliver it to the customer and explain why we felt it is suspicious. In other words, why are we bothering you about this?”</em></p><p><br></p><p>In this episode, we talk about:</p><p><br></p><ul><li>Building stories from data: analysts must translate technical information into clear, understandable narratives for customers.</li><li>If people designing cybersecurity software can build better, more effective experiences for analysts, analysts can do a better job of communicating these narratives to their customers.</li><li>How security analysts at different levels perceive and handle threats differently—and how that changes what they need or expect from cybersecurity software.</li><li>How thinking like an attacker can help security analysts—but only if the tools they use provide them with the right information at the right time.</li></ul><p><br></p><p>Shante Perrin is a cybersecurity leader and is currently the director of a managed services team. She led a cybersecurity team for a Fortune 100 company as an MSSP and has been a security analyst and security operations center (SOC) lead.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Shante Perrin, a cybersecurity leader, and her team use cybersecurity software not only to detect and respond to cybersecurity threats but also, as Shante describes, to help paint a picture for their customers:</p><p><br></p><p><em>“We like to build a timeline of events to build that picture, create that story so we can deliver it to the customer and explain why we felt it is suspicious. In other words, why are we bothering you about this?”</em></p><p><br></p><p>In this episode, we talk about:</p><p><br></p><ul><li>Building stories from data: analysts must translate technical information into clear, understandable narratives for customers.</li><li>If people designing cybersecurity software can build better, more effective experiences for analysts, analysts can do a better job of communicating these narratives to their customers.</li><li>How security analysts at different levels perceive and handle threats differently—and how that changes what they need or expect from cybersecurity software.</li><li>How thinking like an attacker can help security analysts—but only if the tools they use provide them with the right information at the right time.</li></ul><p><br></p><p>Shante Perrin is a cybersecurity leader and is currently the director of a managed services team. She led a cybersecurity team for a Fortune 100 company as an MSSP and has been a security analyst and security operations center (SOC) lead.</p>]]>
      </content:encoded>
      <pubDate>Mon, 23 Sep 2024 05:00:00 -0400</pubDate>
      <author>Voice+Code</author>
      <enclosure url="https://media.transistor.fm/c2abaaac/598012b3.mp3" length="27835943" type="audio/mpeg"/>
      <itunes:author>Voice+Code</itunes:author>
      <itunes:duration>1738</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Shante Perrin, a cybersecurity leader, and her team use cybersecurity software not only to detect and respond to cybersecurity threats but also, as Shante describes, to help paint a picture for their customers:</p><p><br></p><p><em>“We like to build a timeline of events to build that picture, create that story so we can deliver it to the customer and explain why we felt it is suspicious. In other words, why are we bothering you about this?”</em></p><p><br></p><p>In this episode, we talk about:</p><p><br></p><ul><li>Building stories from data: analysts must translate technical information into clear, understandable narratives for customers.</li><li>If people designing cybersecurity software can build better, more effective experiences for analysts, analysts can do a better job of communicating these narratives to their customers.</li><li>How security analysts at different levels perceive and handle threats differently—and how that changes what they need or expect from cybersecurity software.</li><li>How thinking like an attacker can help security analysts—but only if the tools they use provide them with the right information at the right time.</li></ul><p><br></p><p>Shante Perrin is a cybersecurity leader and is currently the director of a managed services team. She led a cybersecurity team for a Fortune 100 company as an MSSP and has been a security analyst and security operations center (SOC) lead.</p>]]>
      </itunes:summary>
      <itunes:keywords>cybersecurity, information security, ux, user experience</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
    </item>
    <item>
      <title>Putting Human-Centered Security Into Practice with Julie Haney</title>
      <itunes:episode>44</itunes:episode>
      <podcast:episode>44</podcast:episode>
      <itunes:title>Putting Human-Centered Security Into Practice with Julie Haney</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">18665676-fad9-4846-9137-b9c11e1cea7e</guid>
      <link>https://share.transistor.fm/s/00d9a0be</link>
      <description>
        <![CDATA[<p>In this episode, we talk about: </p><ul><li>The need for human-centered security—in order for security measures to be effective, they must center around people, making usability as crucial as technology. </li><li>We explore the gap between research and practice, highlighting the need to bring cybersecurity research into real-world application. Human-centered security research can’t possibly be effective if no one knows about it or if practitioners find it challenging to implement.</li><li>The importance of collaboration, advocating for more shared spaces where researchers and practitioners can come together to address pressing cybersecurity challenges.</li></ul><p><a href="https://www.nist.gov/people/julie-haney">Julie Haney</a> is a Computer Scientist and Human-Centered Security Researcher and program lead at NIST (National Institute of Standards and Technology). She was formerly a Computer Scientist at the United States Department of Defense. In the episode we refer to two of Julie’s publications: <a href="https://www.nist.gov/publications/ivory-tower-real-world-building-bridges-between-research-and-practice-human-centered">“From Ivory Tower to Real World: Building Bridges Between Research and Practice in Human-Centered Cybersecurity”</a> and <a href="https://www.nist.gov/publications/towards-bridging-research-practice-gap-understanding-researcher-practitioner">“Towards Bridging the Research-Practice Gap: Understanding Researcher-Practitioner Interactions and Challenges in Human-Centered Cybersecurity.”</a></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode, we talk about: </p><ul><li>The need for human-centered security—in order for security measures to be effective, they must center around people, making usability as crucial as technology. </li><li>We explore the gap between research and practice, highlighting the need to bring cybersecurity research into real-world application. Human-centered security research can’t possibly be effective if no one knows about it or if practitioners find it challenging to implement.</li><li>The importance of collaboration, advocating for more shared spaces where researchers and practitioners can come together to address pressing cybersecurity challenges.</li></ul><p><a href="https://www.nist.gov/people/julie-haney">Julie Haney</a> is a Computer Scientist and Human-Centered Security Researcher and program lead at NIST (National Institute of Standards and Technology). She was formerly a Computer Scientist at the United States Department of Defense. In the episode we refer to two of Julie’s publications: <a href="https://www.nist.gov/publications/ivory-tower-real-world-building-bridges-between-research-and-practice-human-centered">“From Ivory Tower to Real World: Building Bridges Between Research and Practice in Human-Centered Cybersecurity”</a> and <a href="https://www.nist.gov/publications/towards-bridging-research-practice-gap-understanding-researcher-practitioner">“Towards Bridging the Research-Practice Gap: Understanding Researcher-Practitioner Interactions and Challenges in Human-Centered Cybersecurity.”</a></p>]]>
      </content:encoded>
      <pubDate>Wed, 11 Sep 2024 05:00:00 -0400</pubDate>
      <author>Voice+Code</author>
      <enclosure url="https://media.transistor.fm/00d9a0be/614d4934.mp3" length="48814137" type="audio/mpeg"/>
      <itunes:author>Voice+Code</itunes:author>
      <itunes:duration>3050</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>In this episode, we talk about: </p><ul><li>The need for human-centered security—in order for security measures to be effective, they must center around people, making usability as crucial as technology. </li><li>We explore the gap between research and practice, highlighting the need to bring cybersecurity research into real-world application. Human-centered security research can’t possibly be effective if no one knows about it or if practitioners find it challenging to implement.</li><li>The importance of collaboration, advocating for more shared spaces where researchers and practitioners can come together to address pressing cybersecurity challenges.</li></ul><p><a href="https://www.nist.gov/people/julie-haney">Julie Haney</a> is a Computer Scientist and Human-Centered Security Researcher and program lead at NIST (National Institute of Standards and Technology). She was formerly a Computer Scientist at the United States Department of Defense. In the episode we refer to two of Julie’s publications: <a href="https://www.nist.gov/publications/ivory-tower-real-world-building-bridges-between-research-and-practice-human-centered">“From Ivory Tower to Real World: Building Bridges Between Research and Practice in Human-Centered Cybersecurity”</a> and <a href="https://www.nist.gov/publications/towards-bridging-research-practice-gap-understanding-researcher-practitioner">“Towards Bridging the Research-Practice Gap: Understanding Researcher-Practitioner Interactions and Challenges in Human-Centered Cybersecurity.”</a></p>]]>
      </itunes:summary>
      <itunes:keywords>cybersecurity, information security, ux, user experience</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
    </item>
    <item>
      <title>So Much Data, So Little Time—Designing for Security Workflows with Tom Harrison</title>
      <itunes:episode>43</itunes:episode>
      <podcast:episode>43</podcast:episode>
      <itunes:title>So Much Data, So Little Time—Designing for Security Workflows with Tom Harrison</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">81f3657f-de05-4892-93f6-57bf12c7c910</guid>
      <link>https://share.transistor.fm/s/8b7f20f8</link>
      <description>
        <![CDATA[<p>Security analysts respond to security detections and alerts. As part of this, they have to sift through a mountain of data and they have to do it fast. Not in hours, not in days. In minutes.</p><p><br></p><p>Tom Harrison, security operations manager at Secureworks, explains it perfectly: “We have a time crunch and it’s exacerbated by the other big issue security analysts have: we have an absolute ton of data that we have to sift through.”</p><p><br></p><p>In this episode:</p><p><br></p><p>Tom explains that security analysts are forced to go back to a pile of data with each subsequent question in their workflow. That’s a huge waste of time. And a terrible user experience. </p><p><br></p><p>Tom says, “It would lead to better accuracy, faster triage, and a better user experience if you can just take me directly to the answer or at the very least a subsection that has the answer I’m looking for.”</p><p><br></p><p>What does this mean for you as a UX designer designing security products? You need a deep understanding of security analyst workflows to help them identify and respond to attacks as quickly as possible.</p><p><br></p><p>That way, you can design security products that support users who are under intense pressure to do things quickly. Tom describes how the UX can “guide or complement the workflow.”</p><p><br></p><p>Tom talks about what gets him excited about integrating AI into security analyst workflows—and what has him worried, as well.</p><p><br></p><p>Tom Harrison is a Security Operations Manager at Secureworks. We dubbed Tom an “ideas machine” and a fierce advocate for the security analyst user experience. In fact, Tom is conducting UX research in the field better than most UX researchers. He’s a passionate teacher and shares his knowledge and resources in a free <a href="https://s0cm0nkey.gitbook.io/s0cm0nkeys-security-reference-guide/">security reference guide</a>.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Security analysts respond to security detections and alerts. As part of this, they have to sift through a mountain of data and they have to do it fast. Not in hours, not in days. In minutes.</p><p><br></p><p>Tom Harrison, security operations manager at Secureworks, explains it perfectly: “We have a time crunch and it’s exacerbated by the other big issue security analysts have: we have an absolute ton of data that we have to sift through.”</p><p><br></p><p>In this episode:</p><p><br></p><p>Tom explains that security analysts are forced to go back to a pile of data with each subsequent question in their workflow. That’s a huge waste of time. And a terrible user experience. </p><p><br></p><p>Tom says, “It would lead to better accuracy, faster triage, and a better user experience if you can just take me directly to the answer or at the very least a subsection that has the answer I’m looking for.”</p><p><br></p><p>What does this mean for you as a UX designer designing security products? You need a deep understanding of security analyst workflows to help them identify and respond to attacks as quickly as possible.</p><p><br></p><p>That way, you can design security products that support users who are under intense pressure to do things quickly. Tom describes how the UX can “guide or complement the workflow.”</p><p><br></p><p>Tom talks about what gets him excited about integrating AI into security analyst workflows—and what has him worried, as well.</p><p><br></p><p>Tom Harrison is a Security Operations Manager at Secureworks. We dubbed Tom an “ideas machine” and a fierce advocate for the security analyst user experience. In fact, Tom is conducting UX research in the field better than most UX researchers. He’s a passionate teacher and shares his knowledge and resources in a free <a href="https://s0cm0nkey.gitbook.io/s0cm0nkeys-security-reference-guide/">security reference guide</a>.</p>]]>
      </content:encoded>
      <pubDate>Thu, 05 Sep 2024 12:00:00 -0400</pubDate>
      <author>Voice+Code</author>
      <enclosure url="https://media.transistor.fm/8b7f20f8/4f781598.mp3" length="29884446" type="audio/mpeg"/>
      <itunes:author>Voice+Code</itunes:author>
      <itunes:duration>1867</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Security analysts respond to security detections and alerts. As part of this, they have to sift through a mountain of data and they have to do it fast. Not in hours, not in days. In minutes.</p><p><br></p><p>Tom Harrison, security operations manager at Secureworks, explains it perfectly: “We have a time crunch and it’s exacerbated by the other big issue security analysts have: we have an absolute ton of data that we have to sift through.”</p><p><br></p><p>In this episode:</p><p><br></p><p>Tom explains that security analysts are forced to go back to a pile of data with each subsequent question in their workflow. That’s a huge waste of time. And a terrible user experience. </p><p><br></p><p>Tom says, “It would lead to better accuracy, faster triage, and a better user experience if you can just take me directly to the answer or at the very least a subsection that has the answer I’m looking for.”</p><p><br></p><p>What does this mean for you as a UX designer designing security products? You need a deep understanding of security analyst workflows to help them identify and respond to attacks as quickly as possible.</p><p><br></p><p>That way, you can design security products that support users who are under intense pressure to do things quickly. Tom describes how the UX can “guide or complement the workflow.”</p><p><br></p><p>Tom talks about what gets him excited about integrating AI into security analyst workflows—and what has him worried, as well.</p><p><br></p><p>Tom Harrison is a Security Operations Manager at Secureworks. We dubbed Tom an “ideas machine” and a fierce advocate for the security analyst user experience. In fact, Tom is conducting UX research in the field better than most UX researchers. He’s a passionate teacher and shares his knowledge and resources in a free <a href="https://s0cm0nkey.gitbook.io/s0cm0nkeys-security-reference-guide/">security reference guide</a>.</p>]]>
      </itunes:summary>
      <itunes:keywords>cybersecurity, information security, ux, user experience</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
    </item>
    <item>
      <title>Threat Modeling Parts of the User Journey That Cost Your Business Money With Adam Shostack</title>
      <itunes:episode>42</itunes:episode>
      <podcast:episode>42</podcast:episode>
      <itunes:title>Threat Modeling Parts of the User Journey That Cost Your Business Money With Adam Shostack</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">9fe68e9b-b851-417d-b108-eb23b5147bd5</guid>
      <link>https://share.transistor.fm/s/3515d960</link>
      <description>
        <![CDATA[<p>“Even though usability and security tradeoffs will always be with us, we can get much smarter. Some of the techniques are really simple. For one, write everything down a user needs to do in order to use your app securely. Yeah, keep writing.”</p><p><br></p><p>In this episode, we talk about:</p><ul><li>What is threat modeling and why should product teams and UX designers care about it? (Also check out <a href="https://share.transistor.fm/s/ad97b9b4">Adam’s first episode on Human-Centered Security</a>).</li><li>Focus on parts of the user journey where you might gain or lose customers: what tradeoffs between usability and security are you making here?</li><li>Involve a cross-disciplinary team from the very beginning. This is critical: “How do we get focused on the parts of the problem that matter so we don’t spend forever on the wrong stuff?”</li></ul><p>Adam Shostack is an expert on threat modeling, having worked at Microsoft and currently running security consultancy <a href="https://shostack.org/">Shostack + Associates</a>. He is the author of <em>The New School of Information Security</em>, <em>Threat Modeling: Designing for Security</em> and <em>Threats: What Every Engineer Should Learn From Star Wars</em>. <a href="https://www.youtube.com/c/Shostack">Adam’s YouTube channel</a> has entertaining videos that are also excellent resources for learning about threat modeling.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>“Even though usability and security tradeoffs will always be with us, we can get much smarter. Some of the techniques are really simple. For one, write everything down a user needs to do in order to use your app securely. Yeah, keep writing.”</p><p><br></p><p>In this episode, we talk about:</p><ul><li>What is threat modeling and why should product teams and UX designers care about it? (Also check out <a href="https://share.transistor.fm/s/ad97b9b4">Adam’s first episode on Human-Centered Security</a>).</li><li>Focus on parts of the user journey where you might gain or lose customers: what tradeoffs between usability and security are you making here?</li><li>Involve a cross-disciplinary team from the very beginning. This is critical: “How do we get focused on the parts of the problem that matter so we don’t spend forever on the wrong stuff?”</li></ul><p>Adam Shostack is an expert on threat modeling, having worked at Microsoft and currently running security consultancy <a href="https://shostack.org/">Shostack + Associates</a>. He is the author of <em>The New School of Information Security</em>, <em>Threat Modeling: Designing for Security</em> and <em>Threats: What Every Engineer Should Learn From Star Wars</em>. <a href="https://www.youtube.com/c/Shostack">Adam’s YouTube channel</a> has entertaining videos that are also excellent resources for learning about threat modeling.</p>]]>
      </content:encoded>
      <pubDate>Thu, 22 Aug 2024 13:00:00 -0400</pubDate>
      <author>Voice+Code</author>
      <enclosure url="https://media.transistor.fm/3515d960/7488fc0e.mp3" length="45156604" type="audio/mpeg"/>
      <itunes:author>Voice+Code</itunes:author>
      <itunes:duration>2821</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>“Even though usability and security tradeoffs will always be with us, we can get much smarter. Some of the techniques are really simple. For one, write everything down a user needs to do in order to use your app securely. Yeah, keep writing.”</p><p><br></p><p>In this episode, we talk about:</p><ul><li>What is threat modeling and why should product teams and UX designers care about it? (Also check out <a href="https://share.transistor.fm/s/ad97b9b4">Adam’s first episode on Human-Centered Security</a>).</li><li>Focus on parts of the user journey where you might gain or lose customers: what tradeoffs between usability and security are you making here?</li><li>Involve a cross-disciplinary team from the very beginning. This is critical: “How do we get focused on the parts of the problem that matter so we don’t spend forever on the wrong stuff?”</li></ul><p>Adam Shostack is an expert on threat modeling, having worked at Microsoft and currently running security consultancy <a href="https://shostack.org/">Shostack + Associates</a>. He is the author of <em>The New School of Information Security</em>, <em>Threat Modeling: Designing for Security</em> and <em>Threats: What Every Engineer Should Learn From Star Wars</em>. <a href="https://www.youtube.com/c/Shostack">Adam’s YouTube channel</a> has entertaining videos that are also excellent resources for learning about threat modeling.</p>]]>
      </itunes:summary>
      <itunes:keywords>cybersecurity, information security, ux, user experience</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
    </item>
    <item>
      <title>No Room for Hype When Integrating AI Into Cybersecurity Products with John Robertson and Siddharth Hirwani</title>
      <itunes:episode>41</itunes:episode>
      <podcast:episode>41</podcast:episode>
      <itunes:title>No Room for Hype When Integrating AI Into Cybersecurity Products with John Robertson and Siddharth Hirwani</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">635b354c-54cd-443d-b360-727aad4e4cb3</guid>
      <link>https://share.transistor.fm/s/5f8eea7e</link>
      <description>
        <![CDATA[<p>“UX design can enhance the overall performance, adoption, and impact in cybersecurity tools that leverage AI, making the tools more accessible to a broader range of users, including those who don’t have deep technical or security knowledge.”</p><p><br></p><p>In this episode, Siddharth Hirwani and John Robertson talk about:</p><ul><li>Pressures and challenges security analysts face and how AI can help.</li><li>Moving beyond AI hype and focusing on integrating AI in a way that genuinely addresses security analysts’ needs.</li><li>How UX design can foster trust and adoption of AI tools, while still encouraging analysts to verify AI outputs. </li><li>John and Siddharth highlight problems like over-reliance and bias and how UX can be leveraged to address these concerns.</li></ul><p><br></p><p>Siddharth Hirwani is a Senior Principal Product Designer interested in exploring the critical intersection of user experience and cybersecurity.</p><p><br></p><p>John Robertson is a researcher interested in the experience of technical users, especially those in cybersecurity. Recently his focus has been understanding workflows of cybersecurity analysts in security operations centers.</p><p><br></p><p>Siddharth and John will be presenting their paper “Cybersecurity Analyst’s Perception of AI Security Tools and Practical Implications” at USENIX SOUPS (Symposium on Usable Privacy and Security) in August 2024.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>“UX design can enhance the overall performance, adoption, and impact in cybersecurity tools that leverage AI, making the tools more accessible to a broader range of users, including those who don’t have deep technical or security knowledge.”</p><p><br></p><p>In this episode, Siddharth Hirwani and John Robertson talk about:</p><ul><li>Pressures and challenges security analysts face and how AI can help.</li><li>Moving beyond AI hype and focusing on integrating AI in a way that genuinely addresses security analysts’ needs.</li><li>How UX design can foster trust and adoption of AI tools, while still encouraging analysts to verify AI outputs. </li><li>John and Siddharth highlight problems like over-reliance and bias and how UX can be leveraged to address these concerns.</li></ul><p><br></p><p>Siddharth Hirwani is a Senior Principal Product Designer interested in exploring the critical intersection of user experience and cybersecurity.</p><p><br></p><p>John Robertson is a researcher interested in the experience of technical users, especially those in cybersecurity. Recently his focus has been understanding workflows of cybersecurity analysts in security operations centers.</p><p><br></p><p>Siddharth and John will be presenting their paper “Cybersecurity Analyst’s Perception of AI Security Tools and Practical Implications” at USENIX SOUPS (Symposium on Usable Privacy and Security) in August 2024.</p>]]>
      </content:encoded>
      <pubDate>Wed, 07 Aug 2024 05:00:00 -0400</pubDate>
      <author>Voice+Code</author>
      <enclosure url="https://media.transistor.fm/5f8eea7e/38b838fa.mp3" length="34553414" type="audio/mpeg"/>
      <itunes:author>Voice+Code</itunes:author>
      <itunes:duration>2158</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>“UX design can enhance the overall performance, adoption, and impact in cybersecurity tools that leverage AI, making the tools more accessible to a broader range of users, including those who don’t have deep technical or security knowledge.”</p><p><br></p><p>In this episode, Siddharth Hirwani and John Robertson talk about:</p><ul><li>Pressures and challenges security analysts face and how AI can help.</li><li>Moving beyond AI hype and focusing on integrating AI in a way that genuinely addresses security analysts’ needs.</li><li>How UX design can foster trust and adoption of AI tools, while still encouraging analysts to verify AI outputs. </li><li>John and Siddharth highlight problems like over-reliance and bias and how UX can be leveraged to address these concerns.</li></ul><p><br></p><p>Siddharth Hirwani is a Senior Principal Product Designer interested in exploring the critical intersection of user experience and cybersecurity.</p><p><br></p><p>John Robertson is a researcher interested in the experience of technical users, especially those in cybersecurity. Recently his focus has been understanding workflows of cybersecurity analysts in security operations centers.</p><p><br></p><p>Siddharth and John will be presenting their paper “Cybersecurity Analyst’s Perception of AI Security Tools and Practical Implications” at USENIX SOUPS (Symposium on Usable Privacy and Security) in August 2024.</p>]]>
      </itunes:summary>
      <itunes:keywords>cybersecurity, information security, ux, user experience</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
    </item>
    <item>
      <title>What Do You Know About Alert Fatigue? An Interview with John Robertson</title>
      <itunes:episode>40</itunes:episode>
      <podcast:episode>40</podcast:episode>
      <itunes:title>What Do You Know About Alert Fatigue? An Interview with John Robertson</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">5ea101d4-fc2a-472d-91b7-bb2c2e6bdfe2</guid>
      <link>https://share.transistor.fm/s/30d10844</link>
      <description>
        <![CDATA[<p>“People try to talk about the technical user experience at too high of a level. You talk about alert fatigue and you kind of understand what alert fatigue is just by the name. Yeah, there’s a lot of alerts. But watching it in action is different.”</p><p><br></p><p>In this episode, Heidi interviews John about what he’s learned about designing for security analysts. We talk about:</p><ul><li>The importance of understanding user workflows. “Alert fatigue” is just a saying until you actually observe it in action.</li><li>While trust is hard to measure, it’s critical for improving the security user experience.</li><li>Practical tips on how to promote cross-disciplinary collaboration.</li></ul><p><br></p><p>John Robertson is a researcher interested in the experience of technical users, especially those in cybersecurity. Recently his focus has been understanding workflows of Cybersecurity Analysts in Security Operations Centers.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>“People try to talk about the technical user experience at too high of a level. You talk about alert fatigue and you kind of understand what alert fatigue is just by the name. Yeah, there’s a lot of alerts. But watching it in action is different.”</p><p><br></p><p>In this episode, Heidi interviews John about what he’s learned about designing for security analysts. We talk about:</p><ul><li>The importance of understanding user workflows. “Alert fatigue” is just a saying until you actually observe it in action.</li><li>While trust is hard to measure, it’s critical for improving the security user experience.</li><li>Practical tips on how to promote cross-disciplinary collaboration.</li></ul><p><br></p><p>John Robertson is a researcher interested in the experience of technical users, especially those in cybersecurity. Recently his focus has been understanding workflows of Cybersecurity Analysts in Security Operations Centers.</p>]]>
      </content:encoded>
      <pubDate>Wed, 31 Jul 2024 11:00:00 -0400</pubDate>
      <author>Voice+Code</author>
      <enclosure url="https://media.transistor.fm/30d10844/9caf5318.mp3" length="18756193" type="audio/mpeg"/>
      <itunes:author>Voice+Code</itunes:author>
      <itunes:duration>1171</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>“People try to talk about the technical user experience at too high of a level. You talk about alert fatigue and you kind of understand what alert fatigue is just by the name. Yeah, there’s a lot of alerts. But watching it in action is different.”</p><p><br></p><p>In this episode, Heidi interviews John about what he’s learned about designing for security analysts. We talk about:</p><ul><li>The importance of understanding user workflows. “Alert fatigue” is just a saying until you actually observe it in action.</li><li>While trust is hard to measure, it’s critical for improving the security user experience.</li><li>Practical tips on how to promote cross-disciplinary collaboration.</li></ul><p><br></p><p>John Robertson is a researcher interested in the experience of technical users, especially those in cybersecurity. Recently his focus has been understanding workflows of Cybersecurity Analysts in Security Operations Centers.</p>]]>
      </itunes:summary>
      <itunes:keywords>cybersecurity, information security, ux, user experience</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
    </item>
    <item>
      <title>How to Build Trust Through the User Experience with Carlie Hundt and Devon Hirth</title>
      <itunes:episode>39</itunes:episode>
      <podcast:episode>39</podcast:episode>
      <itunes:title>How to Build Trust Through the User Experience with Carlie Hundt and Devon Hirth</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">6eacd365-451e-4737-89eb-a39e3ca3ff62</guid>
      <link>https://share.transistor.fm/s/10bda1fc</link>
      <description>
        <![CDATA[<p>Carlie Hundt and Devon Hirth believe a UX designer’s role is to “lift up the voices of the people trying to access and use government services.” Trust is really important. How do we build trust through the user experience, particularly when you are asking for personal information?</p><p><br></p><p>In this episode, we talk about:</p><ul><li>Leveraging storytelling to “share with our government partners the real experience of real people who are trying to access government services.”</li><li>Why you need to anticipate where users might question, “Why are you asking for this? What are you going to do with this information?”</li><li>Providing flexibility in the user experience. Carlie refers to this as “many welcoming doors.”</li><li>When and why you might give users the option to sign up for services without requiring them to create an account.</li></ul><p>Both Carlie Hundt and Devon Hirth work for Code for America, a civic tech non-profit, in the Safety Net Innovation Lab. Carlie is Staff Product Designer and Devon is Staff User Experience Designer.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Carlie Hundt and Devon Hirth believe a UX designer’s role is to “lift up the voices of the people trying to access and use government services.” Trust is really important. How do we build trust through the user experience, particularly when you are asking for personal information?</p><p><br></p><p>In this episode, we talk about:</p><ul><li>Leveraging storytelling to “share with our government partners the real experience of real people who are trying to access government services.”</li><li>Why you need to anticipate where users might question, “Why are you asking for this? What are you going to do with this information?”</li><li>Providing flexibility in the user experience. Carlie refers to this as “many welcoming doors.”</li><li>When and why you might give users the option to sign up for services without requiring them to create an account.</li></ul><p>Both Carlie Hundt and Devon Hirth work for Code for America, a civic tech non-profit, in the Safety Net Innovation Lab. Carlie is Staff Product Designer and Devon is Staff User Experience Designer.</p>]]>
      </content:encoded>
      <pubDate>Tue, 18 Jun 2024 05:00:00 -0400</pubDate>
      <author>Voice+Code</author>
      <enclosure url="https://media.transistor.fm/10bda1fc/419104f5.mp3" length="43282883" type="audio/mpeg"/>
      <itunes:author>Voice+Code</itunes:author>
      <itunes:duration>2704</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Carlie Hundt and Devon Hirth believe a UX designer’s role is to “lift up the voices of the people trying to access and use government services.” Trust is really important. How do we build trust through the user experience, particularly when you are asking for personal information?</p><p><br></p><p>In this episode, we talk about:</p><ul><li>Leveraging storytelling to “share with our government partners the real experience of real people who are trying to access government services.”</li><li>Why you need to anticipate where users might question, “Why are you asking for this? What are you going to do with this information?”</li><li>Providing flexibility in the user experience. Carlie refers to this as “many welcoming doors.”</li><li>When and why you might give users the option to sign up for services without requiring them to create an account.</li></ul><p>Both Carlie Hundt and Devon Hirth work for Code for America, a civic tech non-profit, in the Safety Net Innovation Lab. Carlie is Staff Product Designer and Devon is Staff User Experience Designer.</p>]]>
      </itunes:summary>
      <itunes:keywords>cybersecurity, information security, ux, user experience</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
    </item>
    <item>
      <title>Understand the Holistic Experience to Improve Cybersecurity Products with Lindsey Wallace</title>
      <itunes:episode>38</itunes:episode>
      <podcast:episode>38</podcast:episode>
      <itunes:title>Understand the Holistic Experience to Improve Cybersecurity Products with Lindsey Wallace</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">0dc1bfb8-9f56-4d04-8e04-26f4de527462</guid>
      <link>https://share.transistor.fm/s/0e607c47</link>
      <description>
        <![CDATA[<p>When thinking about building products for security teams, we often emphasize the technical side: reduced false positives, new detection techniques, and automation. But what about asking things like: how do security teams work together? What excites a security analyst about their job? How can we help them do more of that? What does the experience look like across a suite of cybersecurity products? To improve the user experience for security teams—and improve security outcomes—you have to think holistically.</p><p><br></p><p>In this episode, we talk about:</p><ul><li>How a centralized UX research team fosters meta-analysis across different personas, workflows, and a suite of products.</li><li>Why in-person research—like visiting a security operations center (SOC)—is so important for UX researchers building security products.</li><li>Creative ways of engaging with customers and learning from them.</li><li>Why her UX research team has taken ownership over UX metrics and analytics.</li><li>Why asking stakeholders a simple question: “What kind of evidence are you looking for?” can save you a lot of time and frustration.</li></ul><p>Lindsey Wallace is the Director of Design Research and Strategy at Cisco Security Design. She has a PhD in Anthropology and previously worked at Adobe. </p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>When thinking about building products for security teams, we often emphasize the technical side: reduced false positives, new detection techniques, and automation. But what about asking things like: how do security teams work together? What excites a security analyst about their job? How can we help them do more of that? What does the experience look like across a suite of cybersecurity products? To improve the user experience for security teams—and improve security outcomes—you have to think holistically.</p><p><br></p><p>In this episode, we talk about:</p><ul><li>How a centralized UX research team fosters meta-analysis across different personas, workflows, and a suite of products.</li><li>Why in-person research—like visiting a security operations center (SOC)—is so important for UX researchers building security products.</li><li>Creative ways of engaging with customers and learning from them.</li><li>Why her UX research team has taken ownership over UX metrics and analytics.</li><li>Why asking stakeholders a simple question: “What kind of evidence are you looking for?” can save you a lot of time and frustration.</li></ul><p>Lindsey Wallace is the Director of Design Research and Strategy at Cisco Security Design. She has a PhD in Anthropology and previously worked at Adobe. </p>]]>
      </content:encoded>
      <pubDate>Wed, 05 Jun 2024 05:00:00 -0400</pubDate>
      <author>Voice+Code</author>
      <enclosure url="https://media.transistor.fm/0e607c47/7921e5ca.mp3" length="48530237" type="audio/mpeg"/>
      <itunes:author>Voice+Code</itunes:author>
      <itunes:duration>3033</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>When thinking about building products for security teams, we often emphasize the technical side: reduced false positives, new detection techniques, and automation. But what about asking things like: how do security teams work together? What excites a security analyst about their job? How can we help them do more of that? What does the experience look like across a suite of cybersecurity products? To improve the user experience for security teams—and improve security outcomes—you have to think holistically.</p><p><br></p><p>In this episode, we talk about:</p><ul><li>How a centralized UX research team fosters meta-analysis across different personas, workflows, and a suite of products.</li><li>Why in-person research—like visiting a security operations center (SOC)—is so important for UX researchers building security products.</li><li>Creative ways of engaging with customers and learning from them.</li><li>Why her UX research team has taken ownership over UX metrics and analytics.</li><li>Why asking stakeholders a simple question: “What kind of evidence are you looking for?” can save you a lot of time and frustration.</li></ul><p>Lindsey Wallace is the Director of Design Research and Strategy at Cisco Security Design. She has a PhD in Anthropology and previously worked at Adobe. </p>]]>
      </itunes:summary>
      <itunes:keywords>cybersecurity, information security, ux, user experience</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
    </item>
    <item>
      <title>Include Users with Disabilities in Your Security UX Research with Joyce Oshita</title>
      <itunes:episode>37</itunes:episode>
      <podcast:episode>37</podcast:episode>
      <itunes:title>Include Users with Disabilities in Your Security UX Research with Joyce Oshita</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">1e230056-051e-4228-86f9-1e65aefd4c78</guid>
      <link>https://share.transistor.fm/s/cfbdc122</link>
      <description>
        <![CDATA[<p>Are you inadvertently designing a security user experience that makes it less likely your users will choose the most secure option for them? Are security-related roadblocks preventing people from using your service? In order to design inclusive experiences—including accessible experiences—you must include users with disabilities in your research.</p><p><br></p><p>In this episode, we talk about:</p><ul><li>Including users with disabilities as a co-creation exercise—not something you “check off” as part of your UX research.</li><li>Why flexibility is so important when it comes to the security user experience.</li><li>The importance of storytelling to help teams design accessible experiences.</li><li>Joyce’s experience when encountering a CAPTCHA using a screen reader (and listen to an example), where she is prevented from completing a form.</li><li>Why Joyce believes “today’s frustration will be the field for tomorrow’s innovation.”</li></ul><p>Joyce Oshita is a Certified Professional in Web Accessibility, accessibility trainer and educator, and advisor for the FIDO Alliance task force. Joyce created the <a href="https://youtu.be/8ttExPtn2iE">Digital Overload series</a>, which documents her experiences using digital services while using a screen reader.</p><p>Also check out the W3C Web Accessibility Initiative (WAI) Web Accessibility Perspective Videos.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Are you inadvertently designing a security user experience that makes it less likely your users will choose the most secure option for them? Are security-related roadblocks preventing people from using your service? In order to design inclusive experiences—including accessible experiences—you must include users with disabilities in your research.</p><p><br></p><p>In this episode, we talk about:</p><ul><li>Including users with disabilities as a co-creation exercise—not something you “check off” as part of your UX research.</li><li>Why flexibility is so important when it comes to the security user experience.</li><li>The importance of storytelling to help teams design accessible experiences.</li><li>Joyce’s experience when encountering a CAPTCHA using a screen reader (and listen to an example), where she is prevented from completing a form.</li><li>Why Joyce believes “today’s frustration will be the field for tomorrow’s innovation.”</li></ul><p>Joyce Oshita is a Certified Professional in Web Accessibility, accessibility trainer and educator, and advisor for the FIDO Alliance task force. Joyce created the <a href="https://youtu.be/8ttExPtn2iE">Digital Overload series</a>, which documents her experiences using digital services while using a screen reader.</p><p>Also check out the W3C Web Accessibility Initiative (WAI) Web Accessibility Perspective Videos.</p>]]>
      </content:encoded>
      <pubDate>Wed, 22 May 2024 05:00:00 -0400</pubDate>
      <author>Voice+Code</author>
      <enclosure url="https://media.transistor.fm/cfbdc122/ac3cfae3.mp3" length="47509569" type="audio/mpeg"/>
      <itunes:author>Voice+Code</itunes:author>
      <itunes:duration>2969</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Are you inadvertently designing a security user experience that makes it less likely your users will choose the most secure option for them? Are security-related roadblocks preventing people from using your service? In order to design inclusive experiences—including accessible experiences—you must include users with disabilities in your research.</p><p><br></p><p>In this episode, we talk about:</p><ul><li>Including users with disabilities as a co-creation exercise—not something you “check off” as part of your UX research.</li><li>Why flexibility is so important when it comes to the security user experience.</li><li>The importance of storytelling to help teams design accessible experiences.</li><li>Joyce’s experience when encountering a CAPTCHA using a screen reader (and listen to an example), where she is prevented from completing a form.</li><li>Why Joyce believes “today’s frustration will be the field for tomorrow’s innovation.”</li></ul><p>Joyce Oshita is a Certified Professional in Web Accessibility, accessibility trainer and educator, and advisor for the FIDO Alliance task force. Joyce created the <a href="https://youtu.be/8ttExPtn2iE">Digital Overload series</a>, which documents her experiences using digital services while using a screen reader.</p><p>Also check out the W3C Web Accessibility Initiative (WAI) Web Accessibility Perspective Videos.</p>]]>
      </itunes:summary>
      <itunes:keywords>cybersecurity, information security, ux, user experience</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
    </item>
    <item>
      <title>Leveraging Data Science to Help Security Teams with Serge-Olivier Paquette</title>
      <itunes:episode>36</itunes:episode>
      <podcast:episode>36</podcast:episode>
      <itunes:title>Leveraging Data Science to Help Security Teams with Serge-Olivier Paquette</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">bc8d5114-8a75-4627-8d59-69913972998d</guid>
      <link>https://share.transistor.fm/s/11e965f4</link>
      <description>
        <![CDATA[<p>How do you help security teams understand what happened and what to do next? Data science can help with that. Serge-Olivier Paquette, CPO at threat intelligence and analytics platform Flare, combines product, cybersecurity, and data science expertise to develop cutting-edge products and experiences that help security teams make informed decisions.</p><p>In this episode:</p><ul><li>The best explanation of data science you’ve ever heard.</li><li>Why you need to be skeptical of data science models.</li><li>How to leverage data science to be more helpful to security teams.</li><li>How to build trust—particularly when tools can increasingly perform actions on behalf of users.</li></ul><p>Serge-Olivier Paquette is CPO at Flare, a cybersecurity platform that helps organizations proactively identify security threats. He works at the intersection of product management, data science, cybersecurity, and platform engineering. Serge-Olivier was previously tech lead and senior manager at Secureworks.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>How do you help security teams understand what happened and what to do next? Data science can help with that. Serge-Olivier Paquette, CPO at threat intelligence and analytics platform Flare, combines product, cybersecurity, and data science expertise to develop cutting-edge products and experiences that help security teams make informed decisions.</p><p>In this episode:</p><ul><li>The best explanation of data science you’ve ever heard.</li><li>Why you need to be skeptical of data science models.</li><li>How to leverage data science to be more helpful to security teams.</li><li>How to build trust—particularly when tools can increasingly perform actions on behalf of users.</li></ul><p>Serge-Olivier Paquette is CPO at Flare, a cybersecurity platform that helps organizations proactively identify security threats. He works at the intersection of product management, data science, cybersecurity, and platform engineering. Serge-Olivier was previously tech lead and senior manager at Secureworks.</p>]]>
      </content:encoded>
      <pubDate>Wed, 08 May 2024 05:00:00 -0400</pubDate>
      <author>Voice+Code</author>
      <enclosure url="https://media.transistor.fm/11e965f4/3c3c3878.mp3" length="40292664" type="audio/mpeg"/>
      <itunes:author>Voice+Code</itunes:author>
      <itunes:duration>2518</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>How do you help security teams understand what happened and what to do next? Data science can help with that. Serge-Olivier Paquette, CPO at threat intelligence and analytics platform Flare, combines product, cybersecurity, and data science expertise to develop cutting-edge products and experiences that help security teams make informed decisions.</p><p>In this episode:</p><ul><li>The best explanation of data science you’ve ever heard.</li><li>Why you need to be skeptical of data science models.</li><li>How to leverage data science to be more helpful to security teams.</li><li>How to build trust—particularly when tools can increasingly perform actions on behalf of users.</li></ul><p>Serge-Olivier Paquette is CPO at Flare, a cybersecurity platform that helps organizations proactively identify security threats. He works at the intersection of product management, data science, cybersecurity, and platform engineering. Serge-Olivier was previously tech lead and senior manager at Secureworks.</p>]]>
      </itunes:summary>
      <itunes:keywords>cybersecurity, information security, ux, user experience</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
    </item>
    <item>
      <title>What Designers Need to Know About Digital Identity and Access with David Mahdi</title>
      <itunes:episode>35</itunes:episode>
      <podcast:episode>35</podcast:episode>
      <itunes:title>What Designers Need to Know About Digital Identity and Access with David Mahdi</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">6c6ecc29-debc-4113-9c2d-533875cc68ce</guid>
      <link>https://share.transistor.fm/s/bdf40107</link>
      <description>
        <![CDATA[<p>What do the terms digital identity and access mean for the user experience? David Mahdi, CIO at Transmit Security and digital identity and cybersecurity expert, breaks it all down in this episode.</p><p>We talk about:</p><ul><li>Access-related terms you need to understand: Digital identity, authentication, and authorization.</li><li>Why so many security problems are, in fact, access problems.</li><li>User experience implications.</li><li>The future of digital identity and what it might mean for your product and your users.</li></ul><p>David Mahdi is the CIO at Transmit Security, former Gartner research VP, and was previously CSO at Sectigo. An IAM leader and visionary, David is an expert in digital identity, cryptography, and cybersecurity. </p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>What do the terms digital identity and access mean for the user experience? David Mahdi, CIO at Transmit Security and digital identity and cybersecurity expert, breaks it all down in this episode.</p><p>We talk about:</p><ul><li>Access-related terms you need to understand: Digital identity, authentication, and authorization.</li><li>Why so many security problems are, in fact, access problems.</li><li>User experience implications.</li><li>The future of digital identity and what it might mean for your product and your users.</li></ul><p>David Mahdi is the CIO at Transmit Security, former Gartner research VP, and was previously CSO at Sectigo. An IAM leader and visionary, David is an expert in digital identity, cryptography, and cybersecurity. </p>]]>
      </content:encoded>
      <pubDate>Wed, 24 Apr 2024 05:00:00 -0400</pubDate>
      <author>Voice+Code</author>
      <enclosure url="https://media.transistor.fm/bdf40107/8183d0df.mp3" length="43643038" type="audio/mpeg"/>
      <itunes:author>Voice+Code</itunes:author>
      <itunes:duration>2727</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>What do the terms digital identity and access mean for the user experience? David Mahdi, CIO at Transmit Security and digital identity and cybersecurity expert, breaks it all down in this episode.</p><p>We talk about:</p><ul><li>Access-related terms you need to understand: Digital identity, authentication, and authorization.</li><li>Why so many security problems are, in fact, access problems.</li><li>User experience implications.</li><li>The future of digital identity and what it might mean for your product and your users.</li></ul><p>David Mahdi is the CIO at Transmit Security, former Gartner research VP, and was previously CSO at Sectigo. An IAM leader and visionary, David is an expert in digital identity, cryptography, and cybersecurity. </p>]]>
      </itunes:summary>
      <itunes:keywords>cybersecurity, information security, ux, user experience</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
    </item>
    <item>
      <title>Bake Security Into the DNA of Your Product and Improve the Security User Experience with Darren Thomas and Margaret Cunningham</title>
      <itunes:episode>34</itunes:episode>
      <podcast:episode>34</podcast:episode>
      <itunes:title>Bake Security Into the DNA of Your Product and Improve the Security User Experience with Darren Thomas and Margaret Cunningham</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">bf9a6e07-6347-4bf0-8081-6e6f22b910ff</guid>
      <link>https://share.transistor.fm/s/cff09ec9</link>
      <description>
        <![CDATA[<p>We start the episode discussing a very serious topic: emojis. Then we get back to your regularly scheduled programming.</p><p><br></p><p>How would you approach security if you were building something from scratch? How would you address security user experience challenges? Darren Thomas and Margaret Cunningham from Wethos AI talk about how they’ve built security into their product and how cross-disciplinary collaboration helps them improve the security user experience.</p><p><br></p><p><strong>In this episode, we talk about:</strong></p><ul><li>How to build security into your product development lifecycle when you need to move quickly.</li><li>How to anticipate—and design for—security and privacy concerns.</li><li>Why getting users to the product’s value faster relates to the security user experience.</li></ul><p>Darren Thomas is the co-founder and Chief Product Officer at Wethos AI, a platform that helps people and teams connect and understand one another to improve both individual and team performance. Darren is also the founding team member and head of product at NumberOne AI. A veteran in product management within the security industry, Darren has previously worked at Tenable and McAfee.</p><p><br></p><p>Margaret Cunningham is an experimental psychologist and is Chief Scientist at Wethos AI. Previously, Margaret was Senior Staff Behavioral Engineer, Security &amp; Privacy at Robinhood and Principal Research Scientist for Human Behavior at Forcepoint’s X-Lab. Check out Margaret’s <a href="https://share.transistor.fm/s/0d7df5af">first interview on the Human-Centered Security podcast (Episode 9)</a>.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>We start the episode discussing a very serious topic: emojis. Then we get back to your regularly scheduled programming.</p><p><br></p><p>How would you approach security if you were building something from scratch? How would you address security user experience challenges? Darren Thomas and Margaret Cunningham from Wethos AI talk about how they’ve built security into their product and how cross-disciplinary collaboration helps them improve the security user experience.</p><p><br></p><p><strong>In this episode, we talk about:</strong></p><ul><li>How to build security into your product development lifecycle when you need to move quickly.</li><li>How to anticipate—and design for—security and privacy concerns.</li><li>Why getting users to the product’s value faster relates to the security user experience.</li></ul><p>Darren Thomas is the co-founder and Chief Product Officer at Wethos AI, a platform that helps people and teams connect and understand one another to improve both individual and team performance. Darren is also the founding team member and head of product at NumberOne AI. A veteran in product management within the security industry, Darren has previously worked at Tenable and McAfee.</p><p><br></p><p>Margaret Cunningham is an experimental psychologist and is Chief Scientist at Wethos AI. Previously, Margaret was Senior Staff Behavioral Engineer, Security &amp; Privacy at Robinhood and Principal Research Scientist for Human Behavior at Forcepoint’s X-Lab. Check out Margaret’s <a href="https://share.transistor.fm/s/0d7df5af">first interview on the Human-Centered Security podcast (Episode 9)</a>.</p>]]>
      </content:encoded>
      <pubDate>Wed, 03 Apr 2024 05:00:00 -0400</pubDate>
      <author>Voice+Code</author>
      <enclosure url="https://media.transistor.fm/cff09ec9/dbd3d641.mp3" length="39528606" type="audio/mpeg"/>
      <itunes:author>Voice+Code</itunes:author>
      <itunes:duration>2469</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>We start the episode discussing a very serious topic: emojis. Then we get back to your regularly scheduled programming.</p><p><br></p><p>How would you approach security if you were building something from scratch? How would you address security user experience challenges? Darren Thomas and Margaret Cunningham from Wethos AI talk about how they’ve built security into their product and how cross-disciplinary collaboration helps them improve the security user experience.</p><p><br></p><p><strong>In this episode, we talk about:</strong></p><ul><li>How to build security into your product development lifecycle when you need to move quickly.</li><li>How to anticipate—and design for—security and privacy concerns.</li><li>Why getting users to the product’s value faster relates to the security user experience.</li></ul><p>Darren Thomas is the co-founder and Chief Product Officer at Wethos AI, a platform that helps people and teams connect and understand one another to improve both individual and team performance. Darren is also the founding team member and head of product at NumberOne AI. A veteran in product management within the security industry, Darren has previously worked at Tenable and McAfee.</p><p><br></p><p>Margaret Cunningham is an experimental psychologist and is Chief Scientist at Wethos AI. Previously, Margaret was Senior Staff Behavioral Engineer, Security &amp; Privacy at Robinhood and Principal Research Scientist for Human Behavior at Forcepoint’s X-Lab. Check out Margaret’s <a href="https://share.transistor.fm/s/0d7df5af">first interview on the Human-Centered Security podcast (Episode 9)</a>.</p>]]>
      </itunes:summary>
      <itunes:keywords>cybersecurity, information security, ux, user experience</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
    </item>
    <item>
      <title>What UX Designers Need to Know About Privacy with Michelle Finneran Dennedy</title>
      <itunes:episode>33</itunes:episode>
      <podcast:episode>33</podcast:episode>
      <itunes:title>What UX Designers Need to Know About Privacy with Michelle Finneran Dennedy</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">ef050dbc-2964-4839-a795-d87aedca5f1d</guid>
      <link>https://share.transistor.fm/s/0b1c202c</link>
      <description>
        <![CDATA[<p>When your website says, “we value your privacy,” how do users interpret that statement? How do they experience “privacy” in your product? What messages are you conveying—perhaps unintentionally? Privacy expert Michelle Finneran Dennedy helps designers think about privacy in the context of the user experience.</p><p><br></p><p>In this episode, we talk about:</p><ul><li>What does privacy mean?</li><li>How, as designers, we give the user ideas of what to expect around privacy—an opportunity to erode or foster trust.</li><li>The approach her team took at McAfee when it came to redesigning their privacy policy.</li><li>Starting with ethics—and revving that “ethical engine.”</li><li>Who should designers reach out to about privacy at their organization? What should they ask?</li></ul><p>Michelle Finneran Dennedy is a privacy expert, the co-founder of Privacy Code, and was formerly Chief Privacy Officer at McAfee. She is the co-author of <a href="https://books.apple.com/us/book/the-privacy-engineers-manifesto/id809354613"><em>The Privacy Engineer’s Manifesto</em></a><em>.</em></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>When your website says, “we value your privacy,” how do users interpret that statement? How do they experience “privacy” in your product? What messages are you conveying—perhaps unintentionally? Privacy expert Michelle Finneran Dennedy helps designers think about privacy in the context of the user experience.</p><p><br></p><p>In this episode, we talk about:</p><ul><li>What does privacy mean?</li><li>How, as designers, we give the user ideas of what to expect around privacy—an opportunity to erode or foster trust.</li><li>The approach her team took at McAfee when it came to redesigning their privacy policy.</li><li>Starting with ethics—and revving that “ethical engine.”</li><li>Who should designers reach out to about privacy at their organization? What should they ask?</li></ul><p>Michelle Finneran Dennedy is a privacy expert, the co-founder of Privacy Code, and was formerly Chief Privacy Officer at McAfee. She is the co-author of <a href="https://books.apple.com/us/book/the-privacy-engineers-manifesto/id809354613"><em>The Privacy Engineer’s Manifesto</em></a><em>.</em></p>]]>
      </content:encoded>
      <pubDate>Wed, 13 Mar 2024 09:00:00 -0400</pubDate>
      <author>Voice+Code</author>
      <enclosure url="https://media.transistor.fm/0b1c202c/13514cd3.mp3" length="48233391" type="audio/mpeg"/>
      <itunes:author>Voice+Code</itunes:author>
      <itunes:duration>3013</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>When your website says, “we value your privacy,” how do users interpret that statement? How do they experience “privacy” in your product? What messages are you conveying—perhaps unintentionally? Privacy expert Michelle Finneran Dennedy helps designers think about privacy in the context of the user experience.</p><p><br></p><p>In this episode, we talk about:</p><ul><li>What does privacy mean?</li><li>How, as designers, we give the user ideas of what to expect around privacy—an opportunity to erode or foster trust.</li><li>The approach her team took at McAfee when it came to redesigning their privacy policy.</li><li>Starting with ethics—and revving that “ethical engine.”</li><li>Who should designers reach out to about privacy at their organization? What should they ask?</li></ul><p>Michelle Finneran Dennedy is a privacy expert, the co-founder of Privacy Code, and was formerly Chief Privacy Officer at McAfee. She is the co-author of <a href="https://books.apple.com/us/book/the-privacy-engineers-manifesto/id809354613"><em>The Privacy Engineer’s Manifesto</em></a><em>.</em></p>]]>
      </itunes:summary>
      <itunes:keywords>cybersecurity, information security, ux, user experience</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
    </item>
    <item>
      <title>Learning and Iterating Are Key to Improving the Security User Experience with Kevin Goldman</title>
      <itunes:episode>32</itunes:episode>
      <podcast:episode>32</podcast:episode>
      <itunes:title>Learning and Iterating Are Key to Improving the Security User Experience with Kevin Goldman</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">65f9b9e3-312f-43b0-b10b-e184a6466842</guid>
      <link>https://share.transistor.fm/s/c00cf8ce</link>
      <description>
        <![CDATA[<p>Designing for the security user experience is challenging because if security controls are too complex or burdensome, users may bypass them, which compromises security. Additionally, the constant evolution of threats means that effective security controls must be continuously updated to stay ahead of threat actors. In other words, what may have been relatively effective yesterday might not be effective tomorrow. Exactly why the security user experience is so exciting!</p><p><br></p><p>Thankfully, <a href="https://www.linkedin.com/in/kevingoldman">Kevin Goldman</a> shares my enthusiasm. Kevin is a design executive whose most recent focus has been in identity and access management. Kevin is the Chair of the UX Working Group at the FIDO Alliance, a nonprofit global industry organization that has developed the standards for passkeys.</p><p><br></p><p>During this episode, Kevin and I talk about: </p><ul><li>How to get buy-in for a human-centered approach to the security user experience.</li><li>A key moment when Kevin and his team faced a UX challenge with passkeys that forced them to take a step back and re-evaluate their approach.</li><li>The surprising findings and resolution after they dug deeper to understand the problem.</li><li>How Kevin worked with his cross-disciplinary team members to identify tradeoffs in usability and security and how they worked through them.</li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Designing for the security user experience is challenging because if security controls are too complex or burdensome, users may bypass them, which compromises security. Additionally, the constant evolution of threats means that effective security controls must be continuously updated to stay ahead of threat actors. In other words, what may have been relatively effective yesterday might not be effective tomorrow. Exactly why the security user experience is so exciting!</p><p><br></p><p>Thankfully, <a href="https://www.linkedin.com/in/kevingoldman">Kevin Goldman</a> shares my enthusiasm. Kevin is a design executive whose most recent focus has been in identity and access management. Kevin is the Chair of the UX Working Group at the FIDO Alliance, a nonprofit global industry organization that has developed the standards for passkeys.</p><p><br></p><p>During this episode, Kevin and I talk about: </p><ul><li>How to get buy-in for a human-centered approach to the security user experience.</li><li>A key moment when Kevin and his team faced a UX challenge with passkeys that forced them to take a step back and re-evaluate their approach.</li><li>The surprising findings and resolution after they dug deeper to understand the problem.</li><li>How Kevin worked with his cross-disciplinary team members to identify tradeoffs in usability and security and how they worked through them.</li></ul>]]>
      </content:encoded>
      <pubDate>Wed, 07 Feb 2024 05:00:00 -0500</pubDate>
      <author>Voice+Code</author>
      <enclosure url="https://media.transistor.fm/c00cf8ce/acf87687.mp3" length="43482047" type="audio/mpeg"/>
      <itunes:author>Voice+Code</itunes:author>
      <itunes:duration>2716</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Designing for the security user experience is challenging because if security controls are too complex or burdensome, users may bypass them, which compromises security. Additionally, the constant evolution of threats means that effective security controls must be continuously updated to stay ahead of threat actors. In other words, what may have been relatively effective yesterday might not be effective tomorrow. Exactly why the security user experience is so exciting!</p><p><br></p><p>Thankfully, <a href="https://www.linkedin.com/in/kevingoldman">Kevin Goldman</a> shares my enthusiasm. Kevin is a design executive whose most recent focus has been on identity and access management. He is the Chair of the UX Working Group at the FIDO Alliance, a nonprofit global industry organization that has developed the standards for passkeys.</p><p><br></p><p>During this episode, Kevin and I talk about: </p><ul><li>How to get buy-in for a human-centered approach to the security user experience.</li><li>A key moment when Kevin and his team faced a UX challenge with passkeys that forced them to take a step back and re-evaluate their approach.</li><li>The surprising findings and resolution after they dug deeper to understand the problem.</li><li>How Kevin worked with his cross-disciplinary team members to identify tradeoffs in usability and security and how they worked through them.</li></ul>]]>
      </itunes:summary>
      <itunes:keywords>cybersecurity, information security, ux, user experience</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
    </item>
    <item>
      <title>Build a UX of AI Framework for Your Cross-Disciplinary Team with John Robertson</title>
      <itunes:episode>31</itunes:episode>
      <podcast:episode>31</podcast:episode>
      <itunes:title>Build a UX of AI Framework for Your Cross-Disciplinary Team with John Robertson</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">31128ebb-af92-49d4-b01d-95f84f505e35</guid>
      <link>https://share.transistor.fm/s/73648a57</link>
      <description>
        <![CDATA[<p>UX folks are great at asking questions about AI and that’s exactly what we do in this episode. But “questions” sounds boring, so we gave the set of questions a fancy name: a UX of AI framework. UX researcher John Robertson describes the UX of AI framework he and his team helped build.</p><p><br></p><p>In this episode, we talk about:</p><ul><li>The importance of a human-centered design approach to AI.</li><li>The need to slow down and consider safety, privacy, and ethics as part of implementing AI.</li><li>Looking beyond the data: each data point represents a human.</li><li>The need to build and maintain trust in the AI user experience.</li><li>Understanding how humans and AI can work as teammates and how that dynamic might play out.</li></ul><p><br></p><p>John Robertson is a skilled UX researcher with a background in neuroscience and experience working at organizations such as American Airlines, IBM, and Visa. Currently, he is a Senior Principal UX Researcher for a cybersecurity software company implementing quantitative and qualitative methods to create human-centered security analyst experiences.</p><p>In the episode, we reference:</p><p><a href="https://jakobnielsenphd.substack.com/p/ai-qualitative-data-at-scale">Analyzing Qualitative User Data at Enterprise Scale with AI: The GE Case Study by Jakob Nielsen</a></p><p><a href="https://dl.acm.org/doi/abs/10.1145/3576915.3623157">Do Users Write More Insecure Code With AI Assistants?</a></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>UX folks are great at asking questions about AI and that’s exactly what we do in this episode. But “questions” sounds boring, so we gave the set of questions a fancy name: a UX of AI framework. UX researcher John Robertson describes the UX of AI framework he and his team helped build.</p><p><br></p><p>In this episode, we talk about:</p><ul><li>The importance of a human-centered design approach to AI.</li><li>The need to slow down and consider safety, privacy, and ethics as part of implementing AI.</li><li>Looking beyond the data: each data point represents a human.</li><li>The need to build and maintain trust in the AI user experience.</li><li>Understanding how humans and AI can work as teammates and how that dynamic might play out.</li></ul><p><br></p><p>John Robertson is a skilled UX researcher with a background in neuroscience and experience working at organizations such as American Airlines, IBM, and Visa. Currently, he is a Senior Principal UX Researcher for a cybersecurity software company implementing quantitative and qualitative methods to create human-centered security analyst experiences.</p><p>In the episode, we reference:</p><p><a href="https://jakobnielsenphd.substack.com/p/ai-qualitative-data-at-scale">Analyzing Qualitative User Data at Enterprise Scale with AI: The GE Case Study by Jakob Nielsen</a></p><p><a href="https://dl.acm.org/doi/abs/10.1145/3576915.3623157">Do Users Write More Insecure Code With AI Assistants?</a></p>]]>
      </content:encoded>
      <pubDate>Wed, 10 Jan 2024 05:00:00 -0500</pubDate>
      <author>Voice+Code</author>
      <enclosure url="https://media.transistor.fm/73648a57/53232477.mp3" length="42393671" type="audio/mpeg"/>
      <itunes:author>Voice+Code</itunes:author>
      <itunes:duration>2648</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>UX folks are great at asking questions about AI and that’s exactly what we do in this episode. But “questions” sounds boring, so we gave the set of questions a fancy name: a UX of AI framework. UX researcher John Robertson describes the UX of AI framework he and his team helped build.</p><p><br></p><p>In this episode, we talk about:</p><ul><li>The importance of a human-centered design approach to AI.</li><li>The need to slow down and consider safety, privacy, and ethics as part of implementing AI.</li><li>Looking beyond the data: each data point represents a human.</li><li>The need to build and maintain trust in the AI user experience.</li><li>Understanding how humans and AI can work as teammates and how that dynamic might play out.</li></ul><p><br></p><p>John Robertson is a skilled UX researcher with a background in neuroscience and experience working at organizations such as American Airlines, IBM, and Visa. Currently, he is a Senior Principal UX Researcher for a cybersecurity software company implementing quantitative and qualitative methods to create human-centered security analyst experiences.</p><p>In the episode, we reference:</p><p><a href="https://jakobnielsenphd.substack.com/p/ai-qualitative-data-at-scale">Analyzing Qualitative User Data at Enterprise Scale with AI: The GE Case Study by Jakob Nielsen</a></p><p><a href="https://dl.acm.org/doi/abs/10.1145/3576915.3623157">Do Users Write More Insecure Code With AI Assistants?</a></p>]]>
      </itunes:summary>
      <itunes:keywords>cybersecurity, information security, ux, user experience</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
    </item>
    <item>
      <title>Build Security and UX Into Your Product Development Process with Ali Cuthbertson and Jason Telner</title>
      <itunes:episode>30</itunes:episode>
      <podcast:episode>30</podcast:episode>
      <itunes:title>Build Security and UX Into Your Product Development Process with Ali Cuthbertson and Jason Telner</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">a386e04f-09cd-48cb-8da6-f35c190cc692</guid>
      <link>https://share.transistor.fm/s/5708f7be</link>
      <description>
        <![CDATA[<p>If there’s one thing both UX teams and security teams can empathize with each other on, it’s being involved too late in the development process. Ali Cuthbertson and Jason Telner realized that it wasn’t enough for teams to embrace the need for UX and security—they needed a method for integrating them into their agile development processes.</p><p><br></p><p>Throughout the interview, Ali and Jason will be referencing a project they worked on together to help develop and foster a consistent process for integrating UX and security into an agile development process for teams at IBM. As a result of their work, they developed a set of principles and best practices. They talk about:</p><ul><li>How a set of principles can serve as a guide for teams.</li><li>Why integrating UX and security involved a cultural shift for teams in order to be successful.</li><li>Why support from leadership is instrumental for new processes to be effective.</li><li>Tips for leveraging mixed methods user research to look at problems from different angles.</li><li>How to measure the success of embedding UX and security into existing processes.</li></ul><p>Ali and Jason presented some of their research and recommendations at the 2023 UXPA presentation called “How to balance strong user experiences with enhanced security within an agile framework? Lessons learned and best practices.”</p><p><br></p><p>Ali Cuthbertson is the Technical Vitality Development Manager and CIO Design Program Manager at IBM. Ali brings over 20 years of seasoned expertise navigating software and hardware engineering. She has become the Indiana Jones of life sciences, user experience, talent management, vitality optimization, security protocols, AI advancements, data analytics, scientific exploration, and cutting-edge cloud technologies.</p><p><br></p><p>Jason Telner, PhD, is a senior user researcher within IBM’s CIO design user research and data analytics team. 
Jason has over 15 years of experience working within the field of user research. In his current role at IBM, Jason’s focus has been on improving the user experience of employee support applications such as chatbots, web support, and voice interface support.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>If there’s one thing both UX teams and security teams can empathize with each other on, it’s being involved too late in the development process. Ali Cuthbertson and Jason Telner realized that it wasn’t enough for teams to embrace the need for UX and security—they needed a method for integrating them into their agile development processes.</p><p><br></p><p>Throughout the interview, Ali and Jason will be referencing a project they worked on together to help develop and foster a consistent process for integrating UX and security into an agile development process for teams at IBM. As a result of their work, they developed a set of principles and best practices. They talk about:</p><ul><li>How a set of principles can serve as a guide for teams.</li><li>Why integrating UX and security involved a cultural shift for teams in order to be successful.</li><li>Why support from leadership is instrumental for new processes to be effective.</li><li>Tips for leveraging mixed methods user research to look at problems from different angles.</li><li>How to measure the success of embedding UX and security into existing processes.</li></ul><p>Ali and Jason presented some of their research and recommendations at the 2023 UXPA presentation called “How to balance strong user experiences with enhanced security within an agile framework? Lessons learned and best practices.”</p><p><br></p><p>Ali Cuthbertson is the Technical Vitality Development Manager and CIO Design Program Manager at IBM. Ali brings over 20 years of seasoned expertise navigating software and hardware engineering. She has become the Indiana Jones of life sciences, user experience, talent management, vitality optimization, security protocols, AI advancements, data analytics, scientific exploration, and cutting-edge cloud technologies.</p><p><br></p><p>Jason Telner, PhD, is a senior user researcher within IBM’s CIO design user research and data analytics team. 
Jason has over 15 years of experience working within the field of user research. In his current role at IBM, Jason’s focus has been on improving the user experience of employee support applications such as chatbots, web support, and voice interface support.</p>]]>
      </content:encoded>
      <pubDate>Wed, 13 Dec 2023 05:00:00 -0500</pubDate>
      <author>Voice+Code</author>
      <enclosure url="https://media.transistor.fm/5708f7be/28dda077.mp3" length="37105669" type="audio/mpeg"/>
      <itunes:author>Voice+Code</itunes:author>
      <itunes:duration>2317</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>If there’s one thing both UX teams and security teams can empathize with each other on, it’s being involved too late in the development process. Ali Cuthbertson and Jason Telner realized that it wasn’t enough for teams to embrace the need for UX and security—they needed a method for integrating them into their agile development processes.</p><p><br></p><p>Throughout the interview, Ali and Jason will be referencing a project they worked on together to help develop and foster a consistent process for integrating UX and security into an agile development process for teams at IBM. As a result of their work, they developed a set of principles and best practices. They talk about:</p><ul><li>How a set of principles can serve as a guide for teams.</li><li>Why integrating UX and security involved a cultural shift for teams in order to be successful.</li><li>Why support from leadership is instrumental for new processes to be effective.</li><li>Tips for leveraging mixed methods user research to look at problems from different angles.</li><li>How to measure the success of embedding UX and security into existing processes.</li></ul><p>Ali and Jason presented some of their research and recommendations at the 2023 UXPA presentation called “How to balance strong user experiences with enhanced security within an agile framework? Lessons learned and best practices.”</p><p><br></p><p>Ali Cuthbertson is the Technical Vitality Development Manager and CIO Design Program Manager at IBM. Ali brings over 20 years of seasoned expertise navigating software and hardware engineering. She has become the Indiana Jones of life sciences, user experience, talent management, vitality optimization, security protocols, AI advancements, data analytics, scientific exploration, and cutting-edge cloud technologies.</p><p><br></p><p>Jason Telner, PhD, is a senior user researcher within IBM’s CIO design user research and data analytics team. 
Jason has over 15 years of experience working within the field of user research. In his current role at IBM, Jason’s focus has been on improving the user experience of employee support applications such as chatbots, web support, and voice interface support.</p>]]>
      </itunes:summary>
      <itunes:keywords>cybersecurity, information security, ux, user experience</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
    </item>
    <item>
      <title>Designing for Cybersecurity Power Users with Tom Keenoy</title>
      <itunes:episode>29</itunes:episode>
      <podcast:episode>29</podcast:episode>
      <itunes:title>Designing for Cybersecurity Power Users with Tom Keenoy</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">d721a0b0-531d-40bb-8964-7f0a46bc893c</guid>
      <link>https://share.transistor.fm/s/6dfc24db</link>
      <description>
        <![CDATA[<p>Ever wonder what it’s like to design enterprise cybersecurity software? Tom Keenoy, a design leader for a cybersecurity company, explains why what you learned in design school may not apply when you’re building software for specialized power users (think: security analysts, IT administrators, devops).</p><ul><li>How do you get up-to-speed when designing for complex domains like cybersecurity?</li><li>How do you adapt your design process for enterprise power users (spoiler: stripping away information isn’t always the right answer)?</li><li>How to prioritize when “everyone wants to build all the cool things.”</li><li>Why Tom thinks much of a designer’s job is “de-risking.”</li><li>The most important skills designers need to be successful in building enterprise security software.</li></ul><p>Tom Keenoy is a design leader who loves building technical products for power users. At various points in his career he’s been a designer, an educator, an engineer, a product manager, and a startup founder. He’s currently leading a design team at a cybersecurity company and advising growth stage startups to help right-size their UX and product design programs.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Ever wonder what it’s like to design enterprise cybersecurity software? Tom Keenoy, a design leader for a cybersecurity company, explains why what you learned in design school may not apply when you’re building software for specialized power users (think: security analysts, IT administrators, devops).</p><ul><li>How do you get up-to-speed when designing for complex domains like cybersecurity?</li><li>How do you adapt your design process for enterprise power users (spoiler: stripping away information isn’t always the right answer)?</li><li>How to prioritize when “everyone wants to build all the cool things.”</li><li>Why Tom thinks much of a designer’s job is “de-risking.”</li><li>The most important skills designers need to be successful in building enterprise security software.</li></ul><p>Tom Keenoy is a design leader who loves building technical products for power users. At various points in his career he’s been a designer, an educator, an engineer, a product manager, and a startup founder. He’s currently leading a design team at a cybersecurity company and advising growth stage startups to help right-size their UX and product design programs.</p>]]>
      </content:encoded>
      <pubDate>Wed, 29 Nov 2023 05:00:00 -0500</pubDate>
      <author>Voice+Code</author>
      <enclosure url="https://media.transistor.fm/6dfc24db/18060065.mp3" length="31956788" type="audio/mpeg"/>
      <itunes:author>Voice+Code</itunes:author>
      <itunes:duration>1996</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Ever wonder what it’s like to design enterprise cybersecurity software? Tom Keenoy, a design leader for a cybersecurity company, explains why what you learned in design school may not apply when you’re building software for specialized power users (think: security analysts, IT administrators, devops).</p><ul><li>How do you get up-to-speed when designing for complex domains like cybersecurity?</li><li>How do you adapt your design process for enterprise power users (spoiler: stripping away information isn’t always the right answer)?</li><li>How to prioritize when “everyone wants to build all the cool things.”</li><li>Why Tom thinks much of a designer’s job is “de-risking.”</li><li>The most important skills designers need to be successful in building enterprise security software.</li></ul><p>Tom Keenoy is a design leader who loves building technical products for power users. At various points in his career he’s been a designer, an educator, an engineer, a product manager, and a startup founder. He’s currently leading a design team at a cybersecurity company and advising growth stage startups to help right-size their UX and product design programs.</p>]]>
      </itunes:summary>
      <itunes:keywords>cybersecurity, information security, ux, user experience</itunes:keywords>
      <itunes:explicit>Yes</itunes:explicit>
    </item>
    <item>
      <title>Security Engineers Hate CAPTCHAs, Too with Jason Puglisi</title>
      <itunes:episode>28</itunes:episode>
      <podcast:episode>28</podcast:episode>
      <itunes:title>Security Engineers Hate CAPTCHAs, Too with Jason Puglisi</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">bc54dedf-12d7-4486-b2c9-a92256ee6645</guid>
      <link>https://share.transistor.fm/s/504dd7c8</link>
      <description>
        <![CDATA[<p>Ever encountered a CAPTCHA and thought to yourself, “whoever decided to put this here must really hate people”? It turns out, the people who make the decisions to use CAPTCHAs hate them as much as you do. Jason Puglisi, an application security engineer, describes what teams like his think about when evaluating potential solutions to a security issue. (Spoiler: you’ll be pleased to know these considerations include how security solutions may affect the user experience).</p><ul><li>The surprising similarities between UX and security teams.</li><li>What designers need to know about information security risks, as well as how designers can help security teams understand the UX tradeoffs they may be making.</li><li>What designers can do to more effectively collaborate with their cross-disciplinary teams, including the security engineering team.</li><li>What to consider when designing for users in higher-risk scenarios—users who have privileged access and are operating at scale (for example, if your end users are engineers, IT professionals, or security analysts).</li></ul><p>Jason Puglisi is an application security engineer at a financial technology company. He performs ethical hacking to discover vulnerabilities, guide solutions, and inform organization-wide security measures. Human security is a particular passion of his, including security culture, awareness, and various aspects of social engineering.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Ever encountered a CAPTCHA and thought to yourself, “whoever decided to put this here must really hate people”? It turns out, the people who make the decisions to use CAPTCHAs hate them as much as you do. Jason Puglisi, an application security engineer, describes what teams like his think about when evaluating potential solutions to a security issue. (Spoiler: you’ll be pleased to know these considerations include how security solutions may affect the user experience).</p><ul><li>The surprising similarities between UX and security teams.</li><li>What designers need to know about information security risks, as well as how designers can help security teams understand the UX tradeoffs they may be making.</li><li>What designers can do to more effectively collaborate with their cross-disciplinary teams, including the security engineering team.</li><li>What to consider when designing for users in higher-risk scenarios—users who have privileged access and are operating at scale (for example, if your end users are engineers, IT professionals, or security analysts).</li></ul><p>Jason Puglisi is an application security engineer at a financial technology company. He performs ethical hacking to discover vulnerabilities, guide solutions, and inform organization-wide security measures. Human security is a particular passion of his, including security culture, awareness, and various aspects of social engineering.</p>]]>
      </content:encoded>
      <pubDate>Fri, 17 Nov 2023 05:00:00 -0500</pubDate>
      <author>Voice+Code</author>
      <enclosure url="https://media.transistor.fm/504dd7c8/cd3df49a.mp3" length="38522090" type="audio/mpeg"/>
      <itunes:author>Voice+Code</itunes:author>
      <itunes:duration>2406</itunes:duration>
      <itunes:summary>
        <![CDATA[<p>Ever encountered a CAPTCHA and thought to yourself, “whoever decided to put this here must really hate people”? It turns out, the people who make the decisions to use CAPTCHAs hate them as much as you do. Jason Puglisi, an application security engineer, describes what teams like his think about when evaluating potential solutions to a security issue. (Spoiler: you’ll be pleased to know these considerations include how security solutions may affect the user experience).</p><ul><li>The surprising similarities between UX and security teams.</li><li>What designers need to know about information security risks, as well as how designers can help security teams understand the UX tradeoffs they may be making.</li><li>What designers can do to more effectively collaborate with their cross-disciplinary teams, including the security engineering team.</li><li>What to consider when designing for users in higher-risk scenarios—users who have privileged access and are operating at scale (for example, if your end users are engineers, IT professionals, or security analysts).</li></ul><p>Jason Puglisi is an application security engineer at a financial technology company. He performs ethical hacking to discover vulnerabilities, guide solutions, and inform organization-wide security measures. Human security is a particular passion of his, including security culture, awareness, and various aspects of social engineering.</p>]]>
      </itunes:summary>
      <itunes:keywords>cybersecurity, information security, ux, user experience</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
    </item>
    <item>
      <title>Threat Modeling for UX Designers with Adam Shostack</title>
      <itunes:episode>27</itunes:episode>
      <podcast:episode>27</podcast:episode>
      <itunes:title>Threat Modeling for UX Designers with Adam Shostack</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">389cf32c-6859-43f7-a42d-ebf7619aa356</guid>
      <link>https://share.transistor.fm/s/ad97b9b4</link>
      <description>
        <![CDATA[<p>In this episode, we talk about:</p><ul><li>Questions you should be asking to uncover information security threats early on in the design process.</li><li>How to account for human behavior in a structured way as part of threat modeling (spoiler: this is not so different from what you are doing now).</li><li>How to collaborate with an interdisciplinary team as part of an iterative design process to improve the user experience of security.</li></ul><p>Adam Shostack is an expert on threat modeling, having worked at Microsoft and currently running security consultancy <a href="https://shostack.org/">Shostack + Associates</a>. He is the author of <em>The New School of Information Security</em>, <em>Threat Modeling: Designing for Security</em> and the forthcoming <em>Threats: What Every Engineer Should Learn From Star Wars</em>. <a href="https://www.youtube.com/c/Shostack">Adam’s YouTube channel</a> has entertaining videos that are also excellent resources for learning about threat modeling.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode, we talk about:</p><ul><li>Questions you should be asking to uncover information security threats early on in the design process.</li><li>How to account for human behavior in a structured way as part of threat modeling (spoiler: this is not so different from what you are doing now).</li><li>How to collaborate with an interdisciplinary team as part of an iterative design process to improve the user experience of security.</li></ul><p>Adam Shostack is an expert on threat modeling, having worked at Microsoft and currently running security consultancy <a href="https://shostack.org/">Shostack + Associates</a>. He is the author of <em>The New School of Information Security</em>, <em>Threat Modeling: Designing for Security</em> and the forthcoming <em>Threats: What Every Engineer Should Learn From Star Wars</em>. <a href="https://www.youtube.com/c/Shostack">Adam’s YouTube channel</a> has entertaining videos that are also excellent resources for learning about threat modeling.</p>]]>
      </content:encoded>
      <pubDate>Wed, 09 Nov 2022 06:00:00 -0500</pubDate>
      <author>Voice+Code</author>
      <enclosure url="https://media.transistor.fm/ad97b9b4/52ddd9e0.mp3" length="38983487" type="audio/mpeg"/>
      <itunes:author>Voice+Code</itunes:author>
      <itunes:duration>2435</itunes:duration>
      <itunes:summary>How can we proactively anticipate threats in an effort to design user experiences that are both safe and usable? Adam describes threat modeling and the role UX designers play in threat modeling exercises.</itunes:summary>
      <itunes:subtitle>How can we proactively anticipate threats in an effort to design user experiences that are both safe and usable? Adam describes threat modeling and the role UX designers play in threat modeling exercises.</itunes:subtitle>
      <itunes:keywords>cybersecurity, information security, ux, user experience</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
    </item>
    <item>
      <title>Designing Multi-Factor Authentication with Blair Shen and Bethany Sonefeld</title>
      <itunes:episode>26</itunes:episode>
      <podcast:episode>26</podcast:episode>
      <itunes:title>Designing Multi-Factor Authentication with Blair Shen and Bethany Sonefeld</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">55b80db2-2664-4262-b75f-b9adb1c602e7</guid>
      <link>https://share.transistor.fm/s/52f79a3b</link>
      <description>
        <![CDATA[<p>In this episode we talk about:</p><ul><li>How designing for security is different from (and the same as) designing for other types of experiences.</li><li>How to tackle aspects of the user experience that may be necessary but are perceived as annoying roadblocks.</li><li>How to anticipate where things might go wrong for the user.</li><li>How to effectively collaborate with technical teams.</li></ul><p><br></p><p>Bethany Sonefeld is the founder of <a href="https://www.CreatewithConscience.com">Create With Conscience</a>, a space dedicated to educating and committing to building healthier technology. Create With Conscience was something Bethany developed out of interest in creating a healthier balance of technology in her own life. Bethany is a design manager at Duo Security and was previously at Cloudflare, RetailMeNot, and IBM.</p><p><br></p><p>Blair Shen is a product designer at Duo Security and was previously at Cloudflare and Harry&amp;David. She is also a YouTube content creator, where she mentors and coaches aspiring UX designers.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode we talk about:</p><ul><li>How designing for security is different from (and the same as) designing for other types of experiences.</li><li>How to tackle aspects of the user experience that may be necessary but are perceived as annoying roadblocks.</li><li>How to anticipate where things might go wrong for the user.</li><li>How to effectively collaborate with technical teams.</li></ul><p><br></p><p>Bethany Sonefeld is the founder of <a href="https://www.CreatewithConscience.com">Create With Conscience</a>, a space dedicated to educating and committing to building healthier technology. Create With Conscience was something Bethany developed out of interest in creating a healthier balance of technology in her own life. Bethany is a design manager at Duo Security and was previously at Cloudflare, RetailMeNot, and IBM.</p><p><br></p><p>Blair Shen is a product designer at Duo Security and was previously at Cloudflare and Harry&amp;David. She is also a YouTube content creator, where she mentors and coaches aspiring UX designers.</p>]]>
      </content:encoded>
      <pubDate>Wed, 19 Oct 2022 09:02:16 -0400</pubDate>
      <author>Voice+Code</author>
      <enclosure url="https://media.transistor.fm/52f79a3b/9360c99e.mp3" length="36827797" type="audio/mpeg"/>
      <itunes:author>Voice+Code</itunes:author>
      <itunes:duration>2300</itunes:duration>
      <itunes:summary>Two-factor authentication is often perceived by users as an annoying roadblock placed between them and the goal they want to accomplish. As a UX designer, how do you approach these types of scenarios where you have to balance usability with security?</itunes:summary>
      <itunes:subtitle>Two-factor authentication is often perceived by users as an annoying roadblock placed between them and the goal they want to accomplish. As a UX designer, how do you approach these types of scenarios where you have to balance usability with security?</itunes:subtitle>
      <itunes:keywords>cybersecurity, information security, ux, user experience</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
    </item>
    <item>
      <title>Unintended Consequences: What Questions Should Designers Be Asking? With Bethany Sonefeld</title>
      <itunes:episode>25</itunes:episode>
      <podcast:episode>25</podcast:episode>
      <itunes:title>Unintended Consequences: What Questions Should Designers Be Asking? With Bethany Sonefeld</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">9a96c2a3-40ce-4d00-a9e6-00eff8e17af8</guid>
      <link>https://share.transistor.fm/s/dbaa0598</link>
      <description>
        <![CDATA[<p>In this episode, we talk about:</p><ul><li>How do you tackle situations where business goals might be at odds with what’s ethical or what’s best for the human using the product?</li><li>How can designers make a difference even if they don’t have a leadership role at their organization?</li><li>How do you anticipate potentially unhealthy behaviors or unintended consequences? </li><li>What are some actionable steps you can take today?</li></ul><p><br></p><p>Bethany Sonefeld is the founder of <a href="https://www.CreatewithConscience.com">Create With Conscience</a>, a space dedicated to educating and committing to building healthier technology. Create With Conscience was something Bethany developed out of interest in creating a healthier balance of technology in her own life. Bethany is a design manager at Duo Security and was previously at Cloudflare, RetailMeNot, and IBM.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode, we talk about:</p><ul><li>How do you tackle situations where business goals might be at odds with what’s ethical or what’s best for the human using the product?</li><li>How can designers make a difference even if they don’t have a leadership role at their organization?</li><li>How do you anticipate potentially unhealthy behaviors or unintended consequences? </li><li>What are some actionable steps you can take today?</li></ul><p><br></p><p>Bethany Sonefeld is the founder of <a href="https://www.CreatewithConscience.com">Create With Conscience</a>, a space dedicated to educating and committing to building healthier technology. Create With Conscience was something Bethany developed out of interest in creating a healthier balance of technology in her own life. Bethany is a design manager at Duo Security and was previously at Cloudflare, RetailMeNot, and IBM.</p>]]>
      </content:encoded>
      <pubDate>Wed, 24 Aug 2022 09:12:31 -0400</pubDate>
      <author>Voice+Code</author>
      <enclosure url="https://media.transistor.fm/dbaa0598/e5cf94d5.mp3" length="37089846" type="audio/mpeg"/>
      <itunes:author>Voice+Code</itunes:author>
      <itunes:duration>2316</itunes:duration>
      <itunes:summary>In security we try to anticipate and account for what might go wrong. Thinking more broadly, what are the unintended consequences of the products we put out into the world? What questions should we be asking as product designers?</itunes:summary>
      <itunes:subtitle>In security we try to anticipate and account for what might go wrong. Thinking more broadly, what are the unintended consequences of the products we put out into the world? What questions should we be asking as product designers?</itunes:subtitle>
      <itunes:keywords>cybersecurity, information security, ux, user experience</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
    </item>
    <item>
      <title>What Role Does the UX Team Play in Security? With Michael Snell</title>
      <itunes:episode>24</itunes:episode>
      <podcast:episode>24</podcast:episode>
      <itunes:title>What Role Does the UX Team Play in Security? With Michael Snell</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">e17371c8-df99-4a9a-8747-1de3792b7ef9</guid>
      <link>https://share.transistor.fm/s/662675e1</link>
      <description>
        <![CDATA[<p>How do the UX, product, and technology teams effectively collaborate when it comes to security? How do we, as part of the UX team, take part in the security conversations and what role do we play?</p><p><br></p><p>In this episode, we talk about:</p><ul><li>How Michael’s user research for dating apps helped him understand the unintended consequences of digital products on our behaviors.</li><li>Why we need new frameworks for security and privacy in the digital world.</li><li>How users’ perceptions and expectations for security and privacy are highly contextual and changing. </li><li>How to break down the user experience of security so your team isn’t treading water in the abstract and can take steps to improve security outcomes.</li></ul><p><br></p><p>Michael Snell is the UX research team lead at JPMorgan Chase managing research focused on security and authentication. He previously worked at Microsoft and Verizon Connect. He has a PhD in psychology from the University of Georgia.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>How do the UX, product, and technology teams effectively collaborate when it comes to security? How do we, as part of the UX team, take part in the security conversations and what role do we play?</p><p><br></p><p>In this episode, we talk about:</p><ul><li>How Michael’s user research for dating apps helped him understand the unintended consequences of digital products on our behaviors.</li><li>Why we need new frameworks for security and privacy in the digital world.</li><li>How users’ perceptions and expectations for security and privacy are highly contextual and changing. </li><li>How to break down the user experience of security so your team isn’t treading water in the abstract and can take steps to improve security outcomes.</li></ul><p><br></p><p>Michael Snell is the UX research team lead at JPMorgan Chase managing research focused on security and authentication. He previously worked at Microsoft and Verizon Connect. He has a PhD in psychology from the University of Georgia.</p>]]>
      </content:encoded>
      <pubDate>Wed, 20 Jul 2022 06:00:00 -0400</pubDate>
      <author>Voice+Code</author>
      <enclosure url="https://media.transistor.fm/662675e1/3414ca10.mp3" length="36040024" type="audio/mpeg"/>
      <itunes:author>Voice+Code</itunes:author>
      <itunes:duration>2250</itunes:duration>
      <itunes:summary>Michael Snell, UX research lead at JPMorgan Chase, describes the role UX designers and researchers play in improving security outcomes. He describes the need for new frameworks for security and privacy in the digital world and explains how to go from security in the abstract to actionable next steps.</itunes:summary>
      <itunes:subtitle>Michael Snell, UX research lead at JPMorgan Chase, describes the role UX designers and researchers play in improving security outcomes. He describes the need for new frameworks for security and privacy in the digital world and explains how to go from secu</itunes:subtitle>
      <itunes:keywords>cybersecurity, information security, ux, user experience</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
    </item>
    <item>
      <title>Testing for Usability and Security with Jeremiah Still</title>
      <itunes:episode>23</itunes:episode>
      <podcast:episode>23</podcast:episode>
      <itunes:title>Testing for Usability and Security with Jeremiah Still</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">df38eb3b-0bee-44fb-b180-ec5d7ce088c8</guid>
      <link>https://share.transistor.fm/s/37efc300</link>
      <description>
        <![CDATA[<p>In this episode, we talk about:</p><ul><li>Where the fields of cognitive psychology, security, and user experience meet.</li><li>Why Jeremiah and his team chose to investigate graphical authentication.</li><li>How they cleverly incorporated testing both usability and security in their two-part study.</li><li>The importance of research around learnability: is it easy for users to learn how to use your new authentication schema?</li></ul><p><br></p><p>Read Jeremiah’s research: <a href="https://uxpajournal.org/usability-osa-resistant-authentication/">Usability Comparison of Over-the-Shoulder Attack Resistant Authentication Schemes</a>.</p><p><br></p><p>Jeremiah is the Director of the Human Factors Ph.D. Track and Associate Professor in Psychology and the School of Cybersecurity at Old Dominion University. He runs the Psychology of Design Laboratory, which focuses on human cognition and technology, including usable security.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode, we talk about:</p><ul><li>Where the fields of cognitive psychology, security, and user experience meet.</li><li>Why Jeremiah and his team chose to investigate graphical authentication.</li><li>How they cleverly incorporated testing both usability and security in their two-part study.</li><li>The importance of research around learnability: is it easy for users to learn how to use your new authentication schema?</li></ul><p><br></p><p>Read Jeremiah’s research: <a href="https://uxpajournal.org/usability-osa-resistant-authentication/">Usability Comparison of Over-the-Shoulder Attack Resistant Authentication Schemes</a>.</p><p><br></p><p>Jeremiah is the Director of the Human Factors Ph.D. Track and Associate Professor in Psychology and the School of Cybersecurity at Old Dominion University. He runs the Psychology of Design Laboratory, which focuses on human cognition and technology, including usable security.</p>]]>
      </content:encoded>
      <pubDate>Wed, 25 May 2022 08:15:07 -0400</pubDate>
      <author>Voice+Code</author>
      <enclosure url="https://media.transistor.fm/37efc300/feed91e1.mp3" length="32397338" type="audio/mpeg"/>
      <itunes:author>Voice+Code</itunes:author>
      <itunes:duration>2022</itunes:duration>
      <itunes:summary>How do you test the usability and security of your design ideas? Jeremiah Still walks us through research he and his team conducted on graphical authentication, where users select system-generated images for their passwords.</itunes:summary>
      <itunes:subtitle>How do you test the usability and security of your design ideas? Jeremiah Still walks us through research he and his team conducted on graphical authentication, where users select system-generated images for their passwords.</itunes:subtitle>
      <itunes:keywords>cybersecurity, information security, ux, user experience</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
    </item>
    <item>
      <title>Technical Users Care About UX, Too</title>
      <itunes:episode>22</itunes:episode>
      <podcast:episode>22</podcast:episode>
      <itunes:title>Technical Users Care About UX, Too</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">90101f4e-cb6d-410a-bc3e-361bb2160b2b</guid>
      <link>https://share.transistor.fm/s/fb71befc</link>
      <description>
        <![CDATA[<p>In this episode, we talk about:</p><ul><li>Why technical users expect a great user experience just like everyone else.</li><li>How to find and incentivize participants who are extremely busy.</li><li>How to support users in making a decision without telling them what to do.</li><li>Deciding what data to show and how to show it.</li></ul><p>Tanja Venborg Hansen is a seasoned user researcher who has worked in both the enterprise cybersecurity (Forcepoint) and aviation industries (Finnair). She earned a master of science degree focused on design and innovation from the Technical University of Denmark.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode, we talk about:</p><ul><li>Why technical users expect a great user experience just like everyone else.</li><li>How to find and incentivize participants who are extremely busy.</li><li>How to support users in making a decision without telling them what to do.</li><li>Deciding what data to show and how to show it.</li></ul><p>Tanja Venborg Hansen is a seasoned user researcher who has worked in both the enterprise cybersecurity (Forcepoint) and aviation industries (Finnair). She earned a master of science degree focused on design and innovation from the Technical University of Denmark.</p>]]>
      </content:encoded>
      <pubDate>Wed, 09 Mar 2022 07:51:04 -0500</pubDate>
      <author>Voice+Code</author>
      <enclosure url="https://media.transistor.fm/fb71befc/a0275c7c.mp3" length="26995926" type="audio/mpeg"/>
      <itunes:author>Voice+Code</itunes:author>
      <itunes:duration>1685</itunes:duration>
      <itunes:summary>Tanja Venborg Hansen, a user researcher who has worked for an enterprise security company and in the aviation industry, explains the challenges of creating the user experience for products used by information security professionals. We talk about getting buy-in from stakeholders, how to find people to participate in research studies, and deciding what “actionable” data really means.</itunes:summary>
      <itunes:subtitle>Tanja Venborg Hansen, a user researcher who has worked for an enterprise security company and in the aviation industry, explains the challenges of creating the user experience for products used by information security professionals. We talk about get</itunes:subtitle>
      <itunes:keywords>cybersecurity, information security, ux, user experience</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
    </item>
    <item>
      <title>Responsible Innovation in the Technology Industry with Chloe Poynton</title>
      <itunes:episode>21</itunes:episode>
      <podcast:episode>21</podcast:episode>
      <itunes:title>Responsible Innovation in the Technology Industry with Chloe Poynton</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">3b3506b6-2176-48a9-9579-f7190107adb3</guid>
      <link>https://share.transistor.fm/s/b63cf3ac</link>
      <description>
        <![CDATA[<p>In this episode, we talk about:</p><ul><li>What is responsible innovation and where can companies get started?</li><li>How can companies take guiding principles, establish a framework, and operationalize that framework in a way that “informs decision-making in a meaningful way”?</li><li>How are regulations impacting responsible innovation programs?</li><li>What happens when an organization’s business model conflicts with responsible innovation principles?</li></ul><p><br></p><p><br></p><p>Chloe Poynton is the co-founder and principal at <a href="https://www.articleoneadvisors.com">Article One Advisors</a>, a management consultancy with expertise in human rights, responsible innovation, and social impact.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode, we talk about:</p><ul><li>What is responsible innovation and where can companies get started?</li><li>How can companies take guiding principles, establish a framework, and operationalize that framework in a way that “informs decision-making in a meaningful way”?</li><li>How are regulations impacting responsible innovation programs?</li><li>What happens when an organization’s business model conflicts with responsible innovation principles?</li></ul><p><br></p><p><br></p><p>Chloe Poynton is the co-founder and principal at <a href="https://www.articleoneadvisors.com">Article One Advisors</a>, a management consultancy with expertise in human rights, responsible innovation, and social impact.</p>]]>
      </content:encoded>
      <pubDate>Wed, 08 Dec 2021 08:31:08 -0500</pubDate>
      <author>Voice+Code</author>
      <enclosure url="https://media.transistor.fm/b63cf3ac/453f0d04.mp3" length="39895018" type="audio/mpeg"/>
      <itunes:author>Voice+Code</itunes:author>
      <itunes:duration>2491</itunes:duration>
      <itunes:summary>Technology is changing at a rapid pace, but what are the unintended consequences of the digital products we create and put out into the world? How does your product affect human rights? Chloe Poynton, co-founder of Article One Advisors, talks about responsible innovation: “anticipating risks before they become real-world harms” and what that means for technology companies.</itunes:summary>
      <itunes:subtitle>Technology is changing at a rapid pace, but what are the unintended consequences of the digital products we create and put out into the world? How does your product affect human rights? Chloe Poynton, co-founder of Article One Advisors, talks about respons</itunes:subtitle>
      <itunes:keywords>cybersecurity, information security, ux, user experience</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
    </item>
    <item>
      <title>Why Designers Need to Learn About Security with Jared Spool</title>
      <itunes:episode>20</itunes:episode>
      <podcast:episode>20</podcast:episode>
      <itunes:title>Why Designers Need to Learn About Security with Jared Spool</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">ba788a05-f3b0-4cee-8b71-c56ca9211370</guid>
      <link>https://share.transistor.fm/s/c7950801</link>
      <description>
        <![CDATA[<p>In this episode, we talk about:</p><ul><li>Why security UX requires “selective usability” and how that poses unique challenges for designers.</li><li>Thinking about security in terms of safety systems: putting the burden on the system rather than on the user.</li><li>How to work effectively with the security team.</li></ul><p>And Jared shares lots of examples.</p><p>Jared Spool is the founder of UX consultancy <a href="https://www.uie.com">UIE</a> and the co-founder of UX design school Center Centre. Interested in hearing more about what Jared has to say about the security of UX? Watch the talk: <a href="https://www.uie.com/jared-live">Insecure and Unintuitive: Why We Need to Fix the Security of UX</a>.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode, we talk about:</p><ul><li>Why security UX requires “selective usability” and how that poses unique challenges for designers.</li><li>Thinking about security in terms of safety systems: putting the burden on the system rather than on the user.</li><li>How to work effectively with the security team.</li></ul><p>And Jared shares lots of examples.</p><p>Jared Spool is the founder of UX consultancy <a href="https://www.uie.com">UIE</a> and the co-founder of UX design school Center Centre. Interested in hearing more about what Jared has to say about the security of UX? Watch the talk: <a href="https://www.uie.com/jared-live">Insecure and Unintuitive: Why We Need to Fix the Security of UX</a>.</p>]]>
      </content:encoded>
      <pubDate>Wed, 10 Nov 2021 07:41:31 -0500</pubDate>
      <author>Voice+Code</author>
      <enclosure url="https://media.transistor.fm/c7950801/679463e7.mp3" length="45275874" type="audio/mpeg"/>
      <itunes:author>Voice+Code</itunes:author>
      <itunes:duration>2827</itunes:duration>
      <itunes:summary>Jared Spool, founder of UIE and co-founder of Center Centre, talks about why designers need to learn about security and how to demonstrate to stakeholders that having a poor security UX is more expensive than fixing it.</itunes:summary>
      <itunes:subtitle>Jared Spool, founder of UIE and co-founder of Center Centre, talks about why designers need to learn about security and how to demonstrate to stakeholders that having a poor security UX is more expensive than fixing it.</itunes:subtitle>
      <itunes:keywords>cybersecurity, information security, ux, user experience</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
    </item>
    <item>
      <title>Improve, Adapt, and Customize Cybersecurity Awareness Strategies and Metrics with Kate Brett Goldman</title>
      <itunes:episode>19</itunes:episode>
      <podcast:episode>19</podcast:episode>
      <itunes:title>Improve, Adapt, and Customize Cybersecurity Awareness Strategies and Metrics with Kate Brett Goldman</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">44b2abd5-93bd-4603-a17d-5417a85719d2</guid>
      <link>https://share.transistor.fm/s/7973d13e</link>
      <description>
        <![CDATA[<p>In this episode, we talk about:</p><ul><li>What’s next for the cybersecurity awareness industry.</li><li>How to leverage qualitative and quantitative metrics (with similar challenges and opportunities to measuring the user experience).</li><li>How to go about understanding and changing your organization’s cybersecurity culture.</li></ul><p><br>Kate Brett Goldman is the Founder and CEO of Cybermaniacs, an innovative cybersecurity awareness company. Prior to founding Cybermaniacs, Kate spent over 20 years developing solutions that encourage human and organizational change in enterprise IT.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode, we talk about:</p><ul><li>What’s next for the cybersecurity awareness industry.</li><li>How to leverage qualitative and quantitative metrics (with similar challenges and opportunities to measuring the user experience).</li><li>How to go about understanding and changing your organization’s cybersecurity culture.</li></ul><p><br>Kate Brett Goldman is the Founder and CEO of Cybermaniacs, an innovative cybersecurity awareness company. Prior to founding Cybermaniacs, Kate spent over 20 years developing solutions that encourage human and organizational change in enterprise IT.</p>]]>
      </content:encoded>
      <pubDate>Wed, 27 Oct 2021 07:22:17 -0400</pubDate>
      <author>Voice+Code</author>
      <enclosure url="https://media.transistor.fm/7973d13e/3d897043.mp3" length="36346265" type="audio/mpeg"/>
      <itunes:author>Voice+Code</itunes:author>
      <itunes:duration>2269</itunes:duration>
      <itunes:summary>Kate Brett Goldman, founder and CEO of Cybermaniacs, talks about “cybersecurity awareness 2.0” and how to best leverage metrics in order to provide insights that are unique and actionable for your organization.</itunes:summary>
      <itunes:subtitle>Kate Brett Goldman, founder and CEO of Cybermaniacs, talks about “cybersecurity awareness 2.0” and how to best leverage metrics in order to provide insights that are unique and actionable for your organization.</itunes:subtitle>
      <itunes:keywords>cybersecurity, information security, ux, user experience</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
    </item>
    <item>
      <title>Everything You Wanted to Know About Security But Were Too Afraid to Ask with Ira Winkler</title>
      <itunes:episode>18</itunes:episode>
      <podcast:episode>18</podcast:episode>
      <itunes:title>Everything You Wanted to Know About Security But Were Too Afraid to Ask with Ira Winkler</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">61d2233c-8626-4a9b-b26d-8002cbb6c1fe</guid>
      <link>https://share.transistor.fm/s/8ecc5ddb</link>
      <description>
        <![CDATA[<p>In this episode, we talk about:</p><ul><li>Building a system in a way that, as Ira says, “a user cannot initiate a loss”</li><li>What designers need to know about prevention, detection, and reaction when it comes to security</li><li>What we can learn from safety science</li><li>How designers can get a seat at the table when it comes to human security engineering</li></ul><p>Ira Winkler is the founder of Secure Mentem and Chief Information Security Officer at Skyline Technology Solutions. He is the author of seven books on security, the latest of which is <em>You Can Stop Stupid</em> (discussed in this episode). He also has a new book in the works, <em>Security Awareness for Dummies</em>, which will be available in 2022.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode, we talk about:</p><ul><li>Building a system in a way that, as Ira says, “a user cannot initiate a loss”</li><li>What designers need to know about prevention, detection, and reaction when it comes to security</li><li>What we can learn from safety science</li><li>How designers can get a seat at the table when it comes to human security engineering</li></ul><p>Ira Winkler is the founder of Secure Mentem and Chief Information Security Officer at Skyline Technology Solutions. He is the author of seven books on security, the latest of which is <em>You Can Stop Stupid</em> (discussed in this episode). He also has a new book in the works, <em>Security Awareness for Dummies</em>, which will be available in 2022.</p>]]>
      </content:encoded>
      <pubDate>Wed, 15 Sep 2021 06:00:00 -0400</pubDate>
      <author>Voice+Code</author>
      <enclosure url="https://media.transistor.fm/8ecc5ddb/b6bdd28a.mp3" length="39969153" type="audio/mpeg"/>
      <itunes:author>Voice+Code</itunes:author>
      <itunes:duration>2496</itunes:duration>
      <itunes:summary>Ira Winkler, founder of Secure Mentem and Chief Information Security Officer at Skyline Technology Solutions, talks about how thoughtfully-designed user experiences can build in security prevention, detection, and reaction.</itunes:summary>
      <itunes:subtitle>Ira Winkler, founder of Secure Mentem and Chief Information Security Officer at Skyline Technology Solutions, talks about how thoughtfully-designed user experiences can build in security prevention, detection, and reaction.</itunes:subtitle>
      <itunes:keywords>cybersecurity, information security, ux, user experience</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
    </item>
    <item>
      <title>IoT Devices: Establishing Trust through Transparency with Matt Wyckhouse</title>
      <itunes:episode>17</itunes:episode>
      <podcast:episode>17</podcast:episode>
      <itunes:title>IoT Devices: Establishing Trust through Transparency with Matt Wyckhouse</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">51dabce4-16f8-4832-95e0-4d0e633f58f0</guid>
      <link>https://share.transistor.fm/s/b3f4e14c</link>
      <description>
        <![CDATA[<p>In this episode, we talk about:</p><ul><li>The security risks associated with IoT devices.</li><li>Why IoT devices can be less secure than, for example, a mobile device.</li><li>Supply chain security.</li><li>How UX designers can more effectively communicate risk to their users.</li></ul><p><br></p><p>Prior to founding <a href="https://www.finitestate.io">Finite State</a>, Matt spent 15 years leading the research and development of advanced solutions to some of the hardest problems in cybersecurity, with experience across the spectrum of offensive and defensive cyber operations. Notably, he was the technical founder and CTO of Battelle's Cyber Innovations business unit. Throughout his career, Matt has spearheaded complex national security programs ranging from detection of malicious integrated circuits in the supply chain to next-generation intrusion detection systems for low-power embedded systems. Matt directed numerous intelligence programs related to the security of embedded and IoT devices and has been a speaker on the subject at events around the world.</p><p>You can follow Finite State on <a href="https://twitter.com/FiniteStateInc">Twitter</a> and <a href="https://www.linkedin.com/company/finitestate">LinkedIn</a>.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode, we talk about:</p><ul><li>The security risks associated with IoT devices.</li><li>Why IoT devices can be less secure than, for example, a mobile device.</li><li>Supply chain security.</li><li>How UX designers can more effectively communicate risk to their users.</li></ul><p><br></p><p>Prior to founding <a href="https://www.finitestate.io">Finite State</a>, Matt spent 15 years leading the research and development of advanced solutions to some of the hardest problems in cybersecurity, with experience across the spectrum of offensive and defensive cyber operations. Notably, he was the technical founder and CTO of Battelle's Cyber Innovations business unit. Throughout his career, Matt has spearheaded complex national security programs ranging from detection of malicious integrated circuits in the supply chain to next-generation intrusion detection systems for low-power embedded systems. Matt directed numerous intelligence programs related to the security of embedded and IoT devices and has been a speaker on the subject at events around the world.</p><p>You can follow Finite State on <a href="https://twitter.com/FiniteStateInc">Twitter</a> and <a href="https://www.linkedin.com/company/finitestate">LinkedIn</a>.</p>]]>
      </content:encoded>
      <pubDate>Tue, 24 Aug 2021 17:17:44 -0400</pubDate>
      <author>Voice+Code</author>
      <enclosure url="https://media.transistor.fm/b3f4e14c/15c9f37f.mp3" length="42410001" type="audio/mpeg"/>
      <itunes:author>Voice+Code</itunes:author>
      <itunes:duration>2648</itunes:duration>
      <itunes:summary>Because of the lack of transparency about what goes into IoT devices, enterprises—and consumers—often have to make security decisions based on what brand they trust the most. Matt Wyckhouse, founder of Finite State, explains why this situation is problematic and provides more effective solutions for assessing and communicating the risk around IoT devices.</itunes:summary>
      <itunes:subtitle>Because of the lack of transparency about what goes into IoT devices, enterprises—and consumers—often have to make security decisions based on what brand they trust the most. Matt Wyckhouse, founder of Finite State, explains why this situation is problematic</itunes:subtitle>
      <itunes:keywords>cybersecurity, information security, ux, user experience</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
    </item>
    <item>
      <title>How an Anthropologist Approaches a Security Breach with Patricia Ensworth</title>
      <itunes:episode>16</itunes:episode>
      <podcast:episode>16</podcast:episode>
      <itunes:title>How an Anthropologist Approaches a Security Breach with Patricia Ensworth</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">cea4659b-c9ea-476c-879f-4235eb739db6</guid>
      <link>https://share.transistor.fm/s/28ed3137</link>
      <description>
        <![CDATA[<p>In this episode, we talk about:</p><ul><li>How anthropology can help security teams uncover the “why” behind security breaches.</li><li>Why it’s important for designers to familiarize themselves with information security risk management.</li><li>What designers should know about quality assurance applied to security.</li><li>How to fight for the time needed to build security into products.</li></ul><p><br>Patricia Ensworth is a business anthropologist whose work focuses on the human factors affecting the development and maintenance of innovative products, services, and systems. As a technology project manager at leading global financial services firms (Merrill Lynch, Moody’s, UBS, Citigroup, Morgan Stanley), she came to specialize in risk analysis and quality assurance, most recently in relation to cybersecurity vulnerabilities. Her consulting firm <a href="https://www.Harborlightmanagement.com">Harborlight Management Services LLC</a> provides organizational research and management training to clients in a broad range of industries, as well as government agencies and non-profits. She is the author of <em>The Accidental Project Manager: Surviving the Transition from Techie to Manager</em> (Wiley 2001) and numerous technical articles about multicultural teamwork in software engineering. She is also an Adjunct Assistant Professor teaching in a graduate business degree program at New York University.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode, we talk about:</p><ul><li>How anthropology can help security teams uncover the “why” behind security breaches.</li><li>Why it’s important for designers to familiarize themselves with information security risk management.</li><li>What designers should know about quality assurance applied to security.</li><li>How to fight for the time needed to build security into products.</li></ul><p><br>Patricia Ensworth is a business anthropologist whose work focuses on the human factors affecting the development and maintenance of innovative products, services, and systems. As a technology project manager at leading global financial services firms (Merrill Lynch, Moody’s, UBS, Citigroup, Morgan Stanley), she came to specialize in risk analysis and quality assurance, in recent years often in relation to cybersecurity vulnerabilities. Her consulting firm <a href="https://www.Harborlightmanagement.com">Harborlight Management Services LLC</a> provides organizational research and management training to clients in a broad range of industries, as well as government agencies and non-profits. She is the author of <em>The Accidental Project Manager: Surviving the Transition from Techie to Manager</em> (Wiley 2001) and numerous technical articles about multicultural teamwork in software engineering. She is also an Adjunct Assistant Professor teaching in a graduate business degree program at New York University.</p>]]>
      </content:encoded>
      <pubDate>Wed, 11 Aug 2021 07:24:55 -0400</pubDate>
      <author>Voice+Code</author>
      <enclosure url="https://media.transistor.fm/28ed3137/2b8a9224.mp3" length="38958016" type="audio/mpeg"/>
      <itunes:author>Voice+Code</itunes:author>
      <itunes:duration>2432</itunes:duration>
      <itunes:summary>Patricia Ensworth talks about how her training as a business anthropologist has influenced how she approaches security breaches, why we need to carefully consider the project management style we use, and how quality assurance practices can help us “establish a framework of continuous adaptation” when it comes to security.</itunes:summary>
      <itunes:subtitle>Patricia Ensworth talks about how her training as a business anthropologist has influenced how she approaches security breaches, why we need to carefully consider the project management style we use, and how quality assurance practices can help us “establ</itunes:subtitle>
      <itunes:keywords>cybersecurity, information security, ux, user experience</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
    </item>
    <item>
      <title>Where do "people" fit in with process and technology? with Dr. Nikki Robinson</title>
      <itunes:episode>15</itunes:episode>
      <podcast:episode>15</podcast:episode>
      <itunes:title>Where do "people" fit in with process and technology? with Dr. Nikki Robinson</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">b5417dbf-3e00-4593-b5a3-0a4d3fcf6c74</guid>
      <link>https://share.transistor.fm/s/15eb5549</link>
      <description>
        <![CDATA[<p>In this episode, we talk about:</p><p><br></p><ul><li>Why human factors is important when it comes to cybersecurity and why it’s still a relatively unexplored topic.</li><li>The importance of communication and empathy in cybersecurity.</li><li>Dr. Robinson’s research around low and medium vulnerabilities—and how their potential use in combination warrants additional attention.</li><li>Dr. Robinson’s most recent research around “vulnerability chaining blindness” and why the words we use and a shared understanding are crucial for making progress in cybersecurity.</li></ul><p>Dr. Nikki Robinson is a Security Architect and holds a Doctorate of Science in Cybersecurity, as well as several industry certifications (CISSP, CEH, MCITP, etc.). She is currently working on a PhD in Human Factors, with research blending psychology and cybersecurity. With a background in IT Operations and Engineering, she moved into security several years ago.</p><p><br></p><ul><li>Connect with Dr. Nikki Robinson on LinkedIn</li><li>Listen to Dr. Nikki Robinson’s podcast: <a href="https://podcasts.apple.com/us/podcast/resilient-cyber/id1555928024">The Resilient Cyber Podcast</a></li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode, we talk about:</p><p><br></p><ul><li>Why human factors is important when it comes to cybersecurity and why it’s still a relatively unexplored topic.</li><li>The importance of communication and empathy in cybersecurity.</li><li>Dr. Robinson’s research around low and medium vulnerabilities—and how their potential use in combination warrants additional attention.</li><li>Dr. Robinson’s most recent research around “vulnerability chaining blindness” and why the words we use and a shared understanding are crucial for making progress in cybersecurity.</li></ul><p>Dr. Nikki Robinson is a Security Architect and holds a Doctorate of Science in Cybersecurity, as well as several industry certifications (CISSP, CEH, MCITP, etc.). She is currently working on a PhD in Human Factors, with research blending psychology and cybersecurity. With a background in IT Operations and Engineering, she moved into security several years ago.</p><p><br></p><ul><li>Connect with Dr. Nikki Robinson on LinkedIn</li><li>Listen to Dr. Nikki Robinson’s podcast: <a href="https://podcasts.apple.com/us/podcast/resilient-cyber/id1555928024">The Resilient Cyber Podcast</a></li></ul>]]>
      </content:encoded>
      <pubDate>Wed, 14 Jul 2021 07:51:50 -0400</pubDate>
      <author>Voice+Code</author>
      <enclosure url="https://media.transistor.fm/15eb5549/9bdc58f7.mp3" length="28538904" type="audio/mpeg"/>
      <itunes:author>Voice+Code</itunes:author>
      <itunes:duration>1781</itunes:duration>
      <itunes:summary>As Dr. Nikki Robinson explains, when it comes to the “people, process, and technology” framework, we are still trying to get the “process” and the “technology” parts right—much less the part about “people.”</itunes:summary>
      <itunes:subtitle>As Dr. Nikki Robinson explains, when it comes to the “people, process, and technology” framework, we are still trying to get the “process” and the “technology” parts right—much less the part about “people.”</itunes:subtitle>
      <itunes:keywords>cybersecurity, information security, ux, user experience</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
    </item>
    <item>
      <title>Adapting the Human Factors Analysis and Classification System to Cybersecurity with Robin Bylenga</title>
      <itunes:episode>14</itunes:episode>
      <podcast:episode>14</podcast:episode>
      <itunes:title>Adapting the Human Factors Analysis and Classification System to Cybersecurity with Robin Bylenga</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">741f27c4-0a92-4943-bede-859d1939e518</guid>
      <link>https://share.transistor.fm/s/2788cd46</link>
      <description>
        <![CDATA[<p>During this episode, we talk about:</p><ul><li>How an insider threat at her own company led Robin into cybersecurity.</li><li>Why looking at the human side of errors and using a framework like HFACS can help identify the root cause of the problem.</li><li>How Robin’s research challenges the idea that “humans are the weakest link.”</li><li>How HFACS can be applied to cybersecurity’s existing frameworks.</li></ul><p><br></p><p>Robin Bylenga is a seasoned client-facing expert, having drawn her initial skills early in her career as a flight attendant. Prior to entering cybersecurity, she was the CEO and Founder of Pedal Chic, the first women-specific bike shop in North America. She built the brand, won national awards, and designed a full line of bicycles for a niche market. Then her company suffered an insider threat attack. That experience changed the course of her life and brought her to a new career and the opportunity to adapt the Human Factors Analysis and Classification System (HFACS) framework to cyber.</p><p>Learn more about Robin's research at <a href="https://hfacs-cyber.com">https://hfacs-cyber.com</a></p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>During this episode, we talk about:</p><ul><li>How an insider threat at her own company led Robin into cybersecurity.</li><li>Why looking at the human side of errors and using a framework like HFACS can help identify the root cause of the problem.</li><li>How Robin’s research challenges the idea that “humans are the weakest link.”</li><li>How HFACS can be applied to cybersecurity’s existing frameworks.</li></ul><p><br></p><p>Robin Bylenga is a seasoned client-facing expert, having drawn her initial skills early in her career as a flight attendant. Prior to entering cybersecurity, she was the CEO and Founder of Pedal Chic, the first women-specific bike shop in North America. She built the brand, won national awards, and designed a full line of bicycles for a niche market. Then her company suffered an insider threat attack. That experience changed the course of her life and brought her to a new career and the opportunity to adapt the Human Factors Analysis and Classification System (HFACS) framework to cyber.</p><p>Learn more about Robin's research at <a href="https://hfacs-cyber.com">https://hfacs-cyber.com</a></p>]]>
      </content:encoded>
      <pubDate>Wed, 30 Jun 2021 08:16:54 -0400</pubDate>
      <author>Voice+Code</author>
      <enclosure url="https://media.transistor.fm/2788cd46/d6d6b922.mp3" length="33565941" type="audio/mpeg"/>
      <itunes:author>Voice+Code</itunes:author>
      <itunes:duration>2095</itunes:duration>
      <itunes:summary>The Human Factors Analysis and Classification System (HFACS) is a framework designed to account for human errors. It was originally developed by Dr. Scott Shappell and Dr. Doug Wiegmann and used to analyze aviation accidents within the US Air Force. Robin Bylenga has adapted the framework to cybersecurity.</itunes:summary>
      <itunes:subtitle>The Human Factors Analysis and Classification System (HFACS) is a framework designed to account for human errors. It was originally developed by Dr. Scott Shappell and Dr. Doug Wiegmann and used to analyze aviation accidents within the US Air Force. Robin</itunes:subtitle>
      <itunes:keywords>cybersecurity, information security, ux, user experience</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
    </item>
    <item>
      <title>Avoid the Temptation to Start Cybersecurity Conversations with “You’re Doing It Wrong” with Ryan Cloutier</title>
      <itunes:episode>13</itunes:episode>
      <podcast:episode>13</podcast:episode>
      <itunes:title>Avoid the Temptation to Start Cybersecurity Conversations with “You’re Doing It Wrong” with Ryan Cloutier</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">9de50aeb-7c06-482b-a1ad-7ecbe23aeb0b</guid>
      <link>https://share.transistor.fm/s/a07aab07</link>
      <description>
        <![CDATA[<p>In this episode, we talk about:</p><ul><li>How security experts can more effectively communicate with end users.</li><li>The issue of delayed consequences in the digital realm and how that impacts how people behave.</li><li>The role accountability plays in improving information security.</li></ul><p><br>Ryan Cloutier is the principal security consultant for <a href="https://securitystudio.com">SecurityStudio</a>. He is an experienced IT/cybersecurity professional with over 15 years of experience developing cybersecurity programs for Fortune 500 organizations. Ryan is a virtual Chief Information Security Officer for K-12 districts across the country, a Certified Information Systems Security Professional (CISSP), and is proficient in cloud security, DevOps and SecOps methodologies, security policy, process, audit, compliance, network security, and application security architecture. Ryan also co-hosts a weekly security podcast and has been named among the top 100 most influential people in cybersecurity.</p><p><br>You can also find Ryan:</p><ul><li>On Twitter @cloutiersec</li><li>On <a href="https://securityshitshow.com">The Security Shitshow</a></li><li>During the episode, Ryan mentions <a href="https://s2me.io">S2me</a> (by SecurityStudio), a free security risk assessment resource</li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode, we talk about:</p><ul><li>How security experts can more effectively communicate with end users.</li><li>The issue of delayed consequences in the digital realm and how that impacts how people behave.</li><li>The role accountability plays in improving information security.</li></ul><p><br>Ryan Cloutier is the principal security consultant for <a href="https://securitystudio.com">SecurityStudio</a>. He is an experienced IT/cybersecurity professional with over 15 years of experience developing cybersecurity programs for Fortune 500 organizations. Ryan is a virtual Chief Information Security Officer for K-12 districts across the country, a Certified Information Systems Security Professional (CISSP), and is proficient in cloud security, DevOps and SecOps methodologies, security policy, process, audit, compliance, network security, and application security architecture. Ryan also co-hosts a weekly security podcast and has been named among the top 100 most influential people in cybersecurity.</p><p><br>You can also find Ryan:</p><ul><li>On Twitter @cloutiersec</li><li>On <a href="https://securityshitshow.com">The Security Shitshow</a></li><li>During the episode, Ryan mentions <a href="https://s2me.io">S2me</a> (by SecurityStudio), a free security risk assessment resource</li></ul>]]>
      </content:encoded>
      <pubDate>Wed, 16 Jun 2021 07:45:48 -0400</pubDate>
      <author>Voice+Code</author>
      <enclosure url="https://media.transistor.fm/a07aab07/10b60a33.mp3" length="37865979" type="audio/mpeg"/>
      <itunes:author>Voice+Code</itunes:author>
      <itunes:duration>2364</itunes:duration>
      <itunes:summary>Cybersecurity can be an intimidating field to people outside the industry. As Ryan Cloutier explains, when security professionals begin conversations with users with "you're doing it wrong," they are already at a disadvantage. In fact, this kind of dynamic only serves to further ostracize end users.</itunes:summary>
      <itunes:subtitle>Cybersecurity can be an intimidating field to people outside the industry. As Ryan Cloutier explains, when security professionals find themselves starting off conversations with users with "you're doing it wrong," they are already starting off at a disadv</itunes:subtitle>
      <itunes:keywords>cybersecurity, information security, ux, user experience</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
    </item>
    <item>
      <title>Cybersecurity Risk Management for UX Practitioners with Natalie Hill</title>
      <itunes:episode>12</itunes:episode>
      <podcast:episode>12</podcast:episode>
      <itunes:title>Cybersecurity Risk Management for UX Practitioners with Natalie Hill</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">637a6e95-df7b-4a22-b582-7ac6c2fe1048</guid>
      <link>https://share.transistor.fm/s/b41220cf</link>
      <description>
        <![CDATA[<p>In this episode we talk about:</p><ul><li>Thinking about cybersecurity risk from a UX practitioner’s perspective.</li><li>Balancing ease of use while not introducing unnecessary risk.</li><li>Building personas and scenarios for bad actors so you can make conscious decisions about how controls might be circumvented.</li><li>The importance of content strategy and collaborating with UX writers.</li><li>Tips for conducting user research when it’s difficult to get access to end users.</li></ul><p><a href="https://leafygreenmedia.com"><strong>Natalie Hill</strong></a> is a senior product designer with over 20 years of professional experience and a Master of Science in Information Studies. Her niche is enterprise UX. She loves finding elegant solutions to complex design problems and understanding the psychology that drives human behavior. Natalie considers cybersecurity one of the most important things in the world and has spent the last four years designing network, web, and email security solutions.</p><p><br></p><p>Natalie is a seasoned guitar player who enjoys playing live with a band in non-pandemic times. She is also on the board of directors of the nonprofit <a href="https://girlsrockaustin.org">Girls Rock Austin</a>, an organization dedicated to empowering girls, transgender, and non-binary youth through music education, mentorship, and self-care.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode we talk about:</p><ul><li>Thinking about cybersecurity risk from a UX practitioner’s perspective.</li><li>Balancing ease of use while not introducing unnecessary risk.</li><li>Building personas and scenarios for bad actors so you can make conscious decisions about how controls might be circumvented.</li><li>The importance of content strategy and collaborating with UX writers.</li><li>Tips for conducting user research when it’s difficult to get access to end users.</li></ul><p><a href="https://leafygreenmedia.com"><strong>Natalie Hill</strong></a> is a senior product designer with over 20 years of professional experience and a Master of Science in Information Studies. Her niche is enterprise UX. She loves finding elegant solutions to complex design problems and understanding the psychology that drives human behavior. Natalie considers cybersecurity one of the most important things in the world and has spent the last four years designing network, web, and email security solutions.</p><p><br></p><p>Natalie is a seasoned guitar player who enjoys playing live with a band in non-pandemic times. She is also on the board of directors of the nonprofit <a href="https://girlsrockaustin.org">Girls Rock Austin</a>, an organization dedicated to empowering girls, transgender, and non-binary youth through music education, mentorship, and self-care.</p>]]>
      </content:encoded>
      <pubDate>Wed, 19 May 2021 07:48:18 -0400</pubDate>
      <author>Voice+Code</author>
      <enclosure url="https://media.transistor.fm/b41220cf/415c1ea4.mp3" length="36269531" type="audio/mpeg"/>
      <itunes:author>Voice+Code</itunes:author>
      <itunes:duration>2264</itunes:duration>
      <itunes:summary>UX and cybersecurity have a lot in common: it can be hard to get buy-in from stakeholders, and both are often thought about only after something goes wrong. Natalie Hill, a senior product designer with expertise building network, web, and email security solutions, talks about understanding information security risk and balancing ease of use without introducing unnecessary risk, and shares tips for working with teams and getting stakeholder buy-in.</itunes:summary>
      <itunes:subtitle>UX and cybersecurity have a lot of things in common: it can be hard to get buy-in from stakeholders and both are often thought about only after something goes wrong. Natalie Hill, a senior product designer with expertise building network, web, and email s</itunes:subtitle>
      <itunes:keywords>cybersecurity, information security, ux, user experience</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
    </item>
    <item>
      <title>Expectation vs. Outcome: Accounting for Human Behavior with Dr. Alexander Stein</title>
      <itunes:episode>11</itunes:episode>
      <podcast:episode>11</podcast:episode>
      <itunes:title>Expectation vs. Outcome: Accounting for Human Behavior with Dr. Alexander Stein</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">f163e104-34e9-4235-a286-b8d1fee02c5d</guid>
      <link>https://share.transistor.fm/s/ce58aecf</link>
      <description>
        <![CDATA[<p><strong>During this episode, we talk about:</strong></p><ul><li>Why looking for a silver bullet for cybersecurity is hopeless. Like any human issue, it is multi-dimensional and complex.</li><li>Expectations versus outcomes: how we must take into account how “things will play out when you involve people.”</li><li>“Changing how people think and behave is complicated, non-linear, painstaking, and does not conform to your expectations.” Despite this, understanding and accounting for people when it comes to cybersecurity is critically important.</li><li>What organizations are missing and what organizations are doing well when it comes to accounting for people in cybersecurity.</li></ul><p><br><strong>Alexander Stein, PhD</strong> is an expert in human behavior and decision-making, and founder and managing principal of Dolus Advisors, a psychodynamic management consultancy that advises CEOs, senior management teams, and boards in issues involving leadership, culture, governance, ethics, risk, and other organizational matters with complex psychological underpinnings. Dr. Stein is an internationally regarded authority on human risk and the psychodynamics of fraud. He is frequently engaged as a specialist advisor in multi-jurisdictional, corruption, and executive misconduct matters and also helps companies mitigate and address human-factor vulnerabilities in cybersecurity. He also consults with companies that develop and deliver technologies that assume decision-making functions in human affairs to mitigate unintended consequences to people, organizations, and society. Dr. Stein is a widely published and cited writer and thought leader, currently a regular contributor to Forbes on the psychology of leadership and misbehavior in business, and a frequent podcast and webinar guest, on-camera expert commentator, and keynote speaker and panelist.</p><p><strong>Find more information on Dr. Stein and Dolus Advisors:</strong></p><ul><li><a href="https://www.dolusadvisors.com">Dolus Advisors</a></li><li><a href="https://www.dolusadvisors.com/subscribe">The Briefing</a>, Dolus Advisors’ periodic digest of thought leadership and analysis</li><li><a href="https://www.linkedin.com/in/alexandersteinphd">Dr. Stein on LinkedIn</a></li><li><a href="https://www.linkedin.com/in/dolus-advisors">Dolus Advisors on LinkedIn</a></li><li><a href="https://podcasts.apple.com/us/podcast/humans-and-technology-a-complicated-and-fascinating-pair">Humans and technology: A complicated and fascinating pair</a>, RSA Conference Podcast, Episode 33, March 3, 2020</li><li><a href="https://www.youtube.com/watch?v=5NHwEtJlsmo">To Phish or Not to Phish? That is the Question</a>, Wizer Training Webinar, January 13, 2021</li><li><a href="https://www.forbes.com/sites/alexanderstein/2019/01/06/the-pitfalls-of-outsourcing-self-awareness-to-ai-heres-what-leaders-need-to-know/">Pitfalls of Outsourcing Self-Awareness to AI</a>, Forbes, January 6, 2019</li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p><strong>During this episode, we talk about:</strong></p><ul><li>Why looking for a silver bullet for cybersecurity is hopeless. Like any human issue, it is multi-dimensional and complex.</li><li>Expectations versus outcomes: how we must take into account how “things will play out when you involve people.”</li><li>“Changing how people think and behave is complicated, non-linear, painstaking, and does not conform to your expectations.” Despite this, understanding and accounting for people when it comes to cybersecurity is critically important.</li><li>What organizations are missing and what organizations are doing well when it comes to accounting for people in cybersecurity.</li></ul><p><br><strong>Alexander Stein, PhD</strong> is an expert in human behavior and decision-making, and founder and managing principal of Dolus Advisors, a psychodynamic management consultancy that advises CEOs, senior management teams, and boards in issues involving leadership, culture, governance, ethics, risk, and other organizational matters with complex psychological underpinnings. Dr. Stein is an internationally regarded authority on human risk and the psychodynamics of fraud. He is frequently engaged as a specialist advisor in multi-jurisdictional, corruption, and executive misconduct matters and also helps companies mitigate and address human-factor vulnerabilities in cybersecurity. He also consults with companies that develop and deliver technologies that assume decision-making functions in human affairs to mitigate unintended consequences to people, organizations, and society. Dr. Stein is a widely published and cited writer and thought leader, currently a regular contributor to Forbes on the psychology of leadership and misbehavior in business, and a frequent podcast and webinar guest, on-camera expert commentator, and keynote speaker and panelist.</p><p><strong>Find more information on Dr. Stein and Dolus Advisors:</strong></p><ul><li><a href="https://www.dolusadvisors.com">Dolus Advisors</a></li><li><a href="https://www.dolusadvisors.com/subscribe">The Briefing</a>, Dolus Advisors’ periodic digest of thought leadership and analysis</li><li><a href="https://www.linkedin.com/in/alexandersteinphd">Dr. Stein on LinkedIn</a></li><li><a href="https://www.linkedin.com/in/dolus-advisors">Dolus Advisors on LinkedIn</a></li><li><a href="https://podcasts.apple.com/us/podcast/humans-and-technology-a-complicated-and-fascinating-pair">Humans and technology: A complicated and fascinating pair</a>, RSA Conference Podcast, Episode 33, March 3, 2020</li><li><a href="https://www.youtube.com/watch?v=5NHwEtJlsmo">To Phish or Not to Phish? That is the Question</a>, Wizer Training Webinar, January 13, 2021</li><li><a href="https://www.forbes.com/sites/alexanderstein/2019/01/06/the-pitfalls-of-outsourcing-self-awareness-to-ai-heres-what-leaders-need-to-know/">Pitfalls of Outsourcing Self-Awareness to AI</a>, Forbes, January 6, 2019</li></ul>]]>
      </content:encoded>
      <pubDate>Wed, 05 May 2021 08:05:55 -0400</pubDate>
      <author>Voice+Code</author>
      <enclosure url="https://media.transistor.fm/ce58aecf/7c36ec4a.mp3" length="34365715" type="audio/mpeg"/>
      <itunes:author>Voice+Code</itunes:author>
      <itunes:duration>2145</itunes:duration>
      <itunes:summary>You launch the product, enact the policy, or put the control in place and yet...people aren’t behaving the way you expect. As Dr. Alexander Stein, an expert in human behavior and decision-making and founder and managing principal of Dolus Advisors says, “people just don’t cooperate the way you want them to…there are lots of risk management elements that are beautifully architected but there is a delta between theory and practice.”</itunes:summary>
      <itunes:subtitle>You launch the product, enact the policy, or put the control in place and yet...people aren’t behaving the way you expect. As Dr. Alexander Stein, an expert in human behavior and decision-making and founder and managing principal of Dolus Advisors says, “</itunes:subtitle>
      <itunes:keywords>cybersecurity, information security, ux, user experience</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
    </item>
    <item>
      <title>How Do You Get People to Care About Cybersecurity? with Laura Nespoli</title>
      <itunes:episode>10</itunes:episode>
      <podcast:episode>10</podcast:episode>
      <itunes:title>How Do You Get People to Care About Cybersecurity? with Laura Nespoli</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">b4cbeb8f-d673-433c-9dd2-4a8618bb04ea</guid>
      <link>https://share.transistor.fm/s/2ebb05b7</link>
      <description>
        <![CDATA[<p>Laura Nespoli is founder of Meshin Movement, a brand strategy consultancy. Laura has spent her career serving as a strategic problem-solver and brand storyteller across the sales and marketing spectrum in many facets, from agency to client-side, media to creative, and market research to integrated marketing planning. Her professional focus is helping brands and teams reveal business opportunity and advantage, while her passion is rooted in inspiring ideas that serve the world for the greater good.</p><p>During this episode we talk about:</p><ul><li>Incorporating cybersecurity into the “fabric of your organization’s brand.”</li><li>How to create meaning and understanding that leads to new behavior.</li><li>The Fogg Behavior Model: motivation, ability, and a prompt must converge for a behavior to happen.</li><li>How to deal with our natural aversion to complexity.</li><li>How purpose creates a more unified understanding of what everyone is working toward and helps people put more meaning around security-related tasks that might otherwise be perceived as meaningless.</li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Laura Nespoli is founder of Meshin Movement, a brand strategy consultancy. Laura has spent her career serving as a strategic problem-solver and brand storyteller across the sales and marketing spectrum in many facets, from agency to client-side, media to creative, and market research to integrated marketing planning. Her professional focus is helping brands and teams reveal business opportunity and advantage, while her passion is rooted in inspiring ideas that serve the world for the greater good.</p><p>During this episode we talk about:</p><ul><li>Incorporating cybersecurity into the “fabric of your organization’s brand.”</li><li>How to create meaning and understanding that leads to new behavior.</li><li>The Fogg Behavior Model: motivation, ability, and a prompt must converge for a behavior to happen.</li><li>How to deal with our natural aversion to complexity.</li><li>How purpose creates a more unified understanding of what everyone is working toward and helps people put more meaning around security-related tasks that might otherwise be perceived as meaningless.</li></ul>]]>
      </content:encoded>
      <pubDate>Wed, 24 Feb 2021 09:50:24 -0500</pubDate>
      <author>Voice+Code</author>
      <enclosure url="https://media.transistor.fm/2ebb05b7/98630df7.mp3" length="27721500" type="audio/mpeg"/>
      <itunes:author>Voice+Code</itunes:author>
      <itunes:duration>1730</itunes:duration>
      <itunes:summary>Have you ever thought, why don’t people care about cybersecurity? For this episode, I wanted to take a different approach and learn from a branding expert who focuses on rallying people around a common purpose and activating behavior change. Laura Nespoli, founder of Meshin Movement, a brand strategy consultancy, helps us approach cybersecurity awareness from a different angle: encouraging behavior change by not only providing knowledge but rallying people around a common purpose.</itunes:summary>
      <itunes:subtitle>Have you ever thought, why don’t people care about cybersecurity? For this episode, I wanted to take a different approach and learn from a branding expert who focuses on rallying people around a common purpose and activating behavior change. Laura Nespoli</itunes:subtitle>
      <itunes:keywords>cybersecurity, information security, ux, user experience</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
    </item>
    <item>
      <title>We All Have Been the “Stupid User” at Some Point with Dr. Margaret Cunningham</title>
      <itunes:episode>9</itunes:episode>
      <podcast:episode>9</podcast:episode>
      <itunes:title>We All Have Been the “Stupid User” at Some Point with Dr. Margaret Cunningham</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">780ff030-9852-44c7-92ad-3183503833e7</guid>
      <link>https://share.transistor.fm/s/0d7df5af</link>
      <description>
        <![CDATA[<p>Dr. Margaret Cunningham is an experimental psychologist and the Principal Research Scientist for Human Behavior at Forcepoint’s X-Lab.  In this role, she serves as the behavioral science subject matter expert in an interdisciplinary security team <a href="https://www.forcepoint.com/company/biographies/dr-margaret-cunningham">driving the development of human-centric security solutions</a>. Previously, she supported the Human Systems Integration branch of The Department of Homeland Security.  </p><p><br></p><p>In this episode, we talk about:</p><ul><li>Why saying “people are the weakest link” is not a productive mindset when it comes to cybersecurity.</li><li>How we can thoughtfully create systems/designs that mitigate the risk of human limitations.</li><li>The Human Factors Analysis and Classification System (whether you are in UX or cybersecurity, you will likely find this framework interesting).</li><li>The nuances around errors and rulebreaking and how we can, ideally, learn from our employees’ behavior to make the systems <em>and</em> the organization better.</li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Dr. Margaret Cunningham is an experimental psychologist and the Principal Research Scientist for Human Behavior at Forcepoint’s X-Lab.  In this role, she serves as the behavioral science subject matter expert in an interdisciplinary security team <a href="https://www.forcepoint.com/company/biographies/dr-margaret-cunningham">driving the development of human-centric security solutions</a>. Previously, she supported the Human Systems Integration branch of The Department of Homeland Security.  </p><p><br></p><p>In this episode, we talk about:</p><ul><li>Why saying “people are the weakest link” is not a productive mindset when it comes to cybersecurity.</li><li>How we can thoughtfully create systems/designs that mitigate the risk of human limitations.</li><li>The Human Factors Analysis and Classification System (whether you are in UX or cybersecurity, you will likely find this framework interesting).</li><li>The nuances around errors and rulebreaking and how we can, ideally, learn from our employees’ behavior to make the systems <em>and</em> the organization better.</li></ul>]]>
      </content:encoded>
      <pubDate>Wed, 10 Feb 2021 07:11:57 -0500</pubDate>
      <author>Voice+Code</author>
      <enclosure url="https://media.transistor.fm/0d7df5af/7929999f.mp3" length="33326940" type="audio/mpeg"/>
      <itunes:author>Voice+Code</itunes:author>
      <itunes:duration>2080</itunes:duration>
      <itunes:summary>One of the biggest obstacles in cybersecurity? Labeling people as the weakest link, says Dr. Margaret Cunningham, experimental psychologist and Principal Research Scientist for Human Behavior at Forcepoint’s X-Lab. She shares why this approach is unproductive, the role human factors plays in her research, as well as human performance, mistakes, and rulebreaking.</itunes:summary>
      <itunes:subtitle>One of the biggest obstacles in cybersecurity? Labeling people as the weakest link, says Dr. Margaret Cunningham, experimental psychologist and Principal Research Scientist for Human Behavior at Forcepoint’s X-Lab. She shares why this approach is unproduc</itunes:subtitle>
      <itunes:keywords>cybersecurity, information security, ux, user experience</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
    </item>
    <item>
      <title>Using Analogies to Help People Understand Information Security with Brian Murphy</title>
      <itunes:episode>8</itunes:episode>
      <podcast:episode>8</podcast:episode>
      <itunes:title>Using Analogies to Help People Understand Information Security with Brian Murphy</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">9f927b15-b415-4fc6-849c-f393663c3ac2</guid>
      <link>https://share.transistor.fm/s/cbb71bf4</link>
      <description>
        <![CDATA[<p>Brian Murphy, a security specialist at GreyCastle Security, is a technology, information security, and risk management professional. He assists with the development and implementation of cybersecurity solutions for a variety of industries. Brian has knowledge of PCI, SOX, GLBA compliance requirements, as well as ISO and NIST standards and regulations.</p><p><br></p><p>On this episode we talk about:</p><ul><li>How we are constantly doing risk assessments in our everyday life. At least, we should be.</li><li>How using analogies and stories help people connect with something new, like cybersecurity.</li><li>Shifting the mindset to ensure the cybersecurity team's goals tie back to the business’ goals.</li><li>The importance of culture and providing an environment where employees and the cybersecurity team are constantly learning.</li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Brian Murphy, a security specialist at GreyCastle Security, is a technology, information security, and risk management professional. He assists with the development and implementation of cybersecurity solutions for a variety of industries. Brian has knowledge of PCI, SOX, GLBA compliance requirements, as well as ISO and NIST standards and regulations.</p><p><br></p><p>On this episode we talk about:</p><ul><li>How we are constantly doing risk assessments in our everyday life. At least, we should be.</li><li>How using analogies and stories help people connect with something new, like cybersecurity.</li><li>Shifting the mindset to ensure the cybersecurity team's goals tie back to the business’ goals.</li><li>The importance of culture and providing an environment where employees and the cybersecurity team are constantly learning.</li></ul>]]>
      </content:encoded>
      <pubDate>Wed, 03 Feb 2021 08:17:44 -0500</pubDate>
      <author>Voice+Code</author>
      <enclosure url="https://media.transistor.fm/cbb71bf4/f28d6d37.mp3" length="20699555" type="audio/mpeg"/>
      <itunes:author>Voice+Code</itunes:author>
      <itunes:duration>1291</itunes:duration>
      <itunes:summary>Brian Murphy, a security specialist at GreyCastle Security, talks about how we can help employees understand information security by using stories and analogies people already understand and connect with. This connection is the first step in encouraging the behavior change that fosters an organizational culture that then leads to better information security.</itunes:summary>
      <itunes:subtitle>Brian Murphy, a security specialist at GreyCastle Security, talks about how we can help employees understand information security by using stories and analogies people already understand and connect with. This connection is the first step in encouraging t</itunes:subtitle>
      <itunes:keywords>cybersecurity, information security, ux, user experience</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
    </item>
    <item>
      <title>What can we learn from human factors programs in other industries? with Dr. Calvin Nobles</title>
      <itunes:episode>7</itunes:episode>
      <podcast:episode>7</podcast:episode>
      <itunes:title>What can we learn from human factors programs in other industries? with Dr. Calvin Nobles</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">ef9ecd38-75eb-4731-85be-46708e8783fb</guid>
      <link>https://share.transistor.fm/s/1eb7c83c</link>
      <description>
        <![CDATA[<p>Dr. Nobles is a cybersecurity scientist and human factors practitioner with more than 25 years of experience. He retired from the U.S. Navy and currently works in the financial services industry. Dr. Nobles recently completed a Cybersecurity Policy Fellowship with the New America Think Tank in Washington, D.C.</p><p>In this episode we talk about:</p><ul><li>What human factors is and what a human factors engineer does.</li><li>Chronic fatigue and stress in the cybersecurity industry.</li><li>What approaches the aviation industry has taken to address the likelihood of human error.</li><li>What leaders at organizations can do to embrace human factors and design systems that are "more favorable to humans."</li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Dr. Nobles is a cybersecurity scientist and human factors practitioner with more than 25 years of experience. He retired from the U.S. Navy and currently works in the financial services industry. Dr. Nobles recently completed a Cybersecurity Policy Fellowship with the New America Think Tank in Washington, D.C.</p><p>In this episode we talk about:</p><ul><li>What human factors is and what a human factors engineer does.</li><li>Chronic fatigue and stress in the cybersecurity industry.</li><li>What approaches the aviation industry has taken to address the likelihood of human error.</li><li>What leaders at organizations can do to embrace human factors and design systems that are "more favorable to humans."</li></ul>]]>
      </content:encoded>
      <pubDate>Wed, 27 Jan 2021 08:18:06 -0500</pubDate>
      <author>Voice+Code</author>
      <enclosure url="https://media.transistor.fm/1eb7c83c/48ce30c5.mp3" length="39433570" type="audio/mpeg"/>
      <itunes:author>Voice+Code</itunes:author>
      <itunes:duration>2462</itunes:duration>
      <itunes:summary>Dr. Nobles, a cybersecurity scientist and human factors practitioner, explains how other industries (aviation and medicine, for example) have leveraged the discipline of human factors to reduce human errors and how we should take a similar approach in cybersecurity.</itunes:summary>
      <itunes:subtitle>Dr. Nobles, a cybersecurity scientist and human factors practitioner, explains how other industries (aviation and medicine, for example) have leveraged the discipline of human factors to reduce human errors and how we should take a similar approach in cyb</itunes:subtitle>
      <itunes:keywords>cybersecurity, information security, ux, user experience</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
    </item>
    <item>
      <title>Managing Risk Through Two-Way Communication with Alexandra Panaretos</title>
      <itunes:episode>6</itunes:episode>
      <podcast:episode>6</podcast:episode>
      <itunes:title>Managing Risk Through Two-Way Communication with Alexandra Panaretos</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">4e3f2a08-be42-499d-a5e2-a60af47b1fbd</guid>
      <link>https://share.transistor.fm/s/81868c4d</link>
      <description>
        <![CDATA[<p>Alex is the EY Americas Cybersecurity Lead for Secure Culture Activation. With a background in sports broadcasting and operational security, she is experienced in security communications and education, awareness program development, the psychology of social engineering, and behavior analytics. In her free time, she is a mother of three and she volunteers with law enforcement agencies and neighborhood organizations to educate community members, elder care organizations, children and parents on information security and social media safety.</p><p>During this episode, we’re focusing on what successful organizations are doing to manage risk. We talk about:</p><ul><li>Why it’s difficult for people to understand risk in the digital realm.</li><li>Why taking the time to “brand” security at the organization is important.</li><li>How organizations can foster an open dialogue around security to encourage engagement and lasting behavior changes.</li><li>How field visits can be used to develop more effective solutions for awareness and behavior change.</li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Alex is the EY Americas Cybersecurity Lead for Secure Culture Activation. With a background in sports broadcasting and operational security, she is experienced in security communications and education, awareness program development, the psychology of social engineering, and behavior analytics. In her free time, she is a mother of three and she volunteers with law enforcement agencies and neighborhood organizations to educate community members, elder care organizations, children and parents on information security and social media safety.</p><p>During this episode, we’re focusing on what successful organizations are doing to manage risk. We talk about:</p><ul><li>Why it’s difficult for people to understand risk in the digital realm.</li><li>Why taking the time to “brand” security at the organization is important.</li><li>How organizations can foster an open dialogue around security to encourage engagement and lasting behavior changes.</li><li>How field visits can be used to develop more effective solutions for awareness and behavior change.</li></ul>]]>
      </content:encoded>
      <pubDate>Wed, 20 Jan 2021 08:44:16 -0500</pubDate>
      <author>Voice+Code</author>
      <enclosure url="https://media.transistor.fm/81868c4d/dce7218a.mp3" length="30585607" type="audio/mpeg"/>
      <itunes:author>Voice+Code</itunes:author>
      <itunes:duration>1909</itunes:duration>
      <itunes:summary>Alexandra Panaretos, EY’s Americas Cybersecurity Lead for Secure Culture Activation, talks about the problem of being on autopilot and blindly trusting technology, the importance of establishing relationships between employees and the security team, and how effective security programs are built by understanding and designing for how people work.</itunes:summary>
      <itunes:subtitle>Alexandra Panaretos, EY’s Americas Cybersecurity Lead for Secure Culture Activation, talks about the problem of being on autopilot and blindly trusting technology, the importance of establishing relationships between employees and the security team, and h</itunes:subtitle>
      <itunes:keywords>cybersecurity, information security, ux, user experience</itunes:keywords>
      <itunes:explicit>Yes</itunes:explicit>
    </item>
    <item>
      <title>Improving the User Experience with Passwordless Security with Yan Grinshtein</title>
      <itunes:episode>5</itunes:episode>
      <podcast:episode>5</podcast:episode>
      <itunes:title>Improving the User Experience with Passwordless Security with Yan Grinshtein</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">bc2aeef4-3cd4-4017-b7a7-2036a5fb7744</guid>
      <link>https://share.transistor.fm/s/7c41a970</link>
      <description>
        <![CDATA[<p><a href="https://yangrinshtein.com">Yan Grinshtein</a> is an HCI and accessibility certified human-centered design leader, speaker, and mentor. Currently the head of design at <a href="https://hypr.com">HYPR</a>, Yan has over 20 years of experience as a creative and design leader. He has worked on three different continents across four countries with companies ranging from Fortune 500 to startups, some of which have become multi-billion dollar companies today. You can follow Yan on <a href="https://medium.com/@yangrin">Medium</a> or <a href="https://www.linkedin.com/in/yangrinshtein/">Linkedin</a>.</p><p>In this episode, we talk about:</p><ul><li>How to design better, more thoughtful solutions when users try to get around security.</li><li>How conducting your own user research helps you question your team's assumptions and, even better, leads to product-defining insights.</li><li>Why it's important to invest in the user experience of advanced/technical users (like administrators).</li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p><a href="https://yangrinshtein.com">Yan Grinshtein</a> is an HCI and accessibility certified human-centered design leader, speaker, and mentor. Currently the head of design at <a href="https://hypr.com">HYPR</a>, Yan has over 20 years of experience as a creative and design leader. He has worked on three different continents across four countries with companies ranging from Fortune 500 to startups, some of which have become multi-billion dollar companies today. You can follow Yan on <a href="https://medium.com/@yangrin">Medium</a> or <a href="https://www.linkedin.com/in/yangrinshtein/">Linkedin</a>.</p><p>In this episode, we talk about:</p><ul><li>How to design better, more thoughtful solutions when users try to get around security.</li><li>How conducting your own user research helps you question your team's assumptions and, even better, leads to product-defining insights.</li><li>Why it's important to invest in the user experience of advanced/technical users (like administrators).</li></ul>]]>
      </content:encoded>
      <pubDate>Wed, 13 Jan 2021 08:28:51 -0500</pubDate>
      <author>Voice+Code</author>
      <enclosure url="https://media.transistor.fm/7c41a970/891344a7.mp3" length="33403925" type="audio/mpeg"/>
      <itunes:author>Voice+Code</itunes:author>
      <itunes:duration>2085</itunes:duration>
      <itunes:summary>Yan Grinshtein, head of design at HYPR, talks about passwordless security, the value of user research, and why administrators are "the forgotten side of usability."</itunes:summary>
      <itunes:subtitle>Yan Grinshtein, head of design at HYPR, talks about passwordless security, the value of user research, and why administrators are "the forgotten side of usability."</itunes:subtitle>
      <itunes:keywords>cybersecurity, information security, ux, user experience</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
    </item>
    <item>
      <title>How to Design Great User Experiences in a Complicated Cybersecurity Ecosystem with Christian Rohrer</title>
      <itunes:episode>4</itunes:episode>
      <podcast:episode>4</podcast:episode>
      <itunes:title>How to Design Great User Experiences in a Complicated Cybersecurity Ecosystem with Christian Rohrer</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">a301a3a0-a432-47b6-a110-bebd306553f4</guid>
      <link>https://share.transistor.fm/s/a88f7010</link>
      <description>
        <![CDATA[<p>Christian Rohrer is Senior Director, User Experience at McAfee, returning to the company after a 5-year hiatus during which he was Founder and Principal at XD Strategy, a UX strategy consultancy, and former Vice President of Design, Research and Enterprise Services at Capital One. He has also led UX teams at Realtor.com, eBay, and Yahoo!. Christian holds a bachelor's in Computer Science from UC Santa Cruz and a Ph.D. in Cognitive Science and Education from Stanford University.</p><p>Christian not only has a deep understanding of the complex cybersecurity ecosystem, he also appreciates the challenges in getting stakeholder buy-in to ensure the user experience is prioritized.</p><p>In this episode, we talk about:</p><ul><li>Human-centered design: what is it and why is it important? (we talk about Nielsen Norman Group co-founder and author of The Design of Everyday Things, Don Norman, who has <a href="https://www.youtube.com/watch?v=rmM0kRf8Dbk&amp;feature=emb_rel_end">a great video describing the principles of Human-Centered Design</a>)</li><li>The complicated cybersecurity ecosystem and the challenges it presents when designing user experiences.</li><li>How great user experiences in cybersecurity are "a human and a technology problem to solve."</li><li>How to speak the language of stakeholders by using metrics, including <a href="https://www.nngroup.com/articles/pure-method/">the PURE Method</a>, which Christian co-developed.</li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Christian Rohrer is Senior Director, User Experience at McAfee, returning to the company after a 5-year hiatus during which he was Founder and Principal at XD Strategy, a UX strategy consultancy, and former Vice President of Design, Research and Enterprise Services at Capital One. He has also led UX teams at Realtor.com, eBay, and Yahoo!. Christian holds a bachelor's in Computer Science from UC Santa Cruz and a Ph.D. in Cognitive Science and Education from Stanford University.</p><p>Christian not only has a deep understanding of the complex cybersecurity ecosystem, he also appreciates the challenges in getting stakeholder buy-in to ensure the user experience is prioritized.</p><p>In this episode, we talk about:</p><ul><li>Human-centered design: what is it and why is it important? (we talk about Nielsen Norman Group co-founder and author of The Design of Everyday Things, Don Norman, who has <a href="https://www.youtube.com/watch?v=rmM0kRf8Dbk&amp;feature=emb_rel_end">a great video describing the principles of Human-Centered Design</a>)</li><li>The complicated cybersecurity ecosystem and the challenges it presents when designing user experiences.</li><li>How great user experiences in cybersecurity are "a human and a technology problem to solve."</li><li>How to speak the language of stakeholders by using metrics, including <a href="https://www.nngroup.com/articles/pure-method/">the PURE Method</a>, which Christian co-developed.</li></ul>]]>
      </content:encoded>
      <pubDate>Wed, 06 Jan 2021 08:48:25 -0500</pubDate>
      <author>Voice+Code</author>
      <enclosure url="https://media.transistor.fm/a88f7010/9f2468b8.mp3" length="41156550" type="audio/mpeg"/>
      <itunes:author>Voice+Code</itunes:author>
      <itunes:duration>2570</itunes:duration>
      <itunes:summary>Christian Rohrer, Senior Director, User Experience at McAfee, talks about why human-centered design is important, the challenges teams in the security space face that contribute to poor user experiences, how to get stakeholder buy-in when it comes to investing in the user experience, and how to think about user experience metrics.</itunes:summary>
      <itunes:subtitle>Christian Rohrer, Senior Director, User Experience at McAfee, talks about why human-centered design is important, the challenges teams in the security space face that contribute to poor user experiences, how to get stakeholder buy-in when it comes to inve</itunes:subtitle>
      <itunes:keywords>cybersecurity, information security, ux, user experience</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
    </item>
    <item>
      <title>Using Self-Sovereign Identity as the Foundation for Secure, Trusted Digital Relationships with Kaliya Young</title>
      <itunes:episode>3</itunes:episode>
      <podcast:episode>3</podcast:episode>
      <itunes:title>Using Self-Sovereign Identity as the Foundation for Secure, Trusted Digital Relationships with Kaliya Young</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">b65c0189-62b3-41bd-90c4-da83ef339381</guid>
      <link>https://share.transistor.fm/s/1518fce2</link>
      <description>
        <![CDATA[<p>In this episode we talk about:</p><ul><li>What Kaliya describes as a new “layer” to the Internet to support decentralized identity, much like how HTML or email supported what came next.</li><li>The importance of open standards.</li><li>How to build a “digital wallet” paradigm that makes sense to people.</li><li>What SSI means for businesses/business models.</li></ul><p><br></p><p>Kaliya is the co-author of <a href="https://www.amazon.com/Comprehensive-Guide-Self-Sovereign-Identity-ebook/dp/B07Q3TXLDP">“Comprehensive Guide to Self-Sovereign Identity”</a> and author of <a href="https://www.anthempress.com/the-domains-of-identity-pb">“Domains of Identity.”</a> She is also a co-founder of the Internet Identity Workshop, which brings people together to help develop open standards for ways people can own and control digital representations of themselves.</p>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>In this episode we talk about:</p><ul><li>What Kaliya describes as a new “layer” to the Internet to support decentralized identity, much like how HTML or email supported what came next.</li><li>The importance of open standards.</li><li>How to build a “digital wallet” paradigm that makes sense to people.</li><li>What SSI means for businesses/business models.</li></ul><p><br></p><p>Kaliya is the co-author of <a href="https://www.amazon.com/Comprehensive-Guide-Self-Sovereign-Identity-ebook/dp/B07Q3TXLDP">“Comprehensive Guide to Self-Sovereign Identity”</a> and author of <a href="https://www.anthempress.com/the-domains-of-identity-pb">“Domains of Identity.”</a> She is also a co-founder of the Internet Identity Workshop, which brings people together to help develop open standards for ways people can own and control digital representations of themselves.</p>]]>
      </content:encoded>
      <pubDate>Wed, 23 Dec 2020 09:11:59 -0500</pubDate>
      <author>Voice+Code</author>
      <enclosure url="https://media.transistor.fm/1518fce2/654ab32e.mp3" length="29257988" type="audio/mpeg"/>
      <itunes:author>Voice+Code</itunes:author>
      <itunes:duration>1826</itunes:duration>
      <itunes:summary>Kaliya Young (“Identity Woman”), an expert in self-sovereign identity, explains what self-sovereign identity (SSI) is, why it’s important in the contexts of both security and privacy, and why it’s critical to get the user experience right in order to encourage its adoption.</itunes:summary>
      <itunes:subtitle>Kaliya Young (“Identity Woman”), an expert in self-sovereign identity, explains what self-sovereign identity (SSI) is, why it’s important in the contexts of both security and privacy, and why it’s critical to get the user experience right in order to enco</itunes:subtitle>
      <itunes:keywords>cybersecurity, information security, ux, user experience</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
    </item>
    <item>
      <title>Reframing the Information Security Conversation for Business Owners with Jim Nelson</title>
      <itunes:episode>2</itunes:episode>
      <podcast:episode>2</podcast:episode>
      <itunes:title>Reframing the Information Security Conversation for Business Owners with Jim Nelson</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">7b7de32c-d91f-483e-9cb5-15ba7aa8ce40</guid>
      <link>https://share.transistor.fm/s/1005f592</link>
      <description>
        <![CDATA[<p>Jim Nelson, Senior Security Consultant for <a href="https://innovativesol.com">Innovative Solutions</a>, has been working with organizations to help raise their security posture based on their risk for the last 17 years.</p><p>In this episode, we talk about:</p><ul><li>How to reframe the security conversation so business owners understand that an investment in security is taking a proactive stance. Ultimately, you have to empathize with business owners.</li><li>Why fear-based tactics may not be the best solution in getting people to care about security.</li><li>Why it's so important to understand the business and its employees before establishing security controls.</li><li>Expectations around security--customers just assume that their data is safe.</li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Jim Nelson, Senior Security Consultant for <a href="https://innovativesol.com">Innovative Solutions</a>, has been working with organizations to help raise their security posture based on their risk for the last 17 years.</p><p>In this episode, we talk about:</p><ul><li>How to reframe the security conversation so business owners understand that an investment in security is taking a proactive stance. Ultimately, you have to empathize with business owners.</li><li>Why fear-based tactics may not be the best solution in getting people to care about security.</li><li>Why it's so important to understand the business and its employees before establishing security controls.</li><li>Expectations around security--customers just assume that their data is safe.</li></ul>]]>
      </content:encoded>
      <pubDate>Wed, 16 Dec 2020 08:55:09 -0500</pubDate>
      <author>Voice+Code</author>
      <enclosure url="https://media.transistor.fm/1005f592/d0acee38.mp3" length="39152767" type="audio/mpeg"/>
      <itunes:author>Voice+Code</itunes:author>
      <itunes:duration>2444</itunes:duration>
      <itunes:summary>Jim Nelson, Senior Security Consultant for Innovative Solutions, talks about how empathy is key when talking to business owners about security, how a “checkbox mentality” can be problematic, and why organizations should consider their own customers’ assumptions and expectations when it comes to security.</itunes:summary>
      <itunes:subtitle>Jim Nelson, Senior Security Consultant for Innovative Solutions, talks about how empathy is key when talking to business owners about security, how a “checkbox mentality” can be problematic, and why organizations should consider their own customers’ assum</itunes:subtitle>
      <itunes:keywords>cybersecurity, information security, ux, user experience</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
    </item>
    <item>
      <title>The Role of Storytelling in Cybersecurity Awareness Training with Gabriel Friedlander</title>
      <itunes:episode>1</itunes:episode>
      <podcast:episode>1</podcast:episode>
      <itunes:title>The Role of Storytelling in Cybersecurity Awareness Training with Gabriel Friedlander</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <guid isPermaLink="false">75dfbcec-658b-4b57-873a-e000f8e94420</guid>
      <link>https://share.transistor.fm/s/72db8e40</link>
      <description>
        <![CDATA[<p>Gabriel has been studying human behavior for a long time. His first company, ObserveIT, an insider threat management platform recently acquired by Proofpoint, dealt with monitoring and reporting on out-of-policy employee behavior. Today, as the founder of <a href="https://Wizer-training.com">Wizer</a>, a security awareness training platform, Gabriel is focused on ensuring, as he put it, that “security awareness is a basic human skill.” In fact, not only is Wizer’s training user-friendly and delivered in digestible chunks, most of it is free.</p><p>In this episode, we talk about:</p><ul><li>Cybersecurity awareness training should start with stories to connect with people and encourage them to take action.</li><li>Cybersecurity awareness training should then focus on developing the skills that can be applied to a variety of scenarios (as Gabriel says, "we can't teach everything").</li><li>Make security easy--but roadblocks may be necessary to get users to slow down and think.</li></ul>]]>
      </description>
      <content:encoded>
        <![CDATA[<p>Gabriel has been studying human behavior for a long time. His first company, ObserveIT, an insider threat management platform recently acquired by Proofpoint, dealt with monitoring and reporting on out-of-policy employee behavior. Today, as the founder of <a href="https://Wizer-training.com">Wizer</a>, a security awareness training platform, Gabriel is focused on ensuring, as he put it, that “security awareness is a basic human skill.” In fact, not only is Wizer’s training user-friendly and delivered in digestible chunks, most of it is free.</p><p>In this episode, we talk about:</p><ul><li>Cybersecurity awareness training should start with stories to connect with people and encourage them to take action.</li><li>Cybersecurity awareness training should then focus on developing the skills that can be applied to a variety of scenarios (as Gabriel says, "we can't teach everything").</li><li>Make security easy--but roadblocks may be necessary to get users to slow down and think.</li></ul>]]>
      </content:encoded>
      <pubDate>Tue, 08 Dec 2020 12:00:52 -0500</pubDate>
      <author>Voice+Code</author>
      <enclosure url="https://media.transistor.fm/72db8e40/e4496527.mp3" length="42565420" type="audio/mpeg"/>
      <itunes:author>Voice+Code</itunes:author>
      <itunes:duration>2658</itunes:duration>
      <itunes:summary>Gabriel Friedlander, founder of security awareness training company, Wizer, talks about how “security awareness should be a basic life skill,” building empathy with end users, the power of storytelling in getting people to pay attention and take action, and the role an organization’s culture plays in security.</itunes:summary>
      <itunes:subtitle>Gabriel Friedlander, founder of security awareness training company, Wizer, talks about how “security awareness should be a basic life skill,” building empathy with end users, the power of storytelling in getting people to pay attention and take action, a</itunes:subtitle>
      <itunes:keywords>cybersecurity, information security, ux, user experience</itunes:keywords>
      <itunes:explicit>No</itunes:explicit>
    </item>
  </channel>
</rss>
