Michelle Mazurek on Building in Security People Will Actually Use
September 13, 2017
Michelle Mazurek is an assistant professor in the Department of Computer Science and in the University of Maryland Institute for Advanced Computer Studies (UMIACS), and an affiliate assistant professor in the College of Information Studies (iSchool) at the University of Maryland, College Park. Her research focuses on end users' preferences and behaviors around cybersecurity and privacy, and on using those insights to build systems that are both more user friendly and more secure. At the 2017 IEEE Secure Development (SecDev) Conference, one of her coauthors will discuss their work on a panel called “Developers Need Support, Too: A Survey of Security Advice for Software Developers.”
In this interview, Mazurek explores the challenge of balancing security with human factors — for both end users and developers.
Question: Your research focuses on how human factors affect cybersecurity, so you spend a lot of time analyzing user behaviors and preferences. What are some of the most intriguing or surprising things you’ve learned over the years?
Mazurek: This is why I stay in this line of research! People are always doing interesting things that I don’t expect before we start our studies. The first study I worked on as a grad student was about how people share, or don’t share, files and data. We found that if people were worried about someone else using their computers and seeing their files, they would bury those files many layers of subfolders deep and give them funny, random, or very boring-sounding names. They believed no one would find their important, secret files as long as the files were buried deep enough. That’s one of my favorites.
More recently, my grad students and I have been exploring how people learn different security behaviors. We found that one important reason people refused to adopt two-factor authentication was that they didn’t want to provide their phone number. They felt it was an invasion of their privacy.
When you have a Google or Facebook account, the company knows basically all the details of your life. But when it asks for a phone number for two-factor authentication, that’s a bit too far. We were surprised to learn that this is where people draw the line.
Question: For developers, what are the big challenges in building security into systems without ignoring or running afoul of those behaviors and preferences, and thus creating security and privacy vulnerabilities in the process?
Mazurek: Some of the best-known challenges in the usable security or human-centered security field center around the fact that security is a secondary task for people. For example, you don’t connect to your bank to do security. You connect to your bank because you need to check your balance, pay a bill or something like that.
So anything you do for security almost inherently creates some additional friction. You need to find a way to make security as transparent as possible to minimize that friction. If that’s not possible, the next best thing is to nudge people in the right direction by giving them cues about what a correct or safe decision looks like, or about how they should make the decision.
There’s a weird paradox here: If you make the security transparent enough, it doesn’t get in people’s way. But people also don’t trust something if they haven’t seen its security aspects. Take the example of an encryption system where everything just happens in the background. It feels less secure to people than one where they see the text transform into gibberish.
Question: You gave a presentation at last year’s IEEE SecDev Conference, “You are Not Your Developer, Either: A Research Agenda for Usable Privacy and Security Beyond End Users.” For those who missed it, what were the major points?
Mazurek: The point is that we — the usable security community — have spent the past 20 years or so trying to understand security and privacy from the user’s point of view. How do we make security work better for end users? How do we help them to make more informed decisions? How do we avoid having them make unnecessary decisions? Things like that.
But we haven’t devoted as much attention to software developers, who also make security-relevant decisions. What can we do to make it easier for software developers to write secure code? A lot of end-user problems plague developers, too, albeit in a slightly different form. We worry when end users who are not security experts have to make security decisions. Well, developers are, in some ways, experts; we hope they’re experts at software. But there are certainly lots of programmers who are not security experts and don’t necessarily understand the security consequences of some of the programming decisions they make.
Developers want to make the next cool app and get it to market quickly. Security is not their top consideration, so it’s very easy to accidentally end up with insecure code. We found evidence that people will look at Stack Overflow and paste in a code snippet that might be insecure, as long as it appears to solve their problem immediately. There are parallels with what end users face. You can’t just keep giving people more things to do. Every time they get an email, they can’t be expected to check whether the sender is really who they claim to be, or whether a link in the message looks dubious.
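To make the copy-and-paste point concrete, here is a minimal, hypothetical sketch of the kind of snippet that appears to solve a problem immediately while quietly weakening security. It assumes Python with the third-party cryptography package installed; the function name and hardcoded key are illustrative, not taken from the study.

    # The kind of snippet that "just works" when pasted, but is insecure.
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    KEY = b"0123456789abcdef"  # hardcoded key, checked into source control

    def encrypt(plaintext: bytes) -> bytes:
        # ECB mode maps identical 16-byte blocks to identical ciphertext,
        # leaking patterns in the data -- but the call succeeds, so it "works."
        encryptor = Cipher(algorithms.AES(KEY), modes.ECB()).encryptor()
        return encryptor.update(plaintext) + encryptor.finalize()

    ct = encrypt(b"attack at dawn!!" * 2)  # two identical plaintext blocks...
    assert ct[:16] == ct[16:32]            # ...yield identical ciphertext blocks

The snippet runs and returns ciphertext, which is exactly why it spreads: nothing signals to the developer that the mode choice or the key handling is unsafe.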
If you wanted to comply with every recommended security behavior, you could never do anything else. Just continuing to tell people, “Do more stuff to keep yourself safe online,” is not a sustainable long-term approach. By the same token, we can’t just tell developers, “Here’s a checklist of the 250 things you have to do to make sure your code is secure.” For both end users and developers, we need to create fewer situations where they have to make security choices, and when they do, make sure the default option is a secure one. When that fails, we need to prioritize advice so that the most important information gets through.
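One way to read “make the default option a secure one” in code: a caller who does nothing gets the safe behavior, and the unsafe path requires an explicit, visible opt-out. A minimal sketch, assuming only Python’s standard library; fetch and verify_tls are hypothetical names used for illustration.

    import ssl
    import urllib.request

    def fetch(url: str, *, verify_tls: bool = True) -> bytes:
        # Certificate verification is on unless the caller explicitly disables it.
        ctx = ssl.create_default_context()  # safe settings, no extra decisions needed
        if not verify_tls:
            # Opting out is possible, but it has to be deliberate and visible.
            ctx.check_hostname = False
            ctx.verify_mode = ssl.CERT_NONE
        with urllib.request.urlopen(url, context=ctx) as resp:
            return resp.read()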
Further reading: Mazurek recently coauthored a paper, “Comparing the Usability of Cryptographic APIs,” which explores how the complexity of cryptographic APIs causes dangerous errors. It examines how and why the design, and the resulting usability, of different cryptographic libraries affect the security of code written with them, with the goal of understanding how to build effective future libraries.
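For a sense of what a more usable cryptographic API can look like, here is a minimal sketch using the high-level Fernet recipe from Python’s third-party cryptography package (an illustrative choice, not necessarily one of the libraries the paper studied). The API leaves few dangerous decisions to the developer: key format, cipher, mode, IV, and authentication are all handled internally.

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()          # the library generates a correctly sized key
    f = Fernet(key)
    token = f.encrypt(b"check balance")  # authenticated encryption, fresh random IV
    assert f.decrypt(token) == b"check balance"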