Please note: All times US Pacific Daylight Time (PDT = UTC/GMT-7:00 hours).
Alan Mislove is currently serving as the Deputy United States Chief Technology Officer for Privacy in the White House Office of Science and Technology Policy, where he works to improve privacy protections for all Americans. He was most recently a Professor and Senior Associate Dean for Academic Affairs at the Khoury College of Computer Sciences at Northeastern University. In his academic career, Alan's research has focused on algorithmic auditing: he develops methodologies to study the real-world AI-powered systems that millions of users interact with every day, with particular attention to issues of algorithmic discrimination, fairness, and privacy. His aim is to enable regulators, policymakers, and society at large to better understand how these systems work, how they are used and abused, and what impacts they are having on end users.
Years of research examining "smart" Internet of Things (IoT) devices have found that developers of these devices have poor privacy practices. This includes collection of personal information without notice or consent and transmission of data to online advertisers and data brokers.
Automobiles are the next frontier for the "datafication" of consumer devices. To the best of our knowledge, all recent model year vehicles available in major markets are connected cars: they include always-on internet connections, collect and transmit data about the vehicle and the driver, and incorporate companion smartphone apps.
A recent report from the Mozilla Foundation highlights privacy concerns around connected cars. The report drew on automakers' privacy policies to identify the different types of data that automakers claim to collect and the disclosures around selling and sharing data with third parties. The practices revealed in the report are deeply concerning, in part because vehicle ownership is a de facto requirement of modern life and car owners have little, if any, ability to opt out of their vehicle's data collection. Recent reporting revealed that automakers are sharing driving data with insurance companies and that this is causing real-world harms to vehicle owners.
In this study we propose to use data subject access requests issued under the CCPA and GDPR to investigate the privacy implications of connected cars. The main shortcoming of the Mozilla study is that the report relies on disclosures from privacy policies, which may under- and over-disclose data collection and sharing practices. For example, studies of privacy policies from websites and mobile apps have found that they sometimes contain vague language that permits all data to be collected and shared—thus revealing nothing about actual collection and sharing practices—or they fail to disclose all practices.
Our goal is to obtain at least one data report from every major auto manufacturer to examine the type and granularity of data being collected, as well as compare the provided data to the manufacturer's privacy disclosures. Additionally, we will examine the types of data that are not included in the reports to uncover potential violations of data subject access rules. (extended PDF)
Ten years after the release of the first generation of smart glasses, the next generation is on the market, with drastic differences in marketing and price point. In this paper, we propose to investigate potential privacy and usability issues of this next generation of smart eyewear. Our proposed project is multi-pronged: in the first stage, we propose to conduct a security and privacy analysis of such devices. The goal of this stage is to analyze what data these devices are collecting, as well as how and where such data are transmitted. In the second stage, we propose to conduct an online survey aiming to determine consumers' exposure to and experiences with next-generation eyewear; we will also evaluate their attitudes toward such devices. In the third stage, we propose to conduct an experimental privacy and usability study. The goal of this study is to determine changes in participants' efficiency in accomplishing defined tasks with these devices, as well as their satisfaction and trust with respect to smart eyewear. (extended PDF)
The escalation of social engineering (SE) scams via messaging tools and phone calls on mobile devices presents a critical threat to individual privacy and financial security. The less tech-savvy segments of the population are particularly vulnerable to such scams. Detecting such scams is hard because of their multi-modal nature and because of the privacy concerns raised by any solution: the interactions can span phone calls and messages in contexts where contact is expected (e.g., a user selling an item online or seeking tech support), and monitoring those interactions for analysis can itself introduce security and privacy risks. We argue that this area is in urgent need of investigation. We discuss a simple version of the problem to motivate the research challenges and then discuss potential research areas that may contribute to designing an infrastructure to detect and prevent such scams. (extended PDF)
Sources of funding: This work is supported by a gift from the OpenAI Cybersecurity Grant program. Any opinions, findings, conclusions, or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of OpenAI.
Automated scam calls, also called robocalls, are one of the most widespread security problems affecting phone users in the United States. Some of the most egregious robocalls target vulnerable segments of our society, which often consist of non-English-speaking phone users (e.g., international students, recent immigrants, and tourists). Fraudulent robocall campaigns that target these populations often use languages such as Spanish, Mandarin, Hindi, or Arabic in the robocall audio. Regulatory authorities, enforcement agencies, and researchers investigating illegal robocalling campaigns lack automated tools to study such non-English campaigns, and therefore struggle to extract meaningful insights from bulk robocall data. Relying entirely on manual analysis, or expecting investigators to be fluent in numerous languages, substantially limits action against illegal non-English robocalling campaigns. Furthermore, existing robocall audio analysis techniques focus solely on English robocalls. We propose developing a semi-automated robocall audio analysis pipeline to handle real-world non-English robocalls. We intend to build this pipeline using pre-trained multilingual speech transformer models. (extended PDF)
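To make the proposed analysis concrete, the sketch below (an illustration, not the authors' pipeline) uses a pre-trained multilingual speech transformer, OpenAI's Whisper via the openai-whisper package, to detect each recording's language, transcribe it, and produce a rough English translation for investigators; the corpus directory and model size are placeholder assumptions.

    # Illustrative sketch only: transcribe and English-translate non-English
    # robocall audio with a pre-trained multilingual speech transformer.
    # Assumes the openai-whisper package and a placeholder directory of WAV files.
    import pathlib
    import whisper

    model = whisper.load_model("medium")  # multilingual Whisper checkpoint

    for wav in sorted(pathlib.Path("robocalls/").glob("*.wav")):  # hypothetical corpus
        original = model.transcribe(str(wav))                     # auto-detects language
        english = model.transcribe(str(wav), task="translate")    # rough English gloss
        print(wav.name, original["language"])
        print("  transcript: ", original["text"][:120])
        print("  translation:", english["text"][:120])

Per-call transcripts and translations like these could then feed campaign-level analysis, such as clustering near-duplicate transcripts across languages.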
Sources of funding: This material is based upon work supported by the National Science Foundation under grant number CNS-2142930. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation, other funding agencies or financial supporters.
In this research proposal, we outline our plans to examine the characteristics and affordances of ad transparency systems provided by 22 online platforms. We outline a user study designed to evaluate the usability of eight of these systems by studying the actions and behaviors each system enables, as well as users' understanding of these transparency systems. (extended PDF)
Sources of funding: This material is based upon work supported by the National Science Foundation under Grants No. CNS-2149680, CNS-2151290, and CNS-2151837, as well as by the National Science Foundation Graduate Research Fellowship Program under Grant No. 2140001.
Many researchers and technologists are interested in the possibility of their work generating not only a scientific or financial contribution, but also a positive material impact through avenues such as consumer protection action. In many cases, it may be helpful to share their research data directly with regulators. However, generating data that bridges gaps between academic, product, and regulatory contexts raises consent, evidentiary, strategic, and other design concerns. We propose a brief informational paper that outlines considerations for researchers and developers designing with enforcement in mind, without compromising a project's primary scientific or consumer protection goals. (extended PDF)
Sources of funding: This work was funded by the Omidyar Network, the Sloan Foundation, Craig Newmark Philanthropies, and the Ford Foundation.
Researchers have noted a marked uptick in malware present at the point of sale on consumer Android devices. This proposal seeks to explore new methods of accounting for the breadth of this supply-chain attack, and to explore policy venues for addressing this ongoing danger to consumers. (extended PDF)
Sources of funding: This proposal is wholly funded by the Electronic Frontier Foundation (EFF), and is a project of the Public Interest Technologists team internal to EFF.
Privacy regimes are increasingly taking center stage, with regulators bringing cases against violators and introducing new regulations to safeguard consumer rights. Health regulations largely predate generic privacy regulations, yet health entities still fail to meet regulatory requirements. Prior work suggests that third-party code is responsible for a significant portion of these violations. Hence, we propose using Software Bills of Materials (SBOMs) as an effective intervention for communicating compliance limitations and expectations surrounding third-party code, helping developers make informed decisions. (extended PDF)
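As one illustration of what such an intervention could look like (a sketch, not the authors' design), the snippet below emits a CycloneDX-style SBOM entry for a hypothetical third-party analytics SDK, using free-form properties to flag its data-handling behavior for downstream developers; the SDK name and the privacy property keys are assumptions, not an established standard.

    # Illustrative sketch only: a CycloneDX-style SBOM component describing a
    # hypothetical third-party SDK, annotated with made-up compliance properties
    # that communicate its data-handling behavior to downstream developers.
    import json

    component = {
        "type": "library",
        "name": "example-analytics-sdk",   # hypothetical third-party dependency
        "version": "4.2.0",
        "properties": [                     # CycloneDX free-form name/value pairs
            {"name": "privacy:collects", "value": "device identifiers, coarse location"},
            {"name": "privacy:shared-with", "value": "third-party ad networks"},
            {"name": "privacy:health-data-reviewed", "value": "false"},
        ],
    }

    sbom = {"bomFormat": "CycloneDX", "specVersion": "1.5", "components": [component]}
    print(json.dumps(sbom, indent=2))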
Sources of funding: This work was supported by the U.S. National Science Foundation (under grant CNS-2055772).
Lawmakers worldwide have taken notice of "dark patterns": design practices that "deceive, manipulate, or otherwise distort technology users' ability to make informed decisions." HCI scholarship has revealed dark patterns' pervasiveness in ubiquitous and emergent technologies, users' opinions of dark patterns, and dark patterns in contexts like consent and privacy. Critics, however, allege that the term (in law) is overbroad, impractical, and counterproductive insofar as it applies to normative, "omnipresent" design practices.
Established legal frameworks prohibit wrongful self-dealing in fields like finance (e.g., fiduciary duty) and medicine (e.g., "do no harm"). Scholars have suggested similar frameworks for privacy and technology, such as a "duty of loyalty for privacy law," in which platforms should act in the best privacy interests of end users. In this research proposal we explore a loyalty framework for dark patterns and design from interdisciplinary CS and law perspectives. (extended PDF)
Sources of funding: This research was supported in part by NSF grants (#1955227 and #CNS-1900879).
Dark patterns are embedded in various online platforms to manipulate and trick users. Prior studies have primarily focused on platforms other than video games. We conducted a manual analysis of 500 video game reviews to identify the categories of dark patterns embedded in video games. We examine the ways in which different dark patterns affect the privacy of players and discuss gaps and challenges to be addressed by future research. (extended PDF)
Sources of funding: This research was supported by the UK Engineering and Physical Sciences Research Council (EPSRC) Centre for Doctoral Training (CDT) in Trust, Identity, Privacy, and Security in Large-scale Infrastructures (TIPS-at-Scale) at the Universities of Bristol and Bath (EP/S022465/1).
Dark patterns are manipulative, deceptive design practices deployed in online services to influence users' decisions about their purchases, use of time, and disclosure of personal data. Preventing the use of dark patterns more effectively will require further effort in both scholarship and enforcement, along with deeper sharing of expertise across the two fields, but operationalizing such collaboration requires resolving interdisciplinary differences. In this project, we examine case law and scholarly CS articles on dark patterns to directly compare the investigatory and evidentiary methods used by courts and scholars, with the aim of improving collaboration across both fields. (extended PDF)
Sources of funding: This work is funded in part by the National Science Foundation under Grant Nos. 1909714, 1955227, and CNS-1900879, as well as the ANR 22-PECY-0002 IPOP project of the Cybersecurity PEPR.
This research delves into the distinctive privacy challenges faced by the LGBTQ+ community, arising from a toxic environment and potential discrimination. By studying the privacy perceptions and behaviors in online social networks and dating applications, the study aims to inform the design of more inclusive technological solutions, with a particular focus on the LGBTQ+ community in Türkiye. (extended PDF)
Sources of funding: This research was supported in part by NSF grants (# 1955227 and #CNS-1900879). Any opinions, findings, conclusions, or recommendations expressed in this material are solely those of the authors.
The COVID-19 pandemic raised digital privacy concerns due to contact tracing via smartphones and IoT devices, potentially altering privacy risk perceptions. From 2020 to 2023, we recruited 1,671 Americans to complete a survey on their preferences for sharing personal data for health or marketing purposes, highlighting factors that influence acceptance of data sharing. Quantifying how privacy attitudes evolve during crises reveals interactions between stress, risk tolerance, and technology acceptance. Our research proposal aims to inform the evolution and advancement of a COVID-based survey with in-depth interviews exploring reactions to contact-tracing strategies, potentially identifying previously unconsidered underlying factors. (extended PDF)
Sources of funding: This research was supported in part by CTIA and the Comcast Innovation Fund. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of Comcast, CTIA nor Indiana University. We acknowledge support from the US Department of Defense [Contract No. W52P1J2093009]. This material is based upon work supported by the U.S. Department of Homeland Security under Grant Award Number 17STQAC00001-07-00. The views and conclusions contained in this document are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of the U.S. Department of Homeland Security nor the US Department of Defense. This project was also funded by Indiana University Research through the Faculty Research Support Program.