Research

Below is a selection of my recent research.

1. Same Stereotypes, Different Term? Understanding the “Global South” in AI Ethics

E Radiya-Dixit, A Christin. AAAI/ACM Conference on AI, Ethics, and Society, 2025

We examine the usage, limitations, and power dynamics of the term “Global South” within AI ethics and policy spaces. Through interviews with scholars and practitioners, we find that the term “Global South” often perpetuates an imperial gaze, yet many feel pressured to use it because broader research and funding structures are oriented toward the United States. Rather than adopting another term that may carry similar stereotypes, we emphasize the need to ground AI ethics work in specific regions and power structures. This research was covered by Stanford HAI.

2. Beyond English-Centric AI: Lessons on Community Participation from Non-English NLP Groups

E Radiya-Dixit, M Bogen. Center for Democracy & Technology, 2024

We discuss lessons from local AI research groups that practitioners can adopt to broaden community participation in multilingual AI development. These groups – such as Masakhane for African languages, AI4Bharat for Indian languages, and AmericasNLP for Indigenous languages of the Americas – are developing AI that serves their own communities, countering the dominance of English-centric AI. This research was covered by Tech Policy Press.

3. A Sociotechnical Audit: Assessing Police Use of Facial Recognition

E Radiya-Dixit, G Neff. ACM Conference on Fairness, Accountability, and Transparency, 2023

We present a sociotechnical audit that evaluates the ethics and legality of police use of facial recognition technology in the UK. We developed this audit to bring attention to broader concerns, such as whether police consult affected communities and comply with human rights law. We applied the audit to three facial recognition deployments by UK police forces and found that all three failed to meet its standards. This research was covered by The Guardian and informed a local ban on facial recognition technology in East London.

4. Race and Surveillance Brief

E Radiya-Dixit, N Djanegara. Stanford Center for Comparative Studies in Race and Ethnicity, 2023

We led a panel discussion with scholars and community organizers on strategies for challenging surveillance technology. In this brief, we present takeaways from the panel, a literature review on racialized surveillance, and recommendations for researchers and community members.

5. Data Poisoning Won’t Save You From Facial Recognition

E Radiya-Dixit, S Hong, N Carlini, F Tramèr. International Conference on Learning Representations, 2022

We assessed technological solutions that aim to enable users to resist facial recognition surveillance. Specifically, we evaluated the proposed solution of image cloaking, where users make pixel-level changes to the images they post online to deceive facial recognition models. We found that image-cloaking tools such as Fawkes (500K+ downloads) are easy to defeat and do not protect users from facial recognition, illuminating the risks of techno-solutionism and of overpromising security to users.

6. Researching Inequities in a Public Benefits Program with a Racial Equity Framework

M Atwater, B Choi, L Mack, E Radiya-Dixit. New America, 2021

We discuss takeaways from identifying racial inequities in the Earned Income Tax Credit (EITC) benefits program. For example, the EITC provides fewer benefits to non-custodial fathers, whose role in the lives of children has often been underplayed by federal welfare programs; this reveals inequities tied to family structure that disproportionately impact Black men. We also found that many EITC documents are not translated into non-English languages, posing a barrier that especially impacts Latinx communities. Finally, we propose solutions to address these inequities and improve EITC accessibility.

7. Innovating Privacy Protection: Tools and Strategies for California Cities

A Buscher, B Choi, S Guha, C Hendren, M Krantz, M Ly, E Porubcin, E Radiya-Dixit, S Shattuck, E Wallack, A Warnke, B Wessley, C Zhang. Stanford Law School, 2020

We developed recommendations for the City of Berkeley to advance data privacy for its residents. For example, we identified litigation strategies under antitrust law and crafted regulatory proposals for a facial recognition ban and a transparency reporting mandate.