Headshot of Julia Cambre
Julia Cambre

(she / her)

jcambre [at] cs.cmu.edu

Google Scholar

Download my CV

Talks & Travel

Sept. 9:
PhD Thesis Defense
(Completed and passed! 🎓)

Hi! I'm Julia 👋

I'm a PhD Candidate in the Human-Computer Interaction Institute at Carnegie Mellon University, where I'm advised by Chinmay Kulkarni, and am a fellow in the IES-funded Program in Interdisciplinary Education Research (PIER). In 2022, I was honored to be named an Adobe Research Fellow.

Earlier in my career, I had the great privilege of working at Mozilla during the summers of 2019 and 2020 as a Voice Research Intern, where I helped to build Firefox Voice. Prior to the PhD, I was a Technical Account Manager at Coursera, and a research assistant with Scott Klemmer at the Design Lab at UC San Diego. I graduated from Stanford University in 2014, where I received a Bachelor of Science degree in Symbolic Systems and minored in Spanish.

My research centers on why and how context matters for voice interfaces; in other words, what do technologies like voice assistants need to know about the varied situations and socio-cultural environments in which they might be used? How might we infuse voice interfaces with the right kinds of contextual knowledge to provide more delightful and intuitive user experiences, and even to challenge the problematic biases and assumptions embedded in many voice-based systems by default? My work combines systems-building research, in which I've built and evaluated working prototypes of novel voice interactions (see LUCA, Firefox Voice, and Vitro), with design research, in which I critique the current state of voice technology and tools and anticipate what possible and preferable futures for voice might look like. I'm especially excited to keep pursuing the (many!) open questions in this space as LLMs reshape the voice interface landscape, so if that's something you're also interested in, please get in touch!

Currently working on...

LUCA: A contextually adaptive voice assistant prototype

My final thesis project synthesizes what I've learned about contextual voice design to date and probes a future in which our voice assistants are more contextually aware and contextually adaptive. Meet LUCA ("Latent Understanding of Context Assistant")! LUCA is an iOS-based voice assistant that takes cues from your general environment to provide more relevant answers to your everyday questions. You can ask things like "Tell me about the history of this place," "Where can I grab an iced coffee?" or "When's the next good day to wash my car?" and LUCA will provide responses that more intuitively understand what you're asking for; in other words, answers tailored to your context. Please check out the demo video to see LUCA in action.

Past Projects

Asterisks (*) indicate shared authorship contributions

Julia Cambre, Alex C. Williams, Afsaneh Razi, Ian Bicking, Abraham Wallin, Janice Tsai, Chinmay Kulkarni, Jofish Kaye

CHI 2021

Julia Cambre, Samantha Reig, Queenie Kravitz, Chinmay Kulkarni

DIS 2020

Julia Cambre*, Jessica Colnago*, Jim Maddock*, Janice Tsai, Jofish Kaye

CHI 2020

Julia Cambre, Chinmay Kulkarni

CSCW 2019

Julia Cambre*, Ying Liu, Rebecca E. Taylor, Chinmay Kulkarni*

DIS 2019

Julia Cambre, Scott Klemmer, Chinmay Kulkarni

CHI 2018

Julia Cambre, Scott Klemmer, Chinmay Kulkarni

CHI 2017

Julia Cambre, Scott Klemmer

CSCW 2017

Chinmay Kulkarni, Julia Cambre, Yasmine Kotturi, Michael Bernstein, Scott Klemmer

CSCW 2015