Hi! I'm Julia 👋
I'm a third year PhD student at the Human-Computer Interaction Institute at Carnegie Mellon University, where I'm advised by Chinmay Kulkarni, and am a fellow in the IES-funded Program in Interdisciplinary Education Research (PIER).
Most recently, I had the great privilege of working at Mozilla during the summer of 2019 as a Voice Research Intern, where I helped to build Firefox Voice. Previously, I was a Technical Account Manager at Coursera, and a research assistant with Scott Klemmer at the Design Lab at UC San Diego. I graduated from Stanford University in 2014, where I received a Bachelor of Science degree in Symbolic Systems.
Asterisks (*) indicate shared authorship contributions.
The advancement of text-to-speech (TTS) voices and the rise of commercial TTS platforms allow people to easily experience TTS voices across a variety of technologies, applications, and form factors. As such, we evaluated TTS voices for long-form content: not individual words or sentences, but voices that are pleasant to listen to for several minutes at a time. We introduce a method using a crowdsourcing platform and an online survey to evaluate voices based on listening experience, perception of clarity and quality, and comprehension. We evaluated 18 TTS voices, three human voices, and a text-only control condition. We found that TTS voices are close to rivaling human voices, yet no single voice outperforms the others across all evaluation dimensions. We conclude with considerations for selecting text-to-speech voices for long-form content.
Julia Cambre*, Jessica Colnago*, Jim Maddock*, Janice Tsai, Jofish Kaye
When a smart device talks, what should its voice sound like? Voice-enabled devices are becoming a ubiquitous presence in our everyday lives. Simultaneously, speech synthesis technology is rapidly improving, making it possible to generate increasingly varied and realistic computerized voices. Despite the flexibility and richness of expression that technology now affords, today’s most common voice assistants often have female-sounding, polite, and playful voices by default. In this paper, we examine the social consequences of voice design, and introduce a simple research framework for understanding how voice affects how we perceive and interact with smart devices. Based on the foundational paradigm of computers as social actors, and informed by research in human-robot interaction, this framework demonstrates how voice design depends on a complex interplay between characteristics of the user, device, and context. Through this framework, we propose a set of guiding questions to inform future research in the space of voice design for smart devices.
Julia Cambre, Chinmay Kulkarni
This paper investigates whether voice assistants can play a useful role in the specialized work-life of the knowledge worker (in a biology lab). It is motivated both by promising advances in voice-input technology, and a long-standing vision in the community to augment scientific processes with voice-based agents. Through a reflection on our design process and a limited but fully functional prototype, Vitro, we find that scientists wanted a voice-enabled device that acted not as a lab assistant, but as lab equipment. Second, we discovered that such a device would need to be deeply embedded in the physical and social space in which it served scientists. Finally, we discovered that scientists preferred a device that supported their practice of "careful deviation" from protocols in their lab work. Through this research, we contribute implications for the design of voice-enabled systems in workplace settings.
Julia Cambre*, Ying Liu, Rebecca E. Taylor, Chinmay Kulkarni*
Peer review asks novices to take on an evaluator’s role, yet novices often lack the perspective to accurately assess the quality of others’ work. To help learners give feedback on their peers’ work through an expert lens, we present the Juxtapeer peer review system for structured comparisons. We build on theories of learning through contrasting cases, and contribute the first systematic evaluation of comparative peer review. In a controlled experiment, 476 consenting learners across four courses submitted 1,297 submissions, 4,102 reviews, and 846 self-assessments. Learners assigned to compare submissions wrote reviews and self-reflections that were longer and received higher ratings from experts than those who evaluated submissions one at a time. A second study found that a ranking of submissions derived from learners’ comparisons correlates well with staff ranking. These results demonstrate that comparing algorithmically-curated pairs of submissions helps learners write better feedback.
Julia Cambre, Scott Klemmer, Chinmay Kulkarni
In the days and months following the 2016 US Presidential Election, everyone from peers to public figures like the President and the Pope called for unity and dialogue among diverse Americans. However, social and geographic barriers often prevent citizens from engaging in political conversations with those who have different perspectives. This brief paper explores the design of political discussions and introduces a variant of the Talkabout discussion platform to support synchronous, online small-group discussions about politics with diverse citizens. We share learnings from an initial deployment shortly after the 2016 US Election and discuss opportunities for systems to support political dialogue.
Julia Cambre, Scott Klemmer, Chinmay Kulkarni
All design is redesign: many real-world projects directly build on others' work. By contrast, course projects usually demand the opposite: learners must use their own work from start to finish. Drawing inspiration from peer production communities, we introduce tournament-style remixing into project-based assignments. Remixing reduces learners' path dependence, enabling them to diverge from their work on early assignments. Remixing also gives learners an up-close look at other approaches to the same project. Finally, remixing provides the opportunity to practice the real-world skill of elaborating upon the work of others. We present an early pilot of remixing in a design course project and discuss implications for learning.
Julia Cambre, Scott Klemmer
Massive online classes are global and diverse. How can we harness this diversity to improve engagement and learning? Currently, though enrollments are high, students' interactions with each other are minimal: most are alone together. This isolation is particularly disappointing given that a global community is a major draw of online classes. This paper illustrates the potential of leveraging geographic diversity in massive online classes. We connect students from around the world through small-group video discussions. Our peer discussion system, Talkabout, has connected over 5,000 students in fourteen online classes. Three studies with 2,670 students from two classes found that globally diverse discussions boost student performance and engagement: the more geographically diverse the discussion group, the better the students performed on later quizzes. Through this work, we challenge the view that online
Chinmay Kulkarni, Julia Cambre, Yasmine Kotturi, Michael Bernstein, Scott Klemmer