Karen explores how humans and robots can safely and effectively interact with one another

By studying how humans interact with each other, she aims to equip robots with human-like social intelligence while ensuring safe and efficient operations.
What is your role and what does it involve?
I’m an assistant professor in the Department of Aeronautics and Astronautics at the University of Washington, where I direct the Control and Trustworthy Robotics Lab (CTRL). My research focuses on enabling robots to safely and efficiently interact with, around, and alongside humans. I study and model human behavior, and integrate these insights with trusted control-theoretic principles and formal methods to design reliable robot decision-making and planning algorithms. Beyond research, I teach and advise students, serve as the faculty advisor for the Women of Aerospace group—which builds and supports an inclusive community—and recently began my role as Director of the UW+Amazon Science Hub, fostering research partnerships between UW and Amazon. I also co-organize workshops on robot safety and formal methods in robotics.
What do you find most interesting or enjoyable about your work?
The intellectual diversity of my research keeps it both stimulating and rewarding. Developing trustworthy algorithms for safe and efficient human-robot interaction requires drawing from multiple disciplines, including human behavior modeling, generative modeling, control theory, decision-making, formal methods, and robotics/AI/ML/DL more broadly. The need to constantly learn from and connect ideas across these domains makes the work exciting (and sometimes exhausting!). I especially enjoy recognizing how breakthroughs in other fields can be reinterpreted and applied to the safety and trust challenges I study in robotics.
Tell us about your research and your long-term goal
My research focuses on developing safe and trustworthy robotic systems that can operate seamlessly with, alongside, and around humans. I want robots not only to behave safely around humans, but also to make humans feel safe around robots.
My long-term vision is for humans and robots to operate together in harmony—where their interactions are so natural and commonplace that “human-robot interaction” becomes an ordinary, everyday concept. To work toward this goal, my research considers three core directions:
- Understanding safety: While geometric reasoning can identify collisions or other undesirable outcomes, defining what safe behavior means is more complex, as it depends on context, social norms, and personal preferences. I’m interested in learning from human demonstrations to uncover what humans perceive as acceptable and preferred collision-avoidance behaviors.
- Modeling human behavior: Equipping robots with models of human behavior helps them anticipate human actions and plan accordingly. It also enhances the transparency and interpretability of robot decision-making. I develop models that predict human motion and study how humans coordinate and share responsibility for mutual safety.
- Planning and control: Ultimately, robots require real-time algorithms that instruct them on how to move and what decisions to make, while ensuring both safety and task performance. I develop planning and control algorithms that enable robots to interact safely and effectively with humans in dynamic environments, leveraging insights gained from the other research directions.
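To make the third direction concrete, here is a minimal, purely illustrative sketch of the kind of safety filter such planning algorithms build on: predict where the human will be, then scale the robot's nominal command until the predicted trajectories stay a minimum distance apart. All function names, numbers, and the constant-velocity model are hypothetical simplifications, not taken from the interviewee's actual methods.

```python
import numpy as np

# Illustrative toy example of a "safety filter" for planning around a
# human. Everything here (names, thresholds, the constant-velocity
# prediction) is a hypothetical simplification for exposition.

def predict_human(pos, vel, dt=0.1, horizon=10):
    """Constant-velocity prediction of a human's future positions."""
    return np.array([pos + vel * dt * k for k in range(1, horizon + 1)])

def filter_action(robot_pos, nominal_vel, human_pos, human_vel,
                  d_min=1.0, dt=0.1, horizon=10):
    """Scale the robot's nominal velocity down until its predicted
    trajectory keeps at least d_min distance from the predicted human;
    stop entirely if no scaling is safe."""
    human_traj = predict_human(human_pos, human_vel, dt, horizon)
    for scale in np.linspace(1.0, 0.0, 11):  # try 100%, 90%, ... 0%
        vel = nominal_vel * scale
        robot_traj = np.array([robot_pos + vel * dt * k
                               for k in range(1, horizon + 1)])
        dists = np.linalg.norm(robot_traj - human_traj, axis=1)
        if dists.min() >= d_min:
            return vel
    return np.zeros_like(nominal_vel)

# Robot heading straight toward a stationary human 1.55 m away:
safe_vel = filter_action(np.array([0.0, 0.0]), np.array([1.0, 0.0]),
                         np.array([1.55, 0.0]), np.array([0.0, 0.0]))
# The filter slows the robot to half speed so the 1 m margin holds.
```

A real system would replace the constant-velocity predictor with a learned human-behavior model and the grid search with a principled controller (e.g., an optimization-based safety filter), which is precisely where the three research directions above connect.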
What working achievement and/or initiative are you most proud of?
I’m most proud when I see my students succeed and grow in confidence. That’s when I feel I might have done something right as their mentor. I’m deeply proud when they earn fellowships, win research awards, or take on leadership roles—but equally when I see them become thoughtful, confident researchers and mentors in their own right.
What’s next on the research horizon for you?
I recently received an NSF CAREER award to study what I refer to as the “safety landscape.” The goal is to gain a deeper understanding of how context, interaction history, social norms, and human demonstrations influence our perception of safety. By characterizing this landscape, we can more systematically evaluate the safety of new scenarios, generate diverse safety-critical data for training and testing, and design robot planning algorithms that are inherently safer and more context-aware.
Can you share some interesting work that you read about recently?
I’ve been intrigued by recent advances in flow matching models and their potential applications in robot planning. One particularly interesting paper explored the use of flow matching for ergodic coverage, which reframes how robots can efficiently explore or monitor an area. I’m excited by how these generative modeling techniques could open up new ways of reasoning about robot motion and decision-making.