Luis Figueredo is bringing safe and reliable robots closer to humans
He is building intelligent robots that can understand, learn and perform tasks efficiently while prioritising safety through geometry-aware, certified control behaviour.
What is your role and what does it involve?
I’m an Assistant Professor in Assistive Robotics and Manipulation Control and Planning at the University of Nottingham (UoN), where I lead research spanning AI-driven methods, physical interaction and human factors, such as biomechanics and ergonomics, grounded in robust and reliable advanced control theory and geometric methods. My aim is to build the underlying physical, spatial and human-safety conditions that allow robots to thrive, learn, perform complex tasks, and interact with humans. I lead a small, enthusiastic team at UoN and also at the Technical University of Munich, where I’m an associate fellow overseeing large-scale robotics initiatives for older adults. Although my role encompasses various tasks, such as teaching and project coordination, my personal priority is ensuring my team thrives: helping them to excel in their careers while building a strong understanding of the crossroads of rigorous mathematics, control, and novel AI approaches, all while truly enjoying their research journey. This is my most pressing, important, and preferred responsibility.
Recently, I also began serving with the IEEE RAS Chapter in the UK and Ireland, aiming to support our robotics community by creating new mechanisms for early-career and underrepresented members to prosper. I collaborate closely with colleagues who have deeper expertise than I do, including the RAS WIE affinity groups, to make this vision a reality.
What do you find most interesting or enjoyable about your work?
The most fulfilling aspect of my work is its inherently interdisciplinary nature: solving complex problems in robotics requires a broad perspective. Robotics is one of those fields where a narrow focus won’t take you far; real progress comes from blending scientific methods with engineering principles and integrating data-driven approaches that only truly work when anchored in solid model-based foundations. The most exciting challenges arise at the intersection of different fields, where solutions emerge from a combination of methods that are ultimately executed through strong control theory. That fusion is what makes robotics so fascinating to me and keeps me excited every day.
There’s also a unique thrill in seeing fundamental theoretical concepts—like geometric representations of motion or robust control laws—translate seamlessly into real-world applications. There’s something incredibly satisfying about watching simple equations, sketched on a whiteboard, come to life through clean, efficient code—without the need for tweaks, parameter tuning, or hand-engineered conditions. I have a deep appreciation for simplicity in robotics. Too often, we get caught up in overly complex solutions that we know, deep down, will only work under ideal lab conditions—perfect lighting, sunny days, and plenty of coffee. As Dijkstra put it, “Simplicity is a prerequisite for reliability.” Simple methods are often overlooked, but they are the ones that stand the test of time, proving to be the most robust and reliable in real-world scenarios. I love the challenge of taking powerful AI and data-driven methods and grounding them in straightforward, reliable robot behaviours—harnessing the depth of learning without sacrificing the fundamental stability that simple model-based and robot control solutions provide.
Tell us about your research and your long-term goal
My research focuses on developing intelligent robots that seamlessly integrate into human environments, physically interacting with their surroundings while assisting and collaborating with people in real-world scenarios. To achieve this, I take a comprehensive, multi-layered approach built on three core pillars: (i) ensuring safety through reactive planning, pre-collision control strategies, post-collision feedforward actions, and certified controllers that guarantee expected human-safety behaviours; (ii) enabling robots to understand, learn, and execute complex tasks by leveraging coordinate-invariant representations and Riemannian structures for skill learning and transfer across domains; and (iii) developing AI-based multimodal strategies for intuitive, natural, and adaptable human-robot interaction, anchored in efficient task representations and safety-certified controllers. My work spans both foundational control and high-level adaptation, ensuring robots meet strict safety and performance requirements while seamlessly collaborating with and adjusting to individual human needs. At the core of my research is the goal of simplifying robot control and building more efficient models that enable robots to perform complex tasks while interacting physically with both the environment and humans. My long-term vision is to eliminate the knowledge barrier for human-robot interaction, creating modular and intuitive tools that allow non-experts to explicitly or implicitly describe tasks, demonstrate complex behaviours, and collaborate with robots under a layer of robust certified controllers, without programming, tuning, pre-training, adaptation, or an understanding of the underlying control and geometric principles.
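To make the idea of a safety-certified controller concrete, here is a minimal sketch in the spirit of a control-barrier-function safety filter: a nominal command passes through a filter that keeps the robot out of an unsafe region. Everything below (the 1-D integrator dynamics, the function name, the gain values) is an illustrative assumption for exposition, not the actual controllers developed in this research.

```python
# Minimal, hypothetical sketch of a safety filter in the spirit of
# control barrier functions (CBFs), for 1-D integrator dynamics x_dot = u.
# All names and values are illustrative assumptions, not the author's method.

def safety_filter(x, u_nominal, x_obstacle=1.0, alpha=2.0):
    """Clamp a nominal velocity command so the barrier h(x) stays non-negative.

    With h(x) = x_obstacle - x (signed distance to an obstacle) and
    x_dot = u, the CBF condition h_dot >= -alpha * h reduces to
    u <= alpha * (x_obstacle - x).
    """
    h = x_obstacle - x   # barrier value: remaining distance to the obstacle
    u_max = alpha * h    # largest forward command that preserves h >= 0
    return min(u_nominal, u_max)

# Far from the obstacle the nominal command passes through unchanged;
# close to it, the filter slows the robot regardless of the nominal input.
```

The key property this toy example illustrates is that the safety layer sits below whatever generates the nominal command, so a learned or demonstrated behaviour can be swapped in on top without re-deriving the safety guarantee.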
What led you to pursue a research career in this field?
Originally from Brazil, my academic path began in pure theoretical control, analyzing the stability of time-delayed systems. In Brazil, mathematics and control theory are strong foundations, but access to modern robotic hardware has traditionally been limited. While I was fascinated by the challenge of constructing complex Lyapunov functions, I always felt something was missing. Moving to MIT was transformative. Suddenly, I was surrounded by brilliant minds who weren’t just proving theorems but turning abstract math into real, breathing robots. For the first time, I saw my control solutions shaping robotic motion in a way that was both robust and reactive. That was my point of no return! I began applying robust and hybrid control methods to robotic manipulators, and seeing those ideas take physical form was exciting. It reminded me of da Vinci’s perspective on mechanics: “Mechanics is the paradise of the mathematical sciences because by means of it, one comes to the fruits of mathematics.” Robotics, I realized, offered that same paradise, where control theory was no longer just an elegant framework. Now I also see robotics as a gateway to countless other fields, bridging AI, biomechanics, human interaction, and engineering, constantly pushing researchers beyond their own domains.
What working achievement and/or initiative are you most proud of?
Without a doubt, the achievement I’m most proud of is creating an environment where my team can thrive—helping the students I mentor develop strong careers, grow into independent researchers, and, in many cases, become collaborators and friends. Over the years, I’ve supervised students from diverse backgrounds, with nine MSc students successfully completing their degrees—nearly 90% publishing in top conferences and journals. Many have gone on to leading institutions like CMU, MIT, KTH, KCL, and Microsoft, while my current PhD students have received notable awards for their research contributions. Seeing my students gain confidence, refine their ideas, and shape their research paths together has been one of the most rewarding aspects of my still-early career.
What’s next on the research horizon for you?
This is an exciting time for robotics—our field is at a turning point where we need to refine our methods, establish more formal definitions, and improve how we experiment and evaluate robotic systems. I’m deeply passionate about these discussions, but for now, one of the most natural directions for my research is pushing robotics toward greater adaptability and usability, making complex behaviors more intuitive and accessible for both experts and non-experts.
A key step in this direction is certified geometric control driven by multimodal AI, enabling robots to interpret human intent through visual demonstrations, natural language, or implicit constraints. My goal is to create systems where robots can not only capture human demonstrations with formal guarantees but also translate them into robust, explainable representations—without tedious tuning or manual scenario engineering. From the human perspective, I am increasingly focusing on personalized, adaptive physical human-robot interaction, where robots dynamically adjust their control strategies based on individual biomechanics and preferences. This will allow robots to generate legible, robust behaviors that optimize for human comfort, making them not just functional, but truly collaborative and intuitive partners.
Can you share some interesting work that you read about recently?
Robotics is an incredibly active field, with new and exciting research emerging all the time. Lately, I’ve been exploring tactile modalities, focusing on how to mathematically represent physical and spatial variables, understand their interplay, and leverage this information for estimation and control. While diving into this area, I came across the work of Zixi Liu and Robert Howe on the effects of tactile sensor resolution, a topic I had been thinking about myself. You know, when you think you have a brilliant idea and someone has already done it! Those are really good papers, though. Some might find it discouraging to see their ideas already explored, but it should be exactly the other way around: it means the idea was indeed a good one, and in reading the work you are sure to find new avenues, a foundation to build upon rather than a limitation. I’m now looking at how to expand on their insights using my own geometric perspective, pushing toward better representations for control and perception.
How do you balance research with the other demands of your position?
Who does?! It’s more of a nonlinear optimization problem with way too many constraints! Still, with a good team, and some slack here and there, everything is possible.