
Kento KAWAHARAZUKA
Univ. of Tokyo
Designing Intelligence from Embodiment: Musculoskeletal, Wire-driven, and Open-Source Robots
Abstract: To understand and design intelligence, we have taken embodiment as our starting point in robot development. Our research began with complex and redundant musculoskeletal humanoids that mimic human muscle structures, evolved into wire-driven robots that extract and utilize the functional advantages of muscles, and has further developed into open-source robotic platforms that allow anyone to learn seamlessly from body to intelligence. In this talk, I will introduce a series of robots and intelligent systems developed through this embodiment-centered approach, and discuss future directions in robotics from the perspective of designing intelligence through the body.
Biography:
K. Kawaharazuka received his B.E., M.S., and Ph.D. in Information Science and Technology from The University of Tokyo in 2017, 2019, and 2022, respectively. He became a Project Assistant Professor in 2022 and a Lecturer at the Next Generation AI Research Center, The University of Tokyo, in 2025. His research focuses on musculoskeletal humanoid design and control, and intelligent robotic systems using deep learning and foundation models.

Sungjoon CHOI
Korea Univ.
Towards Human-Centered Robotics
Abstract: Recent advancements in large language models and vision-language models have opened up new possibilities for human-centered robotics. This presentation explores the potential of these cutting-edge technologies to enhance human-robot interaction and enable robots to better serve human needs. First, we will examine the rapid development of humanoid robots and their potential to take over human labor in various domains. Second, we will discuss the integration of artificial intelligence techniques with these humanoid robots to facilitate their deployment in everyday-life scenarios. Finally, I will showcase our laboratory’s ongoing research projects, including utilizing Vision Language Models (VLMs) for human-robot interaction, combining VLMs with simulators, and developing robot agents with distinct personas.
Biography:
2020-09 ~ Present: Assistant/Associate Professor at Korea University
2025-01 ~ Present: Principal Scientist at RLWRLD
2018-06 ~ 2020-08: Postdoctoral Associate at Disney Research
2018-02 ~ 2018-05: Research Scientist at Kakao Brain
2012-09 ~ 2018-02: Ph.D. in EECS, Seoul National University

Jung Yun Bae
Michigan Technological Univ.
Deploying Multi-Robot Systems for Resilient Human Ecosystems
Abstract: As autonomous systems transition from controlled laboratory settings into the unpredictable “wilds” of real-world deployment, such as agricultural automation and deep-sea exploration, the central challenge shifts from individual navigation to robust, high-stakes coordination among heterogeneous robotic teams. This keynote explores the multi-robot systems designed to address two pressing global challenges: ensuring food security under climate volatility and enabling exploration in remote, extreme environments.
Drawing on recent advances in modular robotic fleets for specialty crop harvesting and entanglement-free navigation for tethered underwater vehicles, Dr. Bae will examine the shift from classical heuristic-based planning toward integrating Large Language Models (LLMs) for multi-robot task allocation and path planning. The talk highlights how generative AI can be leveraged to manage both structural and functional heterogeneity across diverse applications, ranging from autonomous lavender farms in Michigan to large-scale search-and-rescue operations.
The keynote will present scalable, computationally efficient algorithms that enable real-time workload balancing, conflict avoidance, and optimized resource utilization. By bridging theoretical multi-agent coordination with practical, low-cost deployment, this session outlines a roadmap for robotic systems that are not only ubiquitous but also resilient, adaptive, and essential to a sustainable human future.
Biography:
Dr. Jung Yun Bae is an Assistant Professor in the Departments of Mechanical and Aerospace Engineering and Applied Computing at Michigan Technological University, where she leads research in multi-agent systems and autonomous navigation. Her work focuses on coordinating heterogeneous autonomous vehicles and developing algorithms that enable robotic fleets to operate reliably in complex, real-world environments.
Dr. Bae has led multiple research projects funded by the U.S. Department of Energy (DOE), NASA, NVIDIA, and the Michigan Department of Agriculture and Rural Development. Her research addresses critical challenges in climate resilience, economic sustainability, and infrastructure optimization through scalable multi-robot systems, with a strong emphasis on cost-effective deployment.
She currently serves as an Associate Editor for the International Conference on Robotics and Automation (ICRA) and the journal Intelligent Service Robotics. Dr. Bae earned her Ph.D. in Mechanical Engineering from Texas A&M University, College Station, TX, and previously held a Research Professor position at Korea University, Seoul, Korea.

Diego Paez-Granados
ETH Zürich
From Episodic Autonomy to Continuous Co-Adaptation: Exploring Reinforcement Learning for Long-Term Human-Robot Interaction
Abstract: Assistive robots are increasingly deployed in real-world environments, yet most shared-control and learning-based systems remain designed for short-term interaction, static users, and episodic decision-making. This mismatch becomes critical in long-term assistive scenarios, where human behavior, capabilities, and physiological state evolve over weeks, months, or years.
In this talk, I will argue that long-term human–robot interaction cannot be solved by improved perception or policy optimization alone but requires a shift toward continuous co-adaptation between humans and robots. I will present our work on shared-control reinforcement learning frameworks that embed human behavioral state—captured through multimodal sensing—directly into the learning and control loop.
Using examples from assistive mobility and healthcare robotics, I will show how physiological signals, movement quality, and contextual behavior provide slow but critical state variables that enable robots to adapt policies over long time scales. By grounding shared control in continuous human-state estimation, reinforcement learning moves from short-horizon assistance to sustained, personalized interaction.
Biography:
Dr. Paez is the Head of the Spinal Cord Injury and Artificial Intelligence Lab (SCAI Lab) at ETH Zürich and Swiss Paraplegic Research (SPF) in Switzerland. With a focus on personalised healthcare, his lab utilizes advanced machine learning techniques and wearable sensing to develop assistive decision-making systems that model disease onset and develop digital biomarkers.
Dr. Paez holds a PhD in Bioengineering and Robotics from Tohoku University, Japan, and has contributed his expertise to renowned institutions including the University of Tsukuba, Japan, and EPFL, Switzerland. Additionally, he holds a visiting faculty position at the University of Tsukuba in Japan.
Passionate about enhancing healthcare, Dr. Paez is dedicated to creating patient digital twins and leveraging them to develop preventive technologies that support healthcare workers and caregivers. His diverse research interests span human modelling, human-robot interaction control, explainable models for machine learning in healthcare applications, and biosignal processing.
Driven by his research and tech transfer achievements, Dr. Paez has co-founded Qolo Inc., a startup based in Japan specialising in personal mobility and rehabilitation devices.
https://scai.ethz.ch/people/diego-paez-granados.html
