Introduction
This workshop explores the pathway toward building “Embodied Humans”: intelligent humanoid agents capable of both physical action and human-like cognitive reasoning, a vision in which the boundary between digital avatars and physical humanoid robots dissolves through their co-evolution across virtual and real worlds. We will examine the possibility of this synergy through three core questions: 1) How can humanoid robots learn foundational “genes” from avatars? 2) How can virtual humans gain physical plausibility from robots’ embodiment to enrich realism and interactivity? 3) How can both systems develop the autonomy to perceive, plan, and act in dynamic, open-ended environments?

Featuring academic researchers and industry experts as invited speakers and panelists, the workshop brings together perspectives from virtual avatar modeling and humanoid robot learning to explore how systems on both ends are progressing toward human-like capacities for perception, reasoning, and movement. Through advanced techniques such as reinforcement learning, cognition modeling, motion and structure perception, geometric representations, multimodal simulation, and large language/vision/action models, we aim to understand how virtual humans are evolving beyond surface-level realism and how humanoid robots are advancing beyond pre-scripted skills, enabling both to engage the world with situational understanding, behavioral adaptability, and autonomous intent.

At the heart of this workshop lie two essential questions: What makes a virtual human real, not just to see, but to know? And what does it take for a humanoid robot to not just move, but to become?