Human beings are the central elements of the real world. Intelligent machines (e.g., autonomous vehicles and robots) should be socially aware in the human-populated world, so perceiving humans is critical for robotics and autonomous driving. However, training autonomous machines in the real world with humans involved is impossible at scale, owing to the difficulty of capturing the diversity of human behaviors and the resulting changes in environments, as well as human-safety concerns during interaction. In recent years, simulation environments have emerged as a promising way to train autonomous systems. However, these environments act like ghost cities: human simulation is not included. Now, the robotics and autonomous driving domains face a common inflection point. On the one hand, the advent of new data, representations, and methodologies for building virtual humans has opened up new avenues for human simulation. This relatively new line of work has made great progress on various computer vision and computer graphics tasks, particularly realistic and fast human rendering, reconstruction, animation, and motion synthesis. On the other hand, there is no established paradigm for combining virtual humans with autonomous systems. An intriguing question has thus surfaced: can progress in virtual humans bring a new revolution in robotics and autonomous driving? In this workshop, we aim to present pioneering opinions and discussions about how virtual humans will function in robotics and autonomous driving. Two critical questions centered on virtual humans will be discussed: (1) What are the present and future of virtual humans? (2) Will virtual humans play a role in robotics and autonomous driving, and if so, how?


  • Short paper submission is open! (May 1, 2024)
  • Long paper submission is open! (March 11, 2024)
  • Workshop website is launched. (March 10, 2024)
  • POETS write: Hello CVPR'24! (January 20, 2024)


The workshop will take place on 17 June 2024, from 12:00 to 17:45 PDT.

The poster session will be held in the Arch Building Exhibit Hall. Oral, keynote, and panel sessions will take place in Summit 334.

  • 12:00 - 13:00 Poster Presentations
  • 12:50 - 13:00 Welcome & Introduction
  • 13:00 - 13:20 Oral Session-1
  • 13:20 - 13:50 Keynote-1: Steve Seitz
  • 13:50 - 14:20 Keynote-2: Siyu Tang
  • 14:20 - 14:50 Keynote-3: Roozbeh Mottaghi
  • 14:50 - 15:05 Oral Session-2 & Poster Presentations
  • 15:05 - 15:20 Poster Presentations & Coffee Break
  • 15:20 - 15:50 Keynote-4: Jiajun Wu/Jiaman Li
  • 15:50 - 16:20 Keynote-5: Justus Thies
  • 16:20 - 16:50 Highlight: Wayne Wu/Kwan-Yee Lin
  • 16:50 - 17:05 Oral Session-3
  • 17:20 - 17:45 Panel: Bolei Zhou; Roozbeh Mottaghi, Jiajun Wu, Michael J. Black

Open Questions

Invited Speakers and Panelists

Steve Seitz

University of Washington

Siyu Tang

ETH Zürich

Jiajun Wu

Stanford University

Justus Thies

TU Darmstadt

Michael J. Black

Max Planck Institute for Intelligent Systems



Kwan-Yee Lin

University of Michigan

Wayne Wu

University of California, Los Angeles

Bolei Zhou

University of California, Los Angeles

Matthias Nießner

Technical University of Munich

Stella X. Yu

University of Michigan