<p>The physical world is inherently non-rigid and dynamic. However, many modern robotic modeling and perception stacks assume rigid, static environments, limiting their robustness and generality in the real world. Non-rigid objects such as ropes, cloth, plants, and soft containers are common in daily life, and many environments, including sand, fluids, flexible structures, and dynamic scenes, exhibit deformability and history dependence that challenge traditional assumptions in robotics.</p>
<p>This workshop aims to bring together researchers working across robotics, computer vision, and machine learning to address the challenges of perception, representation, and interaction in non-rigid, dynamic worlds. Core questions include:</p>
<ul>
<li>How might we learn to robustly perceive, reconstruct, and represent non-rigid objects in 3D, particularly from sparse or noisy sensor data?</li>
<li>How might simulation tools, foundation models, and scene-specific reconstruction methods (e.g., 3D Gaussian splatting) be used to represent non-rigid, dynamic worlds?</li>
<li>How might we actively, interactively, or adaptively perceive the world to reveal highly uncertain, history-dependent object or environment properties?</li>
<li>How might we achieve reliable robot manipulation and interaction in complex, real-world scenarios involving non-rigid objects with varying topology, material properties, and appearance?</li>
<li>How might we design representation and perception strategies to handle complex object appearance or material properties such as translucency (e.g., glass), high reflectance (e.g., metal), or particulate behavior (e.g., sand)?</li>
</ul>
<p>This workshop comes at a pivotal moment: advances in foundation models, scalable data collection, differentiable physics, and 3D modeling and reconstruction create new opportunities to represent and interact with non-rigid, dynamic worlds. At the same time, real-world applications increasingly demand systems that can handle soft, articulated, or granular dynamic objects. The workshop will convene researchers from robotics, computer vision, and machine learning to tackle shared challenges in perception, representation, and interaction in non-rigid worlds. By surfacing emerging solutions and promoting cross-disciplinary collaboration, it aims to advance more generalizable models grounded in data and physics for real-world robotic interaction. We welcome contributions from the robotics, 3D vision, physics simulation, material modeling, and learning communities interested in non-rigid and dynamic interaction.</p>
</div>