Bibliography | Balachandra Midlagajni, Niteesh: Learning object affordances using human motion capture data. University of Stuttgart, Faculty of Computer Science, Electrical Engineering, and Information Technology, Master's Thesis No. 111 (2019). 58 pages, English.
|
Abstract | When interacting with their environment, humans model action possibilities directly in the product space of their own capabilities, using the spatial configuration of their body and the environment. This idea of an intuitive, perceptual representation of the possibilities in an environment was hypothesized and discussed by the psychologist J. J. Gibson, who called such possibilities affordances. The goal of this thesis is to build an algorithmic framework to learn and encode human object affordances from motion capture data. To this end, we collect motion capture data in which human subjects perform pick-and-place activities in a scene. Using the collected data, we develop neural network models that learn graspability and placeability affordances while also capturing the uncertainty in their predictions. We achieve this by modeling affordances within the probabilistic framework of deep learning. Our models predict grasp and place densities accurately, in the sense that the ground truth always lies within the predicted confidence interval. Furthermore, we develop a system that integrates our models for real-time application, producing affordance features in a live setting and visualizing the densities as heatmaps in real time.
|
Full text and other links | Full text
|
Department(s) | University of Stuttgart, Institute of Parallel and Distributed Systems, Machine Learning and Robotics
|
Supervisor(s) | Toussaint, Prof. Marc; Mainprice, Dr. Jim; Kratzer, Philipp |
Entry date | March 21, 2022 |