Master's Thesis MSTR-2019-111

Bibliographic data
Balachandra Midlagajni, Niteesh: Learning object affordances using human motion capture data.
Universität Stuttgart, Faculty of Computer Science, Electrical Engineering and Information Technology, Master's Thesis No. 111 (2019).
58 pages, English.
Abstract

When interacting with their environment, humans model action possibilities directly in the product space of their own capabilities and the spatial configuration of their body and the environment. This idea of an intuitive, perceptual representation of the action possibilities in an environment was hypothesized and discussed by the psychologist J. J. Gibson, who called such possibilities affordances. The goal of this thesis is to build an algorithmic framework to learn and encode human object affordances from motion capture data. To this end, we collect motion capture data in which human subjects perform pick-and-place activities in a scene. Using the collected data, we develop neural-network models that learn graspability and placeability affordances while also capturing the uncertainty of their predictions. We achieve this by modeling affordances within the probabilistic framework of deep learning. Our models predict grasp and place densities accurately, in the sense that the ground truth always lies within the predicted confidence interval. Furthermore, we develop a system that integrates our models into a real-time application, producing affordance features in a live setting and visualizing the densities as heatmaps in real time.
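As a rough illustration of the kind of uncertainty-aware density prediction the abstract describes, the sketch below uses Monte Carlo dropout to produce a mean affordance heatmap together with a per-cell confidence interval. This is a minimal sketch under stated assumptions: the network shape, feature dimension, grid size, and function names (AffordanceDensityNet, predict_with_uncertainty) are illustrative and hypothetical, not the architecture used in the thesis.

# Minimal sketch: uncertainty-aware affordance density prediction via
# Monte Carlo dropout. All dimensions and names are illustrative assumptions.
import torch
import torch.nn as nn

class AffordanceDensityNet(nn.Module):
    """Maps a feature vector (e.g. derived from mocap and scene state)
    to a normalized density over a 2D spatial grid."""
    def __init__(self, in_dim: int = 64, grid: int = 32, p: float = 0.2):
        super().__init__()
        self.grid = grid
        self.net = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(), nn.Dropout(p),
            nn.Linear(256, 256), nn.ReLU(), nn.Dropout(p),
            nn.Linear(256, grid * grid),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        logits = self.net(x)
        # Softmax over grid cells -> a normalized spatial density (heatmap).
        return torch.softmax(logits, dim=-1).view(-1, self.grid, self.grid)

@torch.no_grad()
def predict_with_uncertainty(model: nn.Module, x: torch.Tensor, n_samples: int = 50):
    """Run stochastic forward passes with dropout kept active to obtain a
    predictive mean density and a per-cell standard deviation."""
    model.train()  # keep dropout active at inference time (MC dropout)
    samples = torch.stack([model(x) for _ in range(n_samples)])  # (T, B, G, G)
    return samples.mean(dim=0), samples.std(dim=0)

if __name__ == "__main__":
    model = AffordanceDensityNet()
    features = torch.randn(1, 64)      # stand-in for mocap-derived features
    mean, std = predict_with_uncertainty(model, features)
    # A rough 95% interval per grid cell; the abstract's claim is that the
    # ground-truth density falls inside such an interval.
    lower, upper = mean - 1.96 * std, mean + 1.96 * std
    print(mean.shape, std.shape)       # torch.Size([1, 32, 32]) twice

The mean heatmap can be rendered directly for the real-time visualization the abstract mentions, while the standard deviation gives the confidence band around each prediction.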

Full text and other links
Full text
Department(s): Universität Stuttgart, Institut für Parallele und Verteilte Systeme, Machine Learning and Robotics
Supervisors: Toussaint, Prof. Marc; Mainprice, Dr. Jim; Kratzer, Philipp
Entry date: March 21, 2022