Given a stream of raw, multi-modal sensory input, an autonomous robot must continuously decide how to act to achieve a specific task. This requires the robot to map a very high-dimensional space (sensory data) to another high-dimensional space (motor commands). The non-linear relationship between the two can only be captured if we introduce suitable biases and task-specific prior knowledge that structure this mapping. At the same time, these biases have to leave enough flexibility to cope with the expected variability in the robot's task. However, increased model flexibility comes at a price: more open parameters, which must either be tuned manually or learned from a sufficient amount of data.
In this talk, I illustrate this trade-off by analyzing two problems in perception for autonomous robotic grasping and manipulation: (i) learning to grasp objects given only partial and noisy sensory data, and (ii) visual object tracking. I present different approaches to each of these problems, located at different ends of the spectrum between the amount of prior knowledge incorporated in the model and the number of open parameters learned from data. Based on these examples, I conclude by discussing the different ways to include biases and prior knowledge in a model and how to choose a suitable, task-specific trade-off with respect to the number of remaining open parameters.