Abstract:
Household robots need to perform tasks specific to their owner. With learning from demonstration (LfD), a robot can learn new tasks from human demonstrations, without requiring programming skills. This thesis investigates a novel representation of actions that can be learned using only a 3D camera and an object tracker. The action representation is object-based, so it is independent of the morphology of the robot.

Actions are represented by the mean and standard deviation of multiple demonstrated trajectories with six degrees of freedom. The standard deviation serves as a weight factor for the required accuracy of the recognized or synthesized trajectory. Three novel methods proposed in this thesis aim to reduce variance in the demonstrations that is not specific to the action. First, the demonstrations are aligned in time using a novel action signature and a novel time warp algorithm; the latter can approximate the alignment of multiple multidimensional signals in quadratic computing time. The third novel technique is a dynamically optimized choice of reference frame, so that variations in start and end position have little influence on the variance of the trajectory.

This method has been tested on a database of five actions, each demonstrated repeatedly by six subjects. The results show that a 90 percent action recognition rate can be achieved with only three demonstrations in the database. It is also shown that a robot can use this action representation to synthesize four of the five actions with varying object positions.
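To make the representation concrete, the following is a minimal sketch of the mean/standard-deviation action model and the standard deviation acting as an accuracy weight, as described above. It assumes the demonstrations are already time-aligned; all names (build_action_model, weighted_distance, the (K, T, 6) layout) are illustrative assumptions, not the thesis's actual implementation.

```python
import numpy as np

def build_action_model(demos):
    """Build an action model from time-aligned demonstrations.

    demos: array of shape (K, T, 6) -- K aligned demonstrations, each a
    trajectory of T poses with six degrees of freedom, expressed in an
    object-based reference frame. Returns the per-timestep mean
    trajectory and its standard deviation.
    """
    demos = np.asarray(demos, dtype=float)
    mean = demos.mean(axis=0)        # (T, 6) average trajectory
    std = demos.std(axis=0) + 1e-6   # (T, 6) spread; epsilon avoids /0
    return mean, std

def weighted_distance(trajectory, mean, std):
    """Distance of a candidate trajectory to an action model.

    Deviations are divided by the per-timestep standard deviation, so
    segments that were demonstrated consistently (small std) demand high
    accuracy, while high-variance segments are weighted down.
    """
    return np.mean(((trajectory - mean) / std) ** 2)

# Usage: recognize a query trajectory by picking the closest model.
# models = {name: build_action_model(d) for name, d in demo_db.items()}
# best = min(models, key=lambda n: weighted_distance(query, *models[n]))
```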
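For intuition about the time alignment step, below is standard pairwise dynamic time warping between two multidimensional signals, shown only as a reference point: the thesis proposes a novel algorithm that approximates the joint alignment of multiple signals, which plain pairwise DTW does not do. For signals of length T, this runs in O(T^2), the quadratic bound mentioned above.

```python
import numpy as np

def dtw(a, b):
    """Align two signals a (n, d) and b (m, d); return the alignment cost."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])   # frame-to-frame distance
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]
```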