One-Shot Learning Techniques for the Detection of Specific Objects
Most object detection algorithms are based on classifiers, which require a large number of training images of the object of interest. As a result, training these classifiers is costly and time-consuming. In this thesis, an algorithm that reduces the number of required training images is to be implemented and evaluated. The goal is to detect the object of interest robustly under varying conditions, even if only a single image is provided for training.
The algorithm to be implemented in this thesis will apply geometric image transformations to the available training images in order to generate artificial training data. The artificial training data thus simulates different views of the object of interest. By adding this artificial data to the pool of training images, the detection accuracy is expected to improve.
The classifier will be trained with features extracted from the training images, the generated training data, and negative examples sampled from the background of the input images. Initially, only global features such as HOG (histograms of oriented gradients) will be considered. If applicable, local features will be added at a later stage. The algorithm will be implemented in C/C++.