Autonomous Exploration Using 3D Maps Semantically Augmented By Deep Learning
Autonomous exploration is a well-known problem in robotics and is highly relevant to urban search and rescue (USAR) scenarios. To explore its surroundings, a robot needs a map of its current environment. The rescue robot GETjag can already build a detailed 3D map of the environment in the form of a point cloud, but this map contains no semantic information about the scene: the robot cannot, for example, distinguish a door from a wall. Since such semantic knowledge opens up new possibilities, this thesis aims to develop a system that provides scene understanding by adding semantic information to a 3D map, and that uses this semantically enriched map to improve the exploration behavior. The system will be validated by testing both the point-wise classification and the exploration behavior in simulated and real-world environments.