The complexity of natural scenes and the amount of information acquired by terrestrial laser scanners make registration among scans a complex problem. The problem becomes even more challenging when two individual scans are captured from significantly different viewpoints (wide baseline). Since laser-scanning instruments are nowadays often equipped with an additional image sensor, it stands to reason to exploit the image content to improve the registration of 3D scanning data. In this paper, we present a novel improvement to existing feature techniques that enables automatic alignment between two widely separated 3D scans. The key idea is to extract dominant planar structures from the 3D point clouds and then use the recovered 3D geometry to improve the performance of 2D image feature extraction and matching. Because they exploit the underlying 3D structure, the resulting features are highly discriminative and robust to perspective distortions and viewpoint changes. Using these viewpoint-invariant features, corresponding 3D points are automatically linked through wide-baseline image matching. Initial experiments with real data demonstrate the potential of the proposed method for challenging wide-baseline alignment of 3D scanning data. © 2010 IEEE.
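A minimal sketch of the first step of the key idea described above — recovering a dominant planar structure from the point cloud and a rotation that makes that plane fronto-parallel (the warp that removes perspective distortion before feature extraction is then the plane-induced homography H = K R K⁻¹). This is an illustrative reconstruction under stated assumptions, not the paper's implementation; the function names, thresholds, and the synthetic data are all hypothetical.

```python
import numpy as np

def fit_plane_ransac(points, iters=200, thresh=0.01, rng=None):
    """RANSAC fit of a dominant plane n.x + d = 0 to a 3D point cloud.

    Returns the (unit normal, offset) with the most inliers and the
    boolean inlier mask. Thresholds here are illustrative, not the
    paper's settings.
    """
    rng = np.random.default_rng(rng)
    best, best_inliers = None, None
    for _ in range(iters):
        p = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p[1] - p[0], p[2] - p[0])
        if np.linalg.norm(n) < 1e-9:      # degenerate (collinear) sample
            continue
        n = n / np.linalg.norm(n)
        d = -n @ p[0]
        inliers = np.abs(points @ n + d) < thresh
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best, best_inliers = (n, d), inliers
    return best, best_inliers

def rectifying_rotation(n):
    """Rotation R with R @ n = +z: viewing the plane along its normal
    makes it fronto-parallel, so features extracted on the warped image
    are invariant to the original viewpoint's perspective distortion."""
    z = np.array([0.0, 0.0, 1.0])
    v, c = np.cross(n, z), n @ z
    s = np.linalg.norm(v)
    if s < 1e-9:                           # normal already (anti-)parallel to z
        return np.eye(3) if c > 0 else np.diag([1.0, -1.0, -1.0])
    vx = np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])
    return np.eye(3) + vx + vx @ vx * ((1 - c) / s ** 2)

# Synthetic demo (hypothetical data): points on the plane
# z = 0.5*x + 0.2*y + 1 with small noise, plus random outliers.
rng = np.random.default_rng(0)
xy = rng.uniform(-1, 1, (200, 2))
plane_pts = np.column_stack([xy, 0.5 * xy[:, 0] + 0.2 * xy[:, 1] + 1.0])
plane_pts += rng.normal(0, 0.002, plane_pts.shape)
outliers = rng.uniform(-1, 2, (40, 3))
cloud = np.vstack([plane_pts, outliers])

(n, d), inliers = fit_plane_ransac(cloud, rng=1)
R = rectifying_rotation(n)
```

In a full pipeline along the lines the abstract describes, the recovered rotation would be turned into an image warp, descriptors (e.g. SIFT) would be extracted on the rectified views of both scans, and matched 2D features would be lifted back to their 3D points to align the scans.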