In this paper we present a novel approach for generating viewpoint-invariant features from single images and demonstrate its application to robust matching across widely separated views in urban environments. Our approach exploits the fact that many man-made environments contain a large number of parallel linear features along several principal directions. We identify the projections of these parallel lines to recover a set of dominant scene planes, and then compute viewpoint-invariant features within rectified views of those planes. A comprehensive set of experiments evaluates the proposed features and demonstrates that: (1) after 3D viewpoint normalization, the feature descriptors become more distinctive and more robust to changes in camera viewpoint; and (2) the features carry robust local information, including patch scale and dominant orientation, that can be used effectively to impose geometric constraints between views. Targeting urban environments, which contain many repetitive structures, we further propose an effective framework that applies these features to challenging wide-baseline matching tasks. (C) 2011 Elsevier Inc. All rights reserved.
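The core rectification step implied by the abstract (projections of parallel scene lines give vanishing points, whose join is the plane's vanishing line; a homography sending that line to infinity removes perspective distortion) can be sketched as follows. This is a minimal illustration of standard affine rectification, not the paper's implementation, and the vanishing-point coordinates below are made-up values standing in for ones estimated from detected line segments.

```python
import numpy as np

def affine_rectify_homography(vp1, vp2):
    """Build a homography that maps the vanishing line of a scene plane
    to the line at infinity (affine rectification).

    vp1, vp2 : homogeneous image coordinates (3-vectors) of the
    vanishing points of two pencils of parallel lines on the plane.
    Assumes the vanishing line does not pass through the image origin,
    so its third coordinate is nonzero.
    """
    l = np.cross(vp1, vp2)          # vanishing line joining the two points
    l = l / l[2]                    # normalize so l = (l1, l2, 1)
    # H maps any point on l to a point with zero third coordinate,
    # i.e. onto the line at infinity.
    H = np.array([[1.0,  0.0,  0.0],
                  [0.0,  1.0,  0.0],
                  [l[0], l[1], 1.0]])
    return H

# Hypothetical vanishing points, e.g. from horizontal building edges
# receding in two different directions:
vp1 = np.array([400.0, 250.0, 1.0])
vp2 = np.array([-900.0, 260.0, 1.0])
H = affine_rectify_homography(vp1, vp2)

# After rectification both vanishing points lie at infinity
# (third homogeneous coordinate is ~0).
print(H @ vp1)
print(H @ vp2)
```

In practice the resulting `H` would be applied to the image region covering the detected plane (e.g. with a perspective warp) before descriptors are computed on the rectified patch.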