Image-based Facade Modeling


In this paper, we propose a semi-automatic image-based approach to facade modeling that uses images captured along streets and relies on structure from motion to recover camera positions and point clouds automatically as the initial stage for modeling. We start by considering a building facade as a flat rectangular plane or a developable surface, with an associated texture image composited from the multiple visible images. The facade is then decomposed and structured into a Directed Acyclic Graph of rectilinear elementary patches. The decomposition is carried out top-down by recursive subdivision, followed by bottom-up merging driven by the detection of architectural bilateral symmetry and repetitive patterns. Each subdivided patch of the flat facade is then augmented with a depth optimized using the 3D point cloud. Our system also allows easy user feedback in the 2D image space on the proposed decomposition and augmentation. Finally, we demonstrate our approach on a large number of facades from a variety of street-side images.
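To illustrate the decomposition described above, the following is a minimal, simplified sketch of the top-down recursive subdivision step. It assumes that candidate horizontal and vertical split lines have already been detected (e.g. from strong image edges); the names `Patch` and `subdivide`, and the policy of cutting at the first available line, are illustrative assumptions, not the paper's actual implementation, which selects splits from edge strength and later merges repetitive patches into a shared DAG structure.

```python
from dataclasses import dataclass, field

@dataclass
class Patch:
    # Axis-aligned rectangle in facade-texture coordinates.
    x: int
    y: int
    w: int
    h: int
    depth: float = 0.0          # relief depth, assigned later from the 3D point cloud
    children: list = field(default_factory=list)

def subdivide(patch, split_lines, min_size=2):
    """Top-down recursive subdivision: cut the patch at detected
    horizontal ('h') / vertical ('v') split lines until no line falls
    strictly inside it or it reaches a minimum size."""
    xs = sorted(p for axis, p in split_lines
                if axis == 'v' and patch.x < p < patch.x + patch.w)
    ys = sorted(p for axis, p in split_lines
                if axis == 'h' and patch.y < p < patch.y + patch.h)
    if (not xs and not ys) or patch.w <= min_size or patch.h <= min_size:
        return patch  # leaf: an elementary patch
    if xs:  # vertical cut (a real system would pick the strongest edge)
        cut = xs[0]
        halves = [Patch(patch.x, patch.y, cut - patch.x, patch.h),
                  Patch(cut, patch.y, patch.x + patch.w - cut, patch.h)]
    else:   # horizontal cut
        cut = ys[0]
        halves = [Patch(patch.x, patch.y, patch.w, cut - patch.y),
                  Patch(patch.x, cut, patch.w, patch.y + patch.h - cut)]
    patch.children = [subdivide(h, split_lines, min_size) for h in halves]
    return patch

def count_leaves(patch):
    """Number of elementary (undivided) patches in the hierarchy."""
    if not patch.children:
        return 1
    return sum(count_leaves(c) for c in patch.children)

# Example: a 10x6 facade with one vertical and one horizontal split line
# yields four elementary patches.
root = subdivide(Patch(0, 0, 10, 6), [('v', 5), ('h', 3)])
print(count_leaves(root))  # -> 4
```

In the full system, the bottom-up merging pass would then detect bilaterally symmetric or repeated leaf patches and let them share structure, turning this tree into the directed acyclic graph mentioned in the abstract.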
This work was supported by Hong Kong RGC Grants 618908, 619107, 619006, and RGC/NSFC N-HKUST602/05. Ping Tan is supported by Singapore FRC Grant R-263-000-477-112. We thank the University of North Carolina at Chapel Hill and the University of Kentucky for the Baity Hill Drive data set from Chapel Hill.