The Photogrammetric Record | Issue 3, 2024 Now Online



Issue 3, 2024 of The Photogrammetric Record (Volume 39, Issue 187) has been formally published, carrying eight academic papers. Experts and scholars are welcome to visit the journal website for the latest news, to browse and discuss the articles, and to submit their own work.

The Photogrammetric Record is published by Wiley and is one of the official journals of the Remote Sensing and Photogrammetry Society (UK). It carries original, independent articles on photogrammetry, 3D imaging, remote sensing, computer vision, laser scanning, geographic information and other geoinformation-related fields, reflecting the advances of modern geoinformatics. The journal is co-edited by Professor Yongjun Zhang, Dean of the School of Remote Sensing and Information Engineering at Wuhan University, and Professor Debra F. Laefer of New York University, supported by an editorial board of more than 40 senior international experts who together ensure efficient and impartial peer review. The papers in this issue are listed below:


Journal issue page:

https://onlinelibrary.wiley.com/toc/14779730/2024/39/187 


FRONTISPIECE

Comparison of 3D models with texture before and after restoration.

Real-scene 3D building models were reconstructed using three oblique photogrammetric image datasets from the official website of the Wingtra drone.

Top: Example 1: Comparison of a 3D building model in Solothurn, Switzerland before and after texture restoration. (a) represents the model before optimization. (b) shows a comparison of details. (c) represents the optimized model.

Middle: Example 2: Comparison of a 3D building model in the Digital Twin of Zurich City before and after texture restoration. (d) represents the model before optimization. (e) shows a comparison of details. (f) represents the optimized model.

Bottom: Example 3: Comparison of a 3D building model in Solothurn, Switzerland before and after texture restoration. (g) represents the model before optimization. (h) shows a comparison of details. (i) represents the optimized model.

For a full report see: Lv, K., Chen, L., He, H., Zhou, F. & Yu, S., 2024. Optimisation of real-scene 3D building models based on straight-line constraints. The Photogrammetric Record, 39, 680–704.

ORIGINAL ARTICLES

A photogrammetric approach for real-time visual SLAM applied to an omnidirectional system

Garcia, T.A.C., Tommaselli, A.M.G., Castanheiro, L.F. & Campos, M.B. (2024) A photogrammetric approach for real-time visual SLAM applied to an omnidirectional system. The Photogrammetric Record, 39, 577–599. Available from: https://doi.org/10.1111/phor.12494

Abstract: The problem of sequential estimation of the exterior orientation of imaging sensors and the three-dimensional environment reconstruction in real time is commonly known as visual simultaneous localisation and mapping (vSLAM). Omnidirectional optical sensors have been increasingly used in vSLAM solutions, mainly for providing a wider view of the scene, allowing the extraction of more features. However, dealing with unmodelled points in the hyperhemispherical field poses challenges, mainly due to the complex lens geometry entailed in the image formation process. To address these challenges, rigorous photogrammetric models that appropriately handle the geometry of fisheye lens cameras can be used. Thus, this study presents a real-time vSLAM approach for omnidirectional systems adapting ORB-SLAM with a rigorous projection model (equisolid-angle). The implementation was conducted on the Nvidia Jetson TX2 board, and the approach was evaluated using hyperhemispherical images captured by a dual-fisheye camera (Ricoh Theta S) embedded into a mobile backpack platform. The trajectory covered a distance of 140 m, with the approach demonstrating accuracy better than 0.12 m at the beginning and achieving metre-level accuracy at the end of the trajectory. Additionally, we compared the performance of our proposed approach with a generic model for fisheye lens cameras.
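
For reference, the equisolid-angle model named in the abstract maps the angle θ between an incident ray and the optical axis to a radial image distance r = 2f·sin(θ/2). A minimal, illustrative NumPy sketch follows (not the authors' implementation; the focal length f and a principal-point-centred image frame are assumptions):

```python
import numpy as np

def equisolid_project(point_cam, f):
    """Project a 3D point (camera frame, optical axis = +Z) with the
    equisolid-angle fisheye model: r = 2 * f * sin(theta / 2)."""
    X, Y, Z = point_cam
    theta = np.arctan2(np.hypot(X, Y), Z)  # angle off the optical axis
    r = 2.0 * f * np.sin(theta / 2.0)      # radial distance in the image plane
    phi = np.arctan2(Y, X)                 # azimuth around the axis
    return r * np.cos(phi), r * np.sin(phi)
```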


This study presents a real-time vSLAM for omnidirectional systems, adapting ORB-SLAM with a rigorous equisolid-angle projection model. Analysing a 140 m trajectory, it shows fewer discrepancies using hyperhemispherical images, outperforming a generic model like EUCM. Accuracy started at sub-0.12 m, reaching metre-level at the end.

Keywords: backpack systems, fisheye lenses, omnidirectional system, ORB-SLAM, real time, Ricoh Theta S, SLAM

A disparity-aware Siamese network for building change detection in bi-temporal remote sensing images

Li, Y., Li, X., Chen, W. & Zhang, Y. (2024) A disparity-aware Siamese network for building change detection in bi-temporal remote sensing images. The Photogrammetric Record, 39, 528–548. Available from: https://doi.org/10.1111/phor.12495

Abstract: Building change detection has various applications, such as urban management and disaster assessment. Along with the exponential growth of remote sensing data and computing power, an increasing number of deep-learning-based remote sensing building change detection methods have been proposed in recent years. The overwhelming majority of existing methods handle the change detection of low-rise buildings well. By contrast, high-rise buildings often present a large disparity in multitemporal high-resolution remote sensing images, which degrades the performance of existing methods dramatically. To alleviate this problem, we propose a disparity-aware Siamese network for detecting building changes in bi-temporal high-resolution remote sensing images. The proposed network utilises a cycle-alignment module to address the disparity problem at both the image and feature levels. A multi-task learning framework with joint semantic segmentation and change detection loss is used to train the entire deep network, including the cycle-alignment module, in an end-to-end manner. Extensive experiments on three publicly available building change detection datasets demonstrate that our method achieves significant improvements on datasets with severe building disparity and, simultaneously, state-of-the-art performance on datasets with minimal building disparity.
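
As orientation for readers new to Siamese change detection, the sketch below shows the generic pattern the abstract builds on: a weight-shared encoder embeds both epochs and change is predicted from the feature difference. It is a minimal PyTorch skeleton, not the authors' network; in particular the cycle-alignment module that compensates building disparity is omitted.

```python
import torch
import torch.nn as nn

class SiameseChangeNet(nn.Module):
    """Weight-shared encoder + feature-difference head (generic skeleton)."""
    def __init__(self, in_ch: int = 3, feat: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, feat, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(),
        )
        self.head = nn.Conv2d(feat, 1, 1)  # per-pixel change logit

    def forward(self, t1: torch.Tensor, t2: torch.Tensor) -> torch.Tensor:
        f1, f2 = self.encoder(t1), self.encoder(t2)  # same weights for both epochs
        return self.head(torch.abs(f1 - f2))         # change from feature difference
```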


Keywords: bi-temporal high-resolution remote sensing images, building change detection, disparity-aware Siamese network

Detecting change in graffiti using a hybrid framework

Wild, B., Verhoeven, G., Muszyński, R. & Pfeifer, N. (2024) Detecting change in graffiti using a hybrid framework. The Photogrammetric Record, 39, 549–576. Available from: https://doi.org/10.1111/phor.12496

Abstract: Graffiti, by their very nature, are ephemeral, sometimes even vanishing before creators finish them. This transience is part of graffiti's allure yet signifies the continuous loss of this often disputed form of cultural heritage. To counteract this, graffiti documentation efforts have steadily increased over the past decade. One of the primary challenges in any documentation endeavour is identifying and recording new creations. Image-based change detection can greatly help in this process, effectuating more comprehensive documentation, less biased digital safeguarding and improved understanding of graffiti. This paper introduces a novel and largely automated image-based graffiti change detection method. The methodology uses an incremental structure-from-motion approach and synthetic cameras to generate co-registered graffiti images from different areas. These synthetic images are fed into a hybrid change detection pipeline combining a new pixel-based change detection method with a feature-based one. The approach was tested on a large and publicly available reference dataset captured along the Donaukanal (Eng. Danube Canal), one of Vienna's graffiti hotspots. With a precision of 87% and a recall of 77%, the results reveal that the proposed change detection workflow can indicate newly added graffiti in a monitored graffiti-scape, thus supporting a more comprehensive graffiti documentation.
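
To make the "hybrid" idea concrete, here is a toy OpenCV sketch that combines a pixel-based cue (colour difference) with a feature-based cue (ORB matching) on a co-registered image pair. It is illustrative only, with assumed thresholds, and is not the authors' pipeline:

```python
import cv2
import numpy as np

def hybrid_change_cues(img_old, img_new, colour_thresh=25.0):
    """Pixel-based cue: mean absolute colour difference mask.
    Feature-based cue: fraction of ORB keypoints in img_new that still
    find a match in img_old (a low ratio suggests new content)."""
    diff = cv2.absdiff(img_old, img_new).mean(axis=2)
    changed_mask = diff > colour_thresh              # pixel-based change proposal

    orb = cv2.ORB_create()
    _, d_old = orb.detectAndCompute(img_old, None)
    _, d_new = orb.detectAndCompute(img_new, None)
    match_ratio = 0.0
    if d_old is not None and d_new is not None and len(d_new) > 0:
        bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        match_ratio = len(bf.match(d_new, d_old)) / len(d_new)
    return changed_mask, match_ratio
```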


This paper presents an image-based graffiti change detection method, employing an incremental structure-from-motion approach and a novel hybrid change detection method. Tested on Vienna's Donaukanal graffiti hotspot, the approach achieved 87% precision and 77% recall, facilitating more comprehensive documentation of this dynamic form of cultural heritage.

Keywords: 3D modelling, change detection, colour difference, cultural heritage, digital imaging, edge-aware smoothing, feature matching, graffiti

A hierarchical occupancy network with multi-height attention for vision-centric 3D occupancy prediction

Li, C., Gao, Z., Lin, Z., Ye, T. & Li, Z. (2024) A hierarchical occupancy network with multi-height attention for vision-centric 3D occupancy prediction. The Photogrammetric Record, 39, 600–614. Available from: https://doi.org/10.1111/phor.12500

Abstract: The precise geometric representation and ability to handle long-tail targets have led to increasing attention towards vision-centric 3D occupancy prediction, which models the real world as a voxel-wise model solely through visual inputs. Despite some notable achievements in this field, many prior or concurrent approaches simply adapt existing spatial cross-attention (SCA) as their 2D–3D transformation module, which may lead to informative coupling or compromise the global receptive field along the height dimension. To overcome these limitations, we propose a hierarchical occupancy (HierOcc) network featuring our innovative height-aware cross-attention (HACA) and hierarchical self-attention (HSA) as its core modules to achieve enhanced precision and completeness in 3D occupancy prediction. The former module enables 2D–3D transformation, while the latter promotes voxels' intercommunication. The key insight behind both modules is our multi-height attention mechanism, which ensures each attention head corresponds explicitly to a specific height, thereby decoupling height information while maintaining global attention across the height dimension. Extensive experiments show that our method brings significant improvements compared to the baseline and surpasses all concurrent methods, demonstrating its superiority.
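
The core idea, binding each attention "head" to one height slice so that height information stays decoupled while attention over the image features remains global per height, can be sketched in a few lines of PyTorch. This is a toy version under assumed shapes and names, not the paper's code:

```python
import torch

def multi_height_attention(voxel_q, img_kv):
    """Toy multi-height attention: one attention 'head' per height slice.
    voxel_q: (B, N, Z, C) voxel queries, Z height slices of N cells each
    img_kv:  (B, M, C)    image features shared across all heights
    """
    B, N, Z, C = voxel_q.shape
    # per-height attention scores against the shared image features
    scores = torch.einsum('bnzc,bmc->bznm', voxel_q, img_kv) / C ** 0.5
    weights = scores.softmax(dim=-1)   # each height attends globally over the image
    return torch.einsum('bznm,bmc->bnzc', weights, img_kv)

out = multi_height_attention(torch.rand(2, 64, 8, 32), torch.rand(2, 100, 32))
```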


As shown on the left of the graphical abstract, we propose a new 3D occupancy prediction model called the HierOcc network. The inputs of HierOcc are multi-view temporal images. After passing through the backbone, a feature pyramid network is used to obtain multi-scale image features. The image features and a set of voxel features initialized by learnable parameters are fed into transformer blocks composed of HSA and HACA. After this, the 3D feature volumes corresponding to each group of temporal images are registered and concatenated together, and further fused through a module composed of 3D convolutions. Finally, we up-sample the fused 3D feature volume to the same resolution as the ground truth, and then use a classification head to assign a semantic label to each voxel. The core modules in our HierOcc are height-aware cross-attention and hierarchical self-attention. As shown in the right part of the graphical abstract, HACA transforms visual features from 2D image to 3D space and maintains global perception in the height dimension while decoupling features from different heights, whereas HSA enables dynamic information exchange among voxels on the same height plane, enhancing the results' completeness for planar categories.

Keywords: 3D occupancy prediction, autonomous driving, transformer

Forest canopy height modelling based on photogrammetric data and machine learning methods

Deng, X., Liu, Y. & Cheng, X. (2024) Forest canopy height modelling based on photogrammetric data and machine learning methods. The Photogrammetric Record, 39, 615–640. Available from: https://doi.org/10.1111/phor.12507

Abstract: Forest topographic surveying is a problem that photogrammetry has long left unsolved. Forest canopy height is a crucial forest biophysical parameter used to derive essential information about forest ecosystems. To construct a canopy height model for forest areas, this study extracts spectral feature factors from the digital orthophoto map and geometric feature factors from the digital surface model, generated through aerial photogrammetry and LiDAR (light detection and ranging). The maximum information coefficient, the Pearson, Kendall and Spearman correlation coefficients, and a newly proposed index of relative importance are employed to assess the correlation between each feature factor and forest vertical heights. Gradient boosting decision tree regression is introduced to construct a canopy height model, which enables the prediction of unknown canopy heights in forest areas. Two additional machine learning techniques, random forest regression and support vector machine regression, are employed to construct canopy height models for comparative analysis. The datasets from two study areas were processed for model training and prediction, yielding encouraging experimental results: the canopy height model achieves prediction accuracies of 0.3 m in forested areas with 50% vegetation coverage and 0.8 m in areas with 99% vegetation coverage, even when only 10% of the available data are selected as model training data. These approaches present techniques for modelling canopy height in forested areas under varying conditions and have been shown to be both feasible and reliable.
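
A minimal scikit-learn sketch of the regression setup described above, training a gradient boosting decision tree on only a small fraction of samples. The feature and target arrays here are random stand-ins for the spectral and geometric factors and the LiDAR-derived heights, not the study's data:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((1000, 8))                                  # stand-in spectral + geometric factors
y = X @ rng.random(8) + 0.05 * rng.standard_normal(1000)   # stand-in canopy heights

# mirror the paper's setting: train on only 10% of the samples
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.1, random_state=0)
model = GradientBoostingRegressor().fit(X_tr, y_tr)
rmse = float(np.sqrt(np.mean((model.predict(X_te) - y_te) ** 2)))
print(f"held-out RMSE: {rmse:.3f} (synthetic units)")
```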


The study employs three machine learning techniques, namely gradient boosting decision tree regression, random forest regression and support vector machine regression, to construct high-resolution canopy height models in forested areas. These models are based on spectral feature factors extracted from digital orthophoto maps and geometric feature factors derived from digital surface models. Experimental results demonstrate the potential of the canopy height models constructed by gradient boosting decision tree regression to achieve prediction accuracies of 0.2 m in areas with 50% canopy coverage and 0.6 m in areas with 99% canopy coverage, even when only a 20% subset of the available data sets is utilised for model training.

Keywords: canopy height modelling, gradient boosting decision tree, LiDAR, photogrammetry, random forest

MoLO: Drift-free lidar odometry using a 3D model

Zhao, H., Zhao, Y., Tomko, M. & Khoshelham, K. (2024) MoLO: Drift-free lidar odometry using a 3D model. The Photogrammetric Record, 39, 641–663. Available from: https://doi.org/10.1111/phor.12509

Abstract: LiDAR odometry enables localising vehicles and robots in environments where global navigation satellite systems (GNSS) are not available. An inherent limitation of LiDAR odometry is the accumulation of local motion estimation errors. Current approaches rely heavily on loop closure to optimise the estimated sensor poses and to eliminate the drift of the estimated trajectory. Consequently, these systems cannot perform real-time localisation and are therefore not practical for navigation tasks. This paper presents MoLO, a novel model-based LiDAR odometry approach that achieves real-time, drift-free localisation using a 3D model of the environment containing planar surfaces, namely the structural elements of buildings. The proposed approach uses the 3D model of the environment to initialise the LiDAR pose and includes a scan-to-scan registration to estimate the pose of consecutive LiDAR scans. Re-registering LiDAR scans to the 3D model at a certain frequency provides the global sensor pose and eliminates the drift of the trajectory. Pose graphs are built frequently to acquire a smooth and accurate trajectory. A geometry-based method and a learning-based method for registering LiDAR scans with the 3D model are tested and compared. Experimental results show that MoLO can eliminate drift and achieve real-time localisation while providing an accuracy equivalent to loop-closure optimisation.


Each LiDAR scan is registered with its previous scan to estimate the transformation. Periodically, a LiDAR scan is registered with the 3D model, and pose graph optimisation is then used to optimise the pose of each LiDAR scan between the two scans registered with the 3D model.
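
The loop described above can be summarised in a short sketch. The helper names (`scan_register`, `model_register`) are hypothetical placeholders for the scan-to-scan and scan-to-model registration steps, and the pose graph optimisation between anchors is omitted:

```python
import numpy as np

def molo_poses(scans, scan_register, model_register, k):
    """Accumulate scan-to-scan motion (which drifts) and re-anchor to the
    3D model every k scans (which removes drift). Poses are 4x4 matrices."""
    poses = [model_register(scans[0])]                   # initialise from the 3D model
    for i in range(1, len(scans)):
        T_rel = scan_register(scans[i - 1], scans[i])    # local odometry step
        poses.append(poses[-1] @ T_rel)                  # compose; drift accumulates
        if i % k == 0:
            poses[-1] = model_register(scans[i])         # drift-free global anchor
    return poses
```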

Keywords: drift elimination, localization, pose graph optimization, registration, sensor pose

Optical flow matching with automatically correcting the scale difference of tunnel parallel photogrammetry

Li, H., Gao, B., He, X. & Yu, P. (2024) Optical flow matching with automatically correcting the scale difference of tunnel parallel photogrammetry. The Photogrammetric Record, 39, 664–679. Available from: https://doi.org/10.1111/phor.12511

Abstract: Using parallel photography to model tunnels is an efficient method for real-scene modelling. Because the accuracy of optical flow matching on tunnel parallel-photography image sequences is severely affected by the scale deformation of stereo images, a novel optical flow matching method that automatically corrects the scale difference of tunnel parallel-photography stereo images is proposed from the perspective of imaging relationships. By analysing the distribution pattern of the scale difference in stereo images, a model is obtained in which the scale difference of image points is symmetrically and radially distributed on the image and grows as a power function of the radial distance. This model is introduced into traditional optical flow matching to correct image scale differences and thereby improve matching accuracy. After correcting the scale difference, the mean square error of optical flow matching in the experiments is less than 0.3 pixels, an improvement of at least 34.3% and at most 45.5% over the uncorrected results. These results indicate that the proposed optical flow matching method with automatic scale-difference correction significantly improves the accuracy of image matching and modelling for tunnel parallel photography.


Addressing the low matching accuracy of the optical flow method caused by scale deformation in tunnel parallel-photography sequence images, this paper proposes a scale-difference correction model. The model establishes the scale difference over the whole image through image matching and automatically corrects the scale of the preceding image in the sequence. In Lucas-Kanade optical flow matching, the feature windows corresponding to the feature points of the preceding and following images are extracted, and the scale deformation of each window is automatically corrected by the scale-difference model, which effectively improves the accuracy of optical flow matching.
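
A small sketch of how such a radial power-law scale model might be applied to a feature window before Lucas-Kanade matching. The model form s(r) = 1 + a·r^b and its parameters are assumptions fitted elsewhere, not the paper's exact formulation:

```python
import cv2
import numpy as np

def scale_difference(pt, principal_point, a, b):
    """Assumed power-law model: the scale difference grows with radial
    distance r from the principal point as s(r) = 1 + a * r**b."""
    r = float(np.linalg.norm(np.asarray(pt) - np.asarray(principal_point)))
    return 1.0 + a * r ** b

def rescale_window(window, s):
    """Resample a feature window by the predicted scale factor before matching."""
    h, w = window.shape[:2]
    return cv2.resize(window, (max(1, round(w * s)), max(1, round(h * s))))
```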

Keywords: optical flow matching, parallel photogrammetry, scale difference model, tunnel image

Frontispiece (cover image article)

Optimisation of real-scene 3D building models based on straight-line constraints

Lv, K., Chen, L., He, H., Zhou, F. & Yu, S. (2024) Optimisation of real-scene 3D building models based on straight-line constraints. The Photogrammetric Record, 39, 680–704. Available from: https://doi.org/10.1111/phor.12514

Abstract: Due to the influence of repeated textures and edge perspective transformations on building facades, building modelling based on unmanned aerial vehicle (UAV) photogrammetry often suffers geometric deformation and distortion when using existing methods or commercial software. To address this issue, a real-scene three-dimensional (3D) building model optimisation method based on straight-line constraints is proposed. First, point clouds generated by UAV photogrammetry are down-sampled based on local curvature characteristics, and structural point clouds located at the edges of buildings are extracted. Subsequently, an improved random sample consensus (RANSAC) algorithm that considers distance and angle constraints between lines, termed co-constrained RANSAC, is applied to further extract point clouds with straight-line features from the structural point clouds. Finally, the point clouds with straight-line features are optimised and updated using sampled points on the fitted straight lines. Experimental results demonstrate that the proposed method can effectively eliminate redundant 3D points and noise while retaining the fundamental structure of buildings. Compared with popular methods and commercial software, the proposed method significantly enhances the accuracy of building modelling, with an average error reduction of 59.2%, including the optimisation of deviations in the original model's contour projection.
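
For intuition, a plain RANSAC 3D line fit is sketched below in NumPy; the paper's co-constrained variant additionally enforces distance and angle relations between candidate edge lines, which this minimal version omits:

```python
import numpy as np

def ransac_line_3d(pts, n_iter=500, tol=0.02, seed=0):
    """Fit one 3D line to an (N, 3) point array by plain RANSAC;
    returns a boolean inlier mask for the best candidate line."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(pts), dtype=bool)
    for _ in range(n_iter):
        i, j = rng.choice(len(pts), size=2, replace=False)
        d = pts[j] - pts[i]
        norm = np.linalg.norm(d)
        if norm < 1e-9:
            continue                                     # degenerate sample
        d /= norm
        diff = pts - pts[i]
        # perpendicular point-to-line distances
        dist = np.linalg.norm(diff - np.outer(diff @ d, d), axis=1)
        inliers = dist < tol
        if inliers.sum() > best.sum():
            best = inliers
    return best
```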


This study presents a real-scene three-dimensional (3D) building model optimisation method based on straight-line constraints. The main workflow of this method consists of four stages: real-scene 3D building model generation, neighbourhood search and building edge extraction, co-constraints based on distance and angle for 3D line extraction, and building model optimisation.

Keywords: building facade, building modelling, co-constraint RANSAC, point clouds, straight line
