3D data are first characterized by their relative positioning within the global point cloud. This positioning is expressed as three basic coordinates in the Cartesian coordinate system, along the x-, y- and z-axes. See Wikipedia for more details on this coordinate system.
By considering a local neighborhood around a point within the point cloud, one can compute the covariance matrix of this neighbor set over the three dimensions x, y and z. Some of the following feature sets are defined from the covariance matrix eigenvalues (as a remark, PCA software outputs correspond to singular values, e.g. `sklearn` in Python).
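As a quick illustration (not part of the library), such a covariance matrix and its eigenvalues can be computed with numpy; the `neighbors` array below is a made-up example:

```python
import numpy as np

# hypothetical (k, 3) array of x, y, z coordinates of a local neighborhood
neighbors = np.random.rand(50, 3)

cov = np.cov(neighbors, rowvar=False)        # 3x3 covariance matrix over x, y and z
eigenvalues = np.linalg.eigvalsh(cov)[::-1]  # eigenvalues sorted in decreasing order
normalized = eigenvalues / eigenvalues.sum() # normalized eigenvalues used by several features
```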
The importance of these features was first highlighted by Brodu et al (2011).
Assuming that `pca` represents the output of a principal component analysis over the point's local neighborhood, one can compute such features with:

```python
geo3dfeatures.features.triangle_variance_space(pca)
```
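A minimal usage sketch, assuming that `neighbors` holds the 3D coordinates of the local neighborhood and that a fitted scikit-learn `PCA` object is a valid input for the function:

```python
import numpy as np
from sklearn.decomposition import PCA

from geo3dfeatures.features import triangle_variance_space

neighbors = np.random.rand(50, 3)         # hypothetical local neighborhood
pca = PCA(n_components=3).fit(neighbors)  # principal component analysis of the neighborhood
variance_space = triangle_variance_space(pca)
```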
Given `nb_neighbors` the number of neighbors, `neighbor_z` the z coordinates of the neighboring points, `dist` the distances between the point of interest and its neighbors, and `pca` the resulting PCA of the neighboring set, one can compute a set of 3D properties detailed below:

```python
geo3dfeatures.features.val_range(neighbor_z)
geo3dfeatures.features.std_deviation(neighbor_z)
geo3dfeatures.features.radius_3D(dist)
geo3dfeatures.features.density_3D(radius3D, nb_neighbors)
geo3dfeatures.features.verticality_coefficient(pca)
```
See Weinmann et al (2015) for theoretical details.
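For intuition, a rough numpy equivalent of these properties, following the usual definitions in Weinmann et al (2015), might look like the sketch below; the exact formulas used in geo3dfeatures are not guaranteed to match:

```python
import numpy as np

point = np.array([0.5, 0.5, 0.5])   # hypothetical point of interest
neighbors = np.random.rand(50, 3)   # hypothetical neighborhood coordinates
neighbor_z = neighbors[:, 2]
dist = np.linalg.norm(neighbors - point, axis=1)
nb_neighbors = len(neighbors)

z_range = neighbor_z.max() - neighbor_z.min()                 # val_range
z_std = neighbor_z.std()                                      # std_deviation
radius_3d = dist.max()                                        # radius_3D
density_3d = nb_neighbors / (4 / 3 * np.pi * radius_3d ** 3)  # density_3D: points per unit volume

# verticality_coefficient: 1 - |n_z|, with n the local normal, i.e. the
# eigenvector associated with the smallest covariance eigenvalue
cov = np.cov(neighbors, rowvar=False)
_, eigenvectors = np.linalg.eigh(cov)   # eigenvalues in ascending order
verticality = 1 - abs(eigenvectors[2, 0])
```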
Given the covariance matrix eigenvalues `lambda` and their normalized version `e`, one can compute the feature set with:

```python
geo3dfeatures.features.curvature_change(e)
geo3dfeatures.features.linearity(e)
geo3dfeatures.features.planarity(e)
geo3dfeatures.features.omnivariance(e)
geo3dfeatures.features.anisotropy(e)
geo3dfeatures.features.eigenentropy(e)
geo3dfeatures.features.val_sum(lambda)
```

These functions return the features detailed below.
For more theoretical details, see Weinmann et al (2015), which cites West et al. (2004) and Pauly et al. (2003).
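These are the classical covariance-based features; as a sketch of the assumed formulas (decreasing eigenvalues `lmbda`, normalized version `e`), not necessarily the exact implementation:

```python
import numpy as np

lmbda = np.array([2.0, 1.0, 0.5])   # hypothetical eigenvalues, in decreasing order
e = lmbda / lmbda.sum()             # normalized eigenvalues

curvature_change = e[2]               # a.k.a. surface variation
linearity = (e[0] - e[1]) / e[0]
planarity = (e[1] - e[2]) / e[0]
omnivariance = np.prod(e) ** (1 / 3)
anisotropy = (e[0] - e[2]) / e[0]
eigenentropy = -np.sum(e * np.log(e))
eigenvalue_sum = lmbda.sum()          # val_sum
```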
When focusing on man-made structures, symmetric and orthogonal patterns may appear in the point cloud. Hence, considering the 2D projection of the point cloud may highlight new elements. Radius and density are computed in the same manner as for the 3D point cloud.
See Weinmann et al (2015) for further explanations.
Considering `point` the 2D coordinates of the point of interest, `neighbors` the 2D coordinates of the neighboring points, and `nb_neighbors` the number of neighbors, one gets the 2D properties with the following functions:

```python
geo3dfeatures.features.radius_2D(point, neighbors)
geo3dfeatures.features.density_2D(radius2D, nb_neighbors)
```
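As a hedged sketch of what these two quantities represent, with the radius taken here as the largest 2D distance to a neighbor and the density as the number of neighbors per unit disc area:

```python
import numpy as np

point = np.array([0.5, 0.5])        # hypothetical 2D coordinates of the point of interest
neighbors = np.random.rand(50, 2)   # hypothetical 2D coordinates of its neighbors
nb_neighbors = len(neighbors)

radius_2d = np.linalg.norm(neighbors - point, axis=1).max()
density_2d = nb_neighbors / (np.pi * radius_2d ** 2)
```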
Weinmann et al (2015) propose two additional 2D features built on the model of 3D features: the sum and the ratio of eigenvalues computed over 2D data.
Given the covariance matrix eigenvalues `lambdas`, computed on the 2D data projection, one can compute the feature set with:

```python
geo3dfeatures.features.val_sum(lambdas)
geo3dfeatures.features.eigenvalue_ratio_2D(lambdas)
```
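A short numpy sketch of the assumed definitions, with the two 2D covariance eigenvalues sorted in decreasing order:

```python
import numpy as np

neighbors_2d = np.random.rand(50, 2)   # hypothetical 2D projection of the neighborhood
cov_2d = np.cov(neighbors_2d, rowvar=False)
lambdas = np.linalg.eigvalsh(cov_2d)[::-1]   # decreasing order

eigenvalue_sum_2d = lambdas.sum()                # val_sum
eigenvalue_ratio_2d = lambdas[1] / lambdas[0]    # assumed: smallest over largest eigenvalue
```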
The accumulation features are 2D-based features, as they rely on an alternative neighborhood definition. Instead of considering the k nearest neighbors (kNN), one has to design bins of fixed size and to sort each point into its corresponding bin with respect to the 2D projection of the point cloud.
This neighborhood definition has been used in Monnier et al. (2012) and Weinmann et al (2015), for instance.
In order to compute this set of features, one has to assign each point to its corresponding bin with:

```python
geo3dfeatures.features.accumulation_2d_neighborhood(point_cloud, bin_size, buf)
```

where `point_cloud` represents the 3D coordinates of the points in the point cloud, `bin_size` the bin size (expressed in the same unit as the point cloud coordinates) and `buf` a buffer that helps manage the extreme points of the point cloud. As this method returns a DataFrame with the accumulated density, z-range and z standard deviation associated to each point in the cloud, accessing the feature values is straightforward.
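The principle can be sketched with pandas; this is an illustration of the binning idea, not the actual implementation, and the column names are assumptions:

```python
import numpy as np
import pandas as pd

# hypothetical point cloud with x, y, z columns
point_cloud = pd.DataFrame(np.random.rand(1000, 3), columns=["x", "y", "z"])
bin_size = 0.1

# assign each point to its 2D bin
point_cloud["xbin"] = (point_cloud["x"] // bin_size).astype(int)
point_cloud["ybin"] = (point_cloud["y"] // bin_size).astype(int)

# accumulated features per bin: density (point count), z-range and z standard deviation
per_bin = (
    point_cloud.groupby(["xbin", "ybin"])["z"]
    .agg(bin_density="count", bin_z_range=lambda z: z.max() - z.min(), bin_z_std="std")
    .reset_index()
)

# join back so that each point carries the features of its bin
point_cloud = point_cloud.merge(per_bin, on=["xbin", "ybin"])
```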
In order to manage color differences and improve automatic point classification algorithms, we store the red, green and blue components of each point.
- Nicolas Brodu, Dimitri Lague, 2011. 3D Terrestrial lidar data classification of complex natural scenes using a multi-scale dimensionality criterion: applications in geomorphology. arXiv:1107.0550.
- Fabrice Monnier, Bruno Vallet, Bahman Soheilian, 2012. Trees detection from laser point clouds acquired in dense urban areas by a mobile mapping system. ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 1-3, pp. 245-250.
- Mark Pauly, Richard Keiser, Markus Gross, 2003. Multi-scale Feature Extraction on Point-Sampled Surfaces. Computer Graphics Forum, 22(3), pp. 281-289.
- Martin Weinmann, Boris Jutzi, Stefan Hinz, Clément Mallet, 2015. Semantic point cloud interpretation based on optimal neighborhoods, relevant features and efficient classifiers. ISPRS Journal of Photogrammetry and Remote Sensing, vol. 105, pp. 286-304.
- Karen F. West, Brian N. Webb, James R. Lersch, Steven Pothier, 2004. Context-driven automated target detection in 3D data. Proceedings of SPIE - The International Society for Optical Engineering, 5426, September 2004.