This is a personal memo on this survey paper.
Introduction
Automatic change detection is an essential research subject.
It originated from photogrammetry and remote sensing.
- Image-based methods deduce 3D changes from 1D or 2D measurements.
Point clouds are promising.
- Can be obtained in a short time and at low cost
- Free from perspective distortions
Four major domains cover ten categories of application topics.
Domains | Categories |
---|---|
Urban monitoring | land use and land cover (LULC), building investigation, indoor variation analysis, vegetation surveys |
Construction automation | construction monitoring, infrastructure maintenance, historical heritage preservation |
Hazard identification | natural hazard monitoring, water body and flood monitoring |
Cadaster | cadaster updating |
Three fundamental tasks must be addressed.
- Coordinate system alignment
- Spatial and spectral comparison
- Change representation and analysis
Theoretical primer
Define and distinguish the term "change".
- Binary definition: changed / no change
- Triple definition: new / demolished / no change
- Additions for partial change: partially new / partially demolished
However, the definitions above are not sufficient to describe more complicated situations.
Define "change" semantically at the object level.
From the perspective of a single object, the following types match the natural and intuitive way humans observe change.
- appeared / disappeared / fully moved
- partially moved / deformed
- unchanged
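For my own note-taking, a minimal sketch of this object-level taxonomy as a Python enum (the class and member names are mine, not the survey's):

```python
from enum import Enum, auto

class ObjectChange(Enum):
    """Object-level change types from the taxonomy above (names are illustrative)."""
    APPEARED = auto()         # present only in the later epoch
    DISAPPEARED = auto()      # present only in the earlier epoch
    FULLY_MOVED = auto()      # same object, entirely relocated
    PARTIALLY_MOVED = auto()  # only part of the object relocated
    DEFORMED = auto()         # geometry altered in place
    UNCHANGED = auto()        # no detectable change
```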
Change detection is still challenging.
- Inconsistent sampling
- Why? Point clouds with different densities / distributions / covered regions.
- Solvable with comprehensive observation
- Limited visibility
- Why? Occlusion and perspective inevitably leave gaps in the generated point clouds.
- Missing semantics
- Point cloud partition is feasible in some cases.
- Semantics are required for object-level analysis.
The general objective of change detection is twofold.
- Spatial changes between two datasets acquired at different timestamps of the same site.
- Change types derived from spatial changes.
- $P, Q$: input point clouds
- $P^\ast, Q^\ast$: overlapping subsets of $P, Q$
- $D_{p,q}$: spatial difference
- $E(\cdot)$: evaluation function
- $L$: change types
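Putting the symbols together (my reconstruction from the definitions above; $\mathrm{diff}(\cdot)$ is a placeholder for the difference operator, and the survey's exact notation may differ):

```latex
% Spatial difference over the overlapping subsets, then change-type evaluation.
D_{p,q} = \mathrm{diff}\bigl(P^{\ast}, Q^{\ast}\bigr),
\qquad P^{\ast} \subseteq P,\; Q^{\ast} \subseteq Q,
\qquad L = E\bigl(D_{p,q}\bigr)
```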
General workflow
Change detection generally follows three stages.
- Reference frame registration: Point clouds are aligned into the same coordinate system for comparison.
- Geometric difference estimation: Spatial differences are obtained via occupancy analysis or distance measures.
- Spectral and attribute analysis: Changes are identified by analyzing geometric differences and attribute shifts.
1st: Reference frame registration
Reference frame registration is an optimization problem that estimates transformation parameters between point clouds without markers.
Approaches can be divided into three primary classes:
- Geometric-constraint-based methods
- Feature-similarity-based methods
- Global-information-based methods
Deep-learning techniques are useful for:
- generating more robust feature descriptions than conventional methods
- estimating transformation parameters directly by embedding the estimation into the network
However, the lack of training datasets limits their usage in large outdoor scenarios.
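As a concrete illustration of the geometric-constraint-based class above, a minimal sketch of rigid transformation estimation from already-matched point pairs (the standard Kabsch/SVD solution, not a method specific to the survey):

```python
import numpy as np

def estimate_rigid_transform(P, Q):
    """Estimate R, t such that R @ q + t approximates the matched p.

    P, Q: (N, 3) arrays of corresponding points; finding the correspondences
    (e.g. via feature matching) is assumed to be done already.
    """
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)   # centroids of both sets
    H = (Q - cQ).T @ (P - cP)                 # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cP - R @ cQ
    return R, t

# Usage: align the second epoch onto the first before differencing.
# R, t = estimate_rigid_transform(P_matched, Q_matched)
# Q_aligned = Q @ R.T + t
```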
2nd: Spatial difference estimation
According to the estimation metric, there are three types of methods:
- Point-based difference estimation
- Voxel- or occupancy-grid-based difference estimation
- Segment- or object-based difference estimation
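A minimal sketch of the point-based variant in its simplest form, cloud-to-cloud (C2C) nearest-neighbor distances via a k-d tree; the voxel- and segment-based variants are not shown, and SciPy is only my choice of tooling here:

```python
import numpy as np
from scipy.spatial import cKDTree

def cloud_to_cloud_distance(P, Q):
    """Nearest-neighbor distance in P for every point of Q.

    P, Q: (N, 3) / (M, 3) arrays already registered into the same frame.
    Returns an (M,) array of per-point spatial differences.
    """
    tree = cKDTree(P)              # spatial index over the reference epoch
    dists, _ = tree.query(Q, k=1)  # nearest-neighbor distance per query point
    return dists
```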
3rd: Geometric and attribute analysis
Two properties should be derived at this stage:
- Geometric property: change type, uncertainties, …
- Attribute property: semantics, …
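A minimal sketch of this stage: per-point distances from the previous step are thresholded into change labels and optionally masked by a semantic attribute; the threshold value and the ignored classes are illustrative assumptions, not values from the survey:

```python
import numpy as np

def label_changes(dists, semantics=None, tau=0.2, ignore_classes=("vegetation",)):
    """Derive change labels from per-point distances and optional semantics.

    dists:     (M,) nearest-neighbor distances from the difference-estimation stage.
    semantics: optional (M,) array of per-point class names; classes in
               ignore_classes are treated as non-investigated objects and masked out.
    tau:       distance threshold in the cloud's units (illustrative value).
    """
    labels = np.where(dists > tau, "changed", "unchanged")
    if semantics is not None:
        labels[np.isin(np.asarray(semantics), ignore_classes)] = "ignored"
    return labels
```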
Discussion
Gaps remain between techniques and demands
- Dataset reliability: The complexity brought by pseudo-changes is noteworthy.
- Pseudo-change: changes in non-investigated objects
- seasonal changes of vegetation covering the surfaces of buildings
- scaffolds during construction
- …
- TODOs: remove the influence of pseudo-changes
- shape completion methods
- filtering methods
- …
- Result uncertainty: Occlusion is troublesome.
- Contributions of semantics: Geometric changes alone cannot represent all possible changes in an observed scene.
Several directions could help fill these gaps
- Multi-source point clouds
- Object-level semantics
- Collaboration with Computer Vision