Feat/add z ref in pointcloud #7 (Open)

mdupaysign wants to merge 10 commits into dev from feat/add_z_ref_in_pointcloud
Changes from 9 of 10 commits, all by mdupaysign:
- 288cdd5 create new function: filter deviation_day
- d965cce add data for test: normalize
- 20b2e5b add filter config for pretreatment
- 3667a0c add functions: filter and normalize
- 9a93142 add scripts for pretreatment points: check, filter deviation day and …
- 03b9d32 refacto function filter_deviation_day and test
- e136d53 refacto filter_deviation_day and update config
- 585dfa3 refacto normalize function with test
- 3792838 refacto normalize_height: use dtm_marker==1 for DTM, filter after HAG…
- a9291ad chore: downgrade version to 0.1.0 -> 0.2.0
flake8 configuration (diff):

```diff
@@ -1,2 +1,3 @@
 [flake8]
-max-line-length = 119
+max-line-length = 119
+extend-ignore = E203
```
Binary file not shown.
Version bump (diff):

```diff
@@ -1,4 +1,4 @@
-__version__ = "0.0.2"
+__version__ = "1.0.0"
 
 
 if __name__ == "__main__":
```
New file (100 additions):

```python
"""Keep points within a ±deviation_days window around the most densely sampled acquisition day."""

import json
import logging
import math
import warnings
from datetime import datetime, timezone

import numpy as np
import pdal

logger = logging.getLogger(__name__)

_SECONDS_PER_DAY = 86_400.0
_EPSILON = 1e-3  # 1 ms — smaller than any realistic GpsTime resolution


def filter_deviation_day(
    pipeline: pdal.Pipeline,
    deviation_days: int | float = 14,
    gpstime_ref: str = "2011-09-14 01:46:40",
) -> pdal.Pipeline:
    """Filter a LiDAR point cloud, keeping only points acquired within ±deviation_days
    around the most densely sampled calendar day.

    Equivalent to the R function ``filter_date_mode()`` from lidR.

    Args:
        pipeline (pdal.Pipeline): Executed PDAL Pipeline object.
        deviation_days (int | float): Half-width of the retention window in days.
            Pass ``math.inf`` to skip filtering entirely. Default: 14.
        gpstime_ref (str): ISO-8601 UTC string of the GPS time reference epoch.
            Default: "2011-09-14 01:46:40".

    Returns:
        pdal.Pipeline: A new, configured-but-not-yet-executed PDAL Pipeline restricted
            to the selected time window, or the original pipeline unchanged if
            ``deviation_days`` is infinite.

    Raises:
        ValueError: If the pipeline has no arrays, lacks a ``GpsTime`` dimension,
            or if ``deviation_days`` is negative.
    """
    if not math.isinf(deviation_days) and deviation_days < 0:
        raise ValueError(f"deviation_days must be >= 0 or math.inf, got {deviation_days!r}")

    arrays = pipeline.arrays
    if not arrays:
        raise ValueError("No arrays produced by the pipeline.")

    points = arrays[0]

    if "GpsTime" not in points.dtype.names:
        raise ValueError("Point cloud does not contain a 'GpsTime' dimension.")

    if math.isinf(deviation_days):
        logger.debug("deviation_days is Inf — no filtering applied.")
        return pipeline

    gpstime_ref_unix = datetime.fromisoformat(gpstime_ref).replace(tzinfo=timezone.utc).timestamp()
    n_total = len(points)

    # Convert GpsTime to absolute UNIX time and floor to calendar day
    unix_time = points["GpsTime"] + gpstime_ref_unix
    day_index = np.floor(unix_time / _SECONDS_PER_DAY).astype(np.int64)

    # Find the modal day (most abundantly sampled)
    unique_days, counts = np.unique(day_index, return_counts=True)
    modal_day = int(unique_days[counts.argmax()])

    # Compute the GpsTime filter window [t_min, t_max]
    day_lo = modal_day - int(deviation_days)
    day_hi = modal_day + int(deviation_days)
    t_min = max(day_lo, 0) * _SECONDS_PER_DAY - gpstime_ref_unix
    t_max = (day_hi + 1) * _SECONDS_PER_DAY - gpstime_ref_unix - _EPSILON

    # Warn about removed points
    n_retained = int(
        np.sum((unix_time >= max(day_lo, 0) * _SECONDS_PER_DAY) & (unix_time < (day_hi + 1) * _SECONDS_PER_DAY))
    )
    pct_removed = (1 - n_retained / n_total) * 100
    if pct_removed > 0:
        warnings.warn(
            f"Careful: {round(pct_removed)} % of the returns were removed because their "
            f"deviation in days from the most abundant date exceeded your threshold "
            f"({deviation_days} days).",
            UserWarning,
            stacklevel=2,
        )

    logger.debug(
        "Modal day: %d | GpsTime window [%.1f, %.1f] | %.1f%% points removed",
        modal_day,
        t_min,
        t_max,
        pct_removed,
    )

    filter_json = {"pipeline": [{"type": "filters.range", "limits": f"GpsTime[{t_min}:{t_max}]"}]}
    return pdal.Pipeline(json.dumps(filter_json), arrays=[points])
```
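The core of this filter is the modal-day selection: floor each timestamp to a calendar-day index, take the most frequent day, and keep a ±deviation_days window around it. A minimal pure-Python sketch of that logic, independent of PDAL (function and variable names here are hypothetical illustrations, not part of the PR):

```python
import math
from collections import Counter

SECONDS_PER_DAY = 86_400.0


def modal_day_window(unix_times, deviation_days=14):
    """Return (t_min, t_max) in UNIX seconds bracketing the most densely
    sampled day ±deviation_days, mirroring the window arithmetic above."""
    day_index = [math.floor(t / SECONDS_PER_DAY) for t in unix_times]
    modal_day, _ = Counter(day_index).most_common(1)[0]
    day_lo = max(modal_day - deviation_days, 0)
    day_hi = modal_day + deviation_days
    # Window spans from the start of day_lo to the end of day_hi (exclusive)
    return day_lo * SECONDS_PER_DAY, (day_hi + 1) * SECONDS_PER_DAY


# Three points on day 100, one outlier on day 200
times = [100 * 86_400.0 + s for s in (10.0, 20.0, 30.0)] + [200 * 86_400.0]
t_min, t_max = modal_day_window(times, deviation_days=14)
kept = [t for t in times if t_min <= t < t_max]
print(len(kept))  # → 3: the day-200 outlier falls outside the ±14-day window
```

The real function then expresses this window as a PDAL `filters.range` on `GpsTime` rather than masking the array directly.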
lidar_for_fuel/pretreatment/normalize_height_in_pointcloud.py (new file, 132 additions):

```python
"""
Calculate normalized height in pointcloud, i.e. height above ground.
"""
import json
import logging
from typing import List

import numpy as np
import pdal
from numpy.lib import recfunctions as rfn
from scipy.interpolate import LinearNDInterpolator, NearestNDInterpolator

logger = logging.getLogger(__name__)


def normalize_height(
    input_pipeline: pdal.Pipeline,
    output_file: str,
    filter_dimension: str,
    filter_values: List[int],
    height_filter: float = 60.0,
    min_height: float = -3.0,
) -> None:
    """
    Normalize heights using a Delaunay TIN built from ground points, and write the result.

    Equivalent to the R pipeline:
        normalize_height(algorithm = tin())
        filter_poi((Classification <= 5 | Classification == 9) & Z < Height_filter)

    Steps:
        1. Execute the input pipeline.
        2. Build a Delaunay TIN from all points with dtm_marker == 1.
        3. Interpolate ground elevation for every point. Points outside the TIN convex
           hull are handled by nearest-neighbour extrapolation.
        4. Compute Z_ref = Z - interpolated_ground_Z.
        5. Filter points by filter_dimension/filter_values.
        6. Remove points outside [min_height, height_filter].
        7. Write output LAS/LAZ with Z_ref as an extra dimension.

    Args:
        input_pipeline (pdal.Pipeline): PDAL Pipeline object (may be unexecuted).
        output_file (str): Output LAS/LAZ path.
        filter_dimension (str): Dimension name used to filter output points.
        filter_values (List[int]): Values to keep along filter_dimension.
        height_filter (float): Upper height threshold in metres (default: 60 m).
        min_height (float): Lower height threshold in metres (default: -3 m).

    Raises:
        ValueError: If the pipeline produces no arrays, or if there are fewer than
            3 ground points to build a TIN.
    """
    # Execute the pipeline on all points
    pipeline = input_pipeline
    pipeline.execute()

    arrays = pipeline.arrays
    if not arrays:
        raise ValueError("No arrays produced by the pipeline.")
    points = arrays[0]

    # Select ground points for DTM computation: all points with dtm_marker == 1
    ground_mask = points["dtm_marker"] == 1
    logger.debug("%d ground points with dtm_marker=1", int(ground_mask.sum()))
    ground_points = points[ground_mask]

    if len(ground_points) < 3:
        raise ValueError(f"Not enough ground points to build a TIN (found {len(ground_points)}, need at least 3).")

    # Build Delaunay TIN interpolator from ground points (X, Y) → Z
    # LinearNDInterpolator: builds a Delaunay triangulation from ground points.
    # For each point (X, Y), finds the triangle it falls in and computes Z by
    # barycentric interpolation between the 3 vertices. Equivalent to lidR::tin().
    # Returns NaN for points outside the convex hull of ground points.
    #
    # NearestNDInterpolator: fallback for points outside the convex hull (e.g. at
    # tile edges where vegetation points exceed the ground point extent).
    # Returns the Z of the nearest ground point.
    ground_xy = np.column_stack([ground_points["X"], ground_points["Y"]])
    ground_z = ground_points["Z"]
    lin_interp = LinearNDInterpolator(ground_xy, ground_z)
    nn_interp = NearestNDInterpolator(ground_xy, ground_z)

    # Interpolate ground elevation for all points
    all_xy = np.column_stack([points["X"], points["Y"]])
    ground_z_interp = lin_interp(all_xy)

    # Fill NaN (outside convex hull) with nearest-neighbour extrapolation
    nan_mask = np.isnan(ground_z_interp)
    if np.any(nan_mask):
        logger.debug("%d points outside TIN convex hull — using nearest-neighbour extrapolation", int(nan_mask.sum()))
        ground_z_interp[nan_mask] = nn_interp(all_xy[nan_mask])

    # Compute height above ground
    hag = points["Z"] - ground_z_interp

    # Filter points by filter_dimension/filter_values
    if filter_dimension and filter_values:
        dim_mask = np.isin(points[filter_dimension], filter_values)
        points = points[dim_mask]
        hag = hag[dim_mask]
        logger.debug("Dimension filter on %s %s: %d points kept", filter_dimension, filter_values, len(points))

    # Filter by height bounds [min_height, height_filter]
    valid = (hag >= min_height) & (hag <= height_filter)
    pct_removed = (1 - valid.sum() / len(points)) * 100
    logger.debug("Height filter [%.1f, %.1f]: %.1f%% points removed", min_height, height_filter, pct_removed)

    points_out = points[valid]
    hag_out = hag[valid].astype(np.float64)

    # Add Z_ref dimension (= HeightAboveGround)
    points_with_zref = rfn.append_fields(points_out, "Z_ref", hag_out, dtypes=np.float64, usemask=False)

    # Write output
    writer_pipeline = pdal.Pipeline(
        json.dumps(
            {
                "pipeline": [
                    {
                        "type": "writers.las",
                        "filename": str(output_file),
                        "extra_dims": "all",
                        "forward": "all",
                        "minor_version": "4",
                    }
                ]
            }
        ),
        arrays=[points_with_zref],
    )
    writer_pipeline.execute()
```
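The linear-TIN-plus-nearest-neighbour fallback can be exercised in isolation with a few synthetic ground points. A small sketch of the same two-interpolator pattern (assumes scipy is installed; the data here is illustrative, not from the PR):

```python
import numpy as np
from scipy.interpolate import LinearNDInterpolator, NearestNDInterpolator

# Four synthetic ground points on a unit square, with ground elevation Z = X + Y
ground_xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
ground_z = ground_xy[:, 0] + ground_xy[:, 1]

lin_interp = LinearNDInterpolator(ground_xy, ground_z)
nn_interp = NearestNDInterpolator(ground_xy, ground_z)

# One query inside the convex hull, one outside it (e.g. a tile-edge point)
queries = np.array([[0.5, 0.5], [2.0, 2.0]])
z = lin_interp(queries)  # inside → barycentric interpolation, outside → NaN

# Fill NaN with the nearest ground point's Z, as normalize_height does
nan_mask = np.isnan(z)
z[nan_mask] = nn_interp(queries[nan_mask])
# z[0] is 1.0 (interpolated on the plane), z[1] is 2.0 (snapped to ground point (1, 1))
```

Because Z = X + Y is linear, the interior query reproduces the plane exactly; the exterior query would be NaN without the nearest-neighbour fallback.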
Review comment: No need to bump the major version number; incrementing the minor version is enough. Use 0.1.0 instead.
Also, it should go in a separate commit, because a version change is an important, atomic event.