examples/NDLPwriteup.jl (25 additions, 30 deletions)
@@ -16,18 +16,15 @@ md"""
## Motivation: Efficient one-shot curve sampling with tight deviation bounds
-Many numerical methods require representing smooth curves by piecewise-linear segments. Examples include panel methods, filament methods, and boundary integral discretizations. The goal is typically to place points along a curve such that:
-- geometric error is controlled,
-- unnecessary points are avoided,
-- and refinement behaves predictably.
+Many numerical methods require representing smooth curves by points sampled along the curve. Examples include panel methods, filament methods, and boundary integral discretizations. The goal is typically to place points along a curve such that geometric error is controlled, unnecessary points are avoided, and refinement behaves predictably.
-The most common approach in practice is recursive subdivision: split segments until local error criteria are satisfied. While robust, subdivision is inefficient in the number of segments required and highly sensitive to small changes in curve definition and segmentation parameters.
+The most common approach in practice is recursive subdivision, which splits segments until local error criteria are satisfied. While robust, subdivision is prone to oversampling and is sensitive to small changes in the curve and segmentation parameters.
-The most common alternative is to construct a re-parameterization for the curve to heuristically control accuracy. These methods typically weight the arc-speed with local curvature in some manner, but such methods lack guarantees and must be tuned iteratively *for each case* to achieve any required error limit.
+The most common alternative is to construct a re-parameterization for the curve to heuristically control accuracy. Curvature-weighted reparameterizations produce smooth segmentations but offer no direct control over maximum deviation and therefore require iterative tuning.
-In this notebook we explore an alternative: Normal-Deviation–Limited Parameterization (NDLP). NDLP hard-codes a pointwise geometric deviation bound directly into a parameterization using the curve's normal acceleration. We will show that the resulting segmentation has smoothly varying density and exhibits a maximum deviation from the curve which is tightly bound to a prescribed tolerance, enforcing the bound while minimizing the number of segments. This is achieved without iteration or heuristic tuning - a true one-shot method.
+In this notebook we propose Normal-Deviation–Limited Parameterization (NDLP), which embeds a pointwise geometric deviation bound in the parameterization using the curve's normal acceleration. The resulting segmentation has smoothly varying density and exhibits a maximum deviation from the curve which is tightly bound to a prescribed tolerance, enforcing the bound while minimizing the number of segments. This is achieved without iteration or heuristic tuning - a true one-shot method.
-All results below are fully reproducible.
+All results below are fully reproducible. Click the "Download as Pluto notebook" button at the top right to run locally.
## Key idea
@@ -43,7 +40,7 @@ where $\Delta s$ is the maximum segment length and $d_n$ is a specified normal d
## Illustration
-We give an initial illustration of the the performance over the new approach using a cubic spline geometry and a coarse sampling $\Delta s=1/4$ and $d_n=9$% to highlight the differences. The new NDLP sampling is tight to the prescribed deviation limit and uses the fewest number of points.
+We give an initial illustration of the performance of the new approach using a cubic spline geometry and a coarse sampling $\Delta s=1/4$ and $d_n=9\%$ to highlight the differences. The new NDLP sampling is tight to the prescribed deviation limit and uses the fewest points.
"""
# ╔═╡ e1bc7e76-a497-41db-8f91-b8912e359e0e
@@ -58,7 +55,7 @@ We begin by defining a small set of representative curves:
1. a three-dimensional helix with varying pitch and radius,
1. and a V-shaped "curve" with a corner that violates smoothness assumptions.
-These are deliberately chosen to expose both strengths and failure modes of the methods. The final example (the V-shape) is included as a negative control as local curvature-based methods can not handle curvature discontinuities gracefully.
+These are deliberately chosen to expose both strengths and failure modes of the methods. The final example (the V-shape) is included as a negative control, since local curvature-based methods cannot handle curvature discontinuities gracefully.
"""
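The curve definitions themselves live in notebook cells that are not part of this diff. As a rough, purely illustrative sketch of what the two curves named above might look like (every name and parameter below is a hypothetical stand-in, not the notebook's code):

```julia
# Hypothetical stand-ins for the test curves described above; the notebook's own
# definitions are in cells not shown in this hunk.
function helix(u; R₀=1.0, R₁=0.5, p₀=0.1, p₁=0.4, turns=4)
    θ = 2π * turns * u
    R = R₀ + (R₁ - R₀) * u          # radius varies along the curve
    p = p₀ + (p₁ - p₀) * u          # pitch (rise per radian) varies as well
    [R * cos(θ), R * sin(θ), p * θ]
end

vshape(u) = u < 0.5 ? [u, 0.5 - u] : [u, u - 0.5]   # corner at u = 1/2 breaks smoothness
```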
# ╔═╡ 28abbec4-f1ea-4edb-8bd7-bcdc63c7cd82
@@ -107,11 +104,11 @@ We begin by defining three segmentation methods: 1. adaptive subdivision, 2. cur
The adaptive subdivision method recursively splits segments until local deviation and length criteria are met. For each segment $[u_i,u_{i+1}]$, we approximate the segment curve length $\Delta l$ and the maximum deviation $\delta$ by sampling points along the segment. If either limit is exceeded the segment is bisected at $u_m = (u_i + u_{i+1})/2$. This process continues until all segments satisfy the criteria.
-The limitation of this method is that it is inherently binary: segments are either split or not based on local criteria. Therefore, any violation of the tolerance, no matter how small, forces a full bisection, resulting in up to 50% local oversampling. The discrete nature of the approach also makes the final segmentation sensitive to small changes in the curve are limits, adding discretization noise to methods using the segmentation, such as convergence and optimization studies.
+The limitation of this method is that it is inherently binary: segments are either split or not based on local criteria. Therefore, any violation of the tolerance, no matter how small, forces a full bisection, resulting in up to 50% local oversampling. The discrete nature of the approach also makes the final segmentation sensitive to small changes in the curve or limits, adding discretization noise to methods using the segmentation, such as convergence and optimization studies.
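To make the procedure concrete, here is a minimal sketch of the recursive bisection described above. It is my own illustration, not the notebook's implementation: the sampling count is arbitrary and the deviation limit is taken as $d_n\Delta s$, consistent with the scaled metric `δ∞` defined later in the writeup.

```julia
using LinearAlgebra: norm, dot

# Recursively bisect [u₁,u₂] until the sampled length and chord-deviation criteria hold.
# Returns the parameter breakpoints, endpoints included.
function subdivide(r, u₁, u₂; Δs, d_n, nsample=9)
    u   = range(u₁, u₂, length=nsample)
    pts = r.(u)
    Δl  = sum(norm(pts[i+1] - pts[i]) for i in 1:nsample-1)        # approximate segment length
    a, b = pts[1], pts[end]
    t̂   = (b - a) / norm(b - a)                                    # unit chord direction
    δ   = maximum(norm((p - a) - dot(p - a, t̂) * t̂) for p in pts)  # max deviation from the chord
    if Δl > Δs || δ > d_n * Δs                                     # either limit exceeded ⇒ bisect
        uₘ = (u₁ + u₂) / 2
        return vcat(subdivide(r, u₁, uₘ; Δs, d_n, nsample),
                    subdivide(r, uₘ, u₂; Δs, d_n, nsample)[2:end])
    end
    [u₁, u₂]
end
```

The binary split/no-split decision in the `if` is exactly what produces the up-to-50% oversampling noted above.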
-### 2. Curvature-weighted sampling
+### 2. Curvature-weighted sampling
-A common method to smoothly control the deviation is to reparameterize the curve with the local curvature $\kappa$ to increase sampling density in high-curvature regions. A simple choice is to define a reparameterized speed function $s' = l'\sqrt{1 + \tilde C \kappa}$, where $\tilde C$ is a tunable constant. This results in smooth segmentations, but there is no direct control over the maximum deviation from the curve, and the method requires iterative tuning of $\tilde C$ to achieve any required deviation limits.
+A common method to smoothly control the deviation is to reparameterize the curve with the local curvature $\kappa$ to increase sampling density in high-curvature regions. A simple choice is to define a reparameterized speed function $s' = l'\sqrt{1 + \tilde C \kappa}$, where $l' = \lVert r'(u) \rVert$ is the arc-speed and $\tilde C$ is a tunable constant. This results in smooth segmentations, but there is no direct control over the maximum deviation from the curve, and the method requires iterative tuning of $\tilde C$ to achieve any required deviation limits.
Note that the units of $\tilde C$ are length, so it is sensible to set $\tilde C = C \Delta s$ for some dimensionless $C$. However, this parameter still must be tuned and won't generalize across curves or sampling densities, as demonstrated here.
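As a concrete (and deliberately simplified) sketch of this approach, the following places `N+1` points at equal increments of the reparameterized coordinate $s$. The finite-difference derivatives, quadrature resolution, and default values are my own assumptions, not the notebook's code; `r` is assumed evaluable slightly outside $[0,1]$.

```julia
using LinearAlgebra: norm, dot

function curvature_weighted_points(r, N; C=1.0, Δs=0.05, h=1e-4)
    r′(u)  = (r(u + h) - r(u - h)) / 2h                 # central-difference derivatives
    r′′(u) = (r(u + h) - 2r(u) + r(u - h)) / h^2
    function s′(u)                                      # reparameterized speed, with C̃ = C*Δs
        l′  = norm(r′(u))
        l′′ = dot(r′(u), r′′(u)) / l′                   # rate of change of the arc-speed
        κ   = sqrt(max(norm(r′′(u))^2 - l′′^2, 0)) / l′^2
        l′ * sqrt(1 + C * Δs * κ)
    end
    u = range(0, 1, length=2001)
    S = cumsum([0.0; [(s′(u[i]) + s′(u[i+1])) / 2 * step(u) for i in 1:length(u)-1]])
    [u[searchsortedfirst(S, t)] for t in range(0, S[end], length=N + 1)]   # equal steps in s
end
```

Raising `C` concentrates points in high-curvature regions, but the resulting maximum deviation still has to be measured and `C` retuned per curve and per resolution, which is the iterative loop the next section removes.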
@@ -123,8 +120,8 @@ Starting from the curvature-based deviation estimate $\delta\approx \frac 1 8 \D
$s' \geq l' \sqrt{\frac{\Delta s \kappa}{8d_n}}.$
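Filling in the step hidden by the truncated hunk context above: assuming the standard chord-deviation estimate $\delta \approx \frac18 \Delta l^2\kappa$ and a local segment length $\Delta l \approx \Delta s\,l'/s'$ (both my reading of the writeup, not quoted from it), requiring $\delta \le d_n\Delta s$ gives

$\frac18\left(\frac{\Delta s\, l'}{s'}\right)^{2}\kappa \le d_n\Delta s \quad\Longrightarrow\quad s'^2 \ge \frac{l'^2\,\Delta s\,\kappa}{8 d_n} \quad\Longrightarrow\quad s' \ge l'\sqrt{\frac{\Delta s\,\kappa}{8 d_n}},$

which is the bound displayed above.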
-Defining the geometric normal arc-acceleration $a_n=\sqrt{|r''|^2-l''^2} = l'^2 \kappa$, and
-demanding also that $s' \geq l'$ such that $\Delta l \leq \Delta s$ implies that the tightest speed parameterization to the two bounds is:
+Defining the curve's geometric normal acceleration $a_n\equiv\sqrt{\lVert r''\rVert^2-l''^2} = l'^2 \kappa$, and
+demanding also that $s' \geq l'$ such that $\Delta l \leq \Delta s$ yields the speed parameterization which is tight to both limits:
$s' = \max\left(l', \sqrt{\frac{\Delta s a_n}{8 d_n}}\right)$
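A minimal sketch of how the parameterization above could drive one-shot sampling, under the same finite-difference assumptions as the previous sketch (again my illustration, not the notebook's implementation): integrate $s'$ once, then place points at equal increments of $s$ no larger than $\Delta s$.

```julia
using LinearAlgebra: norm, dot

function ndlp_points(r; Δs=0.05, d_n=0.01, h=1e-4)
    r′(u)  = (r(u + h) - r(u - h)) / 2h
    r′′(u) = (r(u + h) - 2r(u) + r(u - h)) / h^2
    function s′(u)
        l′  = norm(r′(u))
        l′′ = dot(r′(u), r′′(u)) / l′                   # rate of change of the arc-speed
        aₙ  = sqrt(max(norm(r′′(u))^2 - l′′^2, 0))      # normal acceleration aₙ = l′²κ
        max(l′, sqrt(Δs * aₙ / (8d_n)))                 # speed tight to both bounds
    end
    u = range(0, 1, length=2001)
    S = cumsum([0.0; [(s′(u[i]) + s′(u[i+1])) / 2 * step(u) for i in 1:length(u)-1]])
    N = ceil(Int, S[end] / Δs)                          # enough segments so each s-step ≤ Δs
    [u[searchsortedfirst(S, t)] for t in range(0, S[end], length=N + 1)]
end
```

Because $s' \ge l'$, each $s$-step of at most $\Delta s$ maps to a chord no longer than $\Delta s$, while the curvature term keeps the chord's deviation below $d_n\Delta s$, so no iteration on a tuning constant is needed.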
@@ -137,15 +134,13 @@ Note that while the deviation estimate that is the basis of this parameterizatio
md"""
### Results
-We evaluate each method on the test curves defined above, using a maximum segment length of $\Delta s=L/20$ and a deviation limit of $d_n=1/100$. The results are summarized in the tables below, reporting the following metrics:
-- `δ∞`=$\max(\delta)/(d_n\Delta s)$: the scaled normal deviation,
+We evaluate each method on the test curves defined above, using a maximum segment length of $\Delta s=L/33$ and a deviation limit of $d_n=1/100$. The results are summarized in the tables below, reporting the following metrics:
+- `δ∞`=$\max(\delta)/(d_n\Delta s)$: the scaled normal deviation (target is 1),
- `σ`=$N\Delta s/L-1$: the scaled number of extra segments needed to hit the deviation limit (lower is better),
- `Rₜᵥ`=$\sum(R)/L$ where $R_i=|Δl_{i+1}-Δl_i|$: the scaled total variation of segment lengths (lower is better),
- `R∞`=$\max(R)/\Delta s$: the scaled maximum variation of segment lengths (lower is better).
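A sketch of how these metrics might be assembled from a segmentation (hypothetical names; the point list `pts`, per-segment deviations `δs`, and total curve length `L` are assumed to be computed elsewhere — the notebook's metric code is not shown in this hunk):

```julia
using LinearAlgebra: norm

function segmentation_metrics(pts, δs; Δs, d_n, L)
    Δl = [norm(pts[i+1] - pts[i]) for i in 1:length(pts)-1]   # chord lengths
    R  = abs.(diff(Δl))                                       # adjacent-length variation Rᵢ
    (δ∞ = maximum(δs) / (d_n * Δs),                           # scaled max deviation (target 1)
     σ  = length(Δl) * Δs / L - 1,                            # scaled excess segment count
     Rₜᵥ = sum(R) / L,                                        # scaled total variation
     R∞ = maximum(R) / Δs)                                    # scaled max variation
end
```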
-The NDLP segmentation consistently produces a max deviation which is tight to the prescribed limit, resulting in with minimal excess segments, outperforming both adaptive subdivision and curvature-weighted sampling. In particular the measured deviation is within ±4% of the deviation limit, while the curvature weighted method has a ±15% variation **even after tuning**, and the subdivision method can be up to 50% oversampled.
-
-Adjacent segment lengths under NDLP sampling differ by at most $O(\Delta s)$, indicating Lipschitz-continuous spacing adaptation with respect to arclength. Moreover, for fixed deviation limit, the parameterization converges to uniform arclength as $\Delta s\rightarrow 0$ by construction, so both the total variation and the maximum local spacing variation vanish under refinement.
+The NDLP segmentation consistently produces a max deviation which is tight to the prescribed limit, resulting in minimal excess segments, outperforming both adaptive subdivision and curvature-weighted sampling. In particular, the deviation from NDLP sampling is within 4% of the deviation limit, while the curvature-weighted method has a ±20% variation **even after tuning**, and the subdivision method can be up to 50% oversampled. Adjacent segment lengths under NDLP sampling differ by at most $O(\Delta s)$, indicating Lipschitz-continuous spacing adaptation with respect to arclength.
We also evaluate the convergence of the NDLP segmentation metrics as the $\Delta s$ and $d_n$ limits vary. We use the cubic spline fish as a representative curve.
-Holding $d_n=1$% constant and *reducing* $\Delta s$ shows two distance phases.
+Holding $d_n=1\%$ constant and *reducing* $\Delta s$ shows two distinct phases.
- In the first phase, the deviation $\max(\delta)$ goes rapidly to the limit $d_n\Delta s$ and holds steady while the excess number of segments and total variation in the panel lengths drops to zero with $\Delta s$.
- In the second phase the deviation limit is no longer active, so $\max(\delta)$ goes to zero without any additional segments or any length variation.
"""
@@ -270,12 +265,12 @@ end
# ╔═╡ 67eba604-640b-4388-bc08-928f4e5e62ca
md"""
Holding $\Delta s=L/100$ and _increasing_ $d_n$ shows a similar trend.
-- Again, in the first phase, the deviation limit is honored while the excess segments and total variation in segment lengths drop to zero. The key difference is that the $\max(R)$ remains roughly constant through this range; approximately $\Delta s/3$ for this example. Again, this indicates Lipschitz-continuous spacing, even for finite $\Delta s$ and large $d_n$.
+- Again, in the first phase, the deviation limit is honored while the excess segments and total variation in segment lengths drop to zero. The key difference is that $\max(R)/\Delta s$ remains roughly constant through this range; approximately $1/3$ for this example. Again, this indicates Lipschitz-continuous spacing, even for finite $\Delta s$.
- In the second phase, the $d_n$ limit deactivates, letting $\max(\delta)$ scaled by $d_n$ and $\max(R)$ both drop to zero as the sampling becomes uniformly spaced.
0 commit comments