Commit 08f22ed

committed
Added theory, framework, and api documentation from legacy repo
1 parent 5719982 commit 08f22ed

25 files changed: +1635 −0 lines changed
Lines changed: 28 additions & 0 deletions
@@ -0,0 +1,28 @@
```yaml
title: ForceFinder Documentation
author: National Technology & Engineering Solutions of Sandia, LLC (NTESS)
copyright: '2025'
execute:
  execute_notebooks: auto
  timeout: 600

sphinx:
  extra_extensions:
    - autoapi.extension
    - sphinx.ext.napoleon
    - sphinx.ext.viewcode
  config:
    autoapi_dirs: ["../../src/forcefinder"]
    autoapi_python_use_implicit_namespaces: true
    autoapi_keep_files: false
    add_module_names: false
    autoapi_add_toctree_entry: true
    toc_object_entries_show_parents: "hide"
    autoapi_generate_api_docs: true
    autoapi_file_patterns: ["*.py"]
    autoapi_ignore: ["*/demo/"]
    autoapi_options:
      - members
      # - undoc-members
      - show-inheritance
      - show-module-summary
      - no-init-method
```
Lines changed: 20 additions & 0 deletions
@@ -0,0 +1,20 @@
```yaml
format: jb-book
root: theory_api_documentation_intro
chapters:
  - file: forcefinder_framework
    sections:
      - file: spr_types
      - file: automatic_bookkeeping
      - file: inverse_method_code
      - file: object_attribute_definitions
  - file: method_theory_intro
    sections:
      - file: inverse_problem_form
      - file: regularization_methods
      - file: hyperparameter_tuning_methods
      - file: transformation_theory
      - file: miscellaneous_techniques
  - file: error_metrics
  - file: example_utilization
  - file: autoapi/index
    title: API Reference
```
Lines changed: 41 additions & 0 deletions
@@ -0,0 +1,41 @@
# Automatic Bookkeeping, Sample Splitting, and Inverse Processing
In most cases, the data for the SPR object is passed to the initializer/constructor function as SDynPy objects, which allows ForceFinder to automatically organize the data. Once in the SPR object, the data is stored as NumPy arrays to reduce overhead and simplify computations. As a result, the data is prepared for fundamental ISE operations upon object creation, meaning that the practitioner does not need to consider any bookkeeping operations.

```{note}
Individual pieces of data (FRFs, response, etc.) are recalled as class attributes of the SPR object, which are returned as SDynPy arrays. For example, `spr_object.frfs` will return a SDynPy `TransferFunctionArray` of the FRFs in the SPR object.
```

## Sample Splitting
The SPR object initializer function includes methods for splitting the response degrees of freedom (DOFs) into so-called "training" and "validation" DOFs. This allows the practitioner to split the response and FRF data so only the training data is used for the ISE and the validation data is held out for optional quality evaluations. The training and validation DOFs are concatenated to create a superset of "target" response DOFs, which are DOFs in the FRFs that have accompanying response data. The difference between the training and validation DOFs is inferred by the initializer function in one of two ways:

1. The practitioner can supply the target and training response data as separate SDynPy objects. The function will determine the validation DOFs based on the DOFs that are not in the intersection between the target and training data.
2. The user can supply the target response as a single SDynPy object and specify the training response DOFs with a SDynPy `CoordinateArray`. The function will split the supplied `target_response` into the training and validation responses accordingly. The validation DOFs are identified based on the DOFs that are not in the intersection between the target and training `CoordinateArrays`.
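The second option above amounts to a set difference on the DOF labels. A minimal sketch, with hypothetical plain tuples of (node, direction) standing in for SDynPy `CoordinateArray` objects:

```python
# Illustrative sketch of the DOF-splitting logic (option 2 above); this is
# not ForceFinder's actual implementation, which operates on SDynPy
# CoordinateArray objects rather than plain tuples.

def split_target_dofs(target_dofs, training_dofs):
    """Return (training, validation) DOF lists, preserving target order.

    Validation DOFs are the target DOFs that are not in the intersection
    between the target and training DOF sets.
    """
    training_set = set(training_dofs)
    training = [dof for dof in target_dofs if dof in training_set]
    validation = [dof for dof in target_dofs if dof not in training_set]
    return training, validation
```

For example, if the target DOFs are `[("101", "X+"), ("101", "Y+"), ("205", "Z+")]` and the training DOFs are `[("101", "X+"), ("205", "Z+")]`, the held-out validation set is `[("101", "Y+")]`.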

```{tip}
Accompanying response data is not required for all the response DOFs in the FRFs, meaning that responses can be predicted at locations where measured `target_response` data is unavailable.
```
```{tip}
The practitioner does not need to explicitly supply separate training and target response data or DOFs. The initializer will assume that the target and training DOFs are the same (i.e., there are no validation DOFs) if it cannot infer the sample split with the methods that are described above.
```
```{note}
The training and target data (for either the FRFs or responses) do not need to have the same ordinate, for cases where the data has been processed differently.
```

## Inverse Processing Decorator Functions
A so-called `inverse_processing` decorator function has been applied to every inverse method in ForceFinder (where there are different decorator functions for the different SPR types). These functions handle all the pre/post-processing tasks that are common to every inverse method. These tasks include:
- Applying transformations
- Applying the buzz method (for `PowerSourcePathReceiver` objects)
- Applying constant overlap and add (COLA) processing for the `TransientSourcePathReceiver`
```{note}
Optional kwargs exist in the function signature for the inverse methods to enable/disable or modify default parameters for some of the pre/post-processing in the `inverse_processing` decorator functions.
```

These `inverse_processing` decorator functions follow the same general process for each SPR type:
1. Collect the FRF and response data from the SPR object and preprocess it for the inverse method
2. Supply the preprocessed FRF and response data to the inverse method
3. Collect the estimated source data from the inverse method and convert it back to a physical quantity (if a transformation was applied)
4. Store the estimated sources (as a physical quantity) in the SPR object
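The four steps above follow the standard Python decorator pattern. The sketch below is purely illustrative; the method and attribute names (`preprocess`, `to_physical`, `source_estimate`) are hypothetical, not ForceFinder's actual internals:

```python
import functools

# Hypothetical sketch of an inverse_processing decorator; illustrates the
# wrap-preprocess-call-postprocess pattern described above, not the real code.
def inverse_processing(inverse_method):
    @functools.wraps(inverse_method)
    def wrapper(spr_object, **kwargs):
        # 1. Collect and preprocess the FRF and response data
        frfs, response = spr_object.preprocess(**kwargs)
        # 2. Supply the preprocessed data to the inverse method
        sources = inverse_method(spr_object, frfs, response, **kwargs)
        # 3. Convert the estimated sources back to a physical quantity
        physical_sources = spr_object.to_physical(sources)
        # 4. Store the physical sources on the SPR object
        spr_object.source_estimate = physical_sources
        return physical_sources
    return wrapper
```

Because the decorator owns the shared pre/post-processing, a new inverse method only needs to implement the core estimation step and accept the extra kwargs the decorator forwards.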

```{note}
The `inverse_processing` decorator functions require that additional kwargs be added to the function signature for the inverse methods, as described in [Anatomy of an Inverse Method](inverse_method_code). Further, the use of the `inverse_processing` decorator functions should be transparent in most basic uses of ForceFinder. However, it is useful to understand the layout of the functions when reviewing code or implementing a new inverse method.
```
Lines changed: 176 additions & 0 deletions
@@ -0,0 +1,176 @@
# Error Metrics
Several error metrics have been implemented in ForceFinder to evaluate the accuracy of the source estimation via the accuracy of the reconstructed response compared to a truth response. Most of these metrics compute a summary curve that attempts to represent the errors for all the DOFs in a single spectrum or time trace. Additional methods are available to _plot_ the errors in the reconstructed responses, but they are not described here.

Each of the error metric methods in ForceFinder has an optional kwarg called `channel_set`, which determines the DOFs of the truth and reconstructed responses that the metric is computed for. The options for this kwarg are:

- `training` - This computes the error metric between the `transformed_training_response` and `transformed_reconstructed_response` attributes of the SPR object.
- `validation` - This computes the error metric between the `validation_response` and `reconstructed_validation_response` attributes of the SPR object.
- `target` - This computes the error metric between the `target_response` and `reconstructed_target_response` attributes of the SPR object.

```{note}
The error metrics in ForceFinder are used to understand how well the estimated sources reconstruct the training/validation/target responses, which may not be indicative of the sources' ability to predict responses at unseen locations or on unseen systems (in the case of component-based TPA).
```
```{tip}
Unless otherwise noted, the error metrics are implemented as class methods in ForceFinder and are used with a method call on the SPR object, such as: `spr_object.error_metric()`.
```

## Error Metrics for Spectral ISE Problems
The `LinearSourcePathReceiver` and `PowerSourcePathReceiver` use the same metrics, which evaluate the error in the PSDs for the different DOF sets. The responses for the `LinearSourcePathReceiver` must be converted from linear spectra to PSDs prior to computing the error metric. This is done with the following operation:

$$
G_{xx} = \frac{1}{\Delta f}\lvert X \rvert^2
$$

Where $G_{xx}$ is a PSD for a single DOF, $X$ is a spectrum for a single DOF, and $\Delta f$ is the frequency resolution of the SPR object, which is given by the `abscissa_spacing` attribute.
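In NumPy terms, this conversion is a one-liner. The function name and array layout below are illustrative, not ForceFinder's API:

```python
import numpy as np

def spectrum_to_psd(X, df):
    """Convert complex linear spectra to PSDs: G_xx = |X|^2 / df.

    X  : complex array of linear spectra, e.g. shape (n_dofs, n_freqs)
    df : frequency resolution (the SPR object's abscissa_spacing)
    """
    return np.abs(X) ** 2 / df
```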

```{note}
The error metrics for spectral ISE problems commonly use the ASD acronym, which stands for auto-spectral density and is equivalent to a PSD. The ASD acronym is used here to follow the convention for MIMO vibration testing standards.
```
```{note}
All the equations for spectral ISE problems have a frequency dependency, but this has been left out for brevity.
```

(sec:global_asd_error)=
### Global ASD Error
The global ASD error, which is computed with the `global_asd_error` method, is a summary metric that is defined in MIL-STD-810. It sums the dB error for all the response DOFs while applying weights that are based on the relative response amplitudes. These weights are used to make the metric sensitive to errors in responses with large amplitudes and insensitive to errors in responses that have small amplitudes. As such, the global ASD error metric helps determine if the estimated sources apply sufficient vibration energy to a system in MIMO vibration testing, but may not be useful for a detailed investigation of the errors, since low responding DOFs may be ignored.

The global ASD error is computed via a four step process:

1. A normalizing factor, $\eta$, is first computed by taking the L2 norm of the truth response PSDs:

$$
\eta = \lVert \mathrm{diag} \begin{pmatrix}\begin{bmatrix} G_{xx}^{truth} \end{bmatrix}\end{pmatrix} \rVert_2
$$

2. A weighting vector, $\begin{Bmatrix}W\end{Bmatrix}$, is computed by dividing the squared PSD amplitude for each DOF by $\eta^2$:

$$
\begin{Bmatrix}W\end{Bmatrix} = \frac{\mathrm{diag} \begin{pmatrix}\begin{bmatrix} G_{xx}^{truth} \end{bmatrix}\end{pmatrix}^2}{\eta^2}
$$

3. The dB error is computed for each DOF:

$$
\begin{Bmatrix}E_{dB}\end{Bmatrix} = 10\log_{10}\begin{pmatrix} \frac{\mathrm{diag} \begin{pmatrix}\begin{bmatrix} G_{xx}^{reconstructed} \end{bmatrix}\end{pmatrix}}{\mathrm{diag} \begin{pmatrix}\begin{bmatrix} G_{xx}^{truth} \end{bmatrix}\end{pmatrix}} \end{pmatrix}
$$

4. Finally, the global ASD error is computed by summing the element-wise multiplication of $\begin{Bmatrix}E_{dB}\end{Bmatrix}$ and $\begin{Bmatrix}W\end{Bmatrix}$:

$$
E_{global} = \sum{\begin{Bmatrix}E_{dB}\end{Bmatrix}*\begin{Bmatrix}W\end{Bmatrix}}
$$
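The four steps above can be sketched in NumPy, assuming the PSD diagonals for the truth and reconstructed responses are stored as `(n_dofs, n_freqs)` arrays (names and layout are illustrative, not ForceFinder's internal code):

```python
import numpy as np

def global_asd_error(G_truth, G_recon):
    """Weighted-sum dB error between truth and reconstructed ASDs.

    G_truth, G_recon : real arrays of shape (n_dofs, n_freqs) holding the
    diagonals of the PSD matrices at each frequency line.
    Returns the global error spectrum of shape (n_freqs,).
    """
    eta = np.linalg.norm(G_truth, axis=0)        # step 1: L2 norm over DOFs
    W = G_truth**2 / eta**2                      # step 2: amplitude weights
    E_dB = 10 * np.log10(G_recon / G_truth)      # step 3: DOF-by-DOF dB error
    return np.sum(E_dB * W, axis=0)              # step 4: weighted sum
```

Note that the weights sum to one at each frequency line, so a uniform error on every DOF passes through unchanged.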

### Average ASD Error
The average ASD error, which is computed with the `average_asd_error` method, is a simple average of the dB error spectra for all the response DOFs. The metric is computed with:

$$
E_{average} = 10\log_{10}\begin{pmatrix} \frac{1}{n}\sum{\frac{\mathrm{diag} \begin{pmatrix}\begin{bmatrix} G_{xx}^{reconstructed} \end{bmatrix}\end{pmatrix}}{\mathrm{diag} \begin{pmatrix}\begin{bmatrix} G_{xx}^{truth} \end{bmatrix}\end{pmatrix}}} \end{pmatrix}
$$

Where $n$ is the number of response DOFs for the metric computation.

```{note}
Decibel values are averaged on the corresponding linear values, which is why the average is done on the ratio of the reconstructed and truth PSDs.
```
```{note}
Many ISE problems are computed as least squares problems, which tend to result in similar quantities of positive and negative errors. Consequently, the average ASD error may show less error than a subjective perception of the DOF by DOF error. However, it can be useful for quickly identifying large bias errors.
```

### RMS ASD Error
The RMS ASD error, which is computed with the `rms_asd_error` method, is a summary metric that computes the RMS value of the dB error spectra for all the response DOFs. This is done to have an error metric that treats the positive and negative errors the same, which may potentially be a better match to a subjective perception of the DOF by DOF error than the `average_asd_error`. The metric is computed with:

$$
\begin{Bmatrix}E_{dB}\end{Bmatrix} = 10\log_{10}\begin{pmatrix} \frac{\mathrm{diag} \begin{pmatrix}\begin{bmatrix} G_{xx}^{reconstructed} \end{bmatrix}\end{pmatrix}}{\mathrm{diag} \begin{pmatrix}\begin{bmatrix} G_{xx}^{truth} \end{bmatrix}\end{pmatrix}} \end{pmatrix}
$$

$$
E_{rms} = \sqrt{\frac{1}{n}\sum{\begin{Bmatrix}E_{dB}\end{Bmatrix}^2}}
$$

Where $\begin{Bmatrix} E_{dB} \end{Bmatrix}$ is the DOF by DOF dB error spectra and $n$ is the number of response DOFs for the metric computation.
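Both the average and RMS ASD errors reduce to a few NumPy operations. A sketch under the same assumed `(n_dofs, n_freqs)` array layout (names are illustrative):

```python
import numpy as np

def average_asd_error(G_truth, G_recon):
    """Average dB error: linear ratios are averaged before taking dB."""
    return 10 * np.log10(np.mean(G_recon / G_truth, axis=0))

def rms_asd_error(G_truth, G_recon):
    """RMS of the DOF-by-DOF dB error spectra (sign-insensitive)."""
    E_dB = 10 * np.log10(G_recon / G_truth)
    return np.sqrt(np.mean(E_dB**2, axis=0))
```

With one DOF at +10 dB error and another at −10 dB, the RMS error reports 10 dB while the average partially cancels, which illustrates the least-squares cancellation effect noted above.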

## Error Metrics for Transient Problems
Several error metrics have been implemented for transient problems, which attempt to evaluate the errors in response level, waveform shape, and spectral content. All of these metrics are time varying and are computed by splitting the full time trace into segments and computing the error on a segment-by-segment basis. The segmentation is specified with two parameters:

- Segment duration - This is the duration of each segment, which is specified as a time with the `frame_length` kwarg or an integer number of samples with the `samples_per_frame` kwarg.
- Overlap between segments - This is the overlap between adjacent segments, which is specified as a percentage (in decimal format) with the `overlap` kwarg or as an integer number of samples with the `overlap_samples` kwarg.
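The segmentation described above can be sketched with NumPy using the sample-count style of the kwargs (an illustration, not ForceFinder's internal implementation):

```python
import numpy as np

def segment_signal(x, samples_per_frame, overlap_samples=0):
    """Split a 1-D time trace into overlapping frames.

    Frames advance by (samples_per_frame - overlap_samples) samples; any
    trailing partial frame is discarded. Returns an array of shape
    (n_frames, samples_per_frame).
    """
    step = samples_per_frame - overlap_samples
    n_frames = 1 + (len(x) - samples_per_frame) // step
    return np.stack([x[i * step : i * step + samples_per_frame]
                     for i in range(n_frames)])
```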

### Global RMS Error
The global RMS error, which is computed with the `global_rms_error` method, is a summary metric that is defined in MIL-STD-810. It sums the RMS errors for all the response DOFs while applying weights that are based on the relative response amplitudes. These weights are used to make the metric sensitive to errors in responses with large amplitudes and insensitive to errors in responses that have small amplitudes. As such, the global RMS error metric helps determine if the estimated sources apply sufficient vibration energy to a system in MIMO vibration testing, but may not be useful for a detailed investigation of the errors, since low responding DOFs may be ignored.

The global RMS error is computed with the same four step process that is used for the [global ASD error](sec:global_asd_error), but applied to RMS levels vs. time instead of PSD amplitudes vs. frequency:

1. A normalizing factor, $\eta$, is first computed by taking the L2 norm of the time varying RMS level for the truth response, $\begin{Bmatrix} {RMS}^{truth} \end{Bmatrix}$:

$$
\eta = \lVert \begin{Bmatrix} {RMS}^{truth} \end{Bmatrix} \rVert_2
$$

2. A weighting vector, $\begin{Bmatrix}W\end{Bmatrix}$, is computed by dividing the squared time varying RMS level for the truth response by $\eta^2$:

$$
\begin{Bmatrix}W\end{Bmatrix} = \frac{\begin{Bmatrix} {RMS}^{truth} \end{Bmatrix}^2}{\eta^2}
$$

3. The dB error between the time varying RMS level for the truth response, $\begin{Bmatrix} RMS^{truth} \end{Bmatrix}$, and reconstructed response, $\begin{Bmatrix} RMS^{reconstructed} \end{Bmatrix}$, is computed for each DOF:

$$
\begin{Bmatrix}E_{dB}\end{Bmatrix} = 20\log_{10}\begin{pmatrix} \frac{\begin{Bmatrix} {RMS}^{reconstructed} \end{Bmatrix}}{\begin{Bmatrix} {RMS}^{truth} \end{Bmatrix}} \end{pmatrix}
$$

4. Finally, the global RMS error is computed by summing the element-wise multiplication of $\begin{Bmatrix}E_{dB}\end{Bmatrix}$ and $\begin{Bmatrix}W\end{Bmatrix}$:

$$
E_{global} = \sum{\begin{Bmatrix}E_{dB}\end{Bmatrix}*\begin{Bmatrix}W\end{Bmatrix}}
$$

### Average RMS Error
The average RMS error, which is computed with the `average_rms_error` method, is a simple average of the dB RMS level error time traces for all the response DOFs. The metric is computed for each time segment with the following expression:

$$
E_{average} = 20\log_{10}\begin{pmatrix} \frac{1}{n}\sum{\frac{\begin{Bmatrix} {RMS}^{reconstructed} \end{Bmatrix}}{\begin{Bmatrix} {RMS}^{truth} \end{Bmatrix}}} \end{pmatrix}
$$

Where $n$ is the number of response DOFs for the metric computation, $\begin{Bmatrix}RMS^{reconstructed}\end{Bmatrix}$ is the time varying RMS level for the reconstructed response, and $\begin{Bmatrix}RMS^{truth}\end{Bmatrix}$ is the time varying RMS level for the truth response.

```{note}
Decibel values are averaged on the corresponding linear values, which is why the average is done on the ratio of the reconstructed and truth RMS levels.
```
```{note}
Many ISE problems are computed as least squares problems, which tend to result in similar quantities of positive and negative errors. Consequently, the average RMS error may show less error than a subjective perception of the DOF by DOF error. However, it can be useful for quickly identifying large bias errors.
```

### Time Varying TRAC
As the name implies, this metric computes a time response assurance criterion (TRAC) time trace (based on the segmentation) for all the response DOFs and is computed with the `time_varying_trac` method. The TRAC is computed for each DOF (at each segment) with:

$$
{TRAC} = \frac{\begin{pmatrix}\begin{Bmatrix}x_n^{truth}\end{Bmatrix} \cdot \begin{Bmatrix}x_n^{reconstructed}\end{Bmatrix}\end{pmatrix}^2}{\begin{pmatrix}\begin{Bmatrix}x_n^{truth}\end{Bmatrix} \cdot \begin{Bmatrix}x_n^{truth}\end{Bmatrix}\end{pmatrix}*\begin{pmatrix}\begin{Bmatrix}x_n^{reconstructed}\end{Bmatrix} \cdot \begin{Bmatrix}x_n^{reconstructed}\end{Bmatrix}\end{pmatrix}}
$$

Where $n$ represents the response DOF index and the response vectors, $\begin{Bmatrix}x_n^{truth}\end{Bmatrix}$ and $\begin{Bmatrix}x_n^{reconstructed}\end{Bmatrix}$, are the time traces for a specific time segment and DOF (i.e., each entry in the vector is a different time sample).
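For two 1-D time segments, the TRAC above is a normalized squared dot product. A minimal NumPy sketch:

```python
import numpy as np

def trac(x_truth, x_recon):
    """Time response assurance criterion between two 1-D time segments.

    Returns 1.0 for perfectly correlated signals (including amplitude-scaled
    copies) and 0.0 for orthogonal signals.
    """
    num = np.dot(x_truth, x_recon) ** 2
    den = np.dot(x_truth, x_truth) * np.dot(x_recon, x_recon)
    return num / den
```

Because the numerator and denominator scale identically, the TRAC measures waveform shape agreement and is insensitive to overall amplitude errors, which is why it is paired with the level-based metrics above.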
151+
152+
```{note}
153+
The `time_varying_trac` method returns a SDynPy `TimeHistoryArray` with the time varying TRAC for each DOF and does not attempt to summarize the TRACs for the different DOFs into a single curve.
154+
```
155+
156+
### Time Varying Level Error
157+
The time varying level error, which is computed with the `time_varying_level_error` method, computes the response level error in dB for all the DOFs rather than computing a single summary curve (like the `global_rms_error`, etc.). Currently, two types of levels are supported: the segment RMS level error and the segment maximum level error.
158+
159+
### Spectrogram Error
160+
The spectrogram error computes a short-time Fourier transform (STFT) PSD for all the DOFs, then computes the dB error between the truth and reconstructed STFTs. This metric is computed with the `compute_error_stft` function that is in the `transient_quality_metrics` module. The spectrogram error attempts to show the spectral errors as a function of time and can be useful to develop a thorough understanding of the errors in the ISE problem.
161+
162+
It can be difficult to interpret the spectrograms to determine if the errors are significant to the overall response. This is because the dB error calculation makes it impossible to understand the response amplitude vs. time. For example, the spectrogram error could show high error but that error might occur at a time or frequency that has a low response amplitude, meaning that the error is insignificant to the overall response. An RMS level normalization, which is called with the `normalize_by_rms` kwarg, was added to the `compute_error_stft` function in an attempt to mitigate this issue.
163+
164+
The normalization is computed for a specific DOF with:
165+
166+
$$
167+
\eta = \frac{{RMS}^{truth}}{max\begin{pmatrix} {RMS}^{truth} \end{pmatrix}}
168+
$$
169+
170+
Where $\eta$ is the time varying normalization factor (for a specific DOF), ${RMS}^{truth}$ is the RMS value vs. time (based on the segmentation), and $max\begin{pmatrix} {RMS}^{truth} \end{pmatrix}$ computes the maximum RMS value for the whole time trace. This normalization is applied to the spectrogram error for a specific DOF with:
171+
172+
$$
173+
E_{STFT, normalized} = E_{STFT} * \eta
174+
$$
175+
176+
Where the normalization for each segment is computed for the dB error spectrum for each time segment.
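The normalization amounts to scaling each time segment (row) of the error spectrogram by its relative truth RMS level. A sketch, assuming the error spectrogram is stored as an `(n_segments, n_freqs)` array (names are illustrative, not the `compute_error_stft` internals):

```python
import numpy as np

def normalize_stft_error(E_stft, rms_truth):
    """Scale a dB error spectrogram by the RMS-level normalization factor.

    E_stft    : (n_segments, n_freqs) dB error spectrogram for one DOF
    rms_truth : (n_segments,) truth RMS level vs. time for that DOF
    """
    eta = rms_truth / np.max(rms_truth)          # time varying factor in [0, 1]
    return E_stft * eta[:, np.newaxis]           # scale each segment's spectrum
```

Segments where the truth response is quiet are scaled toward zero, so large dB errors at low-amplitude times no longer dominate the plot.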
Lines changed: 2 additions & 0 deletions
@@ -0,0 +1,2 @@
# Example Utilization
This section of the documentation is currently under development and will show some basic examples of how to use ForceFinder.
