
Commit 6df4906 (parent 1eb4f26)
Author: Maximilian Maahn

    docs: fix spelling errors in documentation (with AI)

5 files changed, 46 additions and 47 deletions


.gitignore — 2 additions, 1 deletion

@@ -8,4 +8,5 @@ pyOptimalEstimation/examples/*.png
 .pytest_cache
 .coverage
 .DS_Store
-.eggs
+.eggs
+.aider*

README.md — 1 addition, 1 deletion

@@ -28,7 +28,7 @@ Maahn, M., D. D. Turner, U. Löhnert, D. J. Posselt, K. Ebell, G. G. Mace, and J
 ## Examples
 
 * A minimal working example can be found at https://github.com/maahn/pyOptimalEstimation/blob/master/pyOptimalEstimation/examples/dsd_radar.py
-* Two fullly annotated examples (microwave temperature/humidity retrieval & radar drops size distribution retrieval) are available at https://github.com/maahn/pyOptimalEstimation_examples. They can be run online using binder.
+* Two fullly annotated examples (microwave temperature/humidity retrieval & radar drop size distribution retrieval) are available at https://github.com/maahn/pyOptimalEstimation_examples. They can be run online using binder.
 * A retrieval for retrieving surface winds from satellites using RTTOV is available at https://github.com/deweatherman/RadEst
 
 ## API documentation

docs/index.rst — 1 addition, 3 deletions

@@ -1,12 +1,11 @@
-
 :mod:`pyOptimalEstimation` Package
 ==================================
 
 
 .. toctree::
    :maxdepth: 3
 
-Python package to solve an inverse problem using Optimal Estimation and an arbritrary Forward model following Rodgers, 2000.
+Python package to solve an inverse problem using Optimal Estimation and an arbitrary Forward model following Rodgers, 2000.
 
 
 Download
@@ -57,4 +56,3 @@ API documentation
    :undoc-members:
    :show-inheritance:
    :member-order: bysource
-

pyOptimalEstimation/examples/dsd_radar.py — 1 addition, 1 deletion

@@ -3,7 +3,7 @@
 pyOptimalEstimation minimal working example
 
 Retrieve N0 and lambda of the drop size distribution N(D) = N0 * exp(-lambda*D)
-given a refletivity measurement and prior knowledge about N0 and lambda.
+given a reflectivity measurement and prior knowledge about N0 and lambda.
 Rayleigh scattering is assumed.
 
 # Copyright (C) 2014-21 Maximilian Maahn, Leipzig University
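The forward problem in this example has a closed form: under Rayleigh scattering, reflectivity is the sixth moment of the drop size distribution, so Z = N0 ∫ D⁶ exp(−λD) dD = N0 · 6!/λ⁷. A minimal sketch checking the closed form against a numerical integral (function names here are illustrative, not from the repository):

```python
import math

def reflectivity(n0, lam):
    """Closed-form sixth moment of N(D) = N0 * exp(-lambda * D): Z = N0 * 6! / lambda**7."""
    return n0 * math.factorial(6) / lam**7

def reflectivity_numeric(n0, lam, d_max=50.0, steps=200_000):
    """Crude left-Riemann approximation of Z = N0 * integral of D**6 * exp(-lambda*D)."""
    dd = d_max / steps
    return sum(n0 * (i * dd) ** 6 * math.exp(-lam * i * dd) * dd
               for i in range(steps))
```

For N0 = 1 and λ = 1 both give Γ(7) = 720, confirming the moment identity the example relies on.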

pyOptimalEstimation/pyOEcore.py — 41 additions, 41 deletions

@@ -44,34 +44,34 @@ class optimalEstimation(object):
         observed measurement vector y.
     S_y : pd.DataFrame or list or np.ndarray
         covariance matrix of measurement y. If there is no b vector, S_y
-        is sequal to S_e
+        is equal to S_e
     forward : function
         forward model expected as ``forward(xb,**forwardKwArgs): return y``
         with xb = pd.concat((x,b)).
     userJacobian : function, optional
-        For forwarld models that can calculate the Jacobian internally (e.g.
-        RTTOV), a call to estiamte the Jacobian can be added. Otherwise, the
-        Jacobian is estimated by pyOEusing the standard 'forward' call. The
-        fucntion is expected as ``self.userJacobian(xb, self.perturbation, \
+        For forward models that can calculate the Jacobian internally (e.g.
+        RTTOV), a call to estimate the Jacobian can be added. Otherwise, the
+        Jacobian is estimated by pyOE using the standard 'forward' call. The
+        function is expected as ``self.userJacobian(xb, self.perturbation, \
         self.y_vars, **self.forwardKwArgs): return jacobian``
         with xb = pd.concat((x,b)). Defaults to None
     x_truth : pd.Series or list or np.ndarray, optional
         If truth of state x is known, it can added to the data object. If
         provided, the value will be used for the routines linearityTest and
-        plotIterations, but _not_ by the retrieval itself. Defaults to None/
+        plotIterations, but _not_ by the retrieval itself. Defaults to None.
     b_vars : list of str, optional
         names of the elements of parameter vector b. Defaults to [].
     b_p : pd.Series or list or np.ndarray.
         parameter vector b. defaults to []. Note that defining b_p makes
-        only sence if S_b != 0. Otherwise it is easier (and cheaper) to
+        only sense if S_b != 0. Otherwise it is easier (and cheaper) to
         hardcode b into the forward operator.
     S_b : pd.DataFrame or list or np.ndarray
         covariance matrix of parameter b. Defaults to [[]].
     forwardKwArgs : dict,optional
         additional keyword arguments for ``forward`` function.
     multipleForwardKwArgs : dict,optional
         additional keyword arguments for forward function in case multiple
-        profiles should be provided to the forward operator at once. If not .
+        profiles should be provided to the forward operator at once. If not
         defined, ``forwardKwArgs`` is used instead and ``forward`` is called
         for every profile separately
     x_lowerLimit : dict, optional
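The documented contract above — ``forward(xb, **forwardKwArgs): return y`` with ``xb = pd.concat((x, b))`` — can be illustrated with a toy operator. This is a sketch, not code from the repository; the variable names (``N0``, ``lam``, ``Ze_dB``, ``scale``) are hypothetical:

```python
import numpy as np
import pandas as pd

def forward(xb, scale=1.0):
    # xb concatenates the state vector x and the parameter vector b;
    # elements are looked up by name, so element order does not matter.
    n0, lam = xb["N0"], xb["lam"]
    ze = scale * n0 * 720.0 / lam**7          # toy Rayleigh reflectivity (6th moment)
    return pd.Series([10.0 * np.log10(ze)], index=["Ze_dB"])

# Build xb the way the docstring describes: concatenating x and b.
x = pd.Series({"N0": 1e3})                    # state vector x
b = pd.Series({"lam": 2.0})                   # parameter vector b
y = forward(pd.concat((x, b)))
```

Returning a named ``pd.Series`` keeps the measurement vector aligned with ``y_vars`` by label rather than by position.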
@@ -81,15 +81,15 @@ class optimalEstimation(object):
         reset state vector x[key] to x_upperLimit[key] in case x_upperLimit is
         exceeded. Defaults to {}.
     perturbation : float or dict of floats, optional
-        relative perturbation of statet vector x to estimate the Jacobian. Can
-        be specified for every element of x seperately. Defaults to 0.1 of
+        relative perturbation of state vector x to estimate the Jacobian. Can
+        be specified for every element of x separately. Defaults to 0.1 of
         prior.
     disturbance : float or dict of floats, optional
-        DEPRECATED: Identical to ``perturbation`` option. If both option are
-        provided, ``perturbation`` is used instead.
+        DEPRECATED: Identical to ``perturbation`` option. If both options are
+        provided, ``perturbation`` is used instead.
     useFactorInJac : bool,optional
         True if disturbance should be applied by multiplication, False if it
-        should by applied by addition of fraction of prior. Defaults to False.
+        should be applied by addition of fraction of prior. Defaults to False.
     gammaFactor : list of floats, optional
         Use additional gamma parameter for retrieval, see [3]_.
     convergenceTest : {'x', 'y', 'auto'}, optional
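The two perturbation modes described by ``useFactorInJac`` differ only in how the perturbed state is built before the finite-difference quotient. The following is a generic sketch of that idea, not the library's actual implementation (its exact perturbation semantics may differ):

```python
import numpy as np

def estimate_jacobian(forward, x, perturbation=0.1, use_factor=False):
    """Finite-difference Jacobian dy/dx, one column per state element.

    use_factor=True  : perturbed x_j = x_j * (1 + perturbation)  (multiplicative)
    use_factor=False : perturbed x_j = x_j + perturbation * x_j  (fraction of prior)
    Assumes x_j != 0, since the step is proportional to x_j.
    """
    x = np.asarray(x, dtype=float)
    y0 = np.asarray(forward(x), dtype=float)
    jac = np.empty((len(y0), len(x)))
    for j in range(len(x)):
        xp = x.copy()
        if use_factor:
            xp[j] = x[j] * (1.0 + perturbation)
        else:
            xp[j] = x[j] + perturbation * x[j]
        jac[:, j] = (np.asarray(forward(xp), dtype=float) - y0) / (xp[j] - x[j])
    return jac
```

For a linear forward model the estimate is exact regardless of step size, which makes a handy sanity check.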
@@ -107,7 +107,7 @@ class optimalEstimation(object):
     Attributes
     ----------
     converged : boolean
-        True if retriveal converged successfully
+        True if retrieval converged successfully
     x_op : pd.Series
         optimal state given the observations, i.e. retrieval solution
     y_op : pd.Series
@@ -192,7 +192,7 @@ def __init__(self,
                 verbose=None
                 ):
 
-        # some initital tests
+        # some initial tests
         assert np.linalg.matrix_rank(S_a) == S_a.shape[-1],\
             'S_a must not be singular'
         assert np.linalg.matrix_rank(S_y) == S_y.shape[-1],\
@@ -693,8 +693,8 @@ def doRetrieval(self, maxIter=10, x_0=None, maxTime=1e7):
                 raise ValueError('Do not understand convergenceTest %s' %
                                  self.convergenceTest)
 
-            assert not self.d_i2[i] < 0, 'a negative convergence cirterion'
-            ' means someting has gotten really wrong'
+            assert not self.d_i2[i] < 0, 'a negative convergence criterion'
+            ' means something has gotten really wrong'
 
             # stop if we converged in the step before
             if self.converged:
@@ -741,7 +741,7 @@ def doRetrieval(self, maxIter=10, x_0=None, maxTime=1e7):
                 self.converged = True
             elif (i > 1) and (self.dgf_i[i] == 0):
                 print("%.2f s, iteration %i, degrees of freedom: %.2f of "
-                      "%i.degrees of freedom 0! STOP %.3f" % (
+                      "%i. degrees of freedom 0! STOP %.3f" % (
                           time.time() -
                           startTime, i, self.dgf_i[i], self.x_n,
                           self.d_i2[i]))
@@ -839,7 +839,7 @@ def linearityTest(
         against x_truth.
     atol : float (default 1e-5)
         The absolute tolerance for comparing eigen values to zero. We
-        found that values should be than the numpy.isclose defualt value
+        found that values should be than the numpy.isclose default value
         of 1e-8.
 
     Returns
@@ -849,7 +849,7 @@ def linearityTest(
         size. Should be below 1 for all.
     self.trueLinearityChi2: float
         Chi2 value that model is moderately linear based on 'self.x_truth'.
-        Must be smaller than critical value to conclude thast model is
+        Must be smaller than critical value to conclude that model is
         linear.
     self.trueLinearityChi2Critical: float
         Corresponding critical Chi2 value.
@@ -902,7 +902,7 @@ def chiSquareTest(self, significance=0.05):
     A) optimal solution agrees with observation in Y space
     B) observation agrees with prior in Y space
     C) optimal solution agrees with prior in Y space
-    D) optimal solution agrees with priot in X space
+    D) optimal solution agrees with prior in X space
 
     Parameters
     ----------
@@ -915,7 +915,7 @@ def chiSquareTest(self, significance=0.05):
     Pandas Series (dtype bool):
         True if test is passed
     Pandas Series (dtype float):
-        Chi2 value for tests. Must be smaler than critical value to pass
+        Chi2 value for tests. Must be smaller than critical value to pass
         tests.
     Pandas Series (dtype float):
         Critical Chi2 value for tests
@@ -978,17 +978,17 @@ def chiSquareTestYOptimalObservation(self, significance=0.05, atol=1e-5):
         correct null hypothesis is rejected.
     atol : float (default 1e-5)
         The absolute tolerance for comparing eigen values to zero. We
-        found that values should be than the numpy.isclose defualt value
+        found that values should be than the numpy.isclose default value
         of 1e-8.
     Returns
     -------
     chi2Passed : bool
-        True if chi² test passed, i.e. OE retrieval agrees with
+        True if chi² test passed, i.e. OE retrieval agrees with
         measurements and null hypothesis is NOT rejected.
     chi2 : real
         chi² value
     chi2TestY : real
-        chi² cutoff value with significance 'significance'
+        chi² cutoff value with significance 'significance'
 
     """
     assert self.converged
@@ -1018,17 +1018,17 @@ def chiSquareTestYObservationPrior(self, significance=0.05, atol=1e-5):
         correct null hypothesis is rejected.
     atol : float (default 1e-5)
         The absolute tolerance for comparing eigen values to zero. We
-        found that values should be than the numpy.isclose defualt value
+        found that values should be than the numpy.isclose default value
         of 1e-8.
     Returns
     -------
     YObservationPrior : bool
-        True if chi² test passed, i.e. OE retrieval agrees with
+        True if chi² test passed, i.e. OE retrieval agrees with
         measurements and null hypothesis is NOT rejected.
     YObservationPrior: real
         chi² value
     chi2TestY : real
-        chi² cutoff value with significance 'significance'
+        chi² cutoff value with significance 'significance'
 
     """
 
@@ -1056,18 +1056,18 @@ def chiSquareTestYOptimalPrior(self, significance=0.05, atol=1e-5):
         correct null hypothesis is rejected.
     atol : float (default 1e-5)
         The absolute tolerance for comparing eigen values to zero. We
-        found that values should be than the numpy.isclose defualt value
+        found that values should be than the numpy.isclose default value
         of 1e-8.
 
     Returns
     -------
-    chi2Passe : bool
-        True if chi² test passed, i.e. OE retrieval agrees with
+    chi2Passed : bool
+        True if chi² test passed, i.e. OE retrieval agrees with
         Prior and null hypothesis is NOT rejected.
     chi2: real
         chi² value
     chi2TestY : real
-        chi² cutoff value with significance 'significance'
+        chi² cutoff value with significance 'significance'
 
     """
 
@@ -1085,7 +1085,7 @@ def chiSquareTestYOptimalPrior(self, significance=0.05, atol=1e-5):
 
         chi, chi2TestY = _testChi2(Syd, delta_y, significance, atol)
 
-        ####### Alternative based on execise Rodgers 12.1 #######
+        ####### Alternative based on exercise Rodgers 12.1 #######
 
         # Se = y_cov.values
         # K = self.K_i[self.convI].values
@@ -1133,13 +1133,13 @@ def chiSquareTestXOptimalPrior(self, significance=0.05, atol=1e-5):
         correct null hypothesis is rejected.
     atol : float (default 1e-5)
         The absolute tolerance for comparing eigen values to zero. We
-        found that values should be than the numpy.isclose defualt value
+        found that values should be than the numpy.isclose default value
         of 1e-8.
 
     Returns
     -------
     chi2Passed : bool
-        True if chi² test passed, i.e. OE retrieval agrees with
+        True if chi² test passed, i.e. OE retrieval agrees with
         Prior and null hypothesis is NOT rejected.
     chi2 : real
         chi² value
@@ -1160,7 +1160,7 @@ def chiSquareTestXOptimalPrior(self, significance=0.05, atol=1e-5):
         Sxd = Sa.dot(K.T).dot(KSaKSep_inv).dot(K).dot(Sa)
         chi2, chi2TestX = _testChi2(Sxd, delta_x, significance, atol)
 
-        ####### Alternative based on execise Rodgers 12.1 #######
+        ####### Alternative based on exercise Rodgers 12.1 #######
 
         # Se = y_cov.values
         # K = self.K_i[self.convI].values
@@ -1262,7 +1262,7 @@ def plotIterations(
             ind = 0
 
         if self.converged:
-            fig.suptitle('Sucessfully converged. Convergence criterion: %.3g'
+            fig.suptitle('Successfully converged. Convergence criterion: %.3g'
                          ' Degrees of freedom: %.3g' % (d_i2[ind], dgf_i[ind]))
         else:
             fig.suptitle('Not converged. Convergence criterion: %.3g Degrees'
@@ -1449,7 +1449,7 @@ def optimalEstimation_loadResults(fname, allow_pickle=True):
 
 def invertMatrix(A, raise_error=True):
     '''
-    Wrapper funtion for np.linalg.inv, because original function reports
+    Wrapper function for np.linalg.inv, because original function reports
     LinAlgError if nan in array for some numpy versions. We want that the
     retrieval is robust with respect to that. Also, checks for singular
     matrices were added.
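The behavior described in this docstring — tolerating NaN input and checking for singular matrices around ``np.linalg.inv`` — can be sketched as follows. This is an illustration of the idea only, not the library's ``invertMatrix`` implementation, and the NaN/singular return conventions here are assumptions:

```python
import numpy as np

def invert_matrix(a, raise_error=True):
    """NaN-tolerant, singularity-checked wrapper around np.linalg.inv (sketch)."""
    a = np.asarray(a, dtype=float)
    if np.any(np.isnan(a)):
        # np.linalg.inv can raise LinAlgError for NaN input on some numpy
        # versions; return an all-NaN matrix instead so a retrieval loop
        # can keep running and flag the failure downstream.
        return np.full_like(a, np.nan)
    if np.linalg.matrix_rank(a) < a.shape[-1]:
        if raise_error:
            raise np.linalg.LinAlgError('matrix is singular')
        return np.full_like(a, np.nan)
    return np.linalg.inv(a)
```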
@@ -1553,7 +1553,7 @@ def _estimateChi2(S, z, atol=1e-5):
         Vector to test
     atol : float (default 1e-5)
         The absolute tolerance for comparing eigen values to zero. We
-        found that values should be than the numpy.isclose defualt value
+        found that values should be than the numpy.isclose default value
         of 1e-8.
 
     Returns
@@ -1588,10 +1588,10 @@ def _testChi2(S, z, significance, atol=1e-5):
     z : {array}
         Vector to test
     significance : {float}
-        Significane level
+        Significance level
     atol : float (default 1e-5)
         The absolute tolerance for comparing eigen values to zero. We
-        found that values should be than the numpy.isclose defualt value
+        found that values should be than the numpy.isclose default value
         of 1e-8.
 
     Returns
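The "critical chi² value" these helpers compare against is the (1 − significance) quantile of the chi² distribution (in general obtained via ``scipy.stats.chi2.ppf``). For 2 degrees of freedom the quantile has a closed form, which lets the pass/fail logic be sketched with the standard library alone; the statistic value below is hypothetical:

```python
import math

def chi2_cutoff_df2(significance):
    """(1 - significance) quantile of chi-squared with 2 degrees of freedom.

    For df = 2 the chi-squared CDF is 1 - exp(-x/2), so the quantile is
    simply -2 * ln(significance); e.g. 5.991 for significance = 0.05.
    """
    return -2.0 * math.log(significance)

significance = 0.05
chi2_value = 4.2                     # hypothetical test statistic
passed = chi2_value < chi2_cutoff_df2(significance)
```

The test "passes" (the null hypothesis is not rejected) exactly when the statistic falls below the cutoff, matching the convention in the docstrings above.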
