Commit 1a81f36

Updating pyCSEP docs for commit 2f9f66b from refs/heads/main by pabloitu

499 files changed: +75909 -0 lines changed

.buildinfo

Lines changed: 4 additions & 0 deletions
@@ -0,0 +1,4 @@
# Sphinx build info version 1
# This file hashes the configuration used when building these files. When it is not found, a full rebuild will be done.
config: fdb06693f5fa6f6812ae8be4d3f289b1
tags: 645f666f9bcd5a90fca523b33c5a78b7

.nojekyll

Whitespace-only changes.

CNAME

Lines changed: 1 addition & 0 deletions
@@ -0,0 +1 @@
docs.cseptesting.org

README.md

Lines changed: 1 addition & 0 deletions
@@ -0,0 +1 @@
Empty README.md for documentation cache.

Lines changed: 248 additions & 0 deletions
@@ -0,0 +1,248 @@
{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "\n\n# Grid-based Forecast Evaluation\n\nThis example demonstrates how to evaluate a grid-based, time-independent forecast. Grid-based\nforecasts assume the variability of the forecasts is Poissonian. Therefore, Poisson-based evaluations\nshould be used to evaluate grid-based forecasts.\n\nOverview:\n 1. Define forecast properties (time horizon, spatial region, etc.)\n 2. Obtain evaluation catalog\n 3. Apply Poissonian evaluations for grid-based forecasts\n 4. Store evaluation results using JSON format\n 5. Visualize evaluation results\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Load required libraries\n\nMost of the core functionality can be imported from the top-level :mod:`csep` package. Utilities are available from the\n:mod:`csep.utils` subpackage.\n\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "import csep\nfrom csep.core import poisson_evaluations as poisson\nfrom csep.utils import datasets, time_utils\nfrom csep import plots"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Define forecast properties\n\nWe choose a `time-independent-forecast` to show how to evaluate a grid-based earthquake forecast using PyCSEP. Note that\nthe start and end dates should be chosen based on the creation of the forecast. This is important for time-independent forecasts\nbecause they can be rescaled to any arbitrary time period.\n\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "start_date = time_utils.strptime_to_utc_datetime('2006-11-12 00:00:00.0')\nend_date = time_utils.strptime_to_utc_datetime('2011-11-12 00:00:00.0')"
   ]
  },
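  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The next cell is a small illustrative sketch, not part of the original tutorial: it only computes the horizon implied by these dates (five years), which is the period the time-independent rates are rescaled to when the forecast is loaded below.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "# Illustrative sketch (plain datetime arithmetic, no additional PyCSEP calls):\n# the forecast horizon spanned by start_date and end_date in decimal years.\nhorizon_years = (end_date - start_date).days / 365.25\nprint(f\"Forecast horizon: {horizon_years:.2f} years\")"
   ]
  },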
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Load forecast\n\nFor this example, we provide the example forecast data set along with the main repository. The filepath is relative\nto the root directory of the package. You can specify any file location for your forecasts.\n\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "forecast = csep.load_gridded_forecast(datasets.helmstetter_aftershock_fname,\n                                      start_date=start_date,\n                                      end_date=end_date,\n                                      name='helmstetter_aftershock')"
   ]
  },
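  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick, illustrative sanity check (not part of the original tutorial), the next cell prints the forecast window and the total expected number of events; `forecast.data`, assumed here to hold the gridded rates, is the only attribute not already used elsewhere in this example.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "# Illustrative sanity check before evaluating the forecast.\n# forecast.start_time, forecast.end_time and forecast.min_magnitude are used later in\n# this example; forecast.data (the array of gridded rates) is an assumed attribute.\nprint(forecast.start_time, forecast.end_time)\nprint(forecast.min_magnitude)\nprint(forecast.data.sum())  # total expected number of events over the forecast horizon"
   ]
  },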
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Load evaluation catalog\n\nWe will download the evaluation catalog from ComCat (this step requires an internet connection). We can use the ComCat API\nto filter the catalog in both time and magnitude. See the catalog filtering example for more information on how to\nfilter the catalog in space and time manually.\n\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "print(\"Querying ComCat catalog\")\ncatalog = csep.query_comcat(forecast.start_time, forecast.end_time, min_magnitude=forecast.min_magnitude)\nprint(catalog)"
   ]
  },
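  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The next cell is an illustrative sketch only, not part of the original tutorial: it shows the kind of manual filtering covered in the catalog filtering example, with an assumed magnitude threshold and an assumed `in_place=False` keyword so the downloaded catalog is left untouched.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "# Illustrative sketch with assumed filter statements; the magnitude threshold is an\n# example value, not one required by this tutorial. in_place=False keeps `catalog` unchanged.\nmin_mw = 4.95\nstart_epoch = time_utils.datetime_to_utc_epoch(start_date)\ncatalog_filtered = catalog.filter([f'magnitude >= {min_mw}',\n                                   f'origin_time >= {start_epoch}'], in_place=False)\nprint(catalog_filtered)"
   ]
  },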
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Filter evaluation catalog in space\n\nWe need to remove events in the evaluation catalog outside the valid region specified by the forecast.\n\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "catalog = catalog.filter_spatial(forecast.region)\nprint(catalog)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Compute Poisson spatial test\n\nSimply call the :func:`csep.core.poisson_evaluations.spatial_test` function to evaluate the forecast using the specified\nevaluation catalog. The spatial test requires simulating from the Poisson forecast to provide uncertainty. The verbose\noption prints the status of the simulations to the standard output.\n\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "spatial_test_result = poisson.spatial_test(forecast, catalog)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Store evaluation results\n\nPyCSEP provides easy ways of storing objects to a JSON format using :func:`csep.write_json`. The evaluations can be read\nback into the program for plotting using :func:`csep.load_evaluation_result`.\n\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "csep.write_json(spatial_test_result, 'example_spatial_test.json')"
   ]
  },
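  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a minimal illustration (not part of the original tutorial), the next cell reloads the stored result with :func:`csep.load_evaluation_result`, the reader mentioned above, and prints its name and quantile score.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "# Illustrative round trip: reload the result written in the previous cell.\nreloaded_result = csep.load_evaluation_result('example_spatial_test.json')\nprint(reloaded_result.name, reloaded_result.quantile)"
   ]
  },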
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "\n## Plot spatial test results\n\nWe provide the function :func:`csep.plots.plot_consistency_test` to visualize the evaluation results from\nconsistency tests.\n\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "ax = plots.plot_consistency_test(spatial_test_result,\n                                 xlabel='Spatial likelihood',\n                                 show=True)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "\n## Performing a comparative test\n\nComparative tests assess the relative performance of a forecast against a reference forecast. We load a baseline version of the\nHelmstetter forecast that does not account for the influence of aftershocks. We perform the paired t-test to calculate the\nInformation Gain and its significance (see `forecast-comparison-tests` for more information).\n\n\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "ref_forecast = csep.load_gridded_forecast(datasets.helmstetter_mainshock_fname,\n                                          start_date=start_date,\n                                          end_date=end_date,\n                                          name='helmstetter_mainshock')\n\nt_test = poisson.paired_t_test(forecast=forecast,\n                               benchmark_forecast=ref_forecast,\n                               observed_catalog=catalog)\n\nplots.plot_comparison_test(t_test, show=True)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "\n## Plot ROC Curves\n\nWe can also plot Receiver Operating Characteristic (ROC) curves based on the forecast and the testing catalog.\nIn the figure below, the True Positive Rate is the normalized cumulative forecast rate, after sorting cells in decreasing order of rate.\nThe \u201cFalse Positive Rate\u201d is the normalized cumulative area.\nThe dashed line is the ROC curve for a uniform forecast, meaning the likelihood for an earthquake to occur at any position is the same.\nThe further the ROC curve of a forecast is from the uniform forecast, the more specific the forecast is.\nWhen comparing the forecast ROC curve against a catalog, one can evaluate whether the forecast is more or less specific (or smooth) at different levels of seismic rate.\n\nNote: This figure just shows an example of plotting an ROC curve for a forecast and an observed catalog.\n    If \"linear=True\" the diagram is represented using a linear x-axis.\n    If \"linear=False\" the diagram is represented using a logarithmic x-axis.\n\n\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "print(\"Plotting concentration ROC curve\")\n_ = plots.plot_concentration_ROC_diagram(forecast, catalog, linear=True)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "\n## Plot ROC and Molchan curves using the alarm-based approach\n\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "# In this cell, we generate ROC diagrams and Molchan diagrams using the alarm-based approach to evaluate the predictive\n# performance of models. This method exploits contingency table analysis to evaluate the predictive capabilities of\n# forecasting models. By analysing the contingency table data, we determine the ROC curve and Molchan trajectory and\n# estimate the Area Skill Score to assess the accuracy and reliability of the prediction models. The generated graphs\n# visually represent the prediction performance.\n\n# Note: If \"linear=True\" the diagram is represented using a linear x-axis.\n#       If \"linear=False\" the diagram is represented using a logarithmic x-axis.\n\nprint(\"Plotting ROC curve from the contingency table\")\n# Set linear=True to obtain a linear x-axis, False to obtain a logarithmic x-axis.\n_ = plots.plot_ROC_diagram(forecast, catalog, linear=True)\n\nprint(\"Plotting Molchan curve from the contingency table and the Area Skill Score\")\n# Set linear=True to obtain a linear x-axis, False to obtain a logarithmic x-axis.\n_ = plots.plot_Molchan_diagram(forecast, catalog, linear=True)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Calculate Kagan's I_1 score\n\nWe can also compute Kagan's I_1 score for a gridded forecast\n(see Kagan, Yan Y. [2009] Testing long-term earthquake forecasts: likelihood methods and error diagrams, Geophys. J. Int., v. 177, pages 532-542).\n\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "from csep.utils.stats import get_Kagan_I1_score\nI_1 = get_Kagan_I1_score(forecast, catalog)\nprint(\"I_1 score is: \", I_1)"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.23"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 0
}
Binary file not shown.
