diff --git a/docs/intro/installation.rst b/docs/intro/installation.rst
index fb51bfd..8764016 100644
--- a/docs/intro/installation.rst
+++ b/docs/intro/installation.rst
@@ -9,35 +9,52 @@ Installation
 Latest Version
 --------------
 
-Recommended to learn the software, run the tutorials, and drafting **Testing Experiments**.
+This option is recommended to learn the software, run the tutorials, and draft **Testing Experiments**.
 
 1. Using ``conda``
 ~~~~~~~~~~~~~~~~~~
 
-To install **floatCSEP**, first a ``conda`` manager should be installed (https://conda.io). Checkout `Anaconda`, `Miniconda` or `Miniforge` (recommended). Once installed, create an environment with:
+First, clone the **floatCSEP** source code into a new directory by typing into a terminal:
+
+.. code-block:: console
+
+   $ git clone https://github.com/cseptesting/floatcsep
+   $ cd floatcsep
+
+Then, create and activate a new environment, and let ``conda`` install all required dependencies of **floatCSEP** (from its ``environment.yml`` file):
 
 .. code-block:: console
 
    $ conda create -n csep_env
    $ conda activate csep_env
+   $ conda env update --file environment.yml
 
-Then, clone and install the floatCSEP source code using ``pip``
+.. note::
+
+   For this to work, you need to have ``conda`` installed (see `conda.io <https://conda.io>`_), either by installing the `Anaconda Distribution `_,
+   or its more minimal variants `Miniconda `_ or `Miniforge `_ (recommended).
+   If you install `Miniforge`, we further recommend using the ``mamba`` command instead of ``conda`` (a faster drop-in replacement).
+
+
+Lastly, install **floatCSEP** into the new environment using ``pip``:
 
 .. code-block:: console
 
-   $ git clone https://github.com/cseptesting/floatcsep
-   $ cd floatcsep
    $ pip install .
 
 .. note::
 
-   Use the ``mamba`` command instead of ``conda`` if `Miniforge` was installed.
+   To *update* **floatCSEP** and its dependencies at a later date, simply execute:
+
+   .. code-block:: console
+
+      $ conda env update --file environment.yml
+      $ pip install . -U
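+
+To verify that **floatCSEP** is now available in the active environment, you can, for example, query the installed package with ``pip``:
+
+.. code-block:: console
+
+   $ pip show floatcsep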
 
 
-2. Using ``pip`` only
+2. Using only ``pip``
 ~~~~~~~~~~~~~~~~~~~~~
 
-To install using the ``pip`` manager only, we require to install the binary dependencies of **pyCSEP** (see `Installing pyCSEP `_}. The **floatCSEP** latest version can then be installed as:
+To install using the ``pip`` manager only, the binary dependencies of **pyCSEP** must first be installed (see `Installing pyCSEP `_). The latest **floatCSEP** version can then be installed as:
 
 .. code-block:: console
 
@@ -50,13 +67,12 @@ To install using the ``pip`` manager only, we require to install the binary depe
 Latest Stable Release
 ---------------------
 
-Recommended for deploying live Floating Testing Experiments
+This option is recommended for deploying *Floating Testing Experiments* live.
 
 1. From the ``conda-forge`` channel
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-Having a ``conda`` manager installed (https://conda.io), type in a console:
-
+Having a ``conda`` manager installed (see **Note** box above), type in a console:
+
 .. code-block:: console
 
@@ -91,7 +107,7 @@ Having installed the binary dependencies of **pyCSEP** (see `Installing pyCSEP <
 For Developers
 --------------
 
-It is recommended (not obligatory) to use a ``conda`` environment to make sure your contributions do not depend on your system local libraries. For contributions to the **floatCSEP** codebase, please consider using a `fork `_ and creating pull-requests from there.
+It is recommended (not obligatory) to use a ``conda`` environment to make sure your contributions do not depend on your system's local libraries.
+For contributing to the **floatCSEP** codebase, please consider `forking the repository `_ and `creating pull-requests `_ from there.
 
 .. code-block:: console
 
@@ -101,4 +117,4 @@ It is recommended (not obligatory) to use a ``conda`` environment to make sure y
    $ cd floatcsep
    $ pip install .[dev]
 
-This will install and configure all the unit-testing, linting and documentation packages.
+This will install and configure all the unit-testing, linting, and documentation packages.
diff --git a/docs/tutorials/case_e.rst b/docs/tutorials/case_e.rst
index 061e6e0..e27cb30 100644
--- a/docs/tutorials/case_e.rst
+++ b/docs/tutorials/case_e.rst
@@ -3,7 +3,7 @@
 E - A Time-Independent Experiment
 =================================
 
-This example shows how to run a realistic testing experiment (based on https://doi.org/10.4401/ag-4844) while summarizing the concepts from the previous tutorials.
+This example shows how to run a realistic testing experiment (based on :ref:`Schorlemmer et al. 2010`) while summarizing the concepts from the previous tutorials.
 
 .. currentmodule:: floatcsep
 
@@ -25,7 +25,7 @@ This example shows how to run a realistic testing experiment (based on https://d
 Experiment Components
 ---------------------
 
-The source code can be found in the ``tutorials/case_e`` folder or in `GitHub `_. The input structure of the experiment is:
+The source code can be found in the ``tutorials/case_e`` folder or in `the GitHub repository `_. The input structure of the experiment is:
 
 ::
 
@@ -137,3 +137,9 @@ Plot command
    colormap: magma
 
 and re-run with the ``plot`` command. A forecast figure will re-appear in ``results/{window}/forecasts`` with a different colormap. Additional forecast and catalog plotting options can be found in the :func:`csep.utils.plots.plot_spatial_dataset` and :func:`csep.utils.plots.plot_catalog` ``pycsep`` functions.
+
+
+References
+----------
+
+ * Schorlemmer, D., Christophersen, A., Rovida, A., Mele, F., Stucchi, M. and Marzocchi, W. (2010). Setting up an earthquake forecast experiment in Italy. Annals of Geophysics, 53(3), 1–9. doi: `10.4401/ag-4844 <https://doi.org/10.4401/ag-4844>`_
diff --git a/docs/tutorials/case_f.rst b/docs/tutorials/case_f.rst
index 26438a3..0131449 100644
--- a/docs/tutorials/case_f.rst
+++ b/docs/tutorials/case_f.rst
@@ -25,7 +25,7 @@ Experiment Components
 ---------------------
 
-The source files can be found in the ``tutorials/case_e`` folder or in `GitHub `_. The experiment structure is as follows:
+The source files can be found in the ``tutorials/case_f`` folder or in `the GitHub repository `_. The experiment structure is as follows:
 
 ::
 
@@ -49,7 +49,7 @@ The source files can be found in the ``tutorials/case_e`` folder or in `GitHub
 Model
 -----
 
-The time-dependency of a model is manifested here by the provision of different forecasts, i.e., statistical descriptions of seismicity, for different time-windows. In this example, the forecasts were created from an external model https://github.com/lmizrahi/etas (`doi:10.1785/0220200231 `_), with which the experiment has no interface. This means that we use **only the forecast files** and no source code. We leave the handling of a model source code for subsequent tutorials.
+The time-dependency of a model is manifested here by the provision of different forecasts, i.e., statistical descriptions of seismicity, for different time-windows. In this example, the forecasts were created from an external model (https://github.com/lmizrahi/etas; :ref:`Mizrahi et al. 2021`), with which the experiment has no interface.
+This means that we use **only the forecast files** and no source code. We leave the handling of a model's source code for subsequent tutorials.
 
@@ -73,7 +73,7 @@ Time
 Catalog
 ~~~~~~~
 
-   The catalog ``catalog.json`` was obtained *previously* by using ``query_geonet`` and it was filtered to the testing period. However, it can be re-queried by changing its definition to:
+   The catalog ``catalog.json`` was obtained *prior* to the experiment by using ``query_geonet`` and was filtered to the testing period. However, it can be re-queried by changing its definition to:
 
 .. code-block:: yaml
 
@@ -93,25 +93,25 @@ Models
 
   For consistency with time-dependent models that will create forecasts from a source code, the ``path`` should point to the folder of the model, which itself should contain a sub-folder named ``{path}/forecasts`` where the files are located.
 
 .. important::
-   Note that for catalog-based forecasts, the model should explicit the number of simulations. This is meant for forecast files that contain synthetic catalogs with zero-event simulations, and therefore do not contain the total number of synthetic catalogs used.
+   Note that for catalog-based forecast models, the number of catalog simulations (``n_sims``) must be specified explicitly (see the example below): a forecast file may contain zero-event simulations, so the total number of synthetic catalogs cannot be inferred from the file alone.
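+
+   For illustration, a model entry that provides pre-computed catalog forecasts and declares its number of simulations could look as follows (the model name, path, and value of ``n_sims`` are hypothetical):
+
+   .. code-block:: yaml
+
+      - etas:
+          path: models/etas
+          n_sims: 10000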
 
 Tests
 ~~~~~
 
-   With time-dependent models, now catalog evaluations found in :obj:`csep.core.catalog_evaluations` can be used.
+   Since the model is time-dependent and catalog-based, the catalog-based evaluations found in :obj:`csep.core.catalog_evaluations` can now be used.
 
 .. literalinclude:: ../../tutorials/case_f/tests.yml
    :language: yaml
 
 .. note::
-   It is possible to assign two plotting functions to a test, whose ``plot_args`` and ``plot_kwargs`` can be placed indented beneath
+   It is possible to assign two plotting functions to a test; their ``plot_args`` and ``plot_kwargs`` can be placed indented beneath each function.
 
 
 Running the experiment
 ----------------------
 
-   The experiment can be run by simply navigating to the ``tutorials/case_h`` folder in the terminal and typing.
+   The experiment can be run by simply navigating to the ``tutorials/case_f`` folder in the terminal and typing:
 
 .. code-block:: console
 
@@ -119,3 +119,8 @@ Running the experiment
 
 This will automatically set all the calculation paths (testing catalogs, evaluation results, figures) and will create a summarized report in ``results/report.md``.
+
+References
+----------
+
+ * Mizrahi, L., Nandan, S. and Wiemer, S. (2021). The effect of declustering on the size distribution of mainshocks. Seismological Research Letters, 92(4), 2333–2342. doi: `10.1785/0220200231 <https://doi.org/10.1785/0220200231>`_
\ No newline at end of file
diff --git a/docs/tutorials/case_g.rst b/docs/tutorials/case_g.rst
index 1edfaa7..a04e37a 100644
--- a/docs/tutorials/case_g.rst
+++ b/docs/tutorials/case_g.rst
@@ -26,7 +26,7 @@ Here, we set up a time-dependent model from its **source code** for an experimen
 Experiment Components
 ---------------------
 
-The example folder contains also, along with the already known components (configurations, catalog), a sub-folder for the **source code** of the model ``pymock``. The components of the experiment (and model) are:
+The example folder also contains, along with the already known components (configurations, catalog), a sub-folder for the **source code** of the model `pymock `_.
+The components of the experiment (and model) are:
 
 ::
 
@@ -63,7 +63,7 @@ The example folder contains also, along with the already known components (confi
 Model
 -----
 
-The experiment's complexity increases from time-independent to dependent mostly because we now need a **Model** (source code) to generate forecasts that changes for every time-window. The model main components are:
+Transitioning from time-independent to time-dependent models increases an experiment's complexity, mostly because we now need a **Model** (source code) to generate forecasts that change for every time-window. A **Model**'s main components are:
 
 * **Input**: The input consists in input **data** and **arguments**.
 
@@ -76,35 +76,33 @@
 
   2. The **input arguments** controls how the model's source code works. The minimum arguments to run a model are the forecast ``start_date`` and ``end_date``, which will be modified dynamically during an experiment with multiple time-windows. The experiment system will access `{model}/input/args.txt` and change the values of ``start_date = {datetime}`` and ``end_date = {datetime}`` before the model is run. Additional arguments can be set by convenience, such as (not limited to) ``catalog`` (the input catalog name), ``n_sims`` (number of synthetic catalogs) and random ``seed`` for reproducibility.
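+
+     For illustration, an ``args.txt`` file following these conventions could look like this (the argument values are hypothetical):
+
+     .. code-block:: text
+
+        start_date = 2020-01-01T00:00:00.000000
+        end_date = 2020-01-08T00:00:00.000000
+        catalog = input/catalog.csv
+        n_sims = 1000
+        seed = 23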
 
-* **Output**: The model's output are the synthetic catalogs, which should be allocated in `{model}/forecasts/{filename}.csv` by the source code after each rone. The format is identically to ``csep_ascii``, but unlike in an input catalog, the ``catalog_id`` column should be modified for each synthetic catalog starting from 0. The file name follows the convention `{model_name}_{start}_{end}.csv`, where ``start`` and ``end`` folows the `%Y-%m-%dT%H:%M:%S.%f` - ISO861 FORMAT
+* **Output**: The model's outputs are the synthetic catalogs, which should be allocated in `{model}/forecasts/{filename}.csv` by the source code after each run. The format is identical to ``csep_ascii``, but unlike in an input catalog, the ``catalog_id`` column should be modified for each synthetic catalog, starting from 0. The file name follows the convention `{model_name}_{start}_{end}.csv`, where ``start`` and ``end`` follow the ISO 8601 format `%Y-%m-%dT%H:%M:%S.%f`, as exemplified below.
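+
+  For example, a one-week forecast for the first week of 2020, produced by a model named ``pymock``, would be stored as:
+
+  .. code-block:: text
+
+     pymock_2020-01-01T00:00:00.000000_2020-01-08T00:00:00.000000.csv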
 
-* **Model build**: Inside the model source code, there are multiple options to build it. A standard python ``setup.cfg`` is given, which can be built inside a python ``venv`` or ``conda`` managers. This is created and built automatically by ``floatCSEP``, as long as the the model build instructions are correctly set up.
+* **Model build**: Inside the model source code, there are multiple options to build it. A standard Python ``setup.cfg`` is given, from which the model can be built inside a Python ``venv`` or a ``conda`` environment. This is created and built automatically by ``floatCSEP``, as long as the model build instructions are correctly set up.
 
 * **Model run**: The model should be run with a simple command, e.g. **entrypoint**, to which only ``arguments`` could be passed if desired. The ``pymock`` model contains multiple example of entrypoints, but the modeler should use only one for clarity.
 
-   1. A `python` call with arguments
+   1. A ``python`` call with arguments:
 
   .. code-block:: console
 
     $ python run.py input/args.txt
 
-   2. Using a binary entrypoint with arguments (for instance, defined in the python build instructions: ``pymock/setup.cfg:entry_point``)
+   2. Using a binary entrypoint with arguments (for instance, defined in the Python build instructions: ``pymock/setup.cfg:entry_point``):
 
   .. code-block:: console
 
     $ pymock input/args.txt
 
-   3. A single binary entrypoint without arguments .
+   3. A single binary entrypoint without arguments, which means that the source code should internally read the input data and arguments (the ``input/catalog.csv`` and ``input/args.txt`` files, respectively):
 
   .. code-block:: console
 
     $ pymock
 
-   This means that the source code should internally read the input data and arguments, ``input/catalog.csv`` and ``input/args.txt`` files respectively.
-
 .. important::
 
-   The model should be conceptualized as a **black-box**, whose only interface/interaction with the ``floatcsep`` system is to receive an input (i.e., input catalog and arguments) and generates an output (the forecasts).
+   A **Model** can be conceptualized as a **black-box**, whose only interaction with the ``floatcsep`` system is to receive an input (i.e., an input catalog and arguments) and subsequently generate an output (the forecasts).
 
 
 Configuration
@@ -124,7 +122,7 @@ Time
 Catalog
 ~~~~~~~
 
-   The catalog was obtained `previous to the experiment` using ``query_bsi``, but it was filtered from 2006 onwards, so it has enough data for the model calibration.
+   The catalog was obtained *prior* to the experiment using ``query_bsi`` and was filtered from 2006 onwards, so that it has enough data for the model calibration.
 
 Models
 ~~~~~~
@@ -137,13 +135,13 @@ Models
   :lines: 1-7
 
   1. Now ``path`` points to the folder where the source is installed. Therefore, the input and the forecasts should be allocated ``{path}/input`` and ``{path}/forecasts``, respectively.
-   2. The ``func`` option is the shell command with which the model is run. As seen in the `Model`_ section, this could be either ``pymock``, ``pymock input/args.txt`` or ``python run.py input/args``. We use the simplest option ``pymock``, but you are welcome to try different entrypoints.
+   2. The ``func`` option is the shell command with which the model is run. As seen in the `Model` section, this could be either ``pymock``, ``pymock input/args.txt`` or ``python run.py input/args.txt``. We use the simplest option, ``pymock``, but you are welcome to try different entrypoints.
 
   .. note::
     The ``func`` command will be run from the model's directory and a model containerization (e.g., ``Dockerfile``, ``conda``).
 
-   3. The ``func_kwargs`` are extra arguments that will annotated to the ``input/args.txt`` file every time the model is run, or will be passed as extra arguments to the ``func`` call (Note that the two options are identical). This is useful to define sub-classes of models (or flavours) that uses the same source code, but a different instantiation.
-   4. The ``build`` option defines the style of container within which the model will be placed. Currently in **floatCSEP**, only the python module ``venv``, the package manager ``conda`` and the containerization manager ``Docker`` are currently supported.
+   3. The ``func_kwargs`` are extra arguments that will be added to the ``input/args.txt`` file every time the model is run, or will be passed as extra arguments to the ``func`` call (the two options are equivalent). This is useful to define sub-classes of models (or flavours) that use the same source code but a different instantiation, as exemplified below.
+   4. The ``build`` option defines the style of container within which the model will be placed. Currently, only the Python module ``venv``, the package manager ``conda`` and the containerization manager ``Docker`` are supported in **floatCSEP**.
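+
+   For illustration, a complete model entry combining these options could read (the entry name and argument values are hypothetical):
+
+   .. code-block:: yaml
+
+      - pymock:
+          path: pymock
+          func: pymock
+          func_kwargs:
+            n_sims: 100
+          build: venv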
 
   .. important::
     For these tutorials, we use ``venv`` sub-environments, but we recommend ``Docker`` to set up real experiments.
 
@@ -152,7 +150,7 @@ Models
 Tests
 ~~~~~
 
-   With time-dependent models, now catalog evaluations found in :obj:`csep.core.catalog_evaluations` can be used.
+   Catalog-based evaluations found in :obj:`csep.core.catalog_evaluations` can be used.
 
 .. literalinclude:: ../../tutorials/case_g/tests.yml
@@ -160,7 +158,7 @@ Tests
   :language: yaml
 
 .. note::
-   It is possible to assign two plotting functions to a test, whose ``plot_args`` and ``plot_kwargs`` can be placed indented beneath
+   It is possible to assign two plotting functions to a test; their ``plot_args`` and ``plot_kwargs`` can be placed indented beneath each function.
 
 
 Custom Post-Process
@@ -173,13 +171,13 @@ Custom Post-Process
   :language: yaml
   :lines: 22-23
 
-   This option provides `hook` for a python script and a function within as:
+   This option provides a `hook` for a Python script and a function within it, specified as:
 
   .. code-block:: console
 
     {python_sript}:{function_name}
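+
+   For instance, if ``custom_plot_script.py`` defines a function ``main`` that receives the experiment object (the function name here is hypothetical), the entry would read:
+
+   .. code-block:: console
+
+      custom_plot_script.py:main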
 
-   The requirements are that the script to be located within the same directory as the configuration file, whereas the function must receive a :class:`floatcsep.experiment.Experiment` as argument
+   The script must be located within the same directory as the configuration file, and the function must receive a :class:`floatcsep.experiment.Experiment` as an argument:
 
 .. literalinclude:: ../../tutorials/case_g/custom_plot_script.py
   :caption: tutorials/case_g/custom_plot_script.py
 
 
-   In this way, the plot function can use all the :class:`~floatcsep.experiment.Experiment` attributes/methods to access catalogs, forecasts and test results. The script ``tutorials/case_g/custom_plot_script.py`` can also be viewed directly on `GitHub `_, where it is exemplified how to access the experiment data in runtime.
+   In this way, the plot function can use all the :class:`~floatcsep.experiment.Experiment` attributes/methods to access catalogs, forecasts and test results. The script ``tutorials/case_g/custom_plot_script.py`` can also be viewed directly in `the GitHub repository `_, which exemplifies how to access the experiment data at runtime.
 
 
 Running the experiment
 ----------------------
 
-   The experiment can be run by simply navigating to the ``tutorials/case_g`` folder in the terminal and typing.
+   The experiment can be run by simply navigating to the ``tutorials/case_g`` folder in the terminal and typing:
 
 .. code-block:: console
 
diff --git a/docs/tutorials/case_h.rst b/docs/tutorials/case_h.rst
index 667c083..6a19886 100644
--- a/docs/tutorials/case_h.rst
+++ b/docs/tutorials/case_h.rst
@@ -3,7 +3,7 @@
 H - A Time-Dependent Experiment
 ===============================
 
-Here, we run an experiment that access, containerize and execute multiple **time-dependent models**, and then proceeds to evaluate the forecasts once they are created.
+Here, we run an experiment that accesses, containerizes and executes multiple **time-dependent models**, and then proceeds to evaluate the forecasts once they are created.
 
 .. admonition:: **TL; DR**
 
@@ -35,7 +35,7 @@ The experiment input files are:
   ├── tests.yml
   └── models.yml
 
-* The ``models.yml`` contains the instructions to clone and build the source codes from software repositories (e.g., gitlab, Github), and how to interface them with **floatCSEP**. Once downloaded and built, the experiment structure should look like:
+* The ``models.yml`` contains the instructions to clone and build the source codes from software repositories (e.g., GitLab, GitHub), and how to interface them with **floatCSEP**. Once downloaded and built, the experiment structure should look like this:
 
 ::
 
@@ -139,7 +139,7 @@ As in :ref:`Tutorial G`, each **Model** requires to build and execute a
 where ``start`` and ``end`` follow either the ``%Y-%m-%dT%H:%M:%S.%f`` - ISO861 FORMAT, or the short date version ``%Y-%m-%d`` if the windows are set by midnight.
 
-6. Additional function arguments can be passed to the model with the entry ``func_kwargs``. We perhaps noted that both Poisson Mock and Negbinom Mock use the same source code. With ``func_kwargs`` a different subclass can be defined for the same source code (in this case, a Negative-Binomial number distribution instead of Poisson).
+6. Additional function arguments can be passed to the model with the entry ``func_kwargs``. Both `Poisson Mock` and `Negbinom Mock` use the same source code, but a different subclass can be defined with ``func_kwargs`` (in this case, a Negative-Binomial number distribution instead of a Poisson one).
 
 .. literalinclude:: ../../tutorials/case_h/models.yml
   :caption: tutorials/case_h/models.yml
 
@@ -160,13 +160,13 @@ Time
 Catalog
 ~~~~~~~
 
-   The catalog was obtained `previous to the experiment` using ``query_bsi``, but it was filtered from 2006 onwards, so it has enough data for the model calibration.
+   The catalog was obtained *prior* to the experiment using ``query_bsi`` and was filtered from 2006 onwards, so that it has enough data for the model calibration.
 
 Tests
 ~~~~~
 
-   With time-dependent models, now catalog evaluations found in :obj:`csep.core.catalog_evaluations` can be used.
+   Catalog-based evaluations found in :obj:`csep.core.catalog_evaluations` can be used.
 
 .. literalinclude:: ../../tutorials/case_h/tests.yml
@@ -174,7 +174,7 @@ Tests
   :language: yaml
 
 .. note::
-   It is possible to assign two plotting functions to a test, whose ``plot_args`` and ``plot_kwargs`` can be placed indented beneath
+   It is possible to assign two plotting functions to a test; their ``plot_args`` and ``plot_kwargs`` can be placed indented beneath each function.
 
 
 Custom Post-Process
@@ -187,25 +187,25 @@ Custom Post-Process
   :language: yaml
   :lines: 22-23
 
-   This option provides `hook` for a python script and a function within as:
+   This option provides a `hook` for a Python script and a function within it, specified as:
 
   .. code-block:: console
 
     {python_sript}:{function_name}
 
-   The requirements are that the script to be located within the same directory as the configuration file, whereas the function must receive a :class:`floatcsep.experiment.Experiment` as argument
+   The script must be located within the same directory as the configuration file, and the function must receive a :class:`floatcsep.experiment.Experiment` as an argument:
 
 .. literalinclude:: ../../tutorials/case_h/custom_report.py
   :language: yaml
   :lines: 5-11
 
-   In this way, the report function use all the :class:`~floatcsep.experiment.Experiment` attributes/methods to access catalogs, forecasts and test results. The script ``tutorials/case_h/custom_report.py`` can also be viewed directly on `GitHub `_, where it is exemplified how to access the experiment artifacts.
+   In this way, the report function can use all the :class:`~floatcsep.experiment.Experiment` attributes/methods to access catalogs, forecasts and test results. The script ``tutorials/case_h/custom_report.py`` can also be viewed directly in `the GitHub repository `_, which exemplifies how to access the experiment artifacts.
 
 
 Running the experiment
 ----------------------
 
-   The experiment can be run by simply navigating to the ``tutorials/case_h`` folder in the terminal and typing.
+   The experiment can be run by simply navigating to the ``tutorials/case_h`` folder in the terminal and typing:
 
 .. code-block:: console
 
diff --git a/floatcsep/infrastructure/environments.py b/floatcsep/infrastructure/environments.py
index 9147836..2ec64d0 100644
--- a/floatcsep/infrastructure/environments.py
+++ b/floatcsep/infrastructure/environments.py
@@ -378,7 +378,7 @@ def run_command(self, command, **kwargs) -> None:
 
         env["VIRTUAL_ENV"] = self.env_path
         env["PATH"] = os.path.join(self.env_path, "bin") + os.pathsep + env.get("PATH", "")
 
-        full_command = f"bash -lc 'source {activate_script} && {command}'"
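+        # Quote the activation script path (it may contain spaces), and keep the user
+        # command inside the same login shell so that it runs in the activated environment.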
+        full_command = f"bash -lc 'source \"{activate_script}\" && {command}'"
 
         process = subprocess.Popen(
             full_command,
diff --git a/floatcsep/postprocess/reporting.py b/floatcsep/postprocess/reporting.py
index 702a4ca..9c70027 100644
--- a/floatcsep/postprocess/reporting.py
+++ b/floatcsep/postprocess/reporting.py
@@ -35,6 +35,8 @@ def generate_report(experiment, timewindow=-1):
         custom_report(report_function, experiment)
         return
 
+    report_path = experiment.registry.run_dir / "report.md"
+
     timewindow = experiment.time_windows[timewindow]
     timestr = timewindow2str(timewindow)
 
@@ -56,18 +58,15 @@ def generate_report(experiment, timewindow=-1):
 
     # Generate catalog plot
     if experiment.catalog_repo.catalog is not None:
+        cat_map_path = os.path.relpath(
+            experiment.registry.get_figure_key("main_catalog_map"), report_path.parent
+        )
+        cat_time_path = os.path.relpath(
+            experiment.registry.get_figure_key("main_catalog_time"), report_path.parent
+        )
         report.add_figure(
             "Input catalog",
-            [
-                os.path.relpath(
-                    experiment.registry.get_figure_key("main_catalog_map"),
-                    experiment.registry.run_dir,
-                ),
-                os.path.relpath(
-                    experiment.registry.get_figure_key("main_catalog_time"),
-                    experiment.registry.run_dir,
-                ),
-            ],
+            [cat_map_path, cat_time_path],
             level=3,
             ncols=1,
             caption="Evaluation catalog from "
@@ -79,13 +78,17 @@ def generate_report(experiment, timewindow=-1):
     test_names = [test.name for test in experiment.tests]
     report.add_list(test_names)
 
+    report.add_heading("Test results", level=2)
+
     # Include results from Experiment
     for test in experiment.tests:
-        fig_path = experiment.registry.get_figure_key(timestr, test)
+        fig_path = os.path.relpath(
+            experiment.registry.get_figure_key(timestr, test), report_path.parent
+        )
         width = test.plot_args[0].get("figsize", [4])[0] * 96
         report.add_figure(
             f"{test.name}",
-            os.path.relpath(fig_path, experiment.registry.run_dir),
+            fig_path,
             level=3,
             caption=test.markdown,
             add_ext=True,
@@ -93,8 +96,9 @@ def generate_report(experiment, timewindow=-1):
         )
         for model in experiment.models:
             try:
-                fig_path = experiment.registry.get_figure_key(
-                    timestr, f"{test.name}_{model.name}"
+                fig_path = os.path.relpath(
+                    experiment.registry.get_figure_key(timestr, f"{test.name}_{model.name}"),
+                    report_path.parent,
                 )
                 width = test.plot_args[0].get("figsize", [4])[0] * 96
                 report.add_figure(
@@ -108,21 +112,21 @@ def generate_report(experiment, timewindow=-1):
             except KeyError:
                 pass
     report.table_of_contents()
-    report.save(experiment.registry.abs(experiment.registry.run_dir))
+    report.save(report_path)
 
 
 def reproducibility_report(exp_comparison: "ExperimentComparison"):
     numerical = exp_comparison.num_results
     data = exp_comparison.file_comp
 
-    outname = os.path.join("reproducibility_report.md")
-    save_path = os.path.dirname(
-        os.path.join(
-            exp_comparison.reproduced.registry.workdir,
-            exp_comparison.reproduced.registry.run_dir,
-        )
+
+    report_path = (
+        exp_comparison.reproduced.registry.workdir
+        / exp_comparison.reproduced.registry.run_dir
+        / "reproducibility_report.md"
     )
-    report = MarkdownReport(out_name=outname)
+
+    report = MarkdownReport()
 
     report.add_title(f"Reproducibility Report - {exp_comparison.original.name}", "")
 
     report.add_heading("Objectives", level=2)
@@ -203,7 +207,7 @@ def reproducibility_report(exp_comparison: "ExperimentComparison"):
         report.add_table(rows)
 
     report.table_of_contents()
-    report.save(save_path)
+    report.save(report_path)
 
 
 def custom_report(report_function: str, experiment: "Experiment"):
@@ -268,8 +272,8 @@ def custom_report(report_function: str, experiment: "Experiment"):
 class MarkdownReport:
     """Class to generate a Markdown report from a study."""
 
-    def __init__(self, out_name="report.md"):
-        self.out_name = out_name
+    def __init__(self):
+
         self.toc = []
         self.has_title = True
         self.has_introduction = False
@@ -344,6 +348,9 @@ def add_figure(
         else:
             paths = relative_filepaths
 
+        # Strip a leading run-directory prefix ("results/") so that paths given relative
+        # to the experiment directory become relative to the report file inside it.
+        paths = [p.removeprefix("results/") for p in paths]
+
         correct_paths = []
         if add_ext:
             for fp in paths:
@@ -473,8 +480,8 @@ def add_row(row_):
         table = "\n".join(table)
         self.markdown.append(table + "\n\n")
 
-    def save(self, save_dir):
+    def save(self, out_path):
         output = list(itertools.chain.from_iterable(self.markdown))
-        full_md_fname = os.path.join(save_dir, self.out_name)
-        with open(full_md_fname, "w") as f:
+
+        with open(out_path, "w") as f:
             f.writelines(output)
diff --git a/tests/unit/test_environments.py b/tests/unit/test_environments.py
index c7fe5d3..e50b31d 100644
--- a/tests/unit/test_environments.py
+++ b/tests/unit/test_environments.py
@@ -345,10 +345,7 @@ def test_run_command(self, mock_popen):
 
         self.manager.run_command(command)
 
-        output_cmd = (
-            f"bash -lc 'source "
-            f"{os.path.join(self.manager.env_path, 'bin', 'activate')} && {command}'"
-        )
+        output_cmd = f"bash -lc 'source \"{os.path.join(self.manager.env_path, 'bin', 'activate')}\" && {command}'"
 
         mock_popen.assert_called_once_with(
             output_cmd,
diff --git a/tests/unit/test_reporting.py b/tests/unit/test_reporting.py
index d3662ac..15d29a9 100644
--- a/tests/unit/test_reporting.py
+++ b/tests/unit/test_reporting.py
@@ -54,7 +54,7 @@ def test_save_report(self):
         report = reporting.MarkdownReport()
         report.markdown = [["# Test Title\n", "Some content\n"]]
         with patch("builtins.open", unittest.mock.mock_open()) as mock_file:
-            report.save("/path/to/save")
+            report.save("/path/to/save/report.md")
 
         mock_file.assert_called_with("/path/to/save/report.md", "w")
         mock_file().writelines.assert_called_with(["# Test Title\n", "Some content\n"])