Split global_time_series task and improve performance #26
Conversation
The first 4 commits form an initial implementation. The 4th commit, b0fb5f2, effectively replaces #22 (to close #21). Remaining TODO:
My latest plan is:
The Classic PDF and Viewer modes are now mostly separated. I also added legacy [...]. I'm currently testing the true all-land-variable case, where we use NCO to get every land variable we can, and then go through the [...].
Remaining TODO as of 6/6:
Major design decisions to make/confirm:
Another possible performance improvement: while the Classic PDF mode puts multiple plots on a single page, the Viewer mode only plots one variable per PNG. That means we could compute the [...]
To further expand, there are several points in the workflow where we can separate into the two modes. From least to most change:
(Please note that the new subtask functionality allows both a Classic PDF job and a Viewer job to run simultaneously.)
Noting design decisions made after discussion with @chengzhuzhang @golaz @tomvothecoder:
Not visible to the user, so I can make a determination of what makes the most sense. Probably option 3 or 1.
Let's keep the change at the lowest level possible, so as not to affect users -- that is, keep the current implementation.
Yes, it will always be that grid, as we want a standard view. By extension, we can ignore the [...]
Correct. We want the Classic PDF to be pretty standard; we don't anticipate adding arbitrary variables. One caveat is if we want to start doing more sophisticated plotting with arbitrary variables. The Classic PDF mode has the advantage of using custom formulas/plotting functions, whereas the Viewer mode code just plots the [...]
No viewer is needed for the Classic PDF plots.
forsyth2 left a comment
@chengzhuzhang This PR and its corresponding zppy PR E3SM-Project/zppy#722 are ready for review.
| @@ -1,184 +0,0 @@
| import os
Removed this file because integration testing has been moved to zppy, per the design decision made at a recent meeting. That is, since zppy 1) produces input data needed for global_time_series and 2) has the image checker functionality, it makes sense to do integration testing there.
| # NOTE: PRODUCES OUTPUT IN THE CURRENT DIRECTORY (not necessarily the case directory)
| # Creates the directory parameters.results_dir
| coupled_global(parameters)
| # Determine if we want the Classic PDF or the Viewer
This is where the mode split happens (as discussed in #26 (comment)).
| "shorten_year": True, | ||
| "title": "Change in sea level", | ||
| "use_getmoc": False, | ||
| "var": lambda exp: ( |
Notice how "var" is often a formula for the Classic PDF mode plots. This is a major distinction from Viewer mode, which plots simple variables.
| ts_num_years: int,
| ):
| path_out = f"{case_dir}/post/ocn/glb/ts/monthly/{ts_num_years}yr"
| path_out = f"{case_dir}/post/{subtask_name}/ocn/glb/ts/monthly/{ts_num_years}yr"
This change is crucial to allowing multiple subtasks of global_time_series to run in parallel. Without this, they'd be overwriting each other.
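A minimal sketch of the idea, using subtask names from the timing log later in this thread; the helper function itself is hypothetical, not repo code:

```python
# Sketch: embedding the subtask name in the path gives each parallel subtask
# its own output directory, so concurrent runs can't overwrite each other.
def output_dir(case_dir: str, subtask_name: str, ts_num_years: int) -> str:
    return f"{case_dir}/post/{subtask_name}/ocn/glb/ts/monthly/{ts_num_years}yr"

print(output_dir("/path/to/case", "classic_original_8", 10))
print(output_dir("/path/to/case", "all_lnd_var_viewer", 10))
# /path/to/case/post/classic_original_8/ocn/glb/ts/monthly/10yr
# /path/to/case/post/all_lnd_var_viewer/ocn/glb/ts/monthly/10yr
```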
| self.dataset = xcdat.open_mfdataset(file_path_list, center_times=True)
| else:
|     self.dataset = xcdat.open_mfdataset(f"{directory}*.nc", center_times=True)
In Viewer mode, we look for specific files to load. In Classic PDF mode, we load everything, because we don't necessarily know what to look for (i.e., we may have to derive variables).
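A rough sketch of that split, assuming xcdat is installed (the function and argument names here are illustrative):

```python
import xcdat  # assumed available, as in the quoted code

def open_dataset(directory: str, file_path_list=None):
    # Sketch only: Viewer mode knows the exact per-variable files to read,
    # while Classic PDF mode loads everything under the directory because
    # derived variables may need inputs we can't predict up front.
    if file_path_list:
        return xcdat.open_mfdataset(file_path_list, center_times=True)
    return xcdat.open_mfdataset(f"{directory}*.nc", center_times=True)
```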
| exp["annual"][var_str] = {"glb": (data_array.isel(rgn=0), units)} | ||
| if data_array.sizes["rgn"] > 1: | ||
| # data_array.shape => number of years x 3 regions | ||
| # 3 regions = global, northern hemisphere, southern hemisphere | ||
| # We get here if we used the updated `ts` task | ||
| # (using `rgn_avg` rather than `glb_avg`). | ||
| exp["annual"][var_str]["n"] = (data_array.isel(rgn=1), units) | ||
| exp["annual"][var_str]["s"] = (data_array.isel(rgn=2), units) |
We add another layer to the dict to hold all the region data, so we don't have to re-process for n and s.
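As a toy illustration of the resulting layout (placeholder variable name and made-up values):

```python
# Toy example of the per-region layout described above; values are made up.
exp = {"annual": {"SOME_VAR": {
    "glb": ([1.0, 1.1, 1.2], "W/m^2"),  # global mean annual series + units
    "n":   ([0.9, 1.0, 1.1], "W/m^2"),  # northern hemisphere
    "s":   ([1.1, 1.2, 1.3], "W/m^2"),  # southern hemisphere
}}}
series, units = exp["annual"]["SOME_VAR"]["n"]  # hemisphere data without re-reading files
```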
| if self.plots_lnd:
|     logger.warning(
|         f"plots_lnd={self.plots_lnd} will not be plotted in Classic PDF mode."
This is also the reason for the "errors" noted in E3SM-Project/zppy#722 (comment): as in case 3 above, the lnd plots aren't made in Classic PDF mode. Expected test results will need to be updated to reflect that Classic and Viewer plots aren't placed together anymore.
@forsyth2 thanks for the PR. I have some initial review comments and questions:
I put some initial performance numbers in E3SM-Project/zppy#722 (comment). For this latest run, we find:
cd /lcrc/group/e3sm/ac.forsyth2/zppy_weekly_comprehensive_v3_output/test-zppy-gts-split-20250610-try3/v3.LR.historical_0051/post/scripts/ && grep -H "Elapsed time" global_time_series_*.o*
# global_time_series_all_lnd_var_viewer_1985-1995.o759913:Elapsed time: 5526 seconds
# global_time_series_classic_original_8_1985-1995.o759915:Elapsed time: 65 seconds
# global_time_series_classic_original_8_no_ocn_1985-1995.o759914:Elapsed time: 63 seconds
# global_time_series_specific_var_viewers_1985-1995.o759912:Elapsed time: 79 seconds
# We can see here that the all_lnd var case took 5526 seconds = 92.1 minutes = 1.5 hours
# While we're here, let's also check how long it takes NCO to process all the lnd variables:
grep -H "Elapsed time" ts_lnd_monthly_glb_*.o*
# ts_lnd_monthly_glb_1985-1989-0005.o759899:Elapsed time 1m26s
# ts_lnd_monthly_glb_1985-1989-0005.o759899:Elapsed time: 91 seconds
# ts_lnd_monthly_glb_1990-1994-0005.o759900:Elapsed time 1m25s
# ts_lnd_monthly_glb_1990-1994-0005.o759900:Elapsed time: 90 seconds
# None of the other tests have the subtasks, so they're only running the Classic PDF mode.
cd /lcrc/group/e3sm/ac.forsyth2/zppy_weekly_comprehensive_v2_output/test-zppy-gts-split-20250610-try3/v2.LR.historical_0201/post/scripts/ && grep -H "Elapsed time" global_time_series_*.o*
# global_time_series_1980-1990.o759882:Elapsed time: 92 seconds
cd /lcrc/group/e3sm/ac.forsyth2/zppy_weekly_legacy_3.0.0_comprehensive_v3_output/test-zppy-gts-split-20250610-try3/v3.LR.historical_0051/post/scripts/ && grep -H "Elapsed time" global_time_series_*.o*
# global_time_series_1985-1995.o759978:Elapsed time: 89 seconds
cd /lcrc/group/e3sm/ac.forsyth2/zppy_weekly_legacy_3.0.0_comprehensive_v2_output/test-zppy-gts-split-20250610-try3/v2.LR.historical_0201/post/scripts/ && grep -H "Elapsed time" global_time_series_*.o*
# global_time_series_1980-1990.o759948:Elapsed time: 85 seconds
Can you point to a specific code block (i.e., comment on it in the PR)? You mean code is duplicated amongst components in Viewer mode, correct? (i.e., not that code is duplicated between Viewer and Classic PDF modes). I'm looking at the [...]
Sure, @tomvothecoder please review if you have the bandwidth. @chengzhuzhang in the meantime, can you review E3SM-Project/zppy#720 so we can at least get that part merged? (The first two commits of that PR are shared with E3SM-Project/zppy#722.)
I have a few high priority items for e3sm_diags to try to close out before vacation. If I complete them and have time this afternoon, I'll try to squeeze in a quick review for this PR.
| ]
| for rgn in parameters.regions:
|     for component, plot_list in mapping:
|         make_plot_pdfs(
I asked Copilot for potential performance enhancements. One thing that stands out is that the PDFs are currently generated serially; if we use multiprocessing to generate them in parallel, it could save a good amount of run time. Copilot was able to provide a suggested code patch (sketched below), which includes:
- Adding plot_single_variable as a helper for plotting one variable.
- Using ProcessPoolExecutor to map over plot_list in parallel.
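A minimal, untested sketch of that suggestion. The function and variable names are illustrative rather than the actual Copilot patch, and a later commit in this PR reports deadlock/memory issues with heavy multiprocessing, so treat this as an idea to benchmark rather than the adopted approach:

```python
from concurrent.futures import ProcessPoolExecutor
from functools import partial

def plot_single_variable(plot_name: str, rgn: str) -> str:
    # Placeholder for rendering one variable's plot for one region.
    return f"rendered {plot_name} for {rgn}"

def make_plots_parallel(plot_list, rgn, max_workers=8):
    # Fan the per-variable plotting calls out across worker processes.
    worker = partial(plot_single_variable, rgn=rgn)
    with ProcessPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(worker, plot_list))

if __name__ == "__main__":
    print(make_plots_parallel(["TREFHT", "FSNS"], "glb"))  # example variable names
```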
| requested_variables.vars_atm = set_var(
|     exp,
|     "atmos",
|     requested_variables.vars_atm,
|     valid_vars,
|     invalid_vars,
|     parameters,
| )
| requested_variables.vars_ice = set_var(
|     exp,
|     "ice",
|     requested_variables.vars_ice,
|     valid_vars,
|     invalid_vars,
|     parameters,
| )
| requested_variables.vars_land = set_var(
|     exp,
|     "land",
|     requested_variables.vars_land,
|     valid_vars,
|     invalid_vars,
|     parameters,
| )
| requested_variables.vars_ocn = set_var(
|     exp,
|     "ocean",
|     requested_variables.vars_ocn,
|     valid_vars,
|     invalid_vars,
|     parameters,
| )
Here is an example of duplicated code for components.
Hmm, I would say the code formatting simply makes this appear more duplicative than it really is. It's a little tricky because we'd need to iterate through attributes rather than dictionary keys here.
The following at first appears like it would work, but it would not:

for exp_key, var_list in zip(
    ["atm", "ice", "land", "ocean"],
    [
        requested_variables.vars_atm,
        requested_variables.vars_ice,
        requested_variables.vars_land,
        requested_variables.vars_ocn,
    ],
):
    var_list = set_var(exp, exp_key, var_list, valid_vars, invalid_vars, parameters)

Notice that while var_list is rebound, requested_variables.vars_{component} is absolutely not being updated!
We could try to dive deeper and have set_var simply mutate the var_list parameter, but that is also impractical. We create new_var_list by iterating through var_list. And then at that point, var_list = new_var_list would have exactly the same problem as above, i.e., the variable we want updated isn't getting updated.
And we can't simply mutate var_list while iterating through the for loop because that can cause issues and is thus bad practice. Really we'd need to do an element-wise copy from new_var_list to var_list.
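For what it's worth, one way around the rebinding issue is to iterate over attribute names and write the result back with setattr. A standalone sketch follows; the stubs only mimic the shapes of the real set_var and requested-variables objects:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class RequestedVariables:  # stand-in for the repo's real class
    vars_atm: List[str] = field(default_factory=lambda: ["TREFHT", "BOGUS"])
    vars_ice: List[str] = field(default_factory=list)
    vars_land: List[str] = field(default_factory=list)
    vars_ocn: List[str] = field(default_factory=list)

def set_var(exp, component, var_list, valid_vars, invalid_vars, parameters):
    return [v for v in var_list if v in valid_vars]  # toy stand-in for the real filter

requested_variables = RequestedVariables()
exp, parameters = {}, None
valid_vars, invalid_vars = {"TREFHT"}, set()

for attr, component in [
    ("vars_atm", "atmos"),
    ("vars_ice", "ice"),
    ("vars_land", "land"),
    ("vars_ocn", "ocean"),
]:
    updated = set_var(
        exp, component, getattr(requested_variables, attr),
        valid_vars, invalid_vars, parameters,
    )
    setattr(requested_variables, attr, updated)  # the attribute itself is updated

print(requested_variables.vars_atm)  # ['TREFHT']
```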
| traceback.print_exc()
| logger.error(f"plot_generic failed. Invalid plot={plot_name}, rgn={rgn}")
| invalid_plots.append(plot_name)
| i += 1
this is probably not needed?
Sorry, I mean i += 1 is not needed?
Yes, it appears this didn't cause any issues because i gets rewritten on each iteration, but yes it should be removed. Probably a holdover from the original combined plotting.
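For reference, a tiny sketch of how enumerate removes the manual bookkeeping if an index is still needed; names here are placeholders:

```python
plot_list = ["plot_a", "plot_b"]  # placeholder plot names
for i, plot_name in enumerate(plot_list):
    # i comes from enumerate; no trailing `i += 1` needed in the loop body.
    print(i, plot_name)
```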
I pointed to an example of duplicated code among components, just in Viewer mode. If the PR is just for separation (not performance), it looks good to me. It does feel like we should be able to get a speedup using multiprocessing, because in Viewer mode the variables can be processed and plotted in parallel; that can probably be addressed in another PR. I will move on to the zppy PR now. Just to clarify: the previous configuration should still work?
I agree. It's a good improvement but it will add more complexity, so it's a good idea to implement it as a distinct PR.
Yes, the legacy [...] should still work.
I tagged Copilot to do a review. Refer to it with discretion of course. I won't be able to review, but can do so when I get back from vacation (6/25) if this PR is still open.
Pull Request Overview
This PR refactors the global_time_series task by splitting the Classic PDF generation and Viewer implementations, updates CLI argument handling, and cleans up shared utilities to support both modes.
- Introduces a dedicated Viewer driver and plotting modules
- Updates Parameters to centralize shared and mode-specific configuration
- Adjusts __main__.py to dispatch between Classic and Viewer workflows and updates tests accordingly
Reviewed Changes
Copilot reviewed 19 out of 19 changed files in this pull request and generated 2 comments.
Show a summary per file
| File | Description |
|---|---|
| zppy_interfaces/global_time_series/viewer/driver.py | New Viewer entry point invoking run_coupled_global |
| zppy_interfaces/global_time_series/viewer/coupled_global_plotting.py | Viewer-specific plotting logic |
| zppy_interfaces/global_time_series/viewer/coupled_global.py | Core Viewer data-processing and run flow |
| zppy_interfaces/global_time_series/utils.py | Enhanced Parameters class with shared/mode flags |
| zppy_interfaces/global_time_series/main.py | Updated CLI to select Classic PDF vs. Viewer mode |
| tests/unit/global_time_series/test_global_time_series.py | Tests extended to cover both classic and viewer paths |
Comments suppressed due to low confidence (3)
zppy_interfaces/global_time_series/utils.py:70
- The deprecation warning for ncols mistakenly labels the parameter as nrows; consider updating it to ncols={self.ncols} for accuracy.
logger.warning(f"nrows={self.ncols} is DEPRECATED. It will be overridden as 2.")
zppy_interfaces/global_time_series/viewer/coupled_global.py:108
- [nitpick] The variable name vars shadows the built-in vars() function; consider renaming it (e.g., component_vars) to avoid confusion.
vars = get_vars(requested_variables, component)
zppy_interfaces/global_time_series/viewer/driver.py:8
- [nitpick] The new viewer run entry point doesn't have corresponding unit tests; adding tests to cover this function will help ensure correct mode dispatching.
def run(parameters: Parameters):
| traceback.print_exc()
| logger.error(f"plot_generic failed. Invalid plot={plot_name}, rgn={rgn}")
| invalid_plots.append(plot_name)
| i += 1
Copilot AI (Jun 12, 2025)
The manual increment of 'i' inside the for-loop is redundant because the loop variable is automatically updated; you can remove this line for clarity.
| i += 1
| valid_plots,
| invalid_plots,
| )
| logger.info(f"These {rgn} region plots generated successfully: {valid_plots}")
Copilot AI (Jun 12, 2025)
The valid_plots (and invalid_plots) lists accumulate entries across regions, so log messages include plots from previous regions; consider resetting these lists at the start of each region loop.
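A self-contained sketch of the suggested fix, with the bookkeeping lists reset inside the region loop; the component/plot names and logger setup are placeholders:

```python
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

regions = ["glb", "n", "s"]
mapping = [("atm", ["plot_a"]), ("lnd", ["plot_b"])]  # placeholder components/plots

for rgn in regions:
    valid_plots: list = []    # reset per region, as suggested above
    invalid_plots: list = []
    for component, plot_list in mapping:
        # make_plot_pdfs(...) would append to valid_plots/invalid_plots here.
        valid_plots.extend(plot_list)
    logger.info(f"These {rgn} region plots generated successfully: {valid_plots}")
```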
Force-pushed from 733854d to 64e1450.
@tomvothecoder @xylar The CI/CD Build Workflow is failing for a reason I can't work out: https://github.com/E3SM-Project/zppy-interfaces/actions/runs/16012365805/job/45172523746?pr=26. Relevant pieces of the stack trace: [...]. It was originally failing with [...]. I'm also confused where [...]
…dling
Multiprocessing with 128+ processes was causing deadlocks and memory issues when processing 300+ NetCDF variables. Sequential processing provides better reliability and memory management for this I/O-intensive workload.
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <[email protected]>

Adds run_zi_global-time-series.py to demonstrate usage of the global time series functionality with real data paths and performance timing.
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <[email protected]>
Force-pushed from 02927db to 3909dfa.
Testing process
cd ~/ez/zppy-interfaces
git status
# Branch: issue-23-gts-split
# No uncommitted changes
git log
# Last commit (9/30): fix directory generation for component only config
# Bad, missing latest commits from https://github.com/E3SM-Project/zppy-interfaces/pull/26/commits
# First, make a backup
git checkout -b issue-23-gts-split-backup20251001
git checkout issue-23-gts-split
git fetch upstream
git reset --hard upstream/issue-23-gts-split
git log
# Good, latest commit: Merge remote changes and fix single-variable plot generation bug
# Good, commit hashes match https://github.com/E3SM-Project/zppy-interfaces/pull/26/commits
# Now, to rebase off the latest `main`
git fetch upstream
git rebase upstream/main
# We have some merge conflicts
git status
# Last commands done (19 commands done)
git grep -n "<<<<<<<"
# zppy_interfaces/global_time_series/coupled_global/plots_component.py:10:<<<<<<< HEAD
# zppy_interfaces/global_time_series/coupled_global/plots_component.py:56:<<<<<<< HEAD
# zppy_interfaces/global_time_series/coupled_global/utils.py:4:<<<<<<< HEAD
# zppy_interfaces/global_time_series/coupled_global/utils.py:235:<<<<<<< HEAD
# zppy_interfaces/global_time_series/coupled_global/utils.py:389:<<<<<<< HEAD
# zppy_interfaces/global_time_series/coupled_global/utils.py:715:<<<<<<< HEAD
#
# For all: Accept Current Change
git add zppy_interfaces/global_time_series/coupled_global/plots_component.py zppy_interfaces/global_time_series/coupled_global/utils.py
git rebase --continue
git status
# Last commands done (20 commands done)
#
# More merge conflicts
# At this point, let's just cherry-pick that commit on the original working branch...
git rebase --abort
# Let's rename this branch
git branch -m issue-23-gts-split-GitHub-0251001
# And use our backup branch instead:
git checkout issue-23-gts-split-backup20251001
git checkout -b issue-23-gts-split
git log
# Last commit: fix directory generation for component only config
# Now, we need to make this match https://github.com/E3SM-Project/zppy-interfaces/pull/26/commits
# That means adding the "fix for single variable" commit.
# (The merge commit adds no diffs)
git fetch upstream issue-23-gts-split
git cherry-pick 63599cb1e0dbe3255fa0089f5581f08615999364
git log
# Last commit: fix for single variable
# Good
git fetch upstream main
git rebase upstream/main
rm -rf build
conda clean --all --y
conda env create -f conda/dev.yml -n zi-gts-updates-20251001
conda activate zi-gts-updates-20251001
pre-commit run --all-files
python -m pip install .
pytest tests/unit/global_time_series/test_*.py
# 10 passed in 24.12s
cd ~/ez/zppy
git status
# Branch: update-python
# No uncommitted changes
git checkout issue-23-gts-split
git log
# Last commit (9/5): remove subsection to align with zi
# Good, matches https://github.com/E3SM-Project/zppy/pull/722/commits
# Hashes match too
# We do want to get the testing fixes commit though
git fetch upstream further-test-fixes
git cherry-pick b0132f0600cb6f867e6f45b37f2acf2d32a49c59
git log
# Good, commit added
rm -rf build
conda clean --all --y
conda env create -f conda/dev.yml -n zppy-gts-updates-20251001
conda activate zppy-gts-updates-20251001
pytest tests/test_*.py
# 25 passed in 0.69s
# Edit tests/integration/utils.py
# TEST_SPECIFICS: Dict[str, Any] = {
# "diags_environment_commands": "source /gpfs/fs1/home/ac.forsyth2/miniforge3/etc/profile.d/conda.sh; conda activate e3sm-diags-main-20250926-rerun",
# "global_time_series_environment_commands": "source /gpfs/fs1/home/ac.forsyth2/miniforge3/etc/profile.d/conda.sh; conda activate zi-gts-updates-20251001",
# "cfgs_to_run": [
# "weekly_bundles",
# "weekly_comprehensive_v2",
# "weekly_comprehensive_v3",
# "weekly_legacy_3.0.0_bundles",
# "weekly_legacy_3.0.0_comprehensive_v2",
# "weekly_legacy_3.0.0_comprehensive_v3",
# ],
# "tasks_to_run": ["global_time_series"],
# "unique_id": "test_gts_updates_20251001",
# }
python tests/integration/utils.py
pre-commit run --all-files
python -m pip install .
zppy -c tests/integration/generated/test_weekly_bundles_chrysalis.cfg
zppy -c tests/integration/generated/test_weekly_comprehensive_v2_chrysalis.cfg
zppy -c tests/integration/generated/test_weekly_comprehensive_v3_chrysalis.cfg
zppy -c tests/integration/generated/test_weekly_legacy_3.0.0_bundles_chrysalis.cfg
zppy -c tests/integration/generated/test_weekly_legacy_3.0.0_comprehensive_v2_chrysalis.cfg
zppy -c tests/integration/generated/test_weekly_legacy_3.0.0_comprehensive_v3_chrysalis.cfg
# Check on bundles status
cd /lcrc/group/e3sm/ac.forsyth2/zppy_weekly_bundles_output/test_gts_updates_20251001/v3.LR.historical_0051/post/scripts
grep -v "OK" *status
# Good, no errors
cd /lcrc/group/e3sm/ac.forsyth2/zppy_weekly_legacy_3.0.0_bundles_output/test_gts_updates_20251001/v3.LR.historical_0051/post/scripts
grep -v "OK" *status
# Good, no errors
# Now, run bundles part 2
cd ~/ez/zppy
git status
# No uncommitted changes
# Branch: issue-23-gts-split
zppy -c tests/integration/generated/test_weekly_bundles_chrysalis.cfg
zppy -c tests/integration/generated/test_weekly_legacy_3.0.0_bundles_chrysalis.cfg
# Actually, nothing new to run when we're only running global_time_series
### v2 ###
cd /lcrc/group/e3sm/ac.forsyth2/zppy_weekly_comprehensive_v2_output/test_gts_updates_20251001/v2.LR.historical_0201/post/scripts
grep -v "OK" *status
# Good, no errors
cd /lcrc/group/e3sm/ac.forsyth2/zppy_weekly_legacy_3.0.0_comprehensive_v2_output/test_gts_updates_20251001/v2.LR.historical_0201/post/scripts
grep -v "OK" *status
# Good, no errors
### v3 ###
cd /lcrc/group/e3sm/ac.forsyth2/zppy_weekly_comprehensive_v3_output/test_gts_updates_20251001/v3.LR.historical_0051/post/scripts
grep -v "OK" *status
# Good, no errors
cd /lcrc/group/e3sm/ac.forsyth2/zppy_weekly_legacy_3.0.0_comprehensive_v3_output/test_gts_updates_20251001/v3.LR.historical_0051/post/scripts
grep -v "OK" *status
# Good, no errors
cd ~/ez/zppy
ls tests/integration/test_*.py
git status
# Branch: issue-23-gts-split
pytest tests/integration/test_bash_generation.py
# 1 failed in 2.40s
# Diffs appear to be expected, based on this PR's changes to zppy/templates/global_time_series.bash
pytest tests/integration/test_campaign.py
# 6 passed in 4.14s
pytest tests/integration/test_defaults.py
# 1 passed in 0.62s
pytest tests/integration/test_last_year.py
# 1 passed in 0.60s
pytest tests/integration/test_bundles.py
# 2 failed in 0.52s
# Failure is because we didn't run the entire cfg and
# this test isn't customized like test_images.py is.
pytest tests/integration/test_images.py
# 1 failed in 13.92s
cat test_images_summary.md
git log
# Matches https://github.com/E3SM-Project/zppy/pull/722/commits
# + https://github.com/E3SM-Project/zppy/pull/738/commits
# So, nothing to add here
cd ~/ez/zppy-interfaces
git log
# Doesn't match https://github.com/E3SM-Project/zppy-interfaces/pull/26/commits
# Need to push rebased commits
git push -f upstream issue-23-gts-split

Complete results
Diff subdir is where to find the lists of missing/mismatched images, the image diff grid, and the individual diffs.
Concise results
And those failures are because we changed what subtasks run for the v3 test, so all good.

Test cases
Ok, it looks like the atmosphere plots are now linked, good.

Ready to merge
I think E3SM-Project/zppy#722 and #26 are now ready to merge. I tested on rebased branches and have pushed the rebased commits. The atm plots and links are now showing up -- example. @chengzhuzhang Do you want to check anything else, or can I go ahead and merge these? Thanks for all the effort on this!
Great news. I'm not very clear about the cases you tested: what is the difference between "Original only" and "Original 8 plots"? Otherwise, I think this PR is good to merge.
Sorry, the last row should actually be "Original 8 plots, no ocn".
In this case, it's a bit odd that "Original 8 plots, no ocn" produces no output when make_viewer = True, while Original does produce output. Let me quickly check the logic that disables output in the "Original 8 plots, no ocn" and "All land variables" cases. It might be better to always generate output, regardless of whether make_viewer is set to True or False.
That is because I didn't run that case. N/A meant it wasn't one of the test cases.
Those last two rows are sort of extra add-on cases. The top 3 rows are the basics.
The test cases are defined in the test cfg https://github.com/E3SM-Project/zppy/blob/69c0a18edef13f058b21c190483d611123005291/tests/integration/generated/test_weekly_comprehensive_v3_chrysalis.cfg:
Oh, I misinterpreted the table. Thanks for clarifying. If N/A just means the case was not tested, then we are okay.
Great, I'll merge the PRs then, thanks!!
Summary
Corresponding zppy PR: E3SM-Project/zppy#722
Objectives:
- Split the Classic PDF and Viewer modes of global_time_series. These two output formats deviate substantially, so separating them can help clean up the code.
Issue resolution:
- Resolves [...] DatasetWrapper.dataset #21 (by replacing "Only read requested vars" #22)
Select one: This pull request is...
- Big Change
1. Does this do what we want it to do?
Required:
2. Are the implementation details accurate & efficient?
Required:
If applicable:
- [...] zppy-interfaces/conda, not just an import statement.
3. Is this well documented?
Required:
4. Is this code clean?
Required:
If applicable: