Benchmarking recipes (Lauer et al.) #3598
base: main
Conversation
Great work @schlunma and @axel-lauer !
Some minor suggestions and questions from me.
I note also that the readthedocs build is still failing (so I haven't fully reviewed the documentation), and that there are a couple of style complaints in run_tests.
# Make sure that the data has the correct dimensions
cube = dataset['cube']
Does this line ensure that the data has the right dimensions?
Also, I understand that _plot_benchmarking_zonal() expects the cube to have the zonal mean already applied by a preprocessor, rather than being a function that computes and plots the zonal mean of 3D data itself. In addition, zonal means are often shown as 1D line plots (i.e. the zonal mean of a 2D lat-lon field), so there is scope for confusion there too. Is the required shape of the data documented?
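To illustrate the reviewer's point about the expected data shape, here is a minimal sketch of a dimension guard. The helper name `ensure_zonal_mean` and the plain NumPy arrays are assumptions for illustration; the actual diagnostic operates on Iris cubes and relies on the preprocessor having already applied the zonal mean:

```python
import numpy as np


def ensure_zonal_mean(data, lon_axis=-1):
    """Return a 1D zonal-mean profile from 1D or 2D input.

    Hypothetical helper: if given a 2D (lat, lon) field, average over
    longitude; if the zonal mean was already applied (1D), pass it through.
    """
    data = np.asarray(data, dtype=float)
    if data.ndim == 2:  # (lat, lon) -> collapse the longitude axis
        return data.mean(axis=lon_axis)
    if data.ndim == 1:  # already a zonal-mean profile
        return data
    raise ValueError(f"expected 1D or 2D data, got {data.ndim}D")


# 3 latitudes x 4 longitudes -> a 3-element zonal-mean profile
field = np.arange(12.0).reshape(3, 4)
profile = ensure_zonal_mean(field)
```

Documenting which of these shapes the plotting function accepts (and raising early otherwise) would resolve the ambiguity the reviewer describes.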
Description
This PR implements a set of benchmarking recipes that compare different metrics (RMSE, bias, correlation, EMD) calculated for a given model simulation against the results from an ensemble of (model) datasets.
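For orientation, the four metrics named above can be sketched in plain NumPy. This is an illustrative sketch only, not the ESMValTool implementation (which uses the new distance_metric preprocessor); the function name and the equal-weight 1D formulation of EMD are assumptions:

```python
import numpy as np


def benchmarking_metrics(model, reference):
    """Illustrative versions of the four benchmarking metrics.

    EMD is computed here as the 1D Wasserstein-1 distance between two
    equal-size samples, i.e. the mean absolute difference of the sorted
    values; the real preprocessor is more general.
    """
    model = np.asarray(model, dtype=float).ravel()
    reference = np.asarray(reference, dtype=float).ravel()
    diff = model - reference
    return {
        "bias": float(diff.mean()),
        "rmse": float(np.sqrt((diff ** 2).mean())),
        "correlation": float(np.corrcoef(model, reference)[0, 1]),
        "emd": float(np.abs(np.sort(model) - np.sort(reference)).mean()),
    }


metrics = benchmarking_metrics([1.0, 2.0, 3.0], [1.0, 2.0, 4.0])
```

In the recipes, each metric is computed for the model under test and for every ensemble member, so the model's score can be ranked within the ensemble distribution.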
For this, the existing monitoring diagnostics monitoring/monitor.py and monitor/multi_datasets.py have been extended.
The new diurnal cycle plot has also been added to the following existing recipes:
Documentation for the benchmarking recipes is available in recipes/recipe_benchmarking.rst, and the documentation for monitoring and model evaluation has been updated to include the diurnal cycle plots.
Note for testing
The benchmarking recipes require the new preprocessor functions local_solar_time and distance_metric, as well as the extended version of the resample_hours preprocessor.
Checklist
It is the responsibility of the author to make sure the pull request is ready to review. The icons indicate whether the item will be subject to the 🛠 Technical or 🧪 Scientific review.
New or updated recipe/diagnostic