Home
The Water Resources Evaluation Service (WRES) is a comprehensive service for evaluating the quality of model predictions, such as hydrometeorological forecasts. The WRES encapsulates a data-to-statistics evaluation pipeline, including reading data from files or web services, rescaling data, changing measurement units, filtering data, pairing predictions and observations, allocating pairs to pools based on pooling criteria (e.g., common forecast lead times), computing statistics, and writing statistics in a variety of formats.
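Evaluations are driven by a declaration written in YAML. As a minimal sketch, a declaration created on the command line might look like the following (the file names and the unit and metrics options are illustrative assumptions; the Declaration language wiki, linked below, documents the authoritative syntax):

```
# A minimal, illustrative declaration; the file names and the unit and
# metrics options shown are assumptions, not confirmed syntax.
cat > your_evaluation.yml <<'EOF'
observed: observations.csv   # the observed (left) dataset
predicted: predictions.csv   # the predicted (right) dataset to evaluate
unit: m3/s                   # assumed option: the evaluation measurement unit
metrics:                     # assumed option: the statistics to compute
  - mean absolute error
EOF
```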
The WRES has three modes of operation:
- "Cluster mode" using a web-service instance. This is the preferred mechanism for deploying the WRES "at scale" as a centrally-managed, multi-user or "cluster" instance on server hardware. It is described in the WRES Web Service wiki (yet to be written), which will include instructions for setting up a web-service instance. An example instance is the Central OWP WRES (COWRES), which is hosted at the National Water Center (NWC) in Tuscaloosa, Alabama, and available for use from National Weather Service (NWS) River Forecast Center (RFC) and Office of Water Prediction (OWP) machines.
- "Standalone mode" using a short-running instance. This requires no particular installation or deployment and is the preferred mechanism for a "laptop user", i.e., for performing modestly-sized evaluations on consumer hardware. This mechanism is described below and requires either downloading an official release (preferred) or cloning the source code and building the software locally.
- "Standalone mode" using a long-running, local-server instance. This has a similar scope of application to a short-running standalone (see above). However, it benefits from reduced latency/spin-up time because the software is running continuously in the background. This mechanism is described in the WRES Local Server wiki. It, too, requires either downloading a release artifact (preferred) or cloning the source code and building the software locally.
In each mode, evaluations may be executed in main memory (RAM), which generally improves performance, but is only viable for evaluations whose datasets will fit in main memory, or against a database, which is generally required for larger evaluations across many geographic features. See Instructions for Using WRES for more information.
That wiki will redirect you to other wikis as needed, including the Declaration language wiki for instructions on declaring evaluations, and the wikis for the different modes of deployment/operation described above.
Running the WRES as a command-line application may be the simplest way to execute the software. However, this is less efficient than a central (cluster) deployment because each instance must be deployed, managed, updated and supported separately. If a web-service instance is available, it is highly recommended that you use it. Still, instructions for downloading and executing the standalone are below.
a. Navigate to the releases page, https://github.com/NOAA-OWP/wres/releases.
b. Download the latest core zip from the assets of the most recent deployment. That .zip should follow the pattern wres-DATE-VERSION.zip.
Unzip the release package and change directory to the unzipped wres directory. To execute a project, you can run the following command:

```
bin/wres execute your_evaluation.yml
```
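Putting steps a and b together with the command above, a complete session might look like the following sketch (the release tag, artifact name, and unzipped directory name are placeholders, not real values; substitute them from the releases page):

```
# Placeholders: substitute a real tag and artifact name from the releases page.
curl -L -O https://github.com/NOAA-OWP/wres/releases/download/TAG/wres-DATE-VERSION.zip
unzip wres-DATE-VERSION.zip
cd wres-DATE-VERSION            # the unzipped wres directory
bin/wres execute your_evaluation.yml
```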
Building the software locally will be necessary for developers/contributors, but, if you just want to execute evaluations locally, downloading a released version is recommended, as described above. Instructions for building the WRES are below.
To build the WRES for local use, clone the repository and run the following command in your preferred terminal (use gradlew.bat on a Windows machine):

```
./gradlew check javadoc installDist
```
This is similar to unzipping the production distribution zip locally: check runs the automated tests, javadoc generates the API documentation, and installDist installs the WRES software in the build/install/wres directory, as if unzipped.
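To confirm the build landed where expected, you can check for the launcher script (a quick sanity check using standard shell tools; the path assumes the default Gradle install location named above):

```
# Verify that the installed distribution and its launcher exist
ls build/install/wres/bin/wres
```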
Do the following:

```
cd build/install/wres/
bin/wres execute yourProject.yml
```
Upon cloning the repository, a large number of system test scenarios are available as example evaluations. The simplest, and very small, example system test scenario with data in the repository is scenario500. Once the software has been installed locally, running the following command (use wres.bat on a Windows machine) will execute the scenario500 example using the executable you have created (this assumes you are running from the wres/build/install/wres directory):

```
bin/wres execute ../../../systests/scenario500/evaluation.yml
```
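To see a complete, working example of the declaration language before running the scenario, you can print the scenario500 declaration (the path is relative to the wres/build/install/wres directory, as above):

```
# Inspect the scenario500 declaration, a complete working example
cat ../../../systests/scenario500/evaluation.yml
```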
The WRES sources time-series and other datasets from web services. These data sources can vary significantly in quality. It is the responsibility of the user to verify the accuracy of the datasets used for model evaluations. In some cases, such as USGS stage and discharge measurements, data may be provisional, i.e., subject to change. The quality of the measurements from individual instruments can vary significantly. An evaluation is only as informative as the datasets being evaluated. Users are assumed to have considered the site-specific details of the data before interpreting and using any evaluation statistics to guide their decision processes.
The WRES Wiki
- Options for Deploying and Operating the WRES
  - Obtaining and using the WRES as a standalone application
  - WRES Local Server
  - WRES Web Service (under construction)
- Format Requirements for CSV Files
- Format Requirements for NetCDF Files
- Introductory Resources on Forecast Verification
- Instructions for Human Interaction with a WRES Web-service
- Instructions for Programmatic Interaction with a WRES Web-service
- Output Format Description for CSV2
- Posting timeseries data directly to a WRES web‐service as inputs for a WRES job
- WRES Scripts Usage Guide