James Brown edited this page Jul 18, 2024 · 29 revisions

The Water Resources Evaluation Service

The Water Resources Evaluation Service (WRES) is a comprehensive service for evaluating the quality of model predictions, such as hydrometeorological forecasts. The WRES encapsulates a data-to-statistics evaluation pipeline, including reading data from files or web services, rescaling data, changing measurement units, filtering data, pairing predictions and observations, allocating pairs to pools based on pooling criteria (e.g., common forecast lead times), computing statistics, and writing statistics in a variety of formats.

What are the options for deploying and operating WRES?

The WRES has three modes of operation:

  • "Cluster mode" using a web-service instance. This is the preferred mechanism for deploying the WRES "at scale" as a centrally managed, multi-user or "cluster" instance on server hardware. It is described in the wiki, WRES Web Service (wiki is yet to be written), which includes instructions for setting up a web-service instance. An example instance is the Central OWP WRES (COWRES), which is hosted at the National Water Center (NWC) in Tuscaloosa, Alabama, and is available for use from National Weather Service (NWS) River Forecast Center (RFC) and Office of Water Prediction (OWP) machines.

  • "Standalone mode" using a short-running instance. This requires no particular installation or deployment and is the preferred mechanism for a "laptop user", i.e., for performing modestly sized evaluations on consumer hardware. This mechanism is described below and requires either downloading an official release (preferred) or cloning the source code and building the software locally.

  • "Standalone mode" using a long-running, local-server instance. This has a similar scope of application to a short-running standalone (see above). However, it benefits from reduced latency/spin-up time because the software is running continuously in the background. This mechanism is described in the wiki WRES Local Server. It, too, requires either downloading a release artifact (preferred) or cloning the source code and building the software locally.

In each mode, evaluations may be executed in main memory (RAM) or against a database. In-memory execution generally improves performance, but it is only viable for evaluations whose datasets fit in main memory; a database is generally required for larger evaluations that span many geographic features. See Instructions for Using WRES for more information.

What are the general instructions for using the WRES to perform an evaluation?

See Instructions for Using WRES. That wiki will direct you to other wikis as needed, including the Declaration language wiki for instructions on declaring an evaluation, and the wikis for the different modes of deployment/operation described above.

What if I just want to download a release and run the WRES as a command-line application?

Running the WRES as a command-line application may be the simplest way to execute the software. However, this is less efficient than a central (cluster) deployment because each instance must be deployed, managed, updated and supported separately. If a web-service instance is available, it is highly recommended that you use it. Still, instructions for downloading and executing the standalone are below.

1. Make sure you have the correct version of Java, which must be Java 17 or higher.
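The Java check in step 1 can be scripted. The snippet below is a sketch that parses a version line of the form printed by `java -version`; the `version_line` here is a hard-coded sample, so on your machine you would capture the real line with `java -version 2>&1 | head -n 1`.

```shell
# Sketch: verify the Java major version is at least 17 (step 1).
# The sample line below mimics what `java -version` prints to stderr;
# replace it with: version_line=$(java -version 2>&1 | head -n 1)
version_line='openjdk version "17.0.9" 2023-10-17'

# Extract the leading digits inside the quotes (the major version).
# Note: Java 8 reports "1.8.0_...", so its major parses as 1 and fails.
major=$(echo "$version_line" | sed -E 's/[^"]*"([0-9]+).*/\1/')

if [ "$major" -ge 17 ]; then
  echo "Java $major meets the WRES requirement"
else
  echo "Java 17 or higher is required (found major version $major)"
fi
```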

2. Obtain the latest distribution .zip and install the software locally.

a. Navigate to the releases page, https://github.com/NOAA-OWP/wres/releases.

b. Download the latest core zip from the assets of the most recent deployment. That .zip should follow the pattern, wres-DATE-VERSION.zip.
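When scripting step 2b, the core archive can be picked out of the release assets by its naming pattern. The sketch below filters a list of asset names for the wres-DATE-VERSION.zip pattern; the names shown are made-up stand-ins, not real releases, and the real list would come from the releases page.

```shell
# Sketch: select the core wres-DATE-VERSION.zip from a list of release
# assets (step 2b). These asset names are illustrative stand-ins only.
assets='wres-20240718-6.14.zip
wres-vis-20240718-6.14.zip
wres-writing-20240718-6.14.zip'

# The core archive has no sub-component suffix after "wres-": it is
# "wres-", an 8-digit date, a hyphen, a dotted version, then ".zip".
echo "$assets" | grep -E '^wres-[0-9]{8}-[0-9.]+\.zip$'
```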

3. Unpack the file and execute your evaluation.

Unzip the release package and change directory to the unzipped wres directory. To execute an evaluation, run the following command:

bin/wres execute your_evaluation.yml
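The your_evaluation.yml file passed to bin/wres is an evaluation declaration. As a rough sketch only, written here with a shell heredoc, a minimal declaration pairs an observed dataset with a predicted dataset; the key names and file paths below are illustrative assumptions, so consult the Declaration language wiki for the authoritative schema.

```shell
# Sketch: write a minimal, hypothetical evaluation declaration (the key
# names and paths are illustrative; see the Declaration language wiki).
cat > your_evaluation.yml <<'EOF'
# Observations to evaluate against.
observed: observations.csv
# Model predictions to be paired with the observations.
predicted: predictions.csv
EOF

cat your_evaluation.yml
```

With a declaration like this in place, the bin/wres execute command above will read it and run the evaluation.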

What if I want to obtain and build the software myself?

Building the software locally will be necessary for developers/contributors, but, if you just want to execute evaluations locally, downloading a released version is recommended, as described above. Instructions for building the WRES are below.

1. Make sure you have the correct version of Java, which must be Java 17 or higher.

2. Obtain and build the WRES software.

To build the WRES for local use, clone the repository and run the following command in your preferred terminal (use gradlew.bat on a Windows machine):

./gradlew check javadoc installDist

This is similar to unzipping the production distribution zip locally. The WRES software will be installed in the build/install/wres directory, as if unzipped.

3. Navigate to the installed software and execute your evaluation project.

Do the following:

cd build/install/wres/
bin/wres execute yourProject.yml

Running an example evaluation using system-test data available in the cloned repository

Upon cloning the repository, a large number of system-test scenarios are available as example evaluations. The simplest (and very small) example scenario with data in the repository is scenario500. Once the software has been installed locally, running the following command (use wres.bat on a Windows machine) will execute the scenario500 example using the executable you have created, assuming you are running from the wres/build/install/wres directory:

bin/wres ../../../systests/scenario500/evaluation.yml

Disclaimer: Data Accuracy

The WRES sources time-series and other datasets from web services. These data sources can vary significantly in quality. It is the responsibility of the user to verify the accuracy of the datasets used for model evaluations. In some cases, such as USGS stage and discharge measurements, data may be provisional, i.e., subject to change. The quality of the measurements from individual instruments can vary significantly. An evaluation is only as informative as the datasets being evaluated. Users are assumed to have considered the site-specific details of the data before interpreting and using any evaluation statistics to guide their decision processes.
