I've been trying to work out how to check that we don't break models between commits, because sometimes it only becomes apparent that something has changed when you look at the example plots and notice they look wrong.
For example, when I recently changed the PAR API, I only noticed that it wasn't working properly by looking at one of the box model example outputs.
I guess one solution is to improve unit testing, but that requires us to foresee the problems. Another idea I had was to run regression tests, so that if model outputs change we know, and can hunt down the cause.
I'm dubious about calling them regression tests like Oceananigans does, because we're not checking that the models fit some physical/analytical result, just that they don't change from commit to commit.
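To illustrate the idea, here is a minimal sketch of what such a test could look like. The "model" here is a toy stand-in, not the OceanBioME API; a real version would run one of the box models and compare against reference output stored from a trusted commit.

```julia
using Test

# Toy stand-in for a box-model integration; purely illustrative, not OceanBioME code.
function run_toy_model(; steps = 100, dt = 0.1)
    n = 0.5                       # a single "tracer"
    history = Float64[]
    for _ in 1:steps
        n += dt * (1 - n)         # relax towards 1
        push!(history, n)
    end
    return history
end

# In a real regression test the reference would be generated once from a trusted
# commit and stored as an artifact; here it is just recomputed for illustration.
reference = run_toy_model()

# The regression check: any change in model output trips the test.
@test isapprox(run_toy_model(), reference; rtol = 1e-12)
```

The tolerance would need some thought, since bit-for-bit reproducibility across Julia versions and hardware is hard to guarantee.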
Either some regression tests or simple "physics tests"?
For example, some tests that ensure things behave in a physically meaningful way? We have implemented a few of those for GeophysicalFlows.jl; see, e.g., https://github.com/FourierFlows/GeophysicalFlows.jl/blob/main/test/test_twodnavierstokes.jl and more. These include, e.g., how a Rossby wave should propagate, ensuring a tracer is conserved after being advected, or more complicated ones such as testing that the energy budget closes. We might have to think of a few for OceanBioME models.
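As a concrete sketch of the tracer-conservation style of test: advect a tracer on a periodic grid with a simple first-order upwind scheme and check that its total content is unchanged. This is a toy scheme for illustration, not OceanBioME or GeophysicalFlows code.

```julia
using Test

# Advect tracer c with constant speed u > 0 on a periodic 1D grid using
# first-order upwind; the flux-difference form conserves sum(c) exactly.
function advect_periodic(c, u, dx, dt, steps)
    for _ in 1:steps
        flux = u .* c                                        # upwind flux for u > 0
        c = c .- (dt / dx) .* (flux .- circshift(flux, 1))   # F[i] - F[i-1]
    end
    return c
end

# Gaussian tracer blob on a unit periodic domain.
c0 = [exp(-((x - 0.5)^2) / 0.01) for x in range(0, 1, length = 64)]
c  = advect_periodic(c0, 1.0, 1 / 64, 1e-3, 500)

# The physics test: total tracer content is conserved to floating-point precision.
@test isapprox(sum(c), sum(c0); rtol = 1e-12)
```

Tests like this catch real bugs (e.g. a broken advection or source term) without pinning the output to one specific commit.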
I started writing this here: https://github.com/OceanBioME/OceanBioME.jl/tree/jsw/regression-tests/test/regression_tests but was wondering if anyone else thought it was worthwhile?