Refactor `cubewrite()` workflow steps into `process()` (#145)
The above can be combined with splitting the combined workflow. Generator design, split like:

```python
def process(infile, outfile, args):
    # this is the imperative shell
    ff = mule.load_umfile(str(infile))  # I/O
    mv = process_mule_vars(ff)
    cubes = sorted(iris.load(infile), key=something)  # I/O

    with iris.fileformats.netcdf.Saver(outfile, ...) as dest:  # this is I/O
        # write global vars
        for cube in process_cubes(cubes, mv, ...):
            dest.write(cube, zlib=True, complevel=compression, ...)  # also I/O


def process_cubes(cubes, mv, ...):
    # this is the top of the functional core
    for cube in cubes:
        # workflow steps are here
        fix_var_name()
        cube = modified_cube(cube)  # for example
        yield cube  # modified cubes returned to caller
```
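To make the shell/core split concrete, here is a minimal self-contained sketch of the generator pattern. The `iris`/`mule` calls are replaced with plain Python stand-ins (`fix_var_name` as a pure workflow step, a list as the "destination"), which are assumptions for illustration only:

```python
def fix_var_name(cube):
    # example workflow step: pure function, returns a modified copy
    return {**cube, "var_name": cube["var_name"].lower()}


def process_cubes(cubes):
    # functional core: no I/O, just yields transformed cubes
    for cube in cubes:
        yield fix_var_name(cube)


def process(cubes, dest):
    # imperative shell: all "I/O" (here, appending to a list) lives here
    for cube in process_cubes(cubes):
        dest.append(cube)


dest = []
process([{"var_name": "TEMP"}, {"var_name": "SALT"}], dest)
print(dest)  # [{'var_name': 'temp'}, {'var_name': 'salt'}]
```

The core can be unit-tested by iterating `process_cubes()` directly, with no netCDF writer in sight.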
The alternative coroutine design:

```python
def process(infile, outfile, args):
    # this is the imperative shell
    ff = mule.load_umfile(str(infile))  # I/O
    mv = process_mule_vars(ff)
    cubes = sorted(iris.load(infile), key=something)  # I/O

    con = consumer(args, ...)
    process_cubes(cubes, mv, con, ...)


def consumer(outfile, ...):
    # part of the imperative shell
    with iris.fileformats.netcdf.Saver(outfile, ...) as dest:  # I/O
        # write global vars here, then loop to receive each cube
        while True:
            cube = (yield)
            dest.write(cube, zlib=True, complevel=compression, ...)  # I/O
    # testing this is fiddlier due to the con/coroutine dependency


def process_cubes(cubes, mv, con, ...):
    # this is the top level of the functional core
    for cube in cubes:
        # workflow func calls go here
        fix_var_name()
        cube = modified_cube(cube)  # for example
        con.send(cube)  # generic coroutine mechanism, separates this from I/O
    # also return cubes???
    return ...
```

This pattern is fiddlier, but may provide an alternative if the generator version isn't feasible. There are potential benefits if multiple forms of consumer are required (e.g. different versions of libs).
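One source of the fiddliness: a coroutine must be primed with `next()` before it can receive values via `.send()`. A minimal runnable sketch (plain strings stand in for cubes, a list for the writer):

```python
def consumer(dest):
    # imperative-shell consumer: receives cubes via .send()
    while True:
        cube = (yield)
        dest.append(cube)


def process_cubes(cubes, con):
    # functional core: transforms each cube, hands it to the consumer
    for cube in cubes:
        con.send(cube.upper())  # .upper() stands in for the workflow steps


dest = []
con = consumer(dest)
next(con)  # prime the coroutine so it reaches the first (yield)
process_cubes(["temp", "salt"], con)
con.close()
print(dest)  # ['TEMP', 'SALT']
```

Forgetting the priming `next(con)` raises `TypeError: can't send non-None value to a just-started generator`, which is part of why tests for this design carry extra setup.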
The generator design looks clear and clean to me. Could that structure be easily adapted for one-variable-per-file output? Is there a 1:1 relationship between cubes and variables? Would `contextlib.ExitStack` work to switch between single-file and multiple-file output?
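`contextlib.ExitStack` could plausibly handle that switch, since savers can be entered onto the stack dynamically and are all closed on exit. A hedged sketch, where `open_saver` is a hypothetical stand-in for `iris.fileformats.netcdf.Saver` and a log list stands in for real file I/O:

```python
from contextlib import ExitStack, contextmanager


@contextmanager
def open_saver(path, log):
    # hypothetical stand-in for iris.fileformats.netcdf.Saver
    log.append(f"open {path}")
    yield lambda cube: log.append(f"write {cube} -> {path}")
    log.append(f"close {path}")


def write_cubes(cubes, single_file, log):
    with ExitStack() as stack:
        if single_file:
            write = stack.enter_context(open_saver("all.nc", log))
        for cube in cubes:
            if not single_file:
                # one file per cube; ExitStack closes every saver on exit
                write = stack.enter_context(open_saver(f"{cube}.nc", log))
            write(cube)


log = []
write_cubes(["ta", "ua"], single_file=False, log=log)
print(log)
```

Note that `ExitStack` unwinds its contexts in reverse order, so with multiple files the last-opened saver is closed first.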
Because there are a few separate tasks involved, I'll try to address them in bite-size pieces and make small PRs into the
Early October 2024 work has simplified `cubewrite()` enough to move its contents into `process()` and unify the top-level workflow under a single function. This is dependent on merging #114, #118 and #144.
Some tasks for this include:

- `mock.patch("umpost.um2netcdf.cubewrite")`
- `mock.Mock(name="mock_sman")`

Tasks:

- `item_code` to be lost
- `assert`s: how do we compare test fixture cubes against copies from `process_cubes()`? (what assertions are needed for the output cubes?)
- Rename `sman` to `dest` (e.g. `dest.write(cube, ...)`)
- Should `fill_value` be grabbed from the cube for `sman.write()`?
- Is `unlimited_dimensions` stored in the cube?
- `aiihca.paa1jan` & Martin's original version of `aiihca.paa1jan` (larger data tests)