Can't disable logging when using the predict(...) method for a model. #665

Open
harrymyburgh opened this issue Jun 15, 2023 · 8 comments

@harrymyburgh

Whenever I call the predict(...) method for a model, the following output is printed:

Predicting DataLoader 0: 100%|██████████| 1/1 [00:00<00:00, 236.69it/s]

How do I prevent NeuralForecast from logging this output?
Setting the pytorch_lightning logging level to ERROR does not seem to eliminate it.
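
For reference, the call I tried was roughly the following (a sketch; the exact logger name may differ):

import logging

# Lower PyTorch Lightning's own logger level; this still does not remove the
# progress-bar line shown above.
logging.getLogger("pytorch_lightning").setLevel(logging.ERROR)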

@kdgutier
Collaborator

Hey @harrymyburgh,

If you are using a notebook, you can capture the cell's outputs like this:

%%capture
nf.predict()

@harrymyburgh
Author

Hi @kdgutier, thank you for your advice. Do you know how to turn off the logging altogether, though? I'm trying to parallelize predictions from different models, but because they all log to the same location, race conditions occur and the predictions end up running in series.

@kdgutier
Collaborator

Hey @harrymyburgh,

From PL's logging documentation https://pytorch-lightning.readthedocs.io/en/0.10.0/logging.html.
Would something like this work?

import logging

# Silence every logger whose name contains 'lightning'
pl_loggers = [logging.getLogger(name) for name in logging.root.manager.loggerDict if 'lightning' in name]

for logger in pl_loggers:
    logger.setLevel(logging.CRITICAL)
    # logger.setLevel(logging.ERROR)

Also, I found this post on node parallelization of PL: https://medium.com/@joelstremmel22/multi-node-multi-gpu-comprehensive-working-example-for-pytorch-lightning-on-azureml-bde6abdcd6aa.

@harrymyburgh
Author

Hi @kdgutier, unfortunately the above doesn't work. The output comes directly from NeuralForecast's TimeSeriesDataModule, which has no flag available to decrease verbosity. I'm fairly certain now that TimeSeriesDataModule prints to stdout, because I have tried disabling logging altogether and the output still appears. I came across some documentation yesterday explaining how third parties can control the output of PL's DataLoader (which is the offender in TimeSeriesDataModule). There was a specific function that needs to be called to disable the output, but I can't for the life of me find that documentation now (I will post the link here if I manage to find it later today).
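
For what it's worth, "disabling logging altogether" was roughly the following (a sketch; it has no effect on this output):

import logging

# Suppress every log record up to and including CRITICAL for all loggers;
# the "Predicting DataLoader 0: ..." line is still printed.
logging.disable(logging.CRITICAL)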

@tufanbt

tufanbt commented Jun 30, 2023

You can pass enable_progress_bar=False to your models. It is forwarded via trainer_kwargs to the pytorch_lightning.Trainer through the BaseWindows class; a sketch is shown below.
Note: this also disables the training progress bars. You may try to change the kwargs just before prediction.
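
A minimal sketch of what passing the flag looks like (the NHITS model and its hyperparameters are only an example; Y_df stands for your training DataFrame):

from neuralforecast import NeuralForecast
from neuralforecast.models import NHITS

# enable_progress_bar is forwarded through trainer_kwargs to pytorch_lightning.Trainer,
# which suppresses the Trainer's progress bars during fit and predict.
model = NHITS(h=12, input_size=24, max_steps=100, enable_progress_bar=False)
nf = NeuralForecast(models=[model], freq='M')
nf.fit(df=Y_df)            # Y_df: long-format DataFrame with unique_id, ds and y columns
forecasts = nf.predict()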

@harrymyburgh
Author

@tufanbt Thank you for your response. Unfortunately, setting the enable_progress_bar flag only affects the training and prediction progress outputs.

The issue here is specifically to do with the data loaders used by NeuralForecast. It appears that the data loader NeuralForecast uses is custom and inherits from PyTorch's DataLoader class, so I don't think it's anything a NeuralForecast user can fix directly.

I have managed to get around it by redirecting the output to a different output stream, but this is very messy, and writes are still being made to that redirected stream (which is still not ideal).
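
Roughly, the workaround looks like this (a sketch only; the offending output may go to stdout or stderr depending on the environment):

import os
from contextlib import redirect_stdout, redirect_stderr

# Send anything written during predict() to the null device.
with open(os.devnull, 'w') as devnull:
    with redirect_stdout(devnull), redirect_stderr(devnull):
        forecasts = nf.predict()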

@harrymyburgh
Author

Hi @anonymous-engineering, I currently have no clean solution to this issue. Unfortunately, I don't think it's anything we can control from the NeuralForecast side just yet.

See my other comment for a temporary, very messy fix.

@aitirga

aitirga commented Nov 6, 2024

Hey, any updates on this? I am having the same issue.
