Can't disable logging when using predict(...) method for a model #665
Comments
Hey @harrymyburgh, if you are using a notebook, you can capture the cell's outputs like this:
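A minimal sketch of the idea, assuming a Jupyter/IPython notebook and an already-fitted NeuralForecast object named `nf` (an illustrative name):

```python
%%capture captured
# Capture everything this cell writes to stdout/stderr, including the
# progress output produced during prediction.
# `nf` is assumed to be a fitted neuralforecast.NeuralForecast instance.
Y_hat_df = nf.predict()
```

The captured text can still be inspected later with `captured.show()` if needed.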
Hi @kdgutier, thank you for your advice. Do you know how to turn off the logging altogether, though? I'm trying to parallelize predictions from different models, but because they all log to the same location, race conditions occur, which means that it ends up running in series.
Hey @harrymyburgh, this is covered in PL's logging documentation: https://pytorch-lightning.readthedocs.io/en/0.10.0/logging.html.
Also, I found this post on multi-node parallelization of PL: https://medium.com/@joelstremmel22/multi-node-multi-gpu-comprehensive-working-example-for-pytorch-lightning-on-azureml-bde6abdcd6aa
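For reference, a minimal sketch of the approach that documentation describes, using Python's standard logging module to lower the verbosity of the pytorch_lightning logger:

```python
import logging

# Silence PyTorch Lightning's INFO-level messages (e.g. GPU/TPU
# availability notices); only ERROR and above will be printed.
logging.getLogger("pytorch_lightning").setLevel(logging.ERROR)
```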
Hi @kdgutier, unfortunately the above doesn't work. The output is a direct result of NeuralForecast's data loaders.
You can use enable_progress_bar=False for your models. It is passed via trainer_kwargs to the pytorch_lightning Trainer through the BaseWindows class.
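A sketch of that suggestion; the model class and hyperparameters below are illustrative, the relevant part is the extra keyword argument that gets forwarded to the Trainer:

```python
from neuralforecast import NeuralForecast
from neuralforecast.models import NHITS

# Keyword arguments that the model does not consume itself are forwarded
# to the pytorch_lightning Trainer, so the progress bar can be disabled here.
model = NHITS(
    h=12,                       # forecast horizon (illustrative)
    input_size=24,              # lookback window (illustrative)
    max_steps=100,              # training steps (illustrative)
    enable_progress_bar=False,  # forwarded to the Trainer
)

nf = NeuralForecast(models=[model], freq="M")
# nf.fit(df=Y_df)          # Y_df: long-format dataframe (unique_id, ds, y)
# Y_hat_df = nf.predict()
```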
@tufanbt Thank you for your response. Unfortunately, setting enable_progress_bar=False does not eliminate the output. The issue here is specifically to do with the data loaders used by NeuralForecast. It appears that the data loader used by NeuralForecast is custom and inherits from PyTorch's DataLoader class, so I don't think it's anything the user of NeuralForecast can directly fix. I have managed to get around it by redirecting the output to a different output stream, but this is very messy, and writes are still being made to said redirected output stream (which is still not ideal).
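Roughly, that redirection workaround looks like the sketch below, assuming a fitted NeuralForecast object named `nf` (an illustrative name); the output could also be sent to a log file instead of os.devnull:

```python
import contextlib
import os

# Temporary workaround: redirect everything written to stdout/stderr
# during predict() to os.devnull (or any other writable stream).
# `nf` is assumed to be a fitted neuralforecast.NeuralForecast instance.
with open(os.devnull, "w") as sink, \
        contextlib.redirect_stdout(sink), \
        contextlib.redirect_stderr(sink):
    Y_hat_df = nf.predict()
```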
Hi @anonymous-engineering, I currently have no best-case solutions to this issue. Unfortunately, I don't think it's anything we can control with respect to NeuralForecast just yet. See my other comment for a temporary, very messy fix.
Hey, any updates on this? I am having the same issue.
Original issue description:
Whenever I call the predict(...) method for a model, unwanted logging output is printed to the console. How do I prevent NeuralForecast from logging this output? Setting the pytorch_lightning logging level to ERROR does not seem to eliminate it.
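For anyone trying to reproduce this, a minimal sketch of the situation described above; the model, dataset, and hyperparameters are illustrative rather than taken from the original report:

```python
import logging

from neuralforecast import NeuralForecast
from neuralforecast.models import NHITS
from neuralforecast.utils import AirPassengersDF

# Attempted fix from the report: lower PyTorch Lightning's logging level.
logging.getLogger("pytorch_lightning").setLevel(logging.ERROR)

# Illustrative model and dataset (AirPassengersDF ships with neuralforecast).
nf = NeuralForecast(models=[NHITS(h=12, input_size=24, max_steps=50)], freq="M")
nf.fit(df=AirPassengersDF)

# Despite the logging level set above, predict() still prints progress and
# data-loader output to the console.
Y_hat_df = nf.predict()
```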