Feat/nf evaluation #891
base: main
Conversation
Marco Peixeiro seems not to be a GitHub user. You need a GitHub account to be able to sign the CLA. If you already have a GitHub account, please add the email address used for this commit to your account. Have you already signed the CLA but the status is still pending? Let us recheck it.
Instead of passing the dataset name as an argument, you could use a config file (YAML, TOML, or whatever). That way, if you want to add another dataset you only have to change the config and not three scripts ;-)
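For instance, a minimal sketch of that idea, assuming a hypothetical datasets.yaml file and PyYAML being available (none of this is in the PR):

```python
# datasets.yaml (hypothetical) might look like:
#   ETTm2:
#     horizon: 96
#     freq: 15min
import yaml  # PyYAML, assumed to be installed


def load_dataset_config(path="datasets.yaml"):
    """Read dataset definitions from a config file instead of hard-coding them in the scripts."""
    with open(path) as f:
        return yaml.safe_load(f)


# Adding a new dataset then only means editing datasets.yaml,
# not the three evaluation scripts.
```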
from datasetsforecast.long_horizon import LongHorizon


def get_dataset(name):
The name of the method is not accurate: you are returning not only the dataset but also other parameters.
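For example, a name that reflects everything the function returns could look like the sketch below; it assumes the standard LongHorizon.load(directory, group) interface, and the parameter values are illustrative, not taken from the PR:

```python
from datasetsforecast.long_horizon import LongHorizon

# Illustrative values only; the real ones come from the PR's dataset definitions.
DATASET_PARAMS = {'ETTm2': {'horizon': 96, 'freq': '15min'}}


def get_dataset_and_params(name):
    """Return the dataset together with the horizon and frequency used for evaluation."""
    Y_df, *_ = LongHorizon.load(directory='./data', group=name)
    params = DATASET_PARAMS[name]
    return Y_df, params['horizon'], params['freq']
```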
@@ -0,0 +1,30 @@
from neuralforecast.auto import *
I would import just what you need and not everything.
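For example (AutoPatchTST and HuberLoss appear in the excerpt below; any other classes the script actually uses would be listed the same way):

```python
from neuralforecast.auto import AutoPatchTST  # import only the Auto classes the script uses
from neuralforecast.losses.pytorch import HuberLoss
```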
def get_model(model_name, horizon, num_samples):
    """Returns the model class given the model name.
    """
nit: docstring on one line, i.e., """Returns the model class given the model name."""
        'AutoPatchTST': AutoPatchTST(config=None, h=horizon, loss=HuberLoss(), num_samples=num_samples)
    }

    return model_dict[model_name]
You could just create a dictionary and import it, instead of creating a dictionary, putting it inside a method, and calling the method.
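A sketch of that suggestion, keeping the mapping at module level so the experiment script can simply import and index into it (the commented entry stands in for the PR's remaining models):

```python
# models.py (sketch): a plain module-level dict instead of a getter function.
from neuralforecast.auto import AutoPatchTST
from neuralforecast.losses.pytorch import HuberLoss  # used when the model is instantiated (see below)

MODEL_CLASSES = {
    'AutoPatchTST': AutoPatchTST,
    # ... remaining entries follow the PR's model list
}

# In the experiment script:
# model_cls = MODEL_CLASSES[model_name]
# model = model_cls(config=None, h=horizon, loss=HuberLoss(), num_samples=num_samples)
```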
os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1"


def main(args):
I would call it `run_experiments`.
    'MSE': [model_mse],
    'sMAPE': [model_smape],
    'MAPE': [model_mape],
    'time': [elapsed_time]
'elapsed_time' instead of 'time'?
    results_path = f'./results/{args.dataset}'
    os.makedirs(results_path, exist_ok=True)

    metrics_df = pd.DataFrame(metrics)
For every model you are creating a separate pandas DataFrame. What if you loop over every model during evaluation and create a single pandas DataFrame with all the results? It would be much easier to compare different models.
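A sketch of that pattern; evaluate_model and model_names are hypothetical stand-ins for the PR's own loop variables:

```python
import pandas as pd

rows = []
for model_name in model_names:  # e.g. the keys of the model dictionary
    # evaluate_model is a hypothetical helper wrapping the PR's per-model evaluation
    model_mse, model_smape, model_mape, elapsed_time = evaluate_model(model_name)
    rows.append({
        'model': model_name,
        'MSE': model_mse,
        'sMAPE': model_smape,
        'MAPE': model_mape,
        'elapsed_time': elapsed_time,
    })

# One DataFrame with one row per model makes side-by-side comparison trivial.
metrics_df = pd.DataFrame(rows)
metrics_df.to_csv(f'{results_path}/metrics.csv', index=False)
```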
Script to evaluate all NF models on benchmark datasets. The table needs to be updated with metrics as results are obtained.
Main changes:
- `ds` column as `int` or `datetime` in datasets.py