After following the documentation on the website and also looking into the code, I found that the ways to create experiences manually are quite hidden (which may be deliberate if experiences need to be created along with the scenario). However, since both strategy.eval() and strategy.train() take experiences as input, it sometimes happens that you only want to evaluate on one dataset (after having trained on a sequence of others). Right now, if I want to do that using strategy.eval(), I have to create a scenario, for instance using dataset_benchmark, and give it only one dataset. This seems quite artificial, since afterwards I also have to dig into scenario.test_stream[0] etc. when I really only want to evaluate on one dataset (a sketch of this workaround is shown below). Is there a function that could instead generate one experience from a PyTorch dataset, in which case that experience would not be related to any scenario? I could not find a way to do that in the documentation.
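For reference, a minimal sketch of the workaround I describe above, assuming a classification dataset that exposes a `targets` attribute (the toy TensorDataset and the manually attached targets here are just illustrative assumptions):

```python
import torch
from torch.utils.data import TensorDataset

from avalanche.benchmarks.generators import dataset_benchmark

# Toy stand-in for the real evaluation dataset (assumed shapes/labels).
x = torch.randn(100, 3, 32, 32)
y = torch.randint(0, 10, (100,))
eval_set = TensorDataset(x, y)
# dataset_benchmark expects classification datasets exposing a `targets`
# field, so we attach one manually (an assumption about the setup).
eval_set.targets = y.tolist()

# Wrap the single dataset into a one-experience benchmark...
benchmark = dataset_benchmark([eval_set], [eval_set])

# ...and pull the lone experience back out to feed strategy.eval().
single_experience = benchmark.test_stream[0]
# results = strategy.eval([single_experience])
```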
Replies: 1 comment
Unfortunately, at the moment we do not offer any quick utility method that, given a PyTorch Dataset, produces an Avalanche Experience. This is because a reference to the related benchmark is often needed to simplify the implementation of some plugins (such as the evaluation one) and to provide maximum flexibility at the prototyping level, even breaking some CL assumptions (i.e. having access to past and future experiences, metadata about the benchmark, etc.). Decoupling benchmarks from experiences is something very interesting that may indeed increase the flexibility of avl (especially for "production" use-cases). For now, a quick hack would be what you suggested; feel free to open a PR with a method to be put in benchmarks.utils if you think it would be useful to others! :)
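If it helps, such a helper could look roughly like the sketch below. The name experience_from_dataset is hypothetical, and it simply reuses dataset_benchmark internally, so the returned experience still carries a reference to a (throwaway) benchmark rather than being fully decoupled from one:

```python
from avalanche.benchmarks.generators import dataset_benchmark

def experience_from_dataset(dataset):
    """Hypothetical helper: wrap a single PyTorch dataset into a
    one-experience benchmark and return that experience.

    The dataset is assumed to expose a `targets` attribute. The returned
    experience still references the internally created benchmark, which is
    what plugins such as the evaluation one rely on.
    """
    benchmark = dataset_benchmark([dataset], [dataset])
    return benchmark.test_stream[0]

# Usage sketch:
# exp = experience_from_dataset(my_eval_set)
# results = strategy.eval([exp])
```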