-
Hi @rmassidda, thanks for reaching out! We are aware of this issue, and one of the reasons strategy templates were introduced was to enable different strategy types in Avalanche, such as online continual learning, which is the case for MER. Currently, we have a basic implementation of …
-
It's probably better not to call them. If I understand correctly, your forward/backward/update methods are the inner loop of a meta-learning procedure, so they don't have much in common with the standard SGD loop apart from the names. You should probably make your own template. At some point I think we will add a meta-learning template. As for efficiency, I don't think we can do much if you iterate sample-wise.
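To give an idea of what such a template could look like, here is a very rough sketch. The batch-level plumbing (`dataloader`, `mbatch`, `_unpack_minibatch`, `_before/_after_training_iteration`) follows my reading of the current `SupervisedTemplate` code, and the `_inner_updates` / `_meta_update` hook names are invented here purely for illustration; they are not part of Avalanche:

```python
from avalanche.training.templates import SupervisedTemplate


class MetaLearningTemplate(SupervisedTemplate):
    """Sketch of a dedicated template: the standard batch-level callbacks
    fire once per minibatch, and the meta-learning phases get *new* hooks
    instead of reusing the SGD forward/backward/update ones.
    The hook names below are made up for this example."""

    def training_epoch(self, **kwargs):
        for self.mbatch in self.dataloader:
            self._unpack_minibatch()
            self._before_training_iteration(**kwargs)

            self._before_inner_updates(**kwargs)
            self._inner_updates(**kwargs)   # per-example loop lives here
            self._after_inner_updates(**kwargs)

            self._before_meta_update(**kwargs)
            self._meta_update(**kwargs)     # Reptile-style interpolation
            self._after_meta_update(**kwargs)

            self._after_training_iteration(**kwargs)

    # Concrete strategies (e.g. MER) would implement these.
    def _inner_updates(self, **kwargs): ...
    def _meta_update(self, **kwargs): ...
    def _before_inner_updates(self, **kwargs): ...
    def _after_inner_updates(self, **kwargs): ...
    def _before_meta_update(self, **kwargs): ...
    def _after_meta_update(self, **kwargs): ...
```

The point is that the per-example loop and the meta-update get their own hooks, so plugins that assume one forward/backward/update per minibatch are never misled.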
-
Hey!
I'm trying to implement Meta-Experience Replay (https://arxiv.org/abs/1810.11910) as a strategy in Avalanche, but I'm not sure which approach is best.
In a nutshell, for each incoming data point the technique draws a set of examples from a reservoir memory and performs the optimization step one element at a time. By caching the model weights beforehand, a meta-update step on the weights is then applied at the end of each set, and again for each original data point.
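For reference, this is the per-example procedure I'm talking about, written as a plain PyTorch sketch of how I read Algorithm 1 of the paper (the reservoir handling, function name and default hyper-parameters here are mine, not Avalanche code):

```python
import random

import torch


def mer_step(model, criterion, lr, reservoir, x_new, y_new,
             n_batches=2, batch_size=5, beta=0.3, gamma=1.0):
    """One MER update for a single incoming example (x_new, y_new).
    `reservoir` is a plain list of (x, y) tensor pairs maintained
    elsewhere with reservoir sampling."""
    # Cache the weights before any update for this example (theta^A_0).
    before_all = [p.detach().clone() for p in model.parameters()]

    for _ in range(n_batches):
        # Cache the weights before this inner batch (theta^{W,i}_0).
        before_batch = [p.detach().clone() for p in model.parameters()]

        # s - 1 examples from memory plus the current example.
        sampled = random.sample(reservoir, min(batch_size - 1, len(reservoir)))
        for x, y in sampled + [(x_new, y_new)]:
            # Plain per-example SGD step (no optimizer object, for brevity).
            loss = criterion(model(x.unsqueeze(0)), y.unsqueeze(0))
            model.zero_grad()
            loss.backward()
            with torch.no_grad():
                for p in model.parameters():
                    p -= lr * p.grad

        # Within-batch Reptile step: theta <- theta_W0 + beta * (theta - theta_W0)
        with torch.no_grad():
            for p, p0 in zip(model.parameters(), before_batch):
                p.copy_(p0 + beta * (p - p0))

    # Across-batch Reptile step: theta <- theta^A_0 + gamma * (theta - theta^A_0)
    with torch.no_grad():
        for p, p0 in zip(model.parameters(), before_all):
            p.copy_(p0 + gamma * (p - p0))
```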
I drafted two solutions: one based on `SupervisedTemplate` and one on `BaseStrategy`.

For the former, since operations in the template are defined at the batch level, I implemented a custom `DataLoader` returning "batches" of one element at a time, following the paper's procedure. As expected, this solution turned out to be extremely slow; intuitively this is inevitable, since I'm repeating all the batch-level operations for each element. Alternatively, keeping a standard `DataLoader` and redefining `training_epoch` to loop over a batch and apply forward-backward-step independently to each element, I'm not confident whether to keep the callbacks. For instance, is it consistent to repeatedly call `_{before,after}_{forward,backward,update}()` for each element in the batch?

On the other hand, by extending `BaseStrategy` the issue reduces to implementing `_train_exp`. Nonetheless, given the lack of integration with Loggers, Supervised Plugins and so on, I feel like I'm missing out on most of the "Avalanche experience". 😄

TL;DR: Meta-Experience Replay relies on per-element operations, while Avalanche revolves around batch-level operations. By extending either `SupervisedTemplate` or `BaseStrategy`, it's not clear to me how to handle the various callbacks.
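To make the second `SupervisedTemplate` option concrete, the per-element `training_epoch` I have in mind looks roughly like this (heavily simplified; I'm guessing at some of the template plumbing such as `_unpack_minibatch`, so the exact names may be off):

```python
from avalanche.training.templates import SupervisedTemplate


class MERStrategy(SupervisedTemplate):
    """Per-element training loop (sketch). The plugin callbacks are fired
    for every single example, which is exactly the part I'm unsure about."""

    def training_epoch(self, **kwargs):
        for self.mbatch in self.dataloader:
            self._unpack_minibatch()
            self._before_training_iteration(**kwargs)

            # Forward-backward-step independently on each element.
            for x, y in zip(self.mb_x, self.mb_y):
                self._before_forward(**kwargs)
                self.mb_output = self.model(x.unsqueeze(0))
                self._after_forward(**kwargs)

                # Plugins/metrics usually read self.mb_output and self.loss,
                # so they are reassigned here for every example.
                self.loss = self._criterion(self.mb_output, y.unsqueeze(0))

                self.optimizer.zero_grad()
                self._before_backward(**kwargs)
                self.loss.backward()
                self._after_backward(**kwargs)

                self._before_update(**kwargs)
                self.optimizer.step()
                self._after_update(**kwargs)

            self._after_training_iteration(**kwargs)
```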