Hello,
I am currently developing a continual learning framework for my project.
My LSTM receives a batch dataset at different time points (t1, t2, ...), each of which can be defined as an experience in this case.
However, since each batch dataset is too big, I am considering saving the network parameters of my LSTM (weights, biases) after feeding each batch dataset as a single experience to the CL strategy. So the sequence is: train on the batch dataset from t1 as experience 1, save the parameters, then continue from them with the batch dataset from t2 as experience 2, and so on.
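For illustration, here is a minimal sketch of that sequence in plain PyTorch. The model, tensor shapes, and checkpoint names are placeholders I made up for this post, not my actual setup:

```python
import torch
import torch.nn as nn

class LSTMRegressor(nn.Module):
    """Toy LSTM: many-to-one prediction from the last time step."""
    def __init__(self, in_dim=8, hidden=32, out_dim=1):
        super().__init__()
        self.lstm = nn.LSTM(in_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, out_dim)

    def forward(self, x):              # x: (batch, seq_len, in_dim)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])

model = LSTMRegressor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.MSELoss()

for t in (1, 2):                       # one experience per time point t1, t2, ...
    # stand-in for the (large) batch dataset arriving at time t;
    # in reality this would be loaded from disk
    x, y = torch.randn(100, 10, 8), torch.randn(100, 1)

    optimizer.zero_grad()              # the usual forward/backward/step
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()

    # checkpoint the weights and biases after each experience
    torch.save(model.state_dict(), f"lstm_after_t{t}.pt")
```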
And here are my questions:
1. Is `tensors_scenario` the right way to define these experiences as a benchmark in Avalanche?
2. What is the actual difference between Avalanche's finetuning (`Naive`) strategy and a typical PyTorch training method (forward pass, backward pass & optimization)? I have already seen this, but unfortunately it does not give me any idea why it actually showed better performance than the typical PyTorch training method. What exactly does the Avalanche `Naive` method do with respect to continual learning concepts?
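For context, this is roughly the setup I have in mind, based on the Avalanche benchmark-generators tutorial. All tensors and the model are toy placeholders, and I am assuming `tensors_benchmark`, the newer name of `tensors_scenario` (older releases import `Naive` from `avalanche.training.strategies`); please correct me if the API differs in your version:

```python
import torch
import torch.nn as nn
from torch.optim import SGD
from avalanche.benchmarks.generators import tensors_benchmark  # successor of tensors_scenario
from avalanche.training.strategies import Naive

# Toy stand-ins for my real batch datasets at t1 and t2: (batch, seq_len, features)
x1, y1 = torch.randn(100, 10, 8), torch.randint(0, 2, (100,))
x2, y2 = torch.randn(100, 10, 8), torch.randint(0, 2, (100,))
test_x, test_y = torch.randn(50, 10, 8), torch.randint(0, 2, (50,))

benchmark = tensors_benchmark(
    train_tensors=[(x1, y1), (x2, y2)],  # one (x, y) pair per experience
    test_tensors=[(test_x, test_y)],
    task_labels=[0, 0],                  # single-task setting
    complete_test_set_only=True,
)

class LSTMClassifier(nn.Module):
    def __init__(self, in_dim=8, hidden=32, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(in_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):
        out, _ = self.lstm(x)
        return self.head(out[:, -1])     # classify from the last time step

model = LSTMClassifier()
strategy = Naive(
    model, SGD(model.parameters(), lr=0.01), nn.CrossEntropyLoss(),
    train_mb_size=32, train_epochs=1, device="cpu",
)

# As far as I understand, Naive simply finetunes on each experience in order:
# internally it runs the usual forward pass, backward pass, and optimizer step,
# with no extra anti-forgetting mechanism.
for experience in benchmark.train_stream:
    strategy.train(experience)
```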