My collaborator and I both experience this issue.
```
python -m torchbeast.monobeast --env PongNoFrameskip-v4 --num_actors 1 --num_buffers 2 --total_steps 5000000
```
The memory used by the learner process grows in occasional increments until it chokes our machines (8 GB of RAM, with about 5 GB consumed by the learner process) at around 1 million steps. I spoke to someone on Discord about this, and they did not have this issue. However, I note that the graph of memory usage they posted showed around 300 MB of use, which seems too small (though I suppose it's really network-dependent).
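For anyone trying to reproduce the measurement, here is a minimal sketch of how one might chart the learner process's memory over time. This is not torchbeast code; it assumes `psutil` is installed and that the thread is started inside the learner process.

```python
# Minimal memory-tracking sketch -- not part of torchbeast.
# Assumes `psutil` is installed; start this inside the learner process.
import os
import threading
import time

import psutil


def log_rss(interval_s: float = 60.0) -> None:
    """Print this process's resident set size (RSS) in MB every interval."""
    proc = psutil.Process(os.getpid())
    while True:
        rss_mb = proc.memory_info().rss / (1024 ** 2)
        print(f"[mem] rss={rss_mb:.1f} MB", flush=True)
        time.sleep(interval_s)


# Run in the background so it does not block training.
threading.Thread(target=log_rss, daemon=True).start()
```

Logging RSS once a minute makes it easy to see whether the growth is steady or happens in occasional jumps, which might help narrow down which part of the learner loop is responsible.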