Instead of letting Docker create volume backing-store directories in /var/lib/docker/volumes, in staging and production stacks we point to directories such as /srv/data/docker/STACKNAME/VOLUMENAME.
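For reference, a common way to get this behavior is a bind-backed named volume. This is a hypothetical sketch, not the stack's actual compose file; the stack, service, and path names are placeholders:

```yaml
# Hypothetical compose fragment: a named volume backed by a host directory
# instead of the default under /var/lib/docker/volumes.
volumes:
  pgdata:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: /srv/data/docker/mystack/pgdata
```

With this form, Docker bind-mounts the `device` directory, and (unlike a plain anonymous volume) it will not create that directory for you, which is the failure mode described here.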
My recollection is that if the directories do not exist when the stack is deployed, the deployment fails (because Docker cannot instantiate the volumes for use by containers), and the error is not displayed anywhere "obvious" (maybe "systemctl status docker" shows it?).
While this may be "well understood" by the subset of current developers who regularly deploy production and staging stacks, it could mystify new developers, or those who have forgotten this arcana.
Now, I think the behavior is both a bug and a feature. It's a feature in that an attempt to launch a production stack on the wrong server won't work (though the silent failure is obviously a bug). It's more solidly a bug for staging, where deployment should, perhaps, always work.
So I'm not sure I'd be happy with any fix that always silently creates the directories. Perhaps the right behavior is to check that the directories (if any) exist before launching the stack: quit for production (or prompt the user, to at least try to force them to consider whether they're doing the right thing), and create the directories (with a warning message) for staging deploys.
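The proposed policy could be sketched roughly as follows. This is illustrative only; the function name and calling convention are invented, not taken from the actual deploy.sh:

```shell
#!/bin/sh
# Hypothetical pre-launch check: given a deploy mode and a list of volume
# backing directories, refuse to deploy to production if any is missing,
# and create missing directories (with a warning) for staging deploys.
check_volume_dirs() {
    mode="$1"; shift
    for dir in "$@"; do
        [ -d "$dir" ] && continue
        if [ "$mode" = production ]; then
            echo "ERROR: missing volume directory $dir; refusing to deploy" >&2
            return 1
        else
            echo "WARNING: creating missing volume directory $dir" >&2
            mkdir -p "$dir" || return 1
        fi
    done
}
```

A real version might replace the hard production failure with an interactive prompt, as suggested above, but failing closed by default keeps the wrong-server protection.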
My first guess at the "right time" to catch this is in deploy.sh, between when the expanded docker/docker-compose.yml.dump file is created (at which point the volume information is available in the .dump file) and when the actual stack launch occurs.
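Pulling the directory list out of the expanded .dump file could look something like this. It assumes the volumes appear as `device:` lines under `driver_opts` in the expanded YAML; the function name is invented and the pattern would need adjusting to the file's real layout:

```shell
#!/bin/sh
# Hypothetical extraction of bind-mount backing directories from the
# expanded compose dump.  Prints one directory path per "device:" line.
volume_dirs() {
    sed -n 's/^[[:space:]]*device:[[:space:]]*//p' "$1"
}
```

deploy.sh could then feed this list into whatever existence check is adopted before launching the stack.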