This repository has been archived by the owner on Apr 30, 2019. It is now read-only.
Taking the k8s example for a spin, some suggestions from our k8s engineer
All the environment variables except the downwardAPI ones might be better served by a ConfigMap, mounted as a volume. The main reason to use ConfigMaps over environment variables is that you can change a ConfigMap on the fly and don't need to re-create the whole deployment.
`imagePullPolicy: IfNotPresent` should be `Always` in production, because if you have an old image with the same tag locally, the new image will never be pulled.
My additional remark would be to remove the dependency on Minikube and its ingress and make something more generic, for example based on Istio, which can be deployed on any k8s host (including Docker for Mac with the k8s extension).
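The ConfigMap-as-volume suggestion above could be sketched like this. This is a hypothetical illustration, not taken from the example repo; the names (`my-service-config`, `application.conf`, `my-service`, the mount path) are all invented for illustration:

```yaml
# Sketch: mounting a ConfigMap as a volume instead of passing env vars.
# All names here are illustrative assumptions.
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-service-config
data:
  application.conf: |
    # config the app reads from the mounted volume at startup
    my-service.greeting = "hello"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-service
  template:
    metadata:
      labels:
        app: my-service
    spec:
      containers:
        - name: my-service
          image: my-service:1.0.0
          volumeMounts:
            - name: config
              mountPath: /opt/conf
              readOnly: true
      volumes:
        - name: config
          configMap:
            name: my-service-config
```

With this layout, editing the ConfigMap updates the file in the running pods without re-creating the Deployment, though whether the application notices is a separate question.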
> All the environment variables except the downwardAPI ones might be better served by a ConfigMap, mounted as a volume. The main reason to use ConfigMaps over environment variables is that you can change a ConfigMap on the fly and don't need to re-create the whole deployment.
While yes, you can change the ConfigMap on the fly, how would the application pick the changes up? Lagom/Play/Akka read their config on startup, and even if they did re-read it periodically, most of that config would require a restart anyway. So when you change the config, you need to restart all the apps for them to pick it up, and generally you want that done immediately. But you don't want to do that manually; you want it done in a rolling fashion, like a Deployment does for you when you update it. So for most of that config, it's actually better to use environment variables, since then you can take advantage of re-creating the Deployment to get a rolling update for free. If you used a ConfigMap, you'd have to restart everything manually, in a rolling fashion, which would be time consuming and error prone.
The general principle here would be to use config maps for anything that apps can use dynamically, and for anything that requires a restart, use environment variables.
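That principle could be sketched like this: put restart-requiring config in the pod template's `env`, so that changing a value and re-applying the manifest changes the pod template and Kubernetes performs a rolling update automatically. The variable names and values here are invented for illustration:

```yaml
# Sketch: restart-requiring config as env vars on the Deployment's pod template.
# Changing any value below and re-applying triggers a rolling update,
# because the pod template itself has changed.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-service
  template:
    metadata:
      labels:
        app: my-service
    spec:
      containers:
        - name: my-service
          image: my-service:1.0.0
          env:
            # illustrative names, not from the example repo
            - name: JAVA_OPTS
              value: "-Xms256m -Xmx256m"
            - name: APPLICATION_SECRET
              valueFrom:
                secretKeyRef:
                  name: my-service-secret
                  key: secret
```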
> `imagePullPolicy: IfNotPresent` should be `Always` in production, because if you have an old image with the same tag locally, the new image will never be pulled.
This sounds dangerous to me - if you have an old image with the same tag locally, then how do you know what's running where? How do you know which version of the image a particular container is running when it fails and you need to debug and reproduce the failure? New images deployed to production should always get a new tag, in which case it doesn't matter whether `IfNotPresent` or `Always` is used, and `IfNotPresent` is a nice optimisation.
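That approach could be sketched as follows: each release gets a unique, immutable tag, so `IfNotPresent` never serves a stale image and skips redundant pulls. The registry, image name, and tag are illustrative assumptions:

```yaml
# Sketch: unique tag per release, so IfNotPresent is safe.
# "registry.example.com/my-service:1.4.2" is an invented example;
# deploying 1.4.3 later changes the pod template and rolls the Deployment.
    spec:
      containers:
        - name: my-service
          image: registry.example.com/my-service:1.4.2
          imagePullPolicy: IfNotPresent
```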
All that said, the current example exposes far too many environment variables; most of those things don't need to be, and shouldn't be, configurable. This is a result of taking another tool that automatically generates the YAML and using its output as-is. We'll be producing new examples fairly soon that provide much simpler YAML, with only the essential environment variables exposed.