This project template is meant for use by anyone and everyone; its aim is to promote good practices and make it easy to kick-start a project.
It does make some assumptions, however:
- You want to use Azure DevOps with it (if you haven't tried it before, you should check it out).
- You mostly deploy Docker images to Kubernetes
- You use Yaml for your Kubernetes configuration
- You are comfortable with using Python >=3.6 at least for your tooling
- You use Git (though usage with e.g. Mercurial is reasonably easy)
- You like using pre-commit for your hooks
- You use minikube (but any other local Kubernetes should work fine with minimal changes)
- You would like kured to manage the restarting of your Kubernetes nodes automatically
- You want to store secrets in encrypted form in your repository using Sealed Secrets
It works especially well for projects where you build Python APIs, as the tooling already uses Python. However, you can build practically anything else with it as well, and this template aims to prove that.
Every machine where you run projects based on this template, perform builds, releases, or similar should likely have the following installed:
To save some effort, you can use the Azure DevOps pipeline agent in this repository's `service/pipeline-agent`. You'll need to buy the dedicated agent slots on Azure DevOps for it though, so you might want to start with the hosted agents.

If you do use it, configure `service/pipeline-agent/kube/01-config.yaml` and create the related sealed secrets for it. `VSTS_TOKEN` needs to be created from Azure DevOps. More information at https://hub.docker.com/_/microsoft-azure-pipelines-vsts-agent
The directory structure should be fairly self-explanatory.
- `tasks.py` - Contains Invoke tasks run via `poetry run invoke ...`
- `azure-devops` - Contains the Azure DevOps pipeline configurations
- `devops` - Contains the bulk of the DevOps tooling code
- `envs` - Contains configuration for environments
- `kube` - Basic configuration for all Kubernetes clusters
You will find that many commands and configs refer to "components". A component is simply a path containing a `Dockerfile` and `kube/*.yaml` files that can be used to build and deploy your things to Kubernetes. The repository contains the example component `service/pipeline-agent` and the build and release configurations for it.
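For illustration, a minimal component layout might look like this (the kube file name is just an example):

```
service/pipeline-agent/
├── Dockerfile
└── kube/
    └── 01-config.yaml
```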
Naming works as follows:

- Components: `path/to/component` -> `{IMAGE_PREFIX}-path-to-component` in e.g. Docker repository names. This means that if you configure `devops/settings.py` with `IMAGE_PREFIX = "myproj"`, then `service/pipeline-agent` will be built as `myproj-service-pipeline-agent`, and you need to use that name for it in Kubernetes configs and in some pipeline variables.
- Kubernetes Deployments etc.: Typically use the last component or last components of the path that are unique, e.g. `service/pipeline-agent` -> `pipeline-agent`, or `api/user/v1` -> `user-v1`. Use the same name for the Deployment and Service for simplicity.
- Pipelines: `Build|Release <component name>` - unfortunately it seems Azure DevOps does not respect the `name` property and you have to rename them manually after creation.
- Pipeline configs: `(build|release)-<component name>.yml`
For deleting old Kubernetes resources, you can move your `.yaml` file under `kube/obsolete/`; it will be picked up AFTER the release's `kube/*.yaml` files have been applied, and the resources in it will be deleted.
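For example, to retire a config (the file name here is purely illustrative):

```sh
# Move the config so the next release deletes its resources instead of applying them
git mv service/pipeline-agent/kube/02-old.yaml \
       service/pipeline-agent/kube/obsolete/02-old.yaml
```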
The `envs` directory has a few things to keep in mind.

Firstly, every `envs/*` is expected to be run on one Kubernetes cluster (except `minikube`), though mostly because the Sealed Secrets keys (in `envs/<env>/secrets.pem`) and such things are not meant to be distributed across clusters.
Secondly, you should store the sealed secrets generated with `kubeseal` in `envs/<env>/secrets/<num>-<name>.yaml`, e.g. `01-pipeline-agent.yaml`, and they will be applied during release. Similarly, the files in `envs/<env>/secrets/obsolete/` will have their resources deleted.
Thirdly, if you need to override any component's `kube/` configs, you can store an override in `envs/<env>/overrides/component/path/kube/<file>.yaml`. E.g. `api/test/v1/kube/01-config.yaml` could be overridden for the staging env by creating `envs/staging/overrides/api/test/v1/kube/01-config.yaml` with the full replacement contents.
If you want to do purely local settings for the `devops` scripts, e.g. to change the `LOG_LEVEL`, you can create `devops/settings_local.py` with your overrides.
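A minimal sketch of such a file (the value is illustrative):

```python
# devops/settings_local.py - local-only overrides, keep out of version control
LOG_LEVEL = "DEBUG"
```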
In the `envs/<env>/merges/` folder you can put files to merge with your existing configs. E.g. if you want to override just one field, add one setting, or remove some specific thing, you don't need to replace the whole file. This helps reduce duplication and thus the risk of your settings getting out of sync.
To remove previously defined properties, set the value to `~`.
To skip items in lists (leave them untouched), just use an empty value, as in:

```yaml
list:
  - # Skipped
  - override
```
If you need to skip a full YAML document in a multi-document file, make sure the YAML parser understands that. E.g. to skip the first document you will need to do something like:

```yaml
---
---
# Document 2
spec:
  value: override
```
Otherwise it should work pretty much as expected. Any items in the original file that do not exist in the merges stay untouched. Any new items are added. Any string/number/similar values present in both get replaced.
As a specific example, if you have `component/kube/01-example.yaml` and `envs/test/merges/component/kube/01-example.yaml` with the contents:
```yaml
# component/kube/01-example.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: myproj-settings
data:
  MY_SETTING: "foo"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
spec:
  selector:
    matchLabels:
      app: my-deployment
  template:
    metadata:
      labels:
        app: my-deployment
    spec:
      containers:
        - name: my-container
          imagePullPolicy: IfNotPresent
          image: my-container:latest
          env:
            - name: ANOTHER_SETTING
              value: some-value
          volumeMounts:
            - mountPath: /var/run/docker.sock
              name: docker-volume
```
and
```yaml
# envs/test/merges/component/kube/01-example.yaml
data:
  MY_SETTING: "bar"
---
spec:
  template:
    spec:
      containers:
        - env:
            - name: ANOTHER_SETTING # this prop is here just for clarity
              value: another-value
          volumeMounts: ~
          livenessProbe:
            exec:
              command:
                - cat
                - /tmp/healthy
            initialDelaySeconds: 5
            periodSeconds: 5
```
You will end up afterwards with a processed combination of:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: myproj-settings
data:
  MY_SETTING: "bar"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
spec:
  selector:
    matchLabels:
      app: my-deployment
  template:
    metadata:
      labels:
        app: my-deployment
    spec:
      containers:
        - name: my-container
          imagePullPolicy: IfNotPresent
          image: my-container:latest
          env:
            - name: ANOTHER_SETTING
              value: another-value
          livenessProbe:
            exec:
              command:
                - cat
                - /tmp/healthy
            initialDelaySeconds: 5
            periodSeconds: 5
```
In some cases you might want multiple components in each environment to share some settings or variables; the pods might want to know which environment they are running in, a base URL for the whole environment, or something similar. If you have multiple components and multiple environments, creating merge files or overrides manually can be tedious and error-prone.
You can use Jinja2 templates to render merge templates and override templates for components per environment. Place the templates for merge files in `path/to/component/kube/merge-templates/` and the templates for overrides in `path/to/component/kube/override-templates/`.
Then define the variables for each environment in `envs/<env>/settings.py` in the `TEMPLATE_VARIABLES` dictionary.
The templates will be rendered by the pre-commit hook, but can also be rendered by running `poetry run invoke update-from-templates`.
The rendering process will add a header to the generated files, automatically take care of updating and removing files with those headers as needed, and leave any manually created overrides or merges alone.
Example:
In `envs/minikube/settings.py` set:

```python
TEMPLATE_VARIABLES = {
    "env": "minikube"
}
```
If you have more environments, make sure to also add the variable to the `settings.py` of those environments.
Then create the file `service/pipeline-agent/kube/merge-templates/01-config.yaml` with the content:

```yaml
data:
  ENV: "{{ env }}"
```
When you then run `poetry run invoke update-from-templates`, the file `envs/minikube/merges/service/pipeline-agent/kube/01-config.yaml` will be generated with the following content:
```yaml
# THIS FILE HAS BEEN AUTOMATICALLY GENERATED BY: poetry run invoke update-from-templates
# GENERATED FROM service/pipeline-agent/kube/merge-templates/01-config.yaml, DO NOT MODIFY THIS FILE BY HAND
#
data:
  ENV: "minikube"
```
When you need to e.g. perform database migrations after a release, the tooling can help out. In any component directory you can create a file `post-release.sh`, which will be automatically executed on a random pod after the resources have been restarted.
In practice, when e.g. writing database migrations for your Python API, you might want that script to run something like the sketch below.
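This is a minimal sketch only; the migration step is purely illustrative (e.g. migrate-anything, mentioned in the TODO section, or whatever tool your project uses):

```sh
#!/usr/bin/env sh
# post-release.sh - executed on one pod after the release has been applied
set -e

# Hypothetical migration command - substitute your project's own tooling
python -m migrations
```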
- Download the latest release of this repository
- Create an Azure DevOps project if you don't have one yet
- Maybe remove `.travis.yml` if you don't want Travis-CI integration to SonarCloud
- Update or remove `LICENSE.md`
- Update this file for your needs
- Configure `devops/settings.py`, especially `IMAGE_PREFIX`
- Update `kured-config` in `kube/02-kured.yaml`
- Create `envs/*/settings.py` for the environments you wish to manage
- Run `poetry run invoke init-kubernetes <env>` for every relevant Kubectl context (incl. `minikube`) - keep in mind we assume one cluster per env.
- Modify `azure-devops/*.yml` variables to match your settings
- Update `azureSubscription` in `azure-devops/*.yml`
- Set up necessary Service Connections from Azure DevOps to Azure and configure `azure-devops/*.yml` accordingly.
- Commit the `envs/*/secrets.pem` files
- Convert any existing secrets with kubeseal using the `--cert envs/<env>/secrets.pem` argument.
- Commit and push all changes to your Azure DevOps project
- Enable Multi-stage pipelines from preview features (possibly optional)
- Add pipelines from the `azure-devops/*.yml` files AND manually configure the automatic post-build triggers as necessary (e.g. to release after master build)
- Try to run the pipelines and then "Authorize" them in Azure DevOps, or fix missing service connections and such.
Every developer checking out the repository should first run:

```sh
minikube start [--cpus=4] [--memory=4g] [--disk-size=50g]
minikube docker-env  # Follow instructions from output
poetry install
poetry run invoke init
```
For most projects you should set up `ksync` or similar, or run:

```sh
minikube mount .:/src --ip=$(minikube ip)
```

These will configure the hooks and the minikube environment.
To restart from scratch, just run `minikube delete` and start again.
```sh
# Release a specific version of a specific component
poetry run invoke release <env> \
    --component service/pipeline-agent \
    --image service/pipeline-agent=<name>.azurecr.io/project-service-pipeline-agent \
    --tag service/pipeline-agent=master-994ee2d-20191012-141539

# Build a specific component
poetry run invoke build-images --component service/pipeline-agent

# Pass build args
poetry run invoke build-images --component service/pipeline-agent --docker-arg foo=bar --docker-arg boo=far ...

# Clean up the Azure Container Registry, name is from <name>.azurecr.io
poetry run invoke cleanup-registry <name>
```
You can pass build arguments to the Dockerfile by adding `--docker-arg ARG1=argument --docker-arg ARG2=argument` to the `poetry run invoke build-images` command. Remember to define the args in your Dockerfile (`ARG ARG1=default_value`) to actually use them. This of course means you can also pass them directly to `docker build --build-arg ARG1=argument`.
For setting up everything to work neatly with this template on AKS, you should run a few commands afterwards.
First, ensure your `kubectl` context is set correctly (`kubectl config get-contexts` and `kubectl config use-context <ctx>`).
- If you didn't create the AKS cluster with `--attach-acr`, then you should grant your AKS cluster's service principal access to ACR:
```sh
# Set up pull permissions to ACR from this cluster so we don't need
# imagePullSecrets in all kube configs.

# Get the id of the service principal configured for AKS
CLIENT_ID=$(az aks show --resource-group $AKS_RESOURCE_GROUP --name $AKS_CLUSTER_NAME --query "servicePrincipalProfile.clientId" --output tsv)

# Get the ACR registry resource id
ACR_ID=$(az acr show --name $ACR_NAME --resource-group $ACR_RESOURCE_GROUP --query "id" --output tsv)

# Create role assignment
az role assignment create --assignee $CLIENT_ID --role acrpull --scope $ACR_ID
```
- The dashboard typically needs permissions that are not there by default:

```sh
# Set up permissions for the dashboard
kubectl create clusterrolebinding kubernetes-dashboard -n kube-system --clusterrole=cluster-admin --serviceaccount=kube-system:kubernetes-dashboard
```
- Run the init-kubernetes task:

```sh
poetry run invoke init-kubernetes <env>
```
This will apply the `*.yaml` files found under the `kube` directory. By default these include the sealed secrets controller and kured.
You should preferably run this task from your local machine, as it will also fetch the cluster's public key for the sealed secrets and store it under `envs/<env>/secrets.pem`; you most likely want to commit that to the repository to make it easy to add new secrets.
If you need to override any of the files from `kube/*.yaml` for some cluster, you can do that by adding an identically named file in `kube/<kube_context>/overrides/*.yaml`. The `<kube_context>` should match the `KUBE_CONTEXT` value defined in `envs/<env>/settings.py`.
This approach can for example be used to customize the `KURED_SLACK_NAME` for each cluster by making a cluster/context-specific override of the `02-kured.yaml` file, as sketched below. You can also add a totally new file in the overrides folder, with no corresponding file in `kube/*.yaml`, if there's no default version of the file.
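For illustration, assuming an env whose `settings.py` sets `KUBE_CONTEXT = "staging-cluster"` (the context name is hypothetical):

```
kube/02-kured.yaml                            # default for all clusters
kube/staging-cluster/overrides/02-kured.yaml  # replaces it for that context only
```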
For the `minikube` env the scripts will automatically store the master key for Sealed Secrets in the repo, and restore it from there to all dev environments. This is so you can also use Sealed Secrets in dev and not run into problems in other environments.
However, this also means that the secrets stored for `minikube` are NOT SECURE. Do not put any real secrets in them without being fully aware of the consequences.
For all other environments you are expected to store the `secrets.pem` for each environment in the repository, and the sealed secrets in encrypted form in `envs/<env>/secrets/*.yaml`.
For example:
```sh
# Create a secret, output as yaml to a file, and don't run on the server
kubectl create secret generic my-secret \
    -o yaml \
    --dry-run \
    --from-literal=foo=bar > my-secret.yaml

kubeseal --cert envs/<env>/secrets.pem < my-secret.yaml > envs/<env>/secrets/01-my-secret.yaml
```
You might also want to back up your master key, but do NOT store it in the repository - put it safely away in e.g. your password manager's secure notes section.
You can export the master key by running:

```sh
poetry run invoke get-master-key --env <env>
```

This will save the key as `envs/<env>/master.key`. Make sure to NOT under any circumstances commit this file.
If you have the `master.key` file for an environment (or sufficient privileges to retrieve it from the cluster), you can use the following command to unseal the secrets into a readable and editable format:

```sh
poetry run invoke unseal-secrets --env <env>
```
This will convert each of the files in `envs/<env>/secrets/*.yaml` to a corresponding `*.unsealed-secrets.yaml` file. This will also base64 decode the contents, so you can see and edit the actual values easily.
NOTE: Make sure to not under any circumstances commit the `*.unsealed-secrets.yaml` or `master.key` files!
Once you're done editing the files, you can do the reverse operation by running:
```sh
poetry run invoke seal-secrets --env <env>
```

This will base64 encode all the values and then seal the secrets using `envs/<env>/secrets.pem`.
Note that unchanged secrets will also show up as changed in the SealedSecrets yaml file, due to how kubeseal works. This can be mitigated using the `--only-changed` flag, which requires the `master.key` (or sufficient privileges to retrieve it from the cluster).
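For example (assuming the flag is simply appended to the task invocation):

```sh
poetry run invoke seal-secrets --env <env> --only-changed
```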
The project template is released with the BSD 3-clause license. Some of the tools used might use other licenses. Please see LICENSE.md for more.
While this is not GPL/LGPL/similar, if you improve on the template, especially anything on the TODO list, contributions back to the source would be appreciated.
You will likely want to update that file for your own private projects.
If you choose to use a different build name, you should likely update `cleanup_acr_repository` in `tasks.py` and fix `_sort_tag` for your names.
Q: Why do you not use `ctx.run` for Invoke tasks?

A: It has been unreliable, especially when you run a large number of small tasks - sometimes just raising `UnexpectedExit` with no exit code or output at all.
Q: Why do you use Alpine Linux base images?

A: The minimal distribution makes builds and releases faster, and reduces the attack surface.
Q: Why do you use `/bin/sh` (or `#!/usr/bin/env sh`) instead of Bash?

A: Compatibility with busybox etc., especially Alpine Linux.
Q: How to optimize Dockerfile build speeds?

A: First, use locally hosted caches such as `verdaccio` and `devpi`. Secondly, use pipeline caching. Thirdly, make sure you split your `Dockerfile` into steps so your commands are:

- Set up the environment, args, and other things that basically never change
- Install the build dependencies and other such always-required packages
- Copy the project dependency configuration for `npm`, `poetry`, etc.
- `RUN` the task to install dependencies based on that configuration
- `COPY` the source files over
- `RUN` any final configuration, incl. deletion of build deps

This way Docker's cache gets invalidated less often, as sketched below.
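A minimal sketch of that layering for a Python service (the base image, package names, and `myapp` module are all illustrative):

```dockerfile
FROM python:3.8-alpine

# 1. Environment, args, and other things that basically never change
ENV PYTHONUNBUFFERED=1
ARG APP_ENV=production

# 2. Build dependencies and other always-required packages
RUN apk add --no-cache --virtual .build-deps gcc musl-dev

# 3. Copy only the dependency configuration first...
WORKDIR /app
COPY pyproject.toml poetry.lock ./

# 4. ...and install dependencies from it, so source edits don't invalidate this layer
RUN pip install --no-cache-dir poetry && poetry install --no-dev

# 5. Copy the source files over
COPY . .

# 6. Final configuration, incl. deletion of build deps
RUN apk del .build-deps

CMD ["poetry", "run", "python", "-m", "myapp"]
```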
Things that could be done better still:
- Invoke tasks: Create a `dev` task that automatically runs various tests, maybe uses e.g. ksync and such.
- Azure DevOps: Use of templates
- Azure DevOps: Examples of pipeline caching
- Azure Key Vault: Examples of Key Vault use for secure storage of secrets
- Kubernetes RBAC: Limiting the privileges each user has in the non-local environments.
- DevSecOps: E.g. automatic security scans of built containers
- Automated tests: Good examples of how to run automated tests after components have been built
- Env setup tools: Simple tools to deploy a new named environment
- Dashboard RBAC: Does it really need that high a level of permissions?
- Backup: Automated tool to back up important configuration, e.g. Sealed Secrets master keys
- DevOps scripts: Unit tests, probably based on a `--dry-run` mode
- Caches: Add examples of usage of Verdaccio and devpi
- Azure DevOps: It would be nice to be able to set variables with a form, especially things like the tag for a release
- Releases: Examples of real functional database migrations using migrate-anything
This project has been made possible thanks to Cocreators and Lietu. You can help us continue our open source work by supporting us on Buy me a coffee.