This repository has been archived by the owner on Aug 6, 2024. It is now read-only.


kaniko go app

The kaniko-app is a simple Go application that builds a container image with kaniko from a Dockerfile.

Example of a Dockerfile to be parsed by Kaniko:

FROM alpine

RUN apk add wget curl

During the execution of this kaniko app:

  • We call the kaniko build function,
  • Kaniko parses the Dockerfile and executes each Docker command (RUN, COPY, ...) that it supports,
  • A snapshot of each layer (= executed command) is then created,
  • Finally, the layers are pushed into an image,
  • Our app copies the layers created from the /kaniko dir to the /cache dir,
  • For each layer (except the base image), the content is extracted under the root FS /.

When the kaniko-app is launched, the Dockerfile shown above is parsed; it installs the missing packages wget and curl.

NOTE: a layer is saved as a sha256:xxxxx.tgz file under the /kaniko dir. The xxxxx corresponds to the layer digest, i.e. the hash of the compressed layer.
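The digest-based naming can be illustrated with a small shell sketch (this is not the app's code; it only demonstrates the convention, using standard tar and sha256sum):

```shell
# Sketch: a layer file is named after the SHA-256 of its
# compressed content (the layer digest).
tmp=$(mktemp -d)
printf 'layer content\n' > "$tmp/file"
tar -czf "$tmp/layer.tgz" -C "$tmp" file
digest=$(sha256sum "$tmp/layer.tgz" | cut -d' ' -f1)
# Rename the archive to the sha256:<digest>.tgz convention
mv "$tmp/layer.tgz" "$tmp/sha256:$digest.tgz"
ls "$tmp"
```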

How to build and run the application

To play with the application, first build the Go application and a container image of the kaniko-app.

Open a terminal within the kaniko project.

cp -r ../workspace ./workspace
./hack/build.sh

Launch the kaniko-app container

docker run \
  -e DOCKER_FILE_NAME="Dockerfile" \
  -v $(pwd)/../workspace:/workspace \
  -e EXTRACT_LAYERS=true \
  -v $(pwd)/cache:/cache \
  -it kaniko-app

Different ENV variables can be defined and passed as parameters to the containerized engine:

| ENV var | Description |
| --- | --- |
| LOGGING_LEVEL | Log level: trace, debug, info, warn, error, fatal, panic |
| LOGGING_FORMAT | Logging format: text, color, json |
| DOCKER_FILE_NAME | Dockerfile to be parsed; Dockerfile is the default name |
| DEBUG | Launch the dlv remote debugger. See remote debugger |
| EXTRACT_LAYERS | Extract the files from the layers (= tgz files). See extract layers |
| CNB_* | Pass an ARG to the Dockerfile. See CNB Args |
| IGNORE_PATHS | Paths to be ignored by Kaniko. See Ignore Paths. TODO: Should also be used to ignore paths during the untar process or the file search |
| FILES_TO_SEARCH | Files to be searched after the layers' content has been extracted. See files to search |

Example using DOCKER_FILE_NAME env var

docker run \
  -e DOCKER_FILE_NAME="alpine" \
  -e LOGGING_LEVEL=info \
  -e IGNORE_PATHS="/usr/lib,/var/spool/mail,/var/mail" \
  -e EXTRACT_LAYERS=true \
  -v $(pwd)/../workspace:/workspace \
  -v $(pwd)/cache:/cache \
  -it kaniko-app

Use a metadata.toml file

Instead of passing the file name of the Dockerfile to be processed, we can also use a metadata.toml file, as generated by the Buildpacks lifecycle, via the ENV var METADATA_FILE_NAME. This file should be created under the workspace/layers folder.

NOTE: The ENV var DOCKER_FILE_NAME should not be used together with METADATA_FILE_NAME!

docker run \
  -e LOGGING_LEVEL=info \
  -e IGNORE_PATHS="/var/spool/mail,/var/mail" \
  -e EXTRACT_LAYERS=true \
  -e FILES_TO_SEARCH="curl" \
  -e METADATA_FILE_NAME=metadata_curl.toml \
  -v $(pwd)/../workspace:/workspace \
  -v $(pwd)/cache:/cache \
  -it kaniko-app

Remote debugging

To use the dlv remote debugger, simply set the ENV var DEBUG=true and publish port 2345 to access it from your favorite IDE (Visual Studio Code, IntelliJ, ...).

docker run \
  -e DEBUG=true \
  -p 2345:2345 \
  -v $(pwd)/../workspace:/workspace \
  -v $(pwd)/cache:/cache \
  -it kaniko-app

CNB Build args

When the Dockerfile contains ARG commands

ARG CNB_BaseImage
FROM ${CNB_BaseImage}

then we must pass them as ENV vars to the container. Our application will then convert each CNB_* ENV var into an entry of the Kaniko BuildArgs []string slice.

docker run \
       -e LOGGING_LEVEL=debug \
       -e LOGGING_FORMAT=color \
       -e EXTRACT_LAYERS=true \
       -e IGNORE_PATHS="/usr/lib" \
       -e CNB_BaseImage="ubuntu:bionic" \
       -e DOCKER_FILE_NAME="base-image-arg" \
       -v $(pwd)/../workspace:/workspace \
       -v $(pwd)/cache:/cache \
       -it kaniko-app
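The ENV-var-to-build-arg conversion described above can be sketched in shell (an illustration only; the real conversion happens inside the Go app):

```shell
# Sketch: collect every CNB_* ENV var as a "key=value" string,
# mirroring how the app fills Kaniko's BuildArgs slice.
export CNB_BaseImage="ubuntu:bionic"
build_args=""
for kv in $(env | grep '^CNB_'); do
  build_args="$build_args $kv"
done
echo "BuildArgs:$build_args"
```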

Ignore Paths

To ignore some paths during the creation of the new image, use the IGNORE_PATHS ENV var, which is passed to kaniko. Multiple paths can be defined using , as the separator.

docker run \
       -e EXTRACT_LAYERS=true \
       -e IGNORE_PATHS="/var/spool/mail,/usr/lib" \
       -e FILES_TO_SEARCH="hello.txt,curl" \
       -e LOGGING_LEVEL=debug \
       -e LOGGING_FORMAT=color \
       -e DOCKER_FILE_NAME="alpine" \
       -v $(pwd)/../workspace:/workspace \
       -v $(pwd)/cache:/cache \
       -it kaniko-app

NOTE: If the ENV var is not set, then an empty array of strings is passed to the Kaniko Opts.

Extract layer files

By default, the layer tgz files are not extracted to the home dir of the container's filesystem. Nevertheless, the files contained in the compressed tgz files will be logged.
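Listing a layer's files without extracting it can be sketched as follows (the dummy layer built here stands in for a real sha256:xxxxx.tgz file):

```shell
# Sketch: build a dummy layer, then list its content the way the app
# logs it when extraction is disabled.
tmp=$(mktemp -d)
mkdir -p "$tmp/rootfs/usr/bin"
touch "$tmp/rootfs/usr/bin/curl"
tar -czf "$tmp/layer.tgz" -C "$tmp/rootfs" .
# -t lists the archive entries without writing anything to disk
tar -tzf "$tmp/layer.tgz"
```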

To extract the layer files, set the ENV var EXTRACT_LAYERS=true:

docker run \
       -e EXTRACT_LAYERS=true \
       -e IGNORE_PATHS="/usr/lib" \
       -e LOGGING_FORMAT=color \
       -e DOCKER_FILE_NAME="alpine" \
       -v $(pwd)/../workspace:/workspace \
       -v $(pwd)/cache:/cache \
       -it kaniko-app

Verify if files exist

To check whether files added by the layers exist under the root filesystem, use the ENV var FILES_TO_SEARCH:

docker run \
       -e EXTRACT_LAYERS=true \
       -e FILES_TO_SEARCH="hello.txt,curl" \
       -e IGNORE_PATHS="/usr/lib" \
       -e LOGGING_LEVEL=debug \
       -e LOGGING_FORMAT=color \
       -e DOCKER_FILE_NAME="alpine" \
       -v $(pwd)/../workspace:/workspace \
       -v $(pwd)/cache:/cache \
       -it kaniko-app
...
DEBU[0009] File found: /usr/bin/curl                  
DEBU[0009] File found: /workspace/hello.txt        
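The search step can be sketched as follows, assuming a comma-separated FILES_TO_SEARCH and a root dir to scan (a temp dir here instead of the container's /):

```shell
# Sketch: split FILES_TO_SEARCH on ',' and find each name under the root dir.
root=$(mktemp -d)
mkdir -p "$root/usr/bin" "$root/workspace"
touch "$root/usr/bin/curl" "$root/workspace/hello.txt"
FILES_TO_SEARCH="hello.txt,curl"
found=$(for name in $(echo "$FILES_TO_SEARCH" | tr ',' ' '); do
  find "$root" -name "$name"
done)
echo "$found"
```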

Cache content

The content produced by the Kaniko build of the Dockerfile is available under the ./cache folder:

drwxr-xr-x  10 cmoullia  staff      320 Nov 18 14:00 .
drwxr-xr-x  10 cmoullia  staff      320 Nov 18 13:56 ..
-rw-r--r--@  1 cmoullia  staff     6148 Nov 18 13:54 .DS_Store
-rw-------   1 cmoullia  staff  4383232 Nov 18 13:58 425529682
-rw-------   1 cmoullia  staff     1024 Nov 18 13:58 544414207
-rw-------   1 cmoullia  staff     1024 Nov 18 13:50 577703017
-rw-r--r--   1 cmoullia  staff      933 Nov 18 13:58 config.json
-rw-r--r--   1 cmoullia  staff       12 Nov 18 13:58 hello.txt
-rw-r--r--@  1 cmoullia  staff  2822981 Nov 18 13:58 sha256:97518928ae5f3d52d4164b314a7e73654eb686ecd8aafa0b79acd980773a740d.tgz
-rw-r--r--   1 cmoullia  staff  3175266 Nov 18 13:58 sha256:aa2ad9d70c8b9b0b0c885ba0a81d71f5414dcac97bee8f5753ec03f92425c540.tgz

Using Kubernetes

To run the kaniko-app as a kubernetes pod, some additional steps are required and described hereafter.

Create a k8s cluster having access to your local workspace and cache folders. This step can be achieved easily using kind and the ./k8s/kind-reg.sh bash script, where the following config can be defined to access your local folders:

  extraMounts:
    - hostPath: $(pwd)/../workspace  # PLEASE CHANGE ME
      containerPath: /workspace
    - hostPath: $(pwd)/cache      # PLEASE CHANGE ME
      containerPath: /cache
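For reference, a minimal kind config embedding the mounts above might look like this (a sketch; the node role and hostPath values are assumptions to adapt to your setup):

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
    extraMounts:
      - hostPath: /absolute/path/to/workspace  # PLEASE CHANGE ME
        containerPath: /workspace
      - hostPath: /absolute/path/to/cache      # PLEASE CHANGE ME
        containerPath: /cache
```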

Next, create the cluster using the command ./k8s/kind-reg.sh

When the cluster and the registry are up and running, we can push the image:

REGISTRY="localhost:5000"
docker tag kaniko-app $REGISTRY/kaniko-app
docker push $REGISTRY/kaniko-app

and then deploy the kaniko pod

kubectl apply -f k8s/manifest.yml 

NOTE: Check the pod's initContainers logs using k9s or another tool :-)

To delete the pod, do

kubectl delete -f k8s/manifest.yml