To set up Ark on AWS, you:
- Create your S3 bucket
- Create an AWS IAM user for Ark
- Configure the server
- Create a Secret for your credentials
If you do not have the aws CLI locally installed, follow the [user guide][5] to set it up.
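For example, you can confirm that the CLI is installed and check which AWS identity it will use (this assumes your credentials are already configured under the default profile):

# Confirm the aws CLI is installed and on your PATH
aws --version

# Show the account and identity the CLI will act as
aws sts get-caller-identity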
Heptio Ark requires an object storage bucket to store backups in. Create an S3 bucket, replacing placeholders appropriately:
aws s3api create-bucket \
--bucket <YOUR_BUCKET> \
--region <YOUR_REGION> \
--create-bucket-configuration LocationConstraint=<YOUR_REGION>
NOTE: us-east-1 does not support a LocationConstraint. If your region is us-east-1, omit the bucket configuration:
aws s3api create-bucket \
--bucket <YOUR_BUCKET> \
--region us-east-1
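To confirm the bucket was created and is reachable with your credentials, you can run, for example:

# Returns silently on success, errors if the bucket is missing or inaccessible
aws s3api head-bucket --bucket <YOUR_BUCKET>

# List the (initially empty) bucket contents
aws s3 ls s3://<YOUR_BUCKET>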
Next, create an IAM user for Ark. For more information, see the AWS documentation on IAM users.
- Create the IAM user:
aws iam create-user --user-name heptio-ark
- Attach policies to give heptio-ark the necessary permissions:
BUCKET=<YOUR_BUCKET>
cat > heptio-ark-policy.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeVolumes",
        "ec2:DescribeSnapshots",
        "ec2:CreateTags",
        "ec2:CreateVolume",
        "ec2:CreateSnapshot",
        "ec2:DeleteSnapshot"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:DeleteObject",
        "s3:PutObject"
      ],
      "Resource": [
        "arn:aws:s3:::${BUCKET}/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:AbortMultipartUpload",
        "s3:ListMultipartUploadParts"
      ],
      "Resource": [
        "arn:aws:s3:::${BUCKET}"
      ]
    }
  ]
}
EOF
aws iam put-user-policy \
  --user-name heptio-ark \
  --policy-name heptio-ark \
  --policy-document file://heptio-ark-policy.json
- Create an access key for the user:
aws iam create-access-key --user-name heptio-ark
The result should look like:
{ "AccessKey": { "UserName": "heptio-ark", "Status": "Active", "CreateDate": "2017-07-31T22:24:41.576Z", "SecretAccessKey": <AWS_SECRET_ACCESS_KEY>, "AccessKeyId": <AWS_ACCESS_KEY_ID> } }
- Create an Ark-specific credentials file (credentials-ark) in your local directory:
[default]
aws_access_key_id=<AWS_ACCESS_KEY_ID>
aws_secret_access_key=<AWS_SECRET_ACCESS_KEY>
where the access key id and secret are the values returned from the create-access-key request.
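As a sanity check, you can verify the IAM user and its inline policy; the user and policy names below are the ones created in the steps above:

# Confirm the user exists
aws iam get-user --user-name heptio-ark

# Confirm the inline policy is attached and inspect its contents
aws iam list-user-policies --user-name heptio-ark
aws iam get-user-policy --user-name heptio-ark --policy-name heptio-ark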
In the Ark root directory, first run the following to set up namespaces, RBAC, and other scaffolding. To run in a custom namespace, make sure that you have edited the YAML files to specify the namespace. See Run in custom namespace.
kubectl apply -f examples/common/00-prereqs.yaml
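To check that the prerequisites were created, you can list the resources the manifest defines. The exact names depend on your Ark version; the heptio-ark namespace and the ark.heptio.com API group used below are assumptions based on the stock example manifests:

# Look for the Ark namespace created by the prereqs manifest
kubectl get namespaces | grep ark

# Look for Ark's CustomResourceDefinitions
kubectl get customresourcedefinitions | grep ark.heptio.com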
Create a Secret. In the directory of the credentials file you just created, run:
kubectl create secret generic cloud-credentials \
--namespace <ARK_SERVER_NAMESPACE> \
--from-file cloud=credentials-ark
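You can confirm the Secret exists and contains the cloud key, without printing the credential values, for example:

# Shows the Secret's keys and data sizes but not the secret material itself
kubectl describe secret cloud-credentials --namespace <ARK_SERVER_NAMESPACE>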
Specify the following values in the example files:
- In examples/aws/00-ark-config.yaml:
  - Replace <YOUR_BUCKET> and <YOUR_REGION>. See the Config definition for details.
- In examples/common/10-deployment.yaml:
  - Make sure that spec.template.spec.containers[*].env.name is "AWS_SHARED_CREDENTIALS_FILE".
- (Optional) If you run the nginx example, in file examples/nginx-app/with-pv.yaml:
  - Replace <YOUR_STORAGE_CLASS_NAME> with gp2. This is AWS's default StorageClass name.
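Before applying the manifests, a quick way to catch missed placeholders is to search for any remaining <YOUR_...> markers in the files you edited, for example (adjust the file list to match what you changed):

# Any output here means a placeholder was left unreplaced
grep -n "<YOUR_" examples/aws/00-ark-config.yaml examples/nginx-app/with-pv.yaml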
In the root of your Ark directory, run:
kubectl apply -f examples/aws/00-ark-config.yaml
kubectl apply -f examples/common/10-deployment.yaml
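Once both manifests are applied, you can check that the Ark server came up cleanly. The deployment name ark and the namespace heptio-ark below are assumptions based on the stock example manifests; substitute your custom namespace if you changed it:

# Confirm the Ark deployment is available
kubectl get deployments --namespace heptio-ark

# Inspect the server logs for startup errors
kubectl logs deployment/ark --namespace heptio-ark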