feat: Add cluster local postgres deployment #50

Open · wants to merge 3 commits into base: master
1 change: 1 addition & 0 deletions .gitignore
@@ -98,3 +98,4 @@ target/
openshift/config.yaml
openshift/secrets.yaml
openshift/env.sh
openshift/rds.json
58 changes: 15 additions & 43 deletions openshift/deploy.sh
@@ -37,12 +37,6 @@ is_set_or_fail AWS_SECRET_ACCESS_KEY "${AWS_SECRET_ACCESS_KEY}"
is_set_or_fail AWS_DEFAULT_REGION "${AWS_DEFAULT_REGION}"
is_set_or_fail OC_TOKEN "${OC_TOKEN}"

templates_dir="${here}/templates"
templates="fabric8-analytics-jobs fabric8-analytics-server fabric8-analytics-data-model
fabric8-analytics-worker fabric8-analytics-pgbouncer gremlin-docker
fabric8-analytics-license-analysis fabric8-analytics-stack-analysis
f8a-server-backbone fabric8-analytics-stack-report-ui fabric8-analytics-api-gateway"

purge_aws_resources=false # default
for key in "$@"; do
case $key in
@@ -57,47 +51,25 @@ for key in "$@"; do
done
[ "$purge_aws_resources" == false ] && echo "Use --purge-aws-resources if you want to also clear previously allocated AWS resources (RDS database, SQS queues, S3 buckets, DynamoDB tables)."

openshift_login
# openshift_login
Member Author: `export KUBECONFIG=...` or `oc login ...` is a must.
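A hedged illustration of the two alternatives named in that comment (the kubeconfig path and API URL are hypothetical placeholders):

    # Option 1: point oc at an existing kubeconfig
    export KUBECONFIG=/path/to/kubeconfig        # hypothetical path

    # Option 2: log in explicitly with the token checked above
    oc login https://api.example-cluster:6443 --token="${OC_TOKEN}"   # hypothetical URL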

create_or_reuse_project
allocate_aws_rds
generate_and_deploy_config
deploy_secrets

#Get templates for fabric8-analytics projects
for template in ${templates}
do
curl -sS "https://raw.githubusercontent.com/fabric8-analytics/${template}/master/openshift/template.yaml" > "${templates_dir}/${template#fabric8-analytics-}.yaml"
done
github_org_base="https://raw.githubusercontent.com/fabric8-analytics"
openshift_template_path="master/openshift/template.yaml"
openshift_template_path2="master/openshift/template-prod.yaml"

oc_process_apply "${templates_dir}/pgbouncer.yaml"
sleep 20
oc_process_apply "${templates_dir}/gremlin-docker.yaml" "-p CHANNELIZER=http -p REST_VALUE=1 -p IMAGE_TAG=latest"
sleep 20
oc_process_apply "${templates_dir}/gremlin-docker.yaml" "-p CHANNELIZER=http -p REST_VALUE=1 -p IMAGE_TAG=latest -p QUERY_ADMINISTRATION_REGION=ingestion"
sleep 20
oc_process_apply "${templates_dir}/data-model.yaml"
sleep 20
oc_process_apply "${templates_dir}/jobs.yaml"
sleep 20
oc_process_apply "${templates_dir}/worker.yaml" "-p WORKER_ADMINISTRATION_REGION=ingestion -p WORKER_EXCLUDE_QUEUES=GraphImporterTask"
sleep 20
oc_process_apply "${templates_dir}/worker.yaml" "-p WORKER_ADMINISTRATION_REGION=ingestion -p WORKER_INCLUDE_QUEUES=GraphImporterTask -p WORKER_NAME_SUFFIX=-graph-import"
sleep 20
Comment on lines -80 to -85 · Member Author: Only the bare minimum services needed to support the online flow have been enabled.

oc_process_apply "${templates_dir}/worker.yaml" "-p WORKER_ADMINISTRATION_REGION=api -p WORKER_RUN_DB_MIGRATIONS=1 -p WORKER_EXCLUDE_QUEUES=GraphImporterTask"
sleep 20
oc_process_apply "${templates_dir}/worker.yaml" "-p WORKER_ADMINISTRATION_REGION=api -p WORKER_INCLUDE_QUEUES=GraphImporterTask -p WORKER_NAME_SUFFIX=-graph-import"
sleep 20
oc_process_apply "${templates_dir}/f8a-server-backbone.yaml"
sleep 20
oc_process_apply "${templates_dir}/server.yaml"
sleep 20
oc_process_apply "${templates_dir}/stack-analysis.yaml" "-p KRONOS_SCORING_REGION=maven"
# kronos-pypi is not used/maintained now
# sleep 20
# oc_process_apply "${templates_dir}/stack-analysis.yaml" "-p KRONOS_SCORING_REGION=pypi"
sleep 20
oc_process_apply "${templates_dir}/license-analysis.yaml"
sleep 20
oc_process_apply "${templates_dir}/stack-report-ui.yaml" "-p REPLICAS=1"
oc_process_apply "${github_org_base}/fabric8-analytics-pgbouncer/${openshift_template_path}"
oc_process_apply "${github_org_base}/gremlin-docker/${openshift_template_path}" "-p CHANNELIZER=http -p REST_VALUE=1 -p IMAGE_TAG=latest"
oc_process_apply "${github_org_base}/gremlin-docker/${openshift_template_path}" "-p CHANNELIZER=http -p REST_VALUE=1 -p IMAGE_TAG=latest -p QUERY_ADMINISTRATION_REGION=ingestion"
sleep 20
oc_process_apply "${templates_dir}/api-gateway.yaml"
oc_process_apply "${github_org_base}/fabric8-analytics-data-model/${openshift_template_path}"
oc_process_apply "${github_org_base}/fabric8-analytics-worker/${openshift_template_path}" "-p WORKER_ADMINISTRATION_REGION=api -p WORKER_RUN_DB_MIGRATIONS=1 -p WORKER_EXCLUDE_QUEUES=GraphImporterTask"
oc_process_apply "${github_org_base}/f8a-server-backbone/${openshift_template_path}"
oc_process_apply "${github_org_base}/fabric8-analytics-server/${openshift_template_path}"
oc_process_apply "${github_org_base}/fabric8-analytics-license-analysis/${openshift_template_path}"
oc_process_apply "${github_org_base}/fabric8-analytics-npm-insights/${openshift_template_path}"
oc_process_apply "${github_org_base}/f8a-pypi-insights/${openshift_template_path}"
oc_process_apply "${github_org_base}/f8a-hpf-insights/${openshift_template_path2}" "-p HPF_SCORING_REGION=maven -p RESTART_POLICY=Always"
90 changes: 18 additions & 72 deletions openshift/helpers.sh
@@ -3,7 +3,7 @@ function is_set_or_fail() {
local name=$1
local value=$2

if [ ! -v value ] || [ "${value}" == "not-set" ]; then
if [ "${value}" == "not-set" ]; then
echo "You have to set $name" >&2
exit 1
fi
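A short usage sketch of this guard (the variable name is illustrative; "not-set" matches the parameter defaults in secrets-template.yaml):

    RDS_PASSWORD="${RDS_PASSWORD:-not-set}"
    # Exits with "You have to set RDS_PASSWORD" if the caller left the default:
    is_set_or_fail RDS_PASSWORD "${RDS_PASSWORD}"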
@@ -29,30 +29,29 @@ function generate_and_deploy_config() {

function deploy_secrets() {
#All secrets must be base64 encoded
oc process -p AWS_ACCESS_KEY_ID="$(/bin/echo -n "${AWS_ACCESS_KEY_ID}" | base64)" \
-p AWS_SECRET_ACCESS_KEY="$(/bin/echo -n "${AWS_SECRET_ACCESS_KEY}" | base64)" \
-p AWS_DEFAULT_REGION="$(/bin/echo -n "${AWS_DEFAULT_REGION}" | base64)" \
-p GITHUB_API_TOKENS="$(/bin/echo -n "${GITHUB_API_TOKENS}" | base64)" \
-p GITHUB_OAUTH_CONSUMER_KEY="$(/bin/echo -n "${GITHUB_OAUTH_CONSUMER_KEY}" | base64)" \
oc process -p AWS_ACCESS_KEY_ID="$(echo -n "${AWS_ACCESS_KEY_ID}" | base64)" \
-p AWS_SECRET_ACCESS_KEY="$(echo -n "${AWS_SECRET_ACCESS_KEY}" | base64)" \
-p AWS_DEFAULT_REGION="$(echo -n "${AWS_DEFAULT_REGION}" | base64)" \
-p GITHUB_API_TOKENS="$(echo -n "${GITHUB_API_TOKENS}" | base64)" \
-p GITHUB_OAUTH_CONSUMER_KEY="$(echo -n "${GITHUB_OAUTH_CONSUMER_KEY}" | base64)" \
-p GITHUB_OAUTH_CONSUMER_SECRET="$(/bin/echo -n "${GITHUB_OAUTH_CONSUMER_SECRET}" | base64)" \
-p LIBRARIES_IO_TOKEN="$(/bin/echo -n "${LIBRARIES_IO_TOKEN}" | base64)" \
-p FLASK_APP_SECRET_KEY="$(/bin/echo -n "${FLASK_APP_SECRET_KEY}" | base64)" \
-p RDS_ENDPOINT="$(/bin/echo -n "${RDS_ENDPOINT}" | base64)" \
-p RDS_PASSWORD="$(/bin/echo -n "${RDS_PASSWORD}" | base64)" \
-p SNYK_TOKEN="$(/bin/echo -n "${SNYK_TOKEN}" | base64)" \
-p SNYK_ISS="$(/bin/echo -n "${SNYK_ISS}" | base64)" \
-p LIBRARIES_IO_TOKEN="$(echo -n "${LIBRARIES_IO_TOKEN}" | base64)" \
-p FLASK_APP_SECRET_KEY="$(echo -n "${FLASK_APP_SECRET_KEY}" | base64)" \
-p RDS_ENDPOINT="$(echo -n "${RDS_ENDPOINT}" | base64)" \
-p RDS_PASSWORD="$(echo -n "${RDS_PASSWORD}" | base64)" \
-p SNYK_TOKEN="$(echo -n "${SNYK_TOKEN}" | base64)" \
-p SNYK_ISS="$(echo -n "${SNYK_ISS}" | base64)" \
-p CVAE_NPM_INSIGHTS_BUCKET="$(echo -n "${USER_ID}-cvae-npm-insights" | base64)" \
-p HPF_PYPI_INSIGHTS_BUCKET="$(echo -n "${USER_ID}-hpf-pypi-insights" | base64)" \
-p HPF_MAVEN_INSIGHTS_BUCKET="$(echo -n "${USER_ID}-hpf-maven-insights" | base64)" \
-f "${here}/secrets-template.yaml" > "${here}/secrets.yaml"
oc apply -f secrets.yaml
}
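A sketch of how deploy_secrets is driven: every input arrives as an environment variable and is base64-encoded inline before templating. The values below are placeholders and only a subset of the variables is shown; openshift/env.sh, gitignored above, is a natural place to export them.

    export AWS_ACCESS_KEY_ID="AKIA..."       # placeholder
    export AWS_SECRET_ACCESS_KEY="..."       # placeholder
    export RDS_PASSWORD="..."                # placeholder
    deploy_secrets   # renders secrets-template.yaml to secrets.yaml and applies it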

function oc_process_apply() {
echo -e "\\n Processing template - $1 ($2) \\n"
# Don't quote $2 as we need it to split into individual arguments
oc process -f "$1" $2 | oc apply -f -
}

function openshift_login() {
oc login "${OC_URI}" --token="${OC_TOKEN}" --insecure-skip-tls-verify=true
oc process -f "$1" $2 | oc apply -f - --wait=true
}
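The unquoted $2 is deliberate: the parameter string must undergo word splitting so each -p lands as a separate argument. A tiny illustration (the template name is a placeholder):

    params="-p CHANNELIZER=http -p REST_VALUE=1"
    oc process -f template.yaml $params     # expands to four arguments: -p CHANNELIZER=http -p REST_VALUE=1
    # oc process -f template.yaml "$params" # would pass one single argument and fail to parse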

function purge_aws_resources() {
@@ -86,61 +85,8 @@ function create_or_reuse_project() {
fi
}

function tag_rds_instance() {
TAGS="Key=ENV,Value=${DEPLOYMENT_PREFIX}"
echo "Tagging RDS instance with ${TAGS}"
aws rds add-tags-to-resource \
--resource-name "${RDS_ARN}" \
--tags "${TAGS}"
}

function get_rds_instance_info() {
aws --output=table rds describe-db-instances --db-instance-identifier "${RDS_INSTANCE_NAME}" 2>/dev/null
}

function allocate_aws_rds() {
if ! get_rds_instance_info; then
aws rds create-db-instance \
--allocated-storage "${RDS_STORAGE}" \
--db-instance-identifier "${RDS_INSTANCE_NAME}" \
--db-instance-class "${RDS_INSTANCE_CLASS}" \
--db-name "${RDS_DBNAME}" \
#--db-subnet-group-name "${RDS_SUBNET_GROUP_NAME}" \
--engine postgres \
--engine-version "9.6.1" \
--master-username "${RDS_DBADMIN}" \
--master-user-password "${RDS_PASSWORD}" \
--publicly-accessible \
--storage-type gp2
#--storage-encrypted
echo "Waiting (60s) for ${RDS_INSTANCE_NAME} to come online"
sleep 60
wait_for_rds_instance_info
else
echo "DB instance ${RDS_INSTANCE_NAME} already exists"
wait_for_rds_instance_info
if [ "$purge_aws_resources" == true ]; then
echo "recreating database"
PGPASSWORD="${RDS_PASSWORD}" psql -d template1 -h "${RDS_ENDPOINT}" -U "${RDS_DBADMIN}" -c "drop database ${RDS_DBNAME}"
PGPASSWORD="${RDS_PASSWORD}" psql -d template1 -h "${RDS_ENDPOINT}" -U "${RDS_DBADMIN}" -c "create database ${RDS_DBNAME}"
fi
fi
tag_rds_instance
}

function wait_for_rds_instance_info() {
while true; do
echo "Trying to get RDS DB endpoint for ${RDS_INSTANCE_NAME} ..."

RDS_ENDPOINT=$(get_rds_instance_info | grep -w Address | awk '{print $4}')
RDS_ARN=$(get_rds_instance_info | grep -w DBInstanceArn | awk '{print $4}')

if [ -z "${RDS_ENDPOINT}" ]; then
echo "DB is still initializing, waiting 30 seconds and retrying ..."
sleep 30
else
break
fi
done
RDS_ENDPOINT="f8a-postgres"
oc apply -f postgres.yaml --wait=true
}
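With allocate_aws_rds rewritten this way, RDS_ENDPOINT is simply the in-cluster Service name, which cluster DNS resolves for the workers. One way to sanity-check connectivity from a workstation, as a sketch (credentials come from the coreapi-postgres secret; the user and database names are placeholders):

    oc port-forward svc/f8a-postgres 5432:5432 &
    PGPASSWORD="$(oc get secret coreapi-postgres -o jsonpath='{.data.password}' | base64 -d)" \
      psql -h localhost -U coreapi -d coreapi -c 'select 1'   # user/db are placeholders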

77 changes: 77 additions & 0 deletions openshift/postgres.yaml
@@ -0,0 +1,77 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: f8a-postgres
labels:
f8a-component: f8a-postgres
spec:
replicas: 1
selector:
matchLabels:
f8a-component: f8a-postgres
template:
metadata:
labels:
f8a-component: f8a-postgres
spec:
volumes:
- name: postgres-data
persistentVolumeClaim:
claimName: f8a-postgres
containers:
- name: postgres
image: postgres:9.6
imagePullPolicy: "IfNotPresent"
ports:
- containerPort: 5432
env:
- name: POSTGRES_USER
valueFrom:
secretKeyRef:
key: username
name: coreapi-postgres
- name: POSTGRES_DB
valueFrom:
secretKeyRef:
key: database
name: coreapi-postgres
- name: POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
key: password
name: coreapi-postgres
- name: PGDATA
value: "/var/lib/postgres/data/f8a"
volumeMounts:
- name: postgres-data
mountPath: "/var/lib/postgres/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: f8a-postgres
labels:
f8a-component: f8a-postgres
spec:
accessModes:
- "ReadWriteOnce"
resources:
requests:
storage: "10Gi"
volumeName: "f8a-postgres"
---
apiVersion: v1
kind: Service
metadata:
name: f8a-postgres
labels:
f8a-component: f8a-postgres
spec:
type: ClusterIP
ports:
- port: 5432
protocol: TCP
name: postgres
targetPort: 5432
selector:
f8a-component: f8a-postgres
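After the apply, a quick way to confirm the three objects above came up (standard oc commands; no names assumed beyond those in this file):

    oc rollout status deployment/f8a-postgres   # waits for the single replica
    oc get pvc f8a-postgres                     # should report Bound
    oc get svc f8a-postgres                     # ClusterIP exposing 5432

Note that the PVC pins volumeName: "f8a-postgres", so it binds only to a pre-created PersistentVolume of that exact name and stays Pending otherwise.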
45 changes: 45 additions & 0 deletions openshift/secrets-template.yaml
@@ -58,6 +58,33 @@ objects:
sqs-access-key-id: ${AWS_ACCESS_KEY_ID}
sqs-secret-access-key: ${AWS_SECRET_ACCESS_KEY}
aws_region: ${AWS_DEFAULT_REGION}
- apiVersion: v1
kind: Secret
metadata:
name: hpf-pypi-insights-s3
type: Opaque
data:
aws_access_key_id: ${AWS_ACCESS_KEY_ID}
aws_secret_access_key: ${AWS_SECRET_ACCESS_KEY}
bucket: ${HPF_PYPI_INSIGHTS_BUCKET}
- apiVersion: v1
kind: Secret
metadata:
name: cvae-npm-insights-s3
type: Opaque
data:
aws_access_key_id: ${AWS_ACCESS_KEY_ID}
aws_secret_access_key: ${AWS_SECRET_ACCESS_KEY}
bucket: ${CVAE_NPM_INSIGHTS_BUCKET}
- apiVersion: v1
kind: Secret
metadata:
name: hpf-maven-insights-s3
type: Opaque
data:
aws_access_key_id: ${AWS_ACCESS_KEY_ID}
aws_secret_access_key: ${AWS_SECRET_ACCESS_KEY}
bucket: ${HPF_MAVEN_INSIGHTS_BUCKET}
- apiVersion: v1
kind: Secret
metadata:
@@ -237,3 +264,21 @@ parameters:
name: SNYK_ISS
value: "bm90LXNldA==" # not-set

- description: Pypi insights bucket name
displayName: Pypi insights bucket name
required: false
name: HPF_PYPI_INSIGHTS_BUCKET
value: "not-set" # not-set

- description: npm insights bucket name
displayName: npm insights bucket name
required: false
name: CVAE_NPM_INSIGHTS_BUCKET
value: "not-set" # not-set

- description: Maven insights bucket name
displayName: Maven insights bucket name
required: false
name: HPF_MAVEN_INSIGHTS_BUCKET
value: "not-set" # not-set