Public VLAN deployment for SNO clusters #576

Open · wants to merge 5 commits into base: main
4 changes: 2 additions & 2 deletions ansible/roles/create-inventory/tasks/main.yml
@@ -41,8 +41,8 @@
ocpinventory_sno_nodes: []
ocpinventory_hv_nodes: []

- name: Public VLAN in BM cluster
when: public_vlan and cluster_type == "mno"
- name: Public VLAN in SNO/MNO cluster
when: public_vlan and (cluster_type == "mno" or cluster_type == "sno")
Member:
Since there are only 2 cluster types (mno and sno), do you think we should just have `when: public_vlan`?
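
A minimal sketch of what that simplification could look like, assuming `cluster_type` is always one of `mno` or `sno`; the block's existing tasks are represented here by a placeholder:

```yaml
- name: Public VLAN in SNO/MNO cluster
  when: public_vlan
  block:
    - name: Placeholder for the existing Public VLAN fact-setting tasks
      ansible.builtin.debug:
        msg: "controlplane_network, prefix, gateway, etc. would be set here, unchanged from the current role"
```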

block:
- name: Public VLAN - Configuring controlplane_network_interface_idx to the last interface
set_fact:
17 changes: 14 additions & 3 deletions ansible/roles/create-inventory/templates/inventory-sno.j2
@@ -1,6 +1,17 @@
[all:vars]
allocation_node_count={{ ocpinventory.json.nodes | length }}
supermicro_nodes={{ has_supermicro | bool }}
{% if public_vlan %}
cluster_name={{ cluster_name }}
controlplane_network={{ controlplane_network }}
{% if lab == "scalelab" %}
base_dns_name=rdu2.scalelab.redhat.com
{% elif lab == "performancelab" %}
base_dns_name=rdu3.labs.perfscale.redhat.com
{% else %}
base_dns_name={{ base_dns_name }}
{% endif %}
{% endif %}

[bastion]
{{ bastion_machine }} ansible_ssh_user=root bmc_address=mgmt-{{ bastion_machine }}
@@ -31,11 +42,11 @@ bmc_password={{ bmc_password }}
{%- if use_bastion_registry -%}
{%- set ip=controlplane_network | ansible.utils.nthhost(loop.index0 + 5) -%}
{%- elif public_vlan | bool -%}
{%- set ip=controlplane_pub_network_cidr | ansible.utils.nthhost(loop.index0 + 1) -%}
{%- set ip=controlplane_network | ansible.utils.nthhost(loop.index0 + 3) -%}
Member (Author):
Updating this expression to pick the 3rd IP of the subnet, which is the one that has the API DNS entry, for example:

$ dig api.vlan101.rdu3.labs.perfscale.redhat.com +short
10.6.128.3
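
For reference, a standalone illustration of the filter: with a single SNO node `loop.index0` is 0, so the template evaluates `nthhost(0 + 3)` and returns the subnet's third address (the `10.6.128.0/20` prefix below is an assumed example, not taken from the PR):

```yaml
- name: Illustration only - third host of the public subnet
  ansible.builtin.debug:
    msg: "{{ '10.6.128.0/20' | ansible.utils.nthhost(3) }}"  # -> 10.6.128.3
```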

{%- else -%}
{%- set ip=(sno_foreman_data.results|selectattr('json.name', 'eq', sno.pm_addr | replace('mgmt-',''))|first).json.ip -%}
{%- endif -%}
{% if not loop.first %}# {% endif %}{{ sno.pm_addr.split('.')[0] | replace('mgmt-','') }} bmc_address={{ sno.pm_addr }} boot_iso={{ sno.pm_addr.split('.')[0] | replace('mgmt-','') }}.iso ip={{ ip }} vendor={{ hw_vendor[(sno.pm_addr.split('.')[0]).split('-')[-1]] }} lab_mac={{ (sno_foreman_data.results|selectattr('json.name', 'eq', sno.pm_addr | replace('mgmt-',''))|first) | json_query(mac_query) | join(', ') }} mac_address={{ sno.mac[controlplane_network_interface_idx] }} install_disk={{ sno_install_disk }}
{% if not loop.first %}# {% endif %}{{ sno.pm_addr.split('.')[0] | replace('mgmt-','') }} bmc_address={{ sno.pm_addr }} boot_iso={{ sno.pm_addr.split('.')[0] | replace('mgmt-','') }}.iso ip={{ ip }} vendor={{ hw_vendor[(sno.pm_addr.split('.')[0]).split('-')[-1]] }} lab_mac={{ (sno_foreman_data.results|selectattr('json.name', 'eq', sno.pm_addr | replace('mgmt-',''))|first) | json_query(mac_query) | join(', ') }} mac_address={{ sno.mac[controlplane_network_interface_idx|int] }} install_disk={{ sno_install_disk }}
{% endfor %}

[sno:vars]
@@ -47,7 +58,7 @@ network_interface={{ controlplane_network_interface }}
gateway={{ controlplane_network_gateway }}
{% endif %}
{% if public_vlan %}
network_prefix={{ controlplane_pub_network_cidr | ipaddr('prefix') }}
network_prefix={{ controlplane_network_prefix }}
{% elif use_bastion_registry %}
network_prefix={{ controlplane_network | ipaddr('prefix') }}
{% endif %}
@@ -7,8 +7,7 @@
- network_yaml: "{{ lookup('template', 'nmstate.yml.j2') }}"
mac_interface_map: "{{ lookup('template', 'mac_interface_map.json.j2') }}"
when:
- lab in rh_labs
- use_bastion_registry
- lab in rh_labs and (use_bastion_registry or public_vlan)

- name: (Cloud lab) Assemble SNO static network config
set_fact:
@@ -17,6 +16,9 @@
mac_interface_map: "{{ lookup('template', 'ibmcloud_mac_interface_map.json.j2') }}"
when: lab in cloud_labs

- debug:
var: cluster_network_cidr
- debug:
Member:
Looks like an extra empty debug task made it here.

- name: Create SNO cluster
uri:
url: "http://{{ assisted_installer_host }}:{{ assisted_installer_port }}/api/assisted-install/v2/clusters"
@@ -25,7 +27,7 @@
status_code: [201]
return_content: true
body: {
"name": "{{ hostvars[item].inventory_hostname }}",
"name": "{{ cluster_name }}",
"openshift_version": "{{ openshift_version }}",
"high_availability_mode": "None",
"base_dns_domain": "{{ base_dns_name }}",
@@ -78,7 +80,7 @@
}
when: use_bastion_registry

- name: Patch infra-env for cloud labs with static network config
- name: Patch infra-env for labs with static network config
uri:
url: "http://{{ assisted_installer_host }}:{{ assisted_installer_port }}/api/assisted-install/v2/infra-envs/{{ ai_infraenv_id }}"
method: PATCH
@@ -88,7 +90,7 @@
body: {
"static_network_config": "{{ static_network_config }}"
}
when: lab in cloud_labs
when: lab in cloud_labs or public_vlan

- name: Append cluster ids
set_fact:
2 changes: 1 addition & 1 deletion ansible/roles/sno-create-ai-cluster/tasks/main.yml
@@ -27,7 +27,7 @@

- name: set machine network cidr for public vlan
set_fact:
machine_network_cidr: "{{ controlplane_pub_network_cidr }}"
machine_network_cidr: "{{ controlplane_network }}"
when:
- public_vlan | bool

3 changes: 0 additions & 3 deletions ansible/vars/all.sample.yml
@@ -74,9 +74,6 @@ use_bastion_registry: false
# Network configuration for all mno cluster nodes
controlplane_lab_interface: eno1np0

# Network configuration for public VLAN based sno cluster_type deployment
controlplane_pub_network_cidr:
controlplane_pub_network_gateway:
Comment on lines -77 to -79
Member:
Nice, I'd like to remove any other vars we can from `all.sample.yml` to reduce confusion. 👍

jumbo_mtu: false

################################################################################
3 changes: 0 additions & 3 deletions docs/deploy-mno-byol.md
@@ -325,9 +325,6 @@ use_bastion_registry: false
# Network configuration for all mno cluster nodes
controlplane_lab_interface: eno8303

# Network configuration for public VLAN based sno cluster_type deployment
controlplane_pub_network_cidr:
controlplane_pub_network_gateway:
jumbo_mtu: false

################################################################################
3 changes: 0 additions & 3 deletions docs/deploy-mno-performancelab.md
@@ -407,9 +407,6 @@ use_bastion_registry: false
# Network configuration for all mno cluster nodes
controlplane_lab_interface: eno8303

# Network configuration for public VLAN based sno cluster_type deployment
controlplane_pub_network_cidr:
controlplane_pub_network_gateway:
jumbo_mtu: false

################################################################################
3 changes: 0 additions & 3 deletions docs/deploy-mno-scalelab.md
@@ -404,9 +404,6 @@ use_bastion_registry: false
# Network configuration for all mno cluster nodes
controlplane_lab_interface: eno12399np0

# Network configuration for public VLAN based sno cluster_type deployment
controlplane_pub_network_cidr:
controlplane_pub_network_gateway:
jumbo_mtu: false

################################################################################
33 changes: 25 additions & 8 deletions docs/deploy-sno-performancelab.md
@@ -13,10 +13,17 @@ your cloud allocation on
_**Table of Contents**_

<!-- TOC -->
- [Bastion setup](#bastion-setup)
- [Configure Ansible vars in `all.yml`](#configure-ansible-vars-in-allyml)
- [Review all.yml](#review-allyml)
- [Run playbooks](#run-playbooks)
- [Deploy a Single Node OpenShift cluster via Jetlag quickstart](#deploy-a-single-node-openshift-cluster-via-jetlag-quickstart)
- [Bastion setup](#bastion-setup)
- [Configure Ansible vars in `all.yml`](#configure-ansible-vars-in-allyml)
- [Lab \& cluster infrastructure vars](#lab--cluster-infrastructure-vars)
- [Bastion node vars](#bastion-node-vars)
- [OCP node vars](#ocp-node-vars)
- [Deploy in the public VLAN](#deploy-in-the-public-vlan)
- [Extra vars](#extra-vars)
- [Disconnected and ipv6 vars](#disconnected-and-ipv6-vars)
- [Review `all.yml`](#review-allyml)
- [Run playbooks](#run-playbooks)
<!-- /TOC -->

<!-- Bastion setup is duplicated in multiple files and should be kept in sync!
@@ -299,6 +306,19 @@ For the guide we set our values for the Dell r750.
edit your generated inventory file to correct any nic names until this is
reasonably automated.

### Deploy in the public VLAN

To deploy a cluster on the public VLAN, set the variable `public_vlan` in `all.yml` to `true`. Once it is enabled, the following variables are configured automatically:

- `controlplane_network_interface_idx`: set to the index of the interface used for the public VLAN (the node's last interface)
- `base_dns_name`: set to `rdu3.labs.perfscale.redhat.com` in the generated inventory
- `controlplane_network`: public VLAN subnet
- `controlplane_network_prefix`: public VLAN network prefix length
- `controlplane_network_gateway`: public VLAN default gateway
- `cluster_name`: cluster name matching the pre-existing DNS records of the public VLAN, e.g. `vlan604`

When the deployment is completed, the cluster API and routes should be reachable directly from the VPN.
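
As a minimal sketch, the only change needed in `all.yml` is the toggle itself; the variables listed above are then derived for you (the comment below is illustrative):

```yaml
# all.yml (excerpt): host the SNO cluster on the lab's public routable VLAN.
# cluster_name, base_dns_name, controlplane_network, controlplane_network_prefix,
# controlplane_network_gateway and controlplane_network_interface_idx are derived
# by the create-inventory role and written into the generated inventory.
public_vlan: true
```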

### Extra vars

No extra vars are needed for an IPv4 SNO cluster.
@@ -366,7 +386,7 @@ networktype: OVNKubernetes
# Set this variable if you want to host your SNO cluster on lab public routable
# VLAN network, set this ONLY if you have public routable VLAN enabled in your
# scalelab cloud
# For mno clusters, enable this variable to autoconfigure controlplane_network_interface_idx,
# For mno and sno clusters, enable this variable to autoconfigure controlplane_network_interface_idx,
# base_dns_name, cluster_name, controlplane_network, network_prefix, gateway to the values
# required in the public VLAN
public_vlan: false
@@ -405,9 +425,6 @@ use_bastion_registry: false
# Network configuration for all mno cluster nodes
controlplane_lab_interface: eno8303

# Network configuration for public VLAN based sno cluster_type deployment
controlplane_pub_network_cidr:
controlplane_pub_network_gateway:
jumbo_mtu: false

################################################################################
33 changes: 25 additions & 8 deletions docs/deploy-sno-scalelab.md
@@ -6,10 +6,17 @@ Assuming you received a scale lab allocation named `cloud99`, this guide will wa
_**Table of Contents**_

<!-- TOC -->
- [Bastion setup](#bastion-setup)
- [Configure Ansible vars in `all.yml`](#configure-ansible-vars-in-allyml)
- [Review all.yml](#review-allyml)
- [Run playbooks](#run-playbooks)
- [Deploy a Single Node OpenShift cluster via Jetlag quickstart](#deploy-a-single-node-openshift-cluster-via-jetlag-quickstart)
- [Bastion setup](#bastion-setup)
- [Configure Ansible vars in `all.yml`](#configure-ansible-vars-in-allyml)
- [Lab \& cluster infrastructure vars](#lab--cluster-infrastructure-vars)
- [Bastion node vars](#bastion-node-vars)
- [OCP node vars](#ocp-node-vars)
- [Deploy in the public VLAN](#deploy-in-the-public-vlan)
- [Extra vars](#extra-vars)
- [Disconnected and ipv6 vars](#disconnected-and-ipv6-vars)
- [Review `all.yml`](#review-allyml)
- [Run playbooks](#run-playbooks)
<!-- /TOC -->

<!-- Bastion setup is duplicated in multiple files and should be kept in sync!
@@ -332,6 +339,19 @@ For the guide we set our values for the Supermicro 1029U.

** If your machine types are not homogeneous, then you will have to manually edit your generated inventory file to correct any nic names until this is reasonably automated.

### Deploy in the public VLAN

To deploy a cluster on the public VLAN, set the variable `public_vlan` in `all.yml` to `true`. Once it is enabled, the following variables are configured automatically:

- `controlplane_network_interface_idx`: set to the index of the interface used for the public VLAN (the node's last interface)
- `base_dns_name`: set to `rdu2.scalelab.redhat.com` in the generated inventory
- `controlplane_network`: public VLAN subnet
- `controlplane_network_prefix`: public VLAN network prefix length
- `controlplane_network_gateway`: public VLAN default gateway
- `cluster_name`: cluster name matching the pre-existing DNS records of the public VLAN, e.g. `vlan604`

When the deployment is completed, the cluster API and routes should be reachable directly from the VPN.
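
For illustration, with `public_vlan: true` the generated SNO inventory's `[all:vars]` section gains entries along these lines (the values shown are examples only, not taken from a real allocation):

```ini
# fragment rendered from inventory-sno.j2 with public_vlan enabled (illustrative values)
[all:vars]
cluster_name=vlan604
controlplane_network=10.1.48.0/23
base_dns_name=rdu2.scalelab.redhat.com
```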

### Extra vars

No extra vars are needed for an IPv4 SNO cluster.
@@ -398,7 +418,7 @@ networktype: OVNKubernetes
# Set this variable if you want to host your SNO cluster on lab public routable
# VLAN network, set this ONLY if you have public routable VLAN enabled in your
# scalelab cloud
# For mno clusters, enable this variable to autoconfigure controlplane_network_interface_idx,
# For mno and sno clusters, enable this variable to autoconfigure controlplane_network_interface_idx,
# base_dns_name, cluster_name, controlplane_network, network_prefix, gateway to the values
# required in the public VLAN
public_vlan: false
@@ -437,9 +457,6 @@ use_bastion_registry: false
# Network configuration for all mno cluster nodes
controlplane_lab_interface: eno1

# Network configuration for public VLAN based sno cluster_type deployment
controlplane_pub_network_cidr:
controlplane_pub_network_gateway:
jumbo_mtu: false

################################################################################