
SPIKE - AMI #120

Open
teddytpc1 opened this issue Nov 20, 2024 · 7 comments
Assignees
Labels
level/task Task issue type/enhancement Enhancement issue

Comments

@teddytpc1
Member

teddytpc1 commented Nov 20, 2024

Objective
https://github.com/wazuh/internal-devel-requests/issues/1319

Description

We need to analyze whether we can improve or simplify the AMI generation process. Aspects to analyze:

  • Time to build
  • Maintenance
  • Complexity
  • Configurations
  • Tools used to build

As a requirement, the AMI must not be built with the Installation Assistant.

Additionally, we need to design and implement DevOps-owned AMI testing. Currently, there are no tests for the AMI. The goal is to create a GitHub Action (GHA) workflow that serves both as a PR check and an on-demand testing tool. The GHA should validate:

  1. The successful deployment of the AMI.
  2. The status of its components (services running as expected).
  3. Main logs, scanning for errors or warnings (see the test sketch below).
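
To make this concrete, a minimal pytest-style sketch of these checks could look like the following. The host address, key path, service names, and log paths are placeholder assumptions rather than a final implementation; reaching the instance over SSH doubles as the deployment check (1):

# ami_smoke_test.py - hedged sketch; all connection details are placeholders
import os
import subprocess

HOST = "wazuh-user@<instance-ip>"  # hypothetical instance launched from the AMI
KEY = "~/.ssh/ami-test-key.pem"    # hypothetical key pair

SERVICES = ["wazuh-manager", "wazuh-indexer", "wazuh-dashboard"]
LOGS = ["/var/ossec/logs/ossec.log"]

def ssh(command: str) -> str:
    """Run a command on the instance and return its stdout."""
    result = subprocess.run(["ssh", "-i", os.path.expanduser(KEY), HOST, command],
                            capture_output=True, text=True, check=True)
    return result.stdout

def test_services_active():
    # (2) Every component should report "active" via systemd
    for service in SERVICES:
        assert ssh(f"systemctl is-active {service}").strip() == "active"

def test_logs_clean():
    # (3) Main logs should contain no error or warning entries
    for log in LOGS:
        output = ssh(f"grep -iE 'error|warning' {log} || true")
        assert output.strip() == "", f"Issues found in {log}:\n{output}"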

Implementation restrictions

  • Testing Environment: The tests must be implemented using GitHub Actions (GHA).
  • Compatibility: The workflow should be compatible with the environments used for PR testing and manual testing.
  • Logs Validation: Log checking must identify and report critical issues (e.g., errors, warnings) in a clear and actionable way.
  • Minimal Maintenance: The implementation should aim for low complexity and minimal maintenance overhead.
  • Allocator module: the allocator module should be used where applicable.

Plan

  1. Research & Analysis
  • Review the current AMI generation process, identifying areas for improvement in terms of time, complexity, and maintenance.
  • Analyze existing tools and configurations to determine potential simplifications.
  • Analyze the impact of installing the components without using the Installation Assistant and propose a new approach.
  2. Workflow Design
  • Define the key steps and criteria for validating AMI deployment.
  • Identify the components to monitor and logs to analyze for errors or warnings.
@teddytpc1 teddytpc1 added level/task Task issue type/enhancement Enhancement issue labels Nov 20, 2024
@wazuhci wazuhci moved this to Backlog in Release 5.0.0 Nov 20, 2024
@wazuhci wazuhci moved this from Backlog to In progress in Release 5.0.0 Nov 22, 2024
@Enaraque
Member

First approach

Several considerations have been made regarding the creation of the AMI and the associated tests.

Currently, the AMI is being created using Amazon Linux 2 (AL2), which will reach its end of support on June 30, 2025. One of the improvements to consider is transitioning to Amazon Linux 2023 (AL2023) to ensure continued support and a more secure operating system for AMI creation. This change was previously considered in past issues (wazuh/wazuh-packages#2986) but could not be implemented for various reasons. Further investigation is necessary to determine how to switch the AMI's operating system.

Tests Implementation

For implementing the tests, Python would be an excellent choice. Its modularity and scalability greatly benefit test writing, and it allows for a more robust and customizable logging system, making it easier to identify any issues that arise.

Moreover, Python has been the language used in other repositories (e.g., the allocator module). Therefore, it is the language with which we have the most knowledge, experience, and comfort, making it the ideal choice for writing the necessary tests.

AMI Creation

Regarding the AMI creation process itself, Ansible is currently being used to execute the Installation Assistant, which sets up the AIO infrastructure for the AMI. Given that the Installation Assistant logic may change in version 5.0.0, deploying Wazuh components for the AMI might become more complex. This could make Ansible less suitable for deployment, as creating and configuring each Wazuh component solely through Ansible would lack scalability and involve complex maintenance. This has already been observed with frequent updates requiring changes to various Ansible functions and modules.

For these reasons, as discussed in the testing section, we could consider using Python to implement the logic responsible for deploying Wazuh components in the AMI. This would provide the scalability and modularity that Ansible lacks, as well as significantly reduce maintenance costs.

Tip

This approach would also enable the creation of dedicated tests for the AMI configuration logic. In other words, beyond verifying the AMI's functionality by creating an instance, we would gain the flexibility to test each function in the module responsible for AMI creation. The goal would be to achieve test coverage that ensures the AMI is deployed correctly.
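
As a rough illustration of that idea, the deployment logic could be written as small functions with an injectable command runner, so each step can be unit-tested without launching an instance. The names below are hypothetical, not a proposed API:

# deploy.py - minimal sketch; the runner injection is an assumption made to
# keep the logic testable, not part of any existing module
import subprocess
from typing import Callable, List, Optional

def install_package(package: str,
                    runner: Optional[Callable[[List[str]], None]] = None) -> None:
    """Install one package; tests can inject a fake runner instead of yum."""
    run = runner or (lambda cmd: subprocess.run(cmd, check=True))
    run(["yum", "install", "-y", package])

def deploy_all(packages: List[str], runner=None) -> None:
    """Install each Wazuh component in order."""
    for package in packages:
        install_package(package, runner)

# Logic test: record the commands instead of executing them
def test_deploy_all_installs_in_order():
    calls = []
    deploy_all(["wazuh-indexer", "wazuh-manager"], runner=calls.append)
    assert calls == [["yum", "install", "-y", "wazuh-indexer"],
                     ["yum", "install", "-y", "wazuh-manager"]]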

GitHub Actions Integration

By implementing the changes mentioned above, creating the necessary GitHub Actions (GHA) workflows for testing would become a straightforward process. There would be no need to implement complex logic in any job steps. We would only need to call the testing framework, which would handle validating the code logic (ensuring the subsequent deployment succeeds) and verifying that the AMI functions correctly once deployed.

@Enaraque
Member

Update report

The process of migrating the operating system from AL2 to AL2023 has been under investigation. This effort has been conducted alongside the spike for the OVA, as it represents a joint improvement.

The research has been carried out in collaboration with @CarlosALgit, and more information can be found here.

Further testing is required to reach a definitive conclusion.

@Enaraque
Member

Update report

We have continued with the research and testing needed to create the AL2023 VM in MacStadium.

All the information can be seen in this comment.

Next steps

Once the research is finished and we have confirmed that the OVA deploys correctly, we will continue testing that a base AL2023 AMI can be used to build the Wazuh AMI.

@Enaraque
Member

Enaraque commented Dec 2, 2024

Update report

We have been testing the OVA to ensure it is created correctly for use in MacStadium.
More info here: #119 (comment)

On the other hand, I have been configuring the base AMI setup script for the Wazuh AMI. In this script, I perform various tasks:

  • Modify the cloud.cfg file to add the necessary details so that the default user is wazuh-user and to prevent the hostname from being modified every time the instance is launched.
  • Remove the ec2-user.
  • Edit the SSH configuration file to comment out the Port directive (#Port 22), leaving SSH on the default port 22.
  • Add the Wazuh logo to /etc/motd so it is displayed every time a session is initiated.
  • Perform a cleanup of everything necessary.

PoC

I execute the script on the instance.

Warning

The script must be run as the root user because it will remove the ec2-user. If run as ec2-user, it cannot be deleted since it would be in use by at least one process.

Script execution
$ sudo bash AMI_generate_base_ami.sh
+ CLOUD_CFG_PATH=/etc/cloud/cloud.cfg
+ SSH_CONFIG_PATH=/etc/ssh/sshd_config
+ WAZUH_USER=wazuh-user
+ WAZUH_PASSWORD=wazuh
+ SSH_PORT=22
+ WAZUH_LOGO='


wwwwww.           wwwwwww.          wwwwwww.
wwwwwww.          wwwwwww.          wwwwwww.
 wwwwww.         wwwwwwwww.        wwwwwww.
 wwwwwww.        wwwwwwwww.        wwwwwww.
  wwwwww.       wwwwwwwwwww.      wwwwwww.
  wwwwwww.      wwwwwwwwwww.      wwwwwww.
   wwwwww.     wwwwww.wwwwww.    wwwwwww.
   wwwwwww.    wwwww. wwwwww.    wwwwwww.
    wwwwww.   wwwwww.  wwwwww.  wwwwwww.
    wwwwwww.  wwwww.   wwwwww.  wwwwwww.
     wwwwww. wwwwww.    wwwwww.wwwwwww.
     wwwwwww.wwwww.     wwwwww.wwwwwww.
      wwwwwwwwwwww.      wwwwwwwwwwww.
      wwwwwwwwwww.       wwwwwwwwwwww.      oooooo
       wwwwwwwwww.        wwwwwwwwww.      oooooooo
       wwwwwwwww.         wwwwwwwwww.     oooooooooo
        wwwwwwww.          wwwwwwww.      oooooooooo
        wwwwwww.           wwwwwwww.       oooooooo
         wwwwww.            wwwwww.         oooooo


         WAZUH Open Source Security Platform
                  https://wazuh.com



'
+ modify_cloud_cfg
+ sed -i 's/gecos: .*$/gecos: WAZUH AMI/' /etc/cloud/cloud.cfg
+ sed -i 's/name: .*$/name: wazuh-user/' /etc/cloud/cloud.cfg
+ sed -i /set-hostname/d /etc/cloud/cloud.cfg
+ sed -i 's/update-hostname/preserve_hostname: true/' /etc/cloud/cloud.cfg
+ sudo cloud-init clean
+ sudo cloud-init init
Cloud-init v. 22.2.2 running 'init' at Mon, 02 Dec 2024 17:24:55 +0000. Up 2395.39 seconds.
ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
ci-info: |  ens5  | True |        172.31.92.101         | 255.255.240.0 | global | 12:9c:2f:ba:e2:e5 |
ci-info: |  ens5  | True | fe80::109c:2fff:feba:e2e5/64 |       .       |  link  | 12:9c:2f:ba:e2:e5 |
ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
ci-info: ++++++++++++++++++++++++++++++Route IPv4 info++++++++++++++++++++++++++++++
ci-info: +-------+-------------+-------------+-----------------+-----------+-------+
ci-info: | Route | Destination |   Gateway   |     Genmask     | Interface | Flags |
ci-info: +-------+-------------+-------------+-----------------+-----------+-------+
ci-info: |   0   |   0.0.0.0   | 172.31.80.1 |     0.0.0.0     |    ens5   |   UG  |
ci-info: |   1   |  172.31.0.2 | 172.31.80.1 | 255.255.255.255 |    ens5   |  UGH  |
ci-info: |   2   | 172.31.80.0 |   0.0.0.0   |  255.255.240.0  |    ens5   |   U   |
ci-info: |   3   | 172.31.80.1 |   0.0.0.0   | 255.255.255.255 |    ens5   |   UH  |
ci-info: +-------+-------------+-------------+-----------------+-----------+-------+
ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
ci-info: +-------+-------------+---------+-----------+-------+
ci-info: | Route | Destination | Gateway | Interface | Flags |
ci-info: +-------+-------------+---------+-----------+-------+
ci-info: |   0   |  fe80::/64  |    ::   |    ens5   |   U   |
ci-info: |   2   |    local    |    ::   |    ens5   |   U   |
ci-info: |   3   |  multicast  |    ::   |    ens5   |   U   |
ci-info: +-------+-------------+---------+-----------+-------+
2024-12-02 17:24:56,223 - schema.py[WARNING]: Invalid cloud-config provided: Please run 'sudo cloud-init schema --system' to see the schema errors.
Generating public/private ed25519 key pair.
Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
The key fingerprint is:
SHA256:0Elj7I23u1UFeFM3b4ObNT9FFX31YjENfu8bcRob1DU [email protected]
The key's randomart image is:
+--[ED25519 256]--+
|       .+    .=E%|
|       +.o  ..+*@|
|      ..oo   o==X|
|       .o o  o+=*|
|        S. . o+o+|
|          .  . *+|
|           .. o..|
|          ..    o|
|          ..   . |
+----[SHA256]-----+
Generating public/private ecdsa key pair.
Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
The key fingerprint is:
SHA256:jUs4J2+nmgK8a50QSh5oG7mryNgUyGJ0yN/5LvQLqq4 [email protected]
The key's randomart image is:
+---[ECDSA 256]---+
|                 |
|. .              |
|.+..             |
|+B+ . .. o       |
|*=*o o+ S .      |
|+++. ..* .       |
|  o=.o..+ .      |
|++o =.o+ o       |
|E=+o .++o        |
+----[SHA256]-----+
+ sudo cloud-init modules --mode=config
Cloud-init v. 22.2.2 running 'modules:config' at Mon, 02 Dec 2024 17:24:56 +0000. Up 2396.26 seconds.
+ sudo cloud-init modules --mode=final
Cloud-init v. 22.2.2 running 'modules:final' at Mon, 02 Dec 2024 17:24:56 +0000. Up 2396.60 seconds.
Cloud-init v. 22.2.2 finished at Mon, 02 Dec 2024 17:24:57 +0000. Datasource DataSourceEc2.  Up 2396.98 seconds
+ modify_hostname
+ sudo hostnamectl set-hostname wazuh-server
+ delete_ec2user
+ sudo userdel -r ec2-user
+ set_wazuh_logo
+ echo '


wwwwww.           wwwwwww.          wwwwwww.
wwwwwww.          wwwwwww.          wwwwwww.
 wwwwww.         wwwwwwwww.        wwwwwww.
 wwwwwww.        wwwwwwwww.        wwwwwww.
  wwwwww.       wwwwwwwwwww.      wwwwwww.
  wwwwwww.      wwwwwwwwwww.      wwwwwww.
   wwwwww.     wwwwww.wwwwww.    wwwwwww.
   wwwwwww.    wwwww. wwwwww.    wwwwwww.
    wwwwww.   wwwwww.  wwwwww.  wwwwwww.
    wwwwwww.  wwwww.   wwwwww.  wwwwwww.
     wwwwww. wwwwww.    wwwwww.wwwwwww.
     wwwwwww.wwwww.     wwwwww.wwwwwww.
      wwwwwwwwwwww.      wwwwwwwwwwww.
      wwwwwwwwwww.       wwwwwwwwwwww.      oooooo
       wwwwwwwwww.        wwwwwwwwww.      oooooooo
       wwwwwwwww.         wwwwwwwwww.     oooooooooo
        wwwwwwww.          wwwwwwww.      oooooooooo
        wwwwwww.           wwwwwwww.       oooooooo
         wwwwww.            wwwwww.         oooooo


         WAZUH Open Source Security Platform
                  https://wazuh.com



'
+ set_ssh_port
+ grep -q '^Port' /etc/ssh/sshd_config
++ grep '^Port' /etc/ssh/sshd_config
++ awk '{print $2}'
+ CURRENT_SSH_PORT=2200
+ '[' 2200 '!=' 22 ']'
+ sudo sed -i 's/^Port .*/#Port 22/' /etc/ssh/sshd_config
+ sudo systemctl restart sshd.service
+ clean_up
+ sudo yum clean all
0 files removed
+ sudo rm -rf /var/log/cloud-init-output.log /var/log/cloud-init.log /var/log/dnf.librepo.log /var/log/dnf.log /var/log/dnf.rpm.log /var/log/hawkey.log /var/log/lastlog /var/log/sa
+ sudo rm -rf /tmp/systemd-private-05c551df56d24ceab80db4488e938d66-systemd-hostnamed.service-Z5K0On /tmp/tmp.CLUc2wPwWV /tmp/tmp.EZF40LhgmJ /tmp/tmp.Gp9xNSqKLj /tmp/tmp.MzfuvGamY8 /tmp/tmp.RRAnJNcX56 /tmp/tmp.n6gv6NMHk7 /tmp/tmp.rflh3JIng1 /tmp/tmp.wRl5uvcEXl
+ sudo rm -rf '/var/cache/yum/*'
+ sudo rm /root/.ssh/authorized_keys
+ sudo yum autoremove
Amazon Linux 2023 repository                                                                                          65 MB/s |  29 MB     00:00    
Amazon Linux 2023 Kernel Livepatch repository                                                                         62 kB/s |  11 kB     00:00    
Dependencies resolved.
Nothing to do.
Complete!
+ sudo rm -rf '/root/.ssh/*'
+ cat /dev/null
+ history -c
+ exit

Now we can see that ec2-user does not exist:

$ cat /etc/passwd
root:x:0:0:root:/root:/bin/bash
bin:x:1:1:bin:/bin:/sbin/nologin
daemon:x:2:2:daemon:/sbin:/sbin/nologin
adm:x:3:4:adm:/var/adm:/sbin/nologin
lp:x:4:7:lp:/var/spool/lpd:/sbin/nologin
sync:x:5:0:sync:/sbin:/bin/sync
shutdown:x:6:0:shutdown:/sbin:/sbin/shutdown
halt:x:7:0:halt:/sbin:/sbin/halt
mail:x:8:12:mail:/var/spool/mail:/sbin/nologin
operator:x:11:0:operator:/root:/sbin/nologin
games:x:12:100:games:/usr/games:/sbin/nologin
ftp:x:14:50:FTP User:/var/ftp:/sbin/nologin
nobody:x:65534:65534:Kernel Overflow User:/:/sbin/nologin
dbus:x:81:81:System message bus:/:/sbin/nologin
systemd-network:x:192:192:systemd Network Management:/:/usr/sbin/nologin
systemd-oom:x:999:999:systemd Userspace OOM Killer:/:/usr/sbin/nologin
systemd-resolve:x:193:193:systemd Resolver:/:/usr/sbin/nologin
sshd:x:74:74:Privilege-separated SSH:/usr/share/empty.sshd:/sbin/nologin
rpc:x:32:32:Rpcbind Daemon:/var/lib/rpcbind:/sbin/nologin
libstoragemgmt:x:997:997:daemon account for libstoragemgmt:/:/usr/sbin/nologin
systemd-coredump:x:996:996:systemd Core Dumper:/:/usr/sbin/nologin
systemd-timesync:x:995:995:systemd Time Synchronization:/:/usr/sbin/nologin
chrony:x:994:994:chrony system user:/var/lib/chrony:/sbin/nologin
ec2-instance-connect:x:993:993::/home/ec2-instance-connect:/sbin/nologin
rpcuser:x:29:29:RPC Service User:/var/lib/nfs:/sbin/nologin
tcpdump:x:72:72::/:/sbin/nologin
wazuh-user:x:1001:1001:WAZUH AMI:/home/wazuh-user:/bin/bash

$ cat /etc/passwd | grep ec2
ec2-instance-connect:x:993:993::/home/ec2-instance-connect:/sbin/nologin

And when we access the VM, we can see the logo displayed:

$ ssh -i xxxxx -p 22 [email protected]



wwwwww.           wwwwwww.          wwwwwww.
wwwwwww.          wwwwwww.          wwwwwww.
 wwwwww.         wwwwwwwww.        wwwwwww.
 wwwwwww.        wwwwwwwww.        wwwwwww.
  wwwwww.       wwwwwwwwwww.      wwwwwww.
  wwwwwww.      wwwwwwwwwww.      wwwwwww.
   wwwwww.     wwwwww.wwwwww.    wwwwwww.
   wwwwwww.    wwwww. wwwwww.    wwwwwww.
    wwwwww.   wwwwww.  wwwwww.  wwwwwww.
    wwwwwww.  wwwww.   wwwwww.  wwwwwww.
     wwwwww. wwwwww.    wwwwww.wwwwwww.
     wwwwwww.wwwww.     wwwwww.wwwwwww.
      wwwwwwwwwwww.      wwwwwwwwwwww.
      wwwwwwwwwww.       wwwwwwwwwwww.      oooooo
       wwwwwwwwww.        wwwwwwwwww.      oooooooo
       wwwwwwwww.         wwwwwwwwww.     oooooooooo
        wwwwwwww.          wwwwwwww.      oooooooooo
        wwwwwww.           wwwwwwww.       oooooooo
         wwwwww.            wwwwww.         oooooo


         WAZUH Open Source Security Platform
                  https://wazuh.com


[wazuh-user@wazuh-server ~]$

Configuration Script

Script
#!/bin/bash
# Bash (not POSIX sh) is required: "set -euxo pipefail" and the "function" keyword are bashisms

set -euxo pipefail

# Define paths
CLOUD_CFG_PATH="/etc/cloud/cloud.cfg"
SSH_CONFIG_PATH="/etc/ssh/sshd_config"

# Define user and password
WAZUH_USER="wazuh-user"
WAZUH_PASSWORD="wazuh"

# Define SSH port
SSH_PORT="22"

WAZUH_LOGO="


wwwwww.           wwwwwww.          wwwwwww.
wwwwwww.          wwwwwww.          wwwwwww.
 wwwwww.         wwwwwwwww.        wwwwwww.
 wwwwwww.        wwwwwwwww.        wwwwwww.
  wwwwww.       wwwwwwwwwww.      wwwwwww.
  wwwwwww.      wwwwwwwwwww.      wwwwwww.
   wwwwww.     wwwwww.wwwwww.    wwwwwww.
   wwwwwww.    wwwww. wwwwww.    wwwwwww.
    wwwwww.   wwwwww.  wwwwww.  wwwwwww.
    wwwwwww.  wwwww.   wwwwww.  wwwwwww.
     wwwwww. wwwwww.    wwwwww.wwwwwww.
     wwwwwww.wwwww.     wwwwww.wwwwwww.
      wwwwwwwwwwww.      wwwwwwwwwwww.
      wwwwwwwwwww.       wwwwwwwwwwww.      oooooo
       wwwwwwwwww.        wwwwwwwwww.      oooooooo
       wwwwwwwww.         wwwwwwwwww.     oooooooooo
        wwwwwwww.          wwwwwwww.      oooooooooo
        wwwwwww.           wwwwwwww.       oooooooo
         wwwwww.            wwwwww.         oooooo


         WAZUH Open Source Security Platform
                  https://wazuh.com



"

# Point cloud-init's default user at wazuh-user, keep the hostname stable
# across launches, then re-run cloud-init so the changes take effect
function modify_cloud_cfg() {
    sed -i "s/gecos: .*$/gecos: WAZUH AMI/" "$CLOUD_CFG_PATH"
    sed -i "s/name: .*$/name: $WAZUH_USER/" "$CLOUD_CFG_PATH"
    sed -i "/set-hostname/d" "$CLOUD_CFG_PATH"
    sed -i "s/update-hostname/preserve_hostname: true/" "$CLOUD_CFG_PATH"

    sudo cloud-init clean
    sudo cloud-init init
    sudo cloud-init modules --mode=config
    sudo cloud-init modules --mode=final
}

function modify_hostname() {
    sudo hostnamectl set-hostname wazuh-server
}

# Must run as root: ec2-user cannot delete itself while it owns processes;
# "|| true" keeps the script going if the user is already gone
function delete_ec2user() {
    sudo userdel -r ec2-user || true
}

# Comment out any custom Port directive so SSH falls back to the default port 22
function set_ssh_port {
    if grep -q '^Port' "${SSH_CONFIG_PATH}"; then
        CURRENT_SSH_PORT=$(grep '^Port' "${SSH_CONFIG_PATH}" | awk '{print $2}')
        if [ "$CURRENT_SSH_PORT" != "$SSH_PORT" ]; then
            sudo sed -i "s/^Port .*/#Port $SSH_PORT/" "${SSH_CONFIG_PATH}"
            sudo systemctl restart sshd.service
        fi
    fi
}

# Display the Wazuh banner on every login
function set_wazuh_logo() {
    echo "$WAZUH_LOGO" > /etc/motd
}

# Remove package caches, logs, temporary files, SSH material, and shell
# history so they are not baked into the AMI
function clean_up() {
    sudo yum clean all
    sudo rm -rf /var/log/*
    sudo rm -rf /tmp/*
    sudo rm -rf /var/cache/yum/*
    sudo rm -f ~/.ssh/*
    sudo yum autoremove -y
    sudo rm -rf /root/.ssh/*
    # Clear both histories before finishing; chaining "&& exit" after the
    # first line (as before) skipped the second history file
    cat /dev/null > /root/.bash_history
    cat /dev/null > ~/.bash_history
    history -c
}

modify_cloud_cfg
modify_hostname
delete_ec2user
set_wazuh_logo
set_ssh_port
clean_up

@Enaraque
Member

Enaraque commented Dec 4, 2024

Update report

Having confirmed that we can successfully create a base AMI for the Wazuh AMI, we have continued with the following investigations:

Verifying the Use of AL2023 for the OVA

As previously discussed in update reports, we have been investigating the possibility of using AL2023 as the base image for the OVA. This would provide consistency by using the same operating system for both the OVA and the AMI. We encountered an issue where, after creating the test OVA, a network interface was left in a "down" state.

After numerous trials and testing various configurations, it seems we have found a solution by adding a custom network interface in Cloud-Init (Cloud-Init network configuration documentation).
More details about this can be found here.

Deploying Wazuh Components Using Python

Another key area of investigation is determining whether it is feasible and efficient to handle the installation logic of the different Wazuh components using Python. Currently, this process is managed through playbooks and the Installation Assistant (a tool that will not be used in version 5.0.0).

For this, we have been exploring the possibility of using the provisioning module from the wazuh-qa repository (or creating a custom one tailored to our needs). Unlike the current approach of installing Wazuh in the AMI and OVA, this module uses Jinja2 templates, providing significant flexibility in installing any Wazuh component. This approach removes the dependency on the Installation Assistant and allows us to specify which component(s) to install.

Additionally, since it is written in Python, it enables us to create our own tests and verify that everything is set up correctly.
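
As an illustration of the template-driven approach, a minimal Jinja2 sketch might look like this (the template string and variables are illustrative assumptions, not the actual wazuh-qa module):

# provision.py - illustrative sketch using Jinja2; not the real provisioning module
from jinja2 import Template

INSTALL_TEMPLATE = Template("yum install -y {{ component }}-{{ version }}")

def render_install_command(component: str, version: str) -> str:
    """Render the install command for a single Wazuh component."""
    return INSTALL_TEMPLATE.render(component=component, version=version)

# Example: render_install_command("wazuh-manager", "5.0.0")
# -> "yum install -y wazuh-manager-5.0.0"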

@Enaraque
Member

Update report

Definition of the Different Components

To create the OVA and the AMI, we will need several components essential for both.

First, we will need the allocator (from wazuh-automation). This component will handle the creation of the necessary instance.

Once the instance is created, we will need the provisioner. This component will be responsible for installing the required dependencies on the instance, including Wazuh components. It will also be used to uninstall packages (similar to the one found in wazuh-qa).

With this, the instance will be created and everything necessary will be installed. The remaining task will be to configure the Wazuh components to ensure communication between them.

For this, a module will need to be created to handle the configuration of each component. This module could be called configurer.

In summary, we will have three main modules:

  • allocator: Creates instances (in wazuh-automation).
  • provisioner: Handles the installation of necessary components on the instances.
  • configurer: Responsible for configuring the Wazuh components to communicate with each other (a sketch of how the three modules chain together is shown below).
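
A rough sketch of how the three modules could chain together follows; every name and signature here is an assumption rather than a final API:

# pipeline.py - hypothetical orchestration sketch; none of these interfaces exist yet
from dataclasses import dataclass
from typing import List, Protocol

@dataclass
class Instance:
    host: str

class Allocator(Protocol):
    def create(self, instance_type: str) -> Instance: ...

class Provisioner(Protocol):
    def install(self, instance: Instance, components: List[str]) -> None: ...

class Configurer(Protocol):
    def configure(self, instance: Instance, components: List[str]) -> None: ...

def build_image(allocator: Allocator, provisioner: Provisioner,
                configurer: Configurer, components: List[str]) -> None:
    """Allocate an instance, install the components, then wire them together."""
    instance = allocator.create("t3.medium")  # instance type is a placeholder
    provisioner.install(instance, components)
    configurer.configure(instance, components)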

Testing

We will have one module dedicated to testing the provisioner and another dedicated to testing the configurer.

Each module should have two parts:

  • Logic testing: Ensures that the logic behaves as expected. Depending on the input data, it checks that the correct functions are called, the necessary data is generated, etc. (a mock-based sketch follows after this list).
  • VM functionality testing: Ensures that, once the VM is created and configured, everything works as it should:
    • The provisioner installs the requested packages correctly.
    • The configurer sets up all the components correctly. This includes verifying the state, checking for error messages in logs, ensuring proper API connectivity, verifying certificates are created, and confirming correct connectivity with Filebeat.
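
For the logic-testing part, a mock-based sketch (reusing the hypothetical build_image() pipeline sketched earlier) could verify that the right calls happen in the right order without creating any VM:

# test_pipeline_logic.py - sketch of "logic testing" with mocks; assumes the
# hypothetical build_image() from the earlier pipeline sketch
from unittest.mock import MagicMock

from pipeline import build_image  # the sketch shown in the previous comment

def test_pipeline_calls_modules_in_order():
    allocator, provisioner, configurer = MagicMock(), MagicMock(), MagicMock()
    build_image(allocator, provisioner, configurer, ["wazuh-manager"])

    allocator.create.assert_called_once()
    instance = allocator.create.return_value
    provisioner.install.assert_called_once_with(instance, ["wazuh-manager"])
    configurer.configure.assert_called_once_with(instance, ["wazuh-manager"])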

To Be Defined

Further research is still needed on the configurer to ensure it can be used for both the OVA and the AMI. Additionally, we need to investigate whether the current Bash scripts can be integrated into Python to facilitate testing.

@Enaraque
Copy link
Member

Update report

In addition to being responsible for installing Wazuh components and dependencies, the provisioner module will also handle the cert-tool installation.

In the configurer module, we will define three main submodules:

  • Assisted Installation: This submodule is responsible for configuring everything that was previously managed by the Installation Assistant. It will be used by both the OVA and the AMI, as both previously relied on the Installation Assistant.

  • OVA: The OVA currently includes various configuration files written in Bash. These configuration files will be migrated to Python to improve maintainability and testing.

  • AMI: Similarly to the OVA, the AMI has configuration files. These files will also be migrated to Python for better maintainability and testing.

Regarding the playbooks used for generating the OVA and AMI, they will be incorporated into the corresponding submodule of the configurer. The tasks in these playbooks related to configuring components will be migrated to Python, making the playbooks much easier to use and maintain while also simplifying the testing of configurations.
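
One way the configurer submodules could be laid out (class names are illustrative only, not a final design):

# configurer package sketch - illustrative class layout
from abc import ABC, abstractmethod

class BaseConfigurer(ABC):
    """Steps previously handled by the Installation Assistant, shared by OVA and AMI."""

    @abstractmethod
    def configure(self) -> None:
        ...

class AMIConfigurer(BaseConfigurer):
    def configure(self) -> None:
        # AMI-specific steps migrated from the current Bash scripts
        pass

class OVAConfigurer(BaseConfigurer):
    def configure(self) -> None:
        # OVA-specific steps migrated from the current Bash scripts
        pass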

@wazuhci wazuhci moved this from In progress to Pending review in Release 5.0.0 Dec 13, 2024