# SPIKE - AMI #120
### First approach

Several considerations have been made regarding the creation of the AMI and the associated tests. Currently, the AMI is being created using Amazon Linux 2 (AL2), which will reach its end of support soon, on June 30, 2025. One improvement to consider is transitioning to Amazon Linux 2023 (AL2023) to ensure continued support and leverage a more secure operating system for AMI creation. This change was previously considered in past issues (wazuh/wazuh-packages#2986) but could not be implemented for various reasons. Further investigation is necessary to determine how to switch the AMI's operating system.

### Tests implementation

For implementing the tests, Python would be an excellent choice. One of the main reasons to choose Python is its high scalability and modularity, which greatly benefit test writing. Additionally, it allows for a more robust and customizable logging system, making it easier to identify any issues that arise. Moreover, Python has been the language used in other repositories (e.g., the allocator module), so it is the language with which we have the most knowledge, experience, and comfort, making it the ideal choice for writing the necessary tests.

### AMI creation

Regarding the AMI creation process itself, Ansible is currently used to execute the Installation Assistant, which sets up the AIO infrastructure for the AMI. The Installation Assistant logic may change in version …

For these reasons, as discussed in the testing section, we could consider using Python to implement the logic responsible for deploying Wazuh components in the AMI. This would provide the scalability and modularity that Ansible lacks, as well as significantly reduce maintenance costs.

> **Tip**
> This approach would also enable the creation of dedicated tests for the AMI configuration logic.
In other words, beyond verifying the AMI's functionality by creating an instance, we would gain the flexibility to test each function in the module responsible for AMI creation. The goal would be to achieve test coverage that ensures the AMI is deployed correctly.

### GitHub Actions integration

By implementing the changes mentioned above, creating the necessary GitHub Actions (GHA) workflows for testing would become straightforward. There would be no need to implement complex logic in any job steps. We would only need to call the testing framework, which would handle validating the code logic (ensuring the subsequent deployment succeeds) and verifying that the AMI functions correctly once deployed.
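As a rough sketch of the kind of small, self-contained unit test such a framework could run, consider checking the effective SSH port of the generated image. The helper and test names below are hypothetical illustrations, not existing Wazuh code:

```python
# Hypothetical helper an AMI test framework might expose; names are illustrative.
def get_ssh_port(sshd_config: str, default: int = 22) -> int:
    """Return the effective SSH port declared in an sshd_config text."""
    for line in sshd_config.splitlines():
        if line.strip().startswith("Port "):
            return int(line.split()[1])
    # A commented-out or absent Port directive falls back to the default.
    return default

def test_commented_port_falls_back_to_default():
    assert get_ssh_port("#Port 2200\nPermitRootLogin no\n") == 22

def test_explicit_port_is_detected():
    assert get_ssh_port("Port 2200\n") == 2200
```

A GHA job step would then only need to invoke the test runner (e.g., `pytest`), keeping all validation logic out of the workflow file itself.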
### Update report

The migration of the operating system from AL2 to AL2023 has been under investigation. This effort has been conducted alongside the spike for the OVA, as it represents a joint improvement. The research has been carried out in collaboration with @CarlosALgit, and more information can be found here. Further testing is required to reach a definitive conclusion.
### Update report

We have continued the research and testing needed to create the AL2023 VM in MacStadium. All the information can be seen in this comment.

#### Next steps

Once the research is finished and we can see that the OVA can be deployed correctly, we will continue with the tests to verify that we can create a base AMI with AL2023 for the creation of the Wazuh AMI.
### Update report

We have been testing the OVA to ensure it is created correctly for use in MacStadium. In parallel, I have been configuring the base AMI setup script for the Wazuh AMI. In this script, I perform various tasks:
#### PoC

I execute the script on the instance.

> **Warning**
> The script must be run as the …

Script execution:

$ sudo bash AMI_generate_base_ami.sh
+ CLOUD_CFG_PATH=/etc/cloud/cloud.cfg
+ SSH_CONFIG_PATH=/etc/ssh/sshd_config
+ WAZUH_USER=wazuh-user
+ WAZUH_PASSWORD=wazuh
+ SSH_PORT=22
+ WAZUH_LOGO='
wwwwww. wwwwwww. wwwwwww.
wwwwwww. wwwwwww. wwwwwww.
wwwwww. wwwwwwwww. wwwwwww.
wwwwwww. wwwwwwwww. wwwwwww.
wwwwww. wwwwwwwwwww. wwwwwww.
wwwwwww. wwwwwwwwwww. wwwwwww.
wwwwww. wwwwww.wwwwww. wwwwwww.
wwwwwww. wwwww. wwwwww. wwwwwww.
wwwwww. wwwwww. wwwwww. wwwwwww.
wwwwwww. wwwww. wwwwww. wwwwwww.
wwwwww. wwwwww. wwwwww.wwwwwww.
wwwwwww.wwwww. wwwwww.wwwwwww.
wwwwwwwwwwww. wwwwwwwwwwww.
wwwwwwwwwww. wwwwwwwwwwww. oooooo
wwwwwwwwww. wwwwwwwwww. oooooooo
wwwwwwwww. wwwwwwwwww. oooooooooo
wwwwwwww. wwwwwwww. oooooooooo
wwwwwww. wwwwwwww. oooooooo
wwwwww. wwwwww. oooooo
WAZUH Open Source Security Platform
https://wazuh.com
'
+ modify_cloud_cfg
+ sed -i 's/gecos: .*$/gecos: WAZUH AMI/' /etc/cloud/cloud.cfg
+ sed -i 's/name: .*$/name: wazuh-user/' /etc/cloud/cloud.cfg
+ sed -i /set-hostname/d /etc/cloud/cloud.cfg
+ sed -i 's/update-hostname/preserve_hostname: true/' /etc/cloud/cloud.cfg
+ sudo cloud-init clean
+ sudo cloud-init init
Cloud-init v. 22.2.2 running 'init' at Mon, 02 Dec 2024 17:24:55 +0000. Up 2395.39 seconds.
ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
ci-info: | Device | Up | Address | Mask | Scope | Hw-Address |
ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
ci-info: | ens5 | True | 172.31.92.101 | 255.255.240.0 | global | 12:9c:2f:ba:e2:e5 |
ci-info: | ens5 | True | fe80::109c:2fff:feba:e2e5/64 | . | link | 12:9c:2f:ba:e2:e5 |
ci-info: | lo | True | 127.0.0.1 | 255.0.0.0 | host | . |
ci-info: | lo | True | ::1/128 | . | host | . |
ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
ci-info: ++++++++++++++++++++++++++++++Route IPv4 info++++++++++++++++++++++++++++++
ci-info: +-------+-------------+-------------+-----------------+-----------+-------+
ci-info: | Route | Destination | Gateway | Genmask | Interface | Flags |
ci-info: +-------+-------------+-------------+-----------------+-----------+-------+
ci-info: | 0 | 0.0.0.0 | 172.31.80.1 | 0.0.0.0 | ens5 | UG |
ci-info: | 1 | 172.31.0.2 | 172.31.80.1 | 255.255.255.255 | ens5 | UGH |
ci-info: | 2 | 172.31.80.0 | 0.0.0.0 | 255.255.240.0 | ens5 | U |
ci-info: | 3 | 172.31.80.1 | 0.0.0.0 | 255.255.255.255 | ens5 | UH |
ci-info: +-------+-------------+-------------+-----------------+-----------+-------+
ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
ci-info: +-------+-------------+---------+-----------+-------+
ci-info: | Route | Destination | Gateway | Interface | Flags |
ci-info: +-------+-------------+---------+-----------+-------+
ci-info: | 0 | fe80::/64 | :: | ens5 | U |
ci-info: | 2 | local | :: | ens5 | U |
ci-info: | 3 | multicast | :: | ens5 | U |
ci-info: +-------+-------------+---------+-----------+-------+
2024-12-02 17:24:56,223 - schema.py[WARNING]: Invalid cloud-config provided: Please run 'sudo cloud-init schema --system' to see the schema errors.
Generating public/private ed25519 key pair.
Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
The key fingerprint is:
SHA256:0Elj7I23u1UFeFM3b4ObNT9FFX31YjENfu8bcRob1DU [email protected]
The key's randomart image is:
+--[ED25519 256]--+
| .+ .=E%|
| +.o ..+*@|
| ..oo o==X|
| .o o o+=*|
| S. . o+o+|
| . . *+|
| .. o..|
| .. o|
| .. . |
+----[SHA256]-----+
Generating public/private ecdsa key pair.
Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
The key fingerprint is:
SHA256:jUs4J2+nmgK8a50QSh5oG7mryNgUyGJ0yN/5LvQLqq4 [email protected]
The key's randomart image is:
+---[ECDSA 256]---+
| |
|. . |
|.+.. |
|+B+ . .. o |
|*=*o o+ S . |
|+++. ..* . |
| o=.o..+ . |
|++o =.o+ o |
|E=+o .++o |
+----[SHA256]-----+
+ sudo cloud-init modules --mode=config
Cloud-init v. 22.2.2 running 'modules:config' at Mon, 02 Dec 2024 17:24:56 +0000. Up 2396.26 seconds.
+ sudo cloud-init modules --mode=final
Cloud-init v. 22.2.2 running 'modules:final' at Mon, 02 Dec 2024 17:24:56 +0000. Up 2396.60 seconds.
Cloud-init v. 22.2.2 finished at Mon, 02 Dec 2024 17:24:57 +0000. Datasource DataSourceEc2. Up 2396.98 seconds
+ modify_hostname
+ sudo hostnamectl set-hostname wazuh-server
+ delete_ec2user
+ sudo userdel -r ec2-user
+ set_wazuh_logo
+ echo '
wwwwww. wwwwwww. wwwwwww.
wwwwwww. wwwwwww. wwwwwww.
wwwwww. wwwwwwwww. wwwwwww.
wwwwwww. wwwwwwwww. wwwwwww.
wwwwww. wwwwwwwwwww. wwwwwww.
wwwwwww. wwwwwwwwwww. wwwwwww.
wwwwww. wwwwww.wwwwww. wwwwwww.
wwwwwww. wwwww. wwwwww. wwwwwww.
wwwwww. wwwwww. wwwwww. wwwwwww.
wwwwwww. wwwww. wwwwww. wwwwwww.
wwwwww. wwwwww. wwwwww.wwwwwww.
wwwwwww.wwwww. wwwwww.wwwwwww.
wwwwwwwwwwww. wwwwwwwwwwww.
wwwwwwwwwww. wwwwwwwwwwww. oooooo
wwwwwwwwww. wwwwwwwwww. oooooooo
wwwwwwwww. wwwwwwwwww. oooooooooo
wwwwwwww. wwwwwwww. oooooooooo
wwwwwww. wwwwwwww. oooooooo
wwwwww. wwwwww. oooooo
WAZUH Open Source Security Platform
https://wazuh.com
'
+ set_ssh_port
+ grep -q '^Port' /etc/ssh/sshd_config
++ grep '^Port' /etc/ssh/sshd_config
++ awk '{print $2}'
+ CURRENT_SSH_PORT=2200
+ '[' 2200 '!=' 22 ']'
+ sudo sed -i 's/^Port .*/#Port 22/' /etc/ssh/sshd_config
+ sudo systemctl restart sshd.service
+ clean_up
+ sudo yum clean all
0 files removed
+ sudo rm -rf /var/log/cloud-init-output.log /var/log/cloud-init.log /var/log/dnf.librepo.log /var/log/dnf.log /var/log/dnf.rpm.log /var/log/hawkey.log /var/log/lastlog /var/log/sa
+ sudo rm -rf /tmp/systemd-private-05c551df56d24ceab80db4488e938d66-systemd-hostnamed.service-Z5K0On /tmp/tmp.CLUc2wPwWV /tmp/tmp.EZF40LhgmJ /tmp/tmp.Gp9xNSqKLj /tmp/tmp.MzfuvGamY8 /tmp/tmp.RRAnJNcX56 /tmp/tmp.n6gv6NMHk7 /tmp/tmp.rflh3JIng1 /tmp/tmp.wRl5uvcEXl
+ sudo rm -rf '/var/cache/yum/*'
+ sudo rm /root/.ssh/authorized_keys
+ sudo yum autoremove
Amazon Linux 2023 repository 65 MB/s | 29 MB 00:00
Amazon Linux 2023 Kernel Livepatch repository 62 kB/s | 11 kB 00:00
Dependencies resolved.
Nothing to do.
Complete!
+ sudo rm -rf '/root/.ssh/*'
+ cat /dev/null
+ history -c
+ exit

Now we can see that ec2-user does not exist:

$ cat /etc/passwd
root:x:0:0:root:/root:/bin/bash
bin:x:1:1:bin:/bin:/sbin/nologin
daemon:x:2:2:daemon:/sbin:/sbin/nologin
adm:x:3:4:adm:/var/adm:/sbin/nologin
lp:x:4:7:lp:/var/spool/lpd:/sbin/nologin
sync:x:5:0:sync:/sbin:/bin/sync
shutdown:x:6:0:shutdown:/sbin:/sbin/shutdown
halt:x:7:0:halt:/sbin:/sbin/halt
mail:x:8:12:mail:/var/spool/mail:/sbin/nologin
operator:x:11:0:operator:/root:/sbin/nologin
games:x:12:100:games:/usr/games:/sbin/nologin
ftp:x:14:50:FTP User:/var/ftp:/sbin/nologin
nobody:x:65534:65534:Kernel Overflow User:/:/sbin/nologin
dbus:x:81:81:System message bus:/:/sbin/nologin
systemd-network:x:192:192:systemd Network Management:/:/usr/sbin/nologin
systemd-oom:x:999:999:systemd Userspace OOM Killer:/:/usr/sbin/nologin
systemd-resolve:x:193:193:systemd Resolver:/:/usr/sbin/nologin
sshd:x:74:74:Privilege-separated SSH:/usr/share/empty.sshd:/sbin/nologin
rpc:x:32:32:Rpcbind Daemon:/var/lib/rpcbind:/sbin/nologin
libstoragemgmt:x:997:997:daemon account for libstoragemgmt:/:/usr/sbin/nologin
systemd-coredump:x:996:996:systemd Core Dumper:/:/usr/sbin/nologin
systemd-timesync:x:995:995:systemd Time Synchronization:/:/usr/sbin/nologin
chrony:x:994:994:chrony system user:/var/lib/chrony:/sbin/nologin
ec2-instance-connect:x:993:993::/home/ec2-instance-connect:/sbin/nologin
rpcuser:x:29:29:RPC Service User:/var/lib/nfs:/sbin/nologin
tcpdump:x:72:72::/:/sbin/nologin
wazuh-user:x:1001:1001:WAZUH AMI:/home/wazuh-user:/bin/bash
$ cat /etc/passwd | grep ec2
ec2-instance-connect:x:993:993::/home/ec2-instance-connect:/sbin/nologin

And when we access the VM, we can see the logo displayed:

$ ssh -i xxxxx -p 22 [email protected]
wwwwww. wwwwwww. wwwwwww.
wwwwwww. wwwwwww. wwwwwww.
wwwwww. wwwwwwwww. wwwwwww.
wwwwwww. wwwwwwwww. wwwwwww.
wwwwww. wwwwwwwwwww. wwwwwww.
wwwwwww. wwwwwwwwwww. wwwwwww.
wwwwww. wwwwww.wwwwww. wwwwwww.
wwwwwww. wwwww. wwwwww. wwwwwww.
wwwwww. wwwwww. wwwwww. wwwwwww.
wwwwwww. wwwww. wwwwww. wwwwwww.
wwwwww. wwwwww. wwwwww.wwwwwww.
wwwwwww.wwwww. wwwwww.wwwwwww.
wwwwwwwwwwww. wwwwwwwwwwww.
wwwwwwwwwww. wwwwwwwwwwww. oooooo
wwwwwwwwww. wwwwwwwwww. oooooooo
wwwwwwwww. wwwwwwwwww. oooooooooo
wwwwwwww. wwwwwwww. oooooooooo
wwwwwww. wwwwwwww. oooooooo
wwwwww. wwwwww. oooooo
WAZUH Open Source Security Platform
https://wazuh.com
[wazuh-user@wazuh-server ~]$

#### Configuration Script

#!/bin/bash
# bash (not sh) is required: the script uses "set -o pipefail" and the "function" keyword.
set -euxo pipefail
# Define paths
CLOUD_CFG_PATH="/etc/cloud/cloud.cfg"
SSH_CONFIG_PATH="/etc/ssh/sshd_config"
# Define user and password
WAZUH_USER="wazuh-user"
WAZUH_PASSWORD="wazuh"
# Define SSH port
SSH_PORT="22"
WAZUH_LOGO="
wwwwww. wwwwwww. wwwwwww.
wwwwwww. wwwwwww. wwwwwww.
wwwwww. wwwwwwwww. wwwwwww.
wwwwwww. wwwwwwwww. wwwwwww.
wwwwww. wwwwwwwwwww. wwwwwww.
wwwwwww. wwwwwwwwwww. wwwwwww.
wwwwww. wwwwww.wwwwww. wwwwwww.
wwwwwww. wwwww. wwwwww. wwwwwww.
wwwwww. wwwwww. wwwwww. wwwwwww.
wwwwwww. wwwww. wwwwww. wwwwwww.
wwwwww. wwwwww. wwwwww.wwwwwww.
wwwwwww.wwwww. wwwwww.wwwwwww.
wwwwwwwwwwww. wwwwwwwwwwww.
wwwwwwwwwww. wwwwwwwwwwww. oooooo
wwwwwwwwww. wwwwwwwwww. oooooooo
wwwwwwwww. wwwwwwwwww. oooooooooo
wwwwwwww. wwwwwwww. oooooooooo
wwwwwww. wwwwwwww. oooooooo
wwwwww. wwwwww. oooooo
WAZUH Open Source Security Platform
https://wazuh.com
"
function modify_cloud_cfg() {
sed -i "s/gecos: .*$/gecos: WAZUH AMI/" "$CLOUD_CFG_PATH"
sed -i "s/name: .*$/name: $WAZUH_USER/" "$CLOUD_CFG_PATH"
sed -i "/set-hostname/d" "$CLOUD_CFG_PATH"
sed -i "s/update-hostname/preserve_hostname: true/" "$CLOUD_CFG_PATH"
sudo cloud-init clean
sudo cloud-init init
sudo cloud-init modules --mode=config
sudo cloud-init modules --mode=final
}
function modify_hostname() {
sudo hostnamectl set-hostname wazuh-server
}
function delete_ec2user() {
sudo userdel -r ec2-user || true
}
function set_ssh_port {
if grep -q '^Port' "${SSH_CONFIG_PATH}"; then
CURRENT_SSH_PORT=$(grep '^Port' "${SSH_CONFIG_PATH}" | awk '{print $2}')
if [ "$CURRENT_SSH_PORT" != "$SSH_PORT" ]; then
sudo sed -i "s/^Port .*/#Port $SSH_PORT/" "${SSH_CONFIG_PATH}"
sudo systemctl restart sshd.service
fi
fi
}
function set_wazuh_logo() {
echo "$WAZUH_LOGO" > /etc/motd
}
function clean_up() {
sudo yum clean all
sudo rm -rf /var/log/*
sudo rm -rf /tmp/*
sudo rm -rf /var/cache/yum/*
sudo rm ~/.ssh/*
sudo yum autoremove
sudo rm -rf /root/.ssh/*
# Clear both histories before exiting; an "exit" after the first line
# would terminate the script and leave the second history untouched.
cat /dev/null > /root/.bash_history
cat /dev/null > ~/.bash_history
history -c
}
modify_cloud_cfg
modify_hostname
delete_ec2user
set_wazuh_logo
set_ssh_port
clean_up
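If this logic were migrated to Python, as proposed above, the `modify_cloud_cfg` sed edits could become a pure function that is straightforward to unit-test. This is a sketch under that assumption, not existing code:

```python
import re

def modify_cloud_cfg(text: str, user: str = "wazuh-user") -> str:
    """Apply the same edits the shell script performs with sed on cloud.cfg."""
    text = re.sub(r"gecos: .*$", "gecos: WAZUH AMI", text, flags=re.MULTILINE)
    text = re.sub(r"name: .*$", f"name: {user}", text, flags=re.MULTILINE)
    # Drop every line mentioning set-hostname, then disable hostname updates.
    text = "\n".join(l for l in text.splitlines() if "set-hostname" not in l)
    return text.replace("update-hostname", "preserve_hostname: true")
```

Because the function takes text in and returns text out, it can be exercised in tests without touching `/etc/cloud/cloud.cfg` on a real instance.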
### Update report

Once it was confirmed that we can successfully create a base AMI for the Wazuh AMI, the following investigations have continued:

#### Verifying the use of AL2023 for the OVA

As previously discussed in update reports, we have been investigating the possibility of using AL2023 as the base image for the OVA. This would provide consistency by using the same operating system for both the OVA and the AMI. We encountered an issue where, after creating the test OVA, a network interface was found in a "down" state. After numerous trials and testing various configurations, it seems we have found a solution by adding a custom network interface configuration in cloud-init (cloud-init network configuration documentation).

#### Deploying Wazuh components using Python

Another key area of investigation is determining whether it is feasible and efficient to handle the installation logic of the different Wazuh components using Python. Currently, this process is managed through playbooks and the Installation Assistant (a tool that will not be used in version …).

For this, we have been exploring the possibility of using the provisioning module from the …

Additionally, since it is written in Python, it enables us to create our own tests and verify that everything is set up correctly.
### Update report

#### Definition of the different components

To create the OVA and the AMI, we will need several components essential for both. First, we will need the allocator (from wazuh-automation). This component will handle the creation of the necessary instance.

Once the instance is created, we will need the provisioner. This component will be responsible for installing the required dependencies on the instance, including the Wazuh components. It will also be used to uninstall packages (similar to the one found in …).

With this, the instance will be created and everything necessary installed. The remaining task will be to configure the Wazuh components to ensure communication between them. For this, a module will need to be created to handle the configuration of each component. This module could be called the configurer.

In summary, we will have three main modules:
#### Testing

We will have one module dedicated to testing the provisioner and another dedicated to testing the configurer. Each module should have two parts:
#### To be defined

Further research is still needed on the configurer to ensure it can be used for both the OVA and the AMI. Additionally, we need to investigate whether the current Bash scripts can be integrated into Python to facilitate testing.
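On integrating the current Bash scripts into Python, one low-effort option (a sketch, not a decided approach) is to drive them through `subprocess`, so their exit codes and output become assertable in the Python test suite:

```python
import subprocess

def run_bash(snippet: str) -> subprocess.CompletedProcess:
    """Execute a Bash snippet and capture its result for later assertions."""
    return subprocess.run(
        ["bash", "-c", snippet],
        capture_output=True,
        text=True,
    )

# Illustrative use: a test could assert on return code and output.
result = run_bash("echo wazuh-user")
assert result.returncode == 0
assert result.stdout.strip() == "wazuh-user"
```

This would let the existing scripts be reused as-is while the surrounding orchestration and checks move to Python.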
### Update report

In addition to being responsible for installing Wazuh components and dependencies, the provisioner module will also handle the …

In the configurer module, we will define three main submodules:
Regarding the playbooks used for generating the OVA and AMI, they will be incorporated into the corresponding submodule of the configurer. The tasks in these playbooks related to configuring components will be migrated to Python. This will make the playbooks much easier to use and maintain, while also simplifying the testing of configurations.
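As an illustration of what migrating a playbook configuration task to Python could look like, the snippet below rewrites the dashboard's indexer endpoint in memory, much like a `lineinfile`-style task would. The function name is hypothetical; `opensearch.hosts` is used as an example dashboard setting:

```python
def point_dashboard_at_indexer(config_text: str, host: str) -> str:
    """Rewrite the opensearch.hosts entry of a dashboard config text."""
    lines = []
    for line in config_text.splitlines():
        if line.startswith("opensearch.hosts:"):
            # Replace the whole entry, mirroring what the playbook task does.
            line = f'opensearch.hosts: ["https://{host}:9200"]'
        lines.append(line)
    return "\n".join(lines)
```

Like the other sketches above, this keeps the configuration logic as a pure text-in/text-out function, so the configurer tests never need a live instance.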
### Description

We need to analyze whether we can improve/simplify the AMI generation. Aspects to analyze:
As a requirement, the AMI must not be built with the Installation assistant.
Additionally, we need to design and implement DevOps-owned AMI testing. Currently, there are no tests for the AMI. The goal is to create a GitHub Actions (GHA) workflow that serves both as a PR check and an on-demand testing tool. The GHA should validate:
### Implementation restrictions
### Plan