Pivots to SCOS fail due to newer ext4 features enabled in FCOS #2041

Open
alrf opened this issue Oct 8, 2024 · 53 comments
Assignees: JaimeMagiera
Labels: OKD SCOS 4.16, OKD SCOS 4.17, pre-release-testing (items related to testing nightlies before a release)

Comments

@alrf

alrf commented Oct 8, 2024

OKD Version: 4.15.0-0.okd-2024-03-10-010116
FCOS Version: 39.20240210.3.0 (CoreOS)

https://api-int.mydomain.com:22623/config/master shows HTTP ERROR 500

In the logs of the machine-config-server container I see:

E1008 11:17:17.901441       1 api.go:183] couldn't convert config for req: {master 0xc0003a2380}, error: failed to convert config from spec v3.2 to v2.2: unable to convert Ignition spec v3 config to v2: SizeMiB and StartMiB in Storage.Disks.Partitions is not supported on 2.2
@JaimeMagiera
Contributor

Hi,

Not sure if you saw either of these...

https://okd.io/blog/2024/06/01/okd-future-statement/
https://okd.io/blog/2024/07/30/okd-pre-release-testing/

So, you'll want to try an install of 4.16. Are you writing the ignition yourself? As the error notes, there is a mismatch of versions.
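
If openshift-install generated them, a quick way to double-check which Ignition spec version the rendered configs declare (the usual installer output names are assumed here) is:

jq -r '.ignition.version' bootstrap.ign master.ign worker.ign

If anything in the chain still requests a spec v2.2 config (for example an older boot image), v3-only fields such as SizeMiB in Storage.Disks.Partitions cannot be down-converted, which matches the error above.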

@JaimeMagiera JaimeMagiera self-assigned this Oct 8, 2024
@alrf
Author

alrf commented Oct 8, 2024

Hi @JaimeMagiera, I hadn't seen those. The ignition files are written by the openshift-install tool.
I saw this discussion: #2029
Is release 4.16 already published somewhere?
Actually, what is the official resource/url now to check for OKD releases?
Even the documentation for 4.17 mentions only FCOS: https://docs.okd.io/4.17/installing/overview/index.html#about-rhcos

@alrf
Author

alrf commented Oct 9, 2024

I found releases here: https://github.com/okd-project/okd-scos/releases
But openshift-install v4.16 shows fedora images only:

# openshift-install coreos print-stream-json | jq -r '.architectures.x86_64.artifacts.qemu.formats."qcow2.xz".disk.location'
https://builds.coreos.fedoraproject.org/prod/streams/stable/builds/39.20231101.3.0/x86_64/fedora-coreos-39.20231101.3.0-qemu.x86_64.qcow2.xz

# openshift-install coreos print-stream-json | grep -c fedora
113
# openshift-install coreos print-stream-json | grep -c centos
0
# openshift-install coreos print-stream-json | grep -c coreos
113

# openshift-install version
openshift-install 4.16.0-0.okd-scos-2024-09-27-110344
built from commit 17f65f8808858b0111fd1624e7ee45f96efdc5dc
release image quay.io/okd/scos-release@sha256:eb66b3806689ad6fd068c965e0800813a8193ca1b2be748ec481521bb98a9962

How do I download the v4.16 SCOS disk image for a bare-metal installation, then?

@JaimeMagiera
Contributor

Sorry for the confusion. That repository for OKD-SCOS is not relevant here. The current state of OKD is that the nodes start as FCOS, and boot into SCOS using rpm-ostree after the installer runs. You can use fedora-coreos-39.20231101.3.0 for your nodes.
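
For bare metal, the matching 39.20231101.3.0 artifacts can be fetched directly from the Fedora CoreOS builds server (the same URLs used later in this thread), for example:

curl -LO https://builds.coreos.fedoraproject.org/prod/streams/stable/builds/39.20231101.3.0/x86_64/fedora-coreos-39.20231101.3.0-live.x86_64.iso
curl -LO https://builds.coreos.fedoraproject.org/prod/streams/stable/builds/39.20231101.3.0/x86_64/fedora-coreos-39.20231101.3.0-metal.x86_64.raw.xz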

Currently, there are only nightly builds of OKD SCOS. We haven’t signed off on a GM release yet. We’re getting close and actually could use your help testing. Our ability to test bare-metal has been limited. A nightly that has passed E2E testing is here…

https://amd64.origin.releases.ci.openshift.org/releasestream/4-scos-stable/release/4.16.0-0.okd-scos-2024-09-24-151747

Let us know how it goes. Thanks.

@alrf
Author

alrf commented Oct 9, 2024

Thank you for the link. But the way to get the installer and client looks very weird; there are no direct links to archives:

Download installer and client with:

oc adm release extract --tools quay.io/okd/scos-release:4.16.0-0.okd-scos-2024-09-24-151747

Imagine I have a new host with no oc installed; it is a chicken-and-egg problem.

Back to the actual problems, I managed to install a bootstrap node and 1 master node on bare-metal.
However, on the bootstrap node:

[core@bootstrap01 ~]$ sudo -s
[systemd]
Failed Units: 1
  systemd-sysusers.service
[root@bootstrap01 core]# systemctl status systemd-sysusers.service
× systemd-sysusers.service - Create System Users
     Loaded: loaded (/usr/lib/systemd/system/systemd-sysusers.service; static)
     Active: failed (Result: exit-code) since Wed 2024-10-09 12:38:57 UTC; 55min ago
   Duration: 4.328s
       Docs: man:sysusers.d(5)
             man:systemd-sysusers.service(8)
    Process: 916 ExecStart=systemd-sysusers (code=exited, status=1/FAILURE)
   Main PID: 916 (code=exited, status=1/FAILURE)
        CPU: 55ms

Oct 09 12:38:57 bootstrap01.test-env.mydomain.com systemd[1]: Starting Create System Users...
Oct 09 12:38:57 bootstrap01.test-env.mydomain.com systemd-sysusers[916]: Creating group 'sgx' with GID 991.
Oct 09 12:38:57 bootstrap01.test-env.mydomain.com systemd-sysusers[916]: /etc/gshadow: Group "sgx" already exists.
Oct 09 12:38:57 bootstrap01.test-env.mydomain.com systemd[1]: systemd-sysusers.service: Main process exited, code=exited, status=1/FAILURE
Oct 09 12:38:57 bootstrap01.test-env.mydomain.com systemd[1]: systemd-sysusers.service: Failed with result 'exit-code'.
Oct 09 12:38:57 bootstrap01.test-env.mydomain.com systemd[1]: Failed to start Create System Users.

On the master node rpm-ostree fails:

[root@serverXXX core]# rpm-ostree status
A dependency job for rpm-ostreed.service failed. See 'journalctl -xe' for details.
○ rpm-ostreed.service - rpm-ostree System Management Daemon
     Loaded: loaded (/usr/lib/systemd/system/rpm-ostreed.service; static)
    Drop-In: /etc/systemd/system/rpm-ostreed.service.d
             └─10-mco-default-env.conf
             /run/systemd/system/rpm-ostreed.service.d
             └─bug2111817.conf
             /etc/systemd/system/rpm-ostreed.service.d
             └─mco-controlplane-nice.conf
     Active: inactive (dead)
       Docs: man:rpm-ostree(1)

Oct 09 15:19:01 serverXXX.mydomain.com systemd[1]: Dependency failed for rpm-ostree System Management Daemon.
Oct 09 15:19:01 serverXXX.mydomain.com systemd[1]: rpm-ostreed.service: Job rpm-ostreed.service/start failed with result 'dependency'.
error: Loading sysroot: exit status: 1

Additionally, I was not able to install the second master node because all required containers (like kube-*) stopped and disappeared on the bootstrap node:

# oc get node
The connection to the server api.test-env.mydomain.com:6443 was refused - did you specify the right host or port?


[root@bootstrap01 core]# crictl ps -a
CONTAINER           IMAGE                                                                                              CREATED             STATE               NAME                ATTEMPT             POD ID              POD
2e193f2a99c22       625b24f036a1dcd1480cceb191dee59b2945e8a73bd750f977e29bb177122092                                   About an hour ago   Running             etcd                0                   126366c482e0d       etcd-bootstrap-member-bootstrap01.test-env.mydomain.com
7dd584a6695b0       quay.io/okd/scos-content@sha256:0637f82bd0b20204b87f50c55a8fd627701a8f09f78fd2fa7c5a6a4ac8054a87   About an hour ago   Running             etcdctl             0                   126366c482e0d       etcd-bootstrap-member-bootstrap01.test-env.mydomain.com
[root@bootstrap01 core]#

I came across the same behaviour on v4.15.

@JaimeMagiera
Contributor

In terms of the chicken/egg situation, we have a new Community Testing page with a link to the oc binaries.

https://okd.io/docs/community/community-testing/#getting-started

@JaimeMagiera JaimeMagiera added OKD SCOS 4.16 pre-release-testing Items related to testing nightlies before a release. labels Oct 9, 2024
@JaimeMagiera
Contributor

Can you walk me through the process you're following to install OKD? I feel like there's something missing. Also, what is your bare-metal configuration?

@alrf
Author

alrf commented Oct 9, 2024

Seems I found something:

Oct 09 16:47:39 serverXXX.mydomain.com systemd-fsck[111287]: /dev/md126 has unsupported feature(s): FEATURE_C12
Oct 09 16:47:39 serverXXX.mydomain.com systemd-fsck[111287]: e2fsck: Get a newer version of e2fsck!
Oct 09 16:47:39 serverXXX.mydomain.com systemd-fsck[111287]: boot: ********** WARNING: Filesystem still has errors **********
Oct 09 16:47:39 serverXXX.mydomain.com systemd-fsck[111285]: fsck failed with exit status 12.
Oct 09 16:47:39 serverXXX.mydomain.com systemd[1]: systemd-fsck@dev-disk-by\x2duuid-0464017b\x2d51dc\x2d45bb\x2da6a6\x2db96ba296763d.service: Main process exited, code=exited, status=1/FAILURE

Oct 09 16:47:39 serverXXX.mydomain.com systemd[1]: rpm-ostreed.service: Job rpm-ostreed.service/start failed with result 'dependency'.
Oct 09 16:47:39 serverXXX.mydomain.com systemd[1]: boot.mount: Job boot.mount/start failed with result 'dependency'.
Oct 09 16:47:39 serverXXX.mydomain.com systemd[1]: Software RAID monitoring and management was skipped because of an unmet condition check (ConditionPathExists=/etc/mdadm.conf).

i.e. rpm-ostree fails because of the systemd-fsck service, and systemd-fsck fails because of "e2fsck: Get a newer version of e2fsck!".
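
A quick check to confirm which ext4 features the boot filesystem was actually created with (assuming /dev/md126 is the boot array, as in the log above):

dumpe2fs -h /dev/md126 | grep -i 'filesystem features'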

Bare metal config:
8 CPUs, 64 GB RAM, 2 × 477 GB disks in software RAID1.

@alrf
Author

alrf commented Oct 9, 2024

FEATURE_C12 refers to orphan_file (https://askubuntu.com/a/1514580): you need a Linux kernel newer than v5.15, because the orphan_file feature requires it.

Checking my servers, SCOS has:

[root@serverSCOS core]# uname -a
Linux serverSCOS.mydomain.com 5.14.0-511.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Thu Sep 19 06:52:39 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux

But FCOS (I have another bare-metal server installed with v4.15 & Fedora CoreOS):

[root@serverFCOS core]# uname -a
Linux serverFCOS.mydomain.com 6.7.4-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Feb  5 22:21:14 UTC 2024 x86_64 GNU/Linux

That's the reason why I didn't have any issues with rpm-ostree on FCOS - the kernel there is newer.

@alrf
Author

alrf commented Oct 10, 2024

The same results on v4.17.
I also found this discussion: #1997
IMO, neither 4.16 nor 4.17 can be released until the orphan_file issue is fixed.

@JaimeMagiera
Contributor

Just to confirm, you're starting with fedora-coreos-39.20231101.3.0 as I suggested above?

@alrf
Author

alrf commented Oct 15, 2024

Just to confirm, you're starting with fedora-coreos-39.20231101.3.0 as I suggested above?

Yes, I am.

@JaimeMagiera
Contributor

This may be an issue specific to RAID devices. We're looking into it. Can you try on a cluster with non-RAID storage?

@alrf
Author

alrf commented Oct 17, 2024

This is software RAID configured by OKD during the installation like:

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: master
  name: 99-master-storage
spec:
  config:
    ignition:
      version: 3.4.0
    storage:
      disks:
        - device: /dev/nvme0n1
          partitions:
            - label: bios-1
              sizeMiB: 1
              typeGuid: 21686148-6449-6E6F-744E-656564454649
            - label: esp-1
              sizeMiB: 127
              typeGuid: C12A7328-F81F-11D2-BA4B-00A0C93EC93B
            - label: boot-1
              sizeMiB: 384
            - label: root-1
          wipeTable: true
        - device: /dev/nvme1n1
          partitions:
            - label: bios-2
              sizeMiB: 1
              typeGuid: 21686148-6449-6E6F-744E-656564454649
            - label: esp-2
              sizeMiB: 127
              typeGuid: C12A7328-F81F-11D2-BA4B-00A0C93EC93B
            - label: boot-2
              sizeMiB: 384
            - label: root-2
          wipeTable: true
      filesystems:
        - device: /dev/disk/by-partlabel/esp-1
          format: vfat
          label: esp-1
          wipeFilesystem: true
        - device: /dev/disk/by-partlabel/esp-2
          format: vfat
          label: esp-2
          wipeFilesystem: true
        - device: /dev/md/md-boot
          format: ext4
          label: boot
          wipeFilesystem: true
        - device: /dev/md/md-root
          format: xfs
          label: root
          wipeFilesystem: true
      raid:
        - devices:
            - /dev/disk/by-partlabel/boot-1
            - /dev/disk/by-partlabel/boot-2
          level: raid1
          name: md-boot
          options:
            - --metadata=1.0
        - devices:
            - /dev/disk/by-partlabel/root-1
            - /dev/disk/by-partlabel/root-2
          level: raid1
          name: md-root

Let me think about whether I can try a single-node, non-RAID installation somewhere.

@alrf
Author

alrf commented Oct 22, 2024

Following this procedure https://docs.okd.io/4.16/installing/installing_sno/install-sno-installing-sno.html (single node installation, no RAID configured), I can't even install OKD 4.16.0-0.okd-scos-2024-09-24-151747.
After the first boot of the server, it goes into an infinite loop with messages like these:

Oct 22 10:17:16 serverXXX.mydomain.com bootkube.sh[104707]: /usr/local/bin/bootkube.sh: line 81: oc: command not found
Oct 22 10:17:16 serverXXX.mydomain.com systemd[1]: bootkube.service: Main process exited, code=exited, status=127/n/a
Oct 22 10:17:16 serverXXX.mydomain.com systemd[1]: bootkube.service: Failed with result 'exit-code'.
Oct 22 10:17:16 serverXXX.mydomain.com audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=bootkube comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Oct 22 10:17:16 serverXXX.mydomain.com systemd[1]: bootkube.service: Consumed 2.305s CPU time.
Oct 22 10:17:16 serverXXX.mydomain.com systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-cb621bcbb25daa4f02e8e3f294bb4a8d50f8ee909361a41c9a49faa7a8152939-userdata-shm.mount: Deactivated successfully.
Oct 22 10:17:16 serverXXX.mydomain.com systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 22 10:17:21 serverXXX.mydomain.com systemd[1]: bootkube.service: Scheduled restart job, restart counter is at 126.
Oct 22 10:17:21 serverXXX.mydomain.com systemd[1]: Stopping install-to-disk.service - Install to disk...
Oct 22 10:17:21 serverXXX.mydomain.com install-to-disk.sh[104186]: Terminated
Oct 22 10:17:21 serverXXX.mydomain.com systemd[1]: Starting release-image-pivot.service - Pivot bootstrap to the OpenShift Release Image...
Oct 22 10:17:21 serverXXX.mydomain.com bootstrap-pivot.sh[104736]: touch: cannot touch '/usr/.test': Read-only file system
Oct 22 10:17:21 serverXXX.mydomain.com systemd[1]: install-to-disk.service: Deactivated successfully.
Oct 22 10:17:21 serverXXX.mydomain.com systemd[1]: Stopped install-to-disk.service - Install to disk.
Oct 22 10:17:21 serverXXX.mydomain.com audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=install-to-disk comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 22 10:17:21 serverXXX.mydomain.com systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 22 10:17:21 serverXXX.mydomain.com podman[104764]: 2024-10-22 10:17:21.890314794 +0000 UTC m=+0.035518498 container create 328deee9457fd159007d3b37aa55bbda6a1be432e7cae8ba79aea2f35a11e3d1 (image=quay.io/okd/scos-release@sha256:5a4e9ae9bea98a8df86aaefd35cfd1adbebd150062813ee06ee00da1c6a0a9a3, name=elastic_ishizaka, io.openshift.release=4.16.0-0.okd-scos-2024-09-24-151747, io.openshift.release.base-image-digest=sha256:9eef78f2b1edaccfe8e670ade308bfe310645d00ad4c33d1894b84ce27777be4)

Oct 22 10:26:45 serverXXX.mydomain.com systemd[1]: bootkube.service: Consumed 2.288s CPU time.
Oct 22 10:26:45 serverXXX.mydomain.com systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 22 10:26:47 serverXXX.mydomain.com approve-csr.sh[12253]: /usr/local/bin/approve-csr.sh: line 11: oc: command not found
Oct 22 10:26:51 serverXXX.mydomain.com systemd[1]: bootkube.service: Scheduled restart job, restart counter is at 12.
Oct 22 10:26:51 serverXXX.mydomain.com systemd[1]: Stopping install-to-disk.service - Install to disk...
Oct 22 10:26:51 serverXXX.mydomain.com install-to-disk.sh[11714]: Terminated
Oct 22 10:26:51 serverXXX.mydomain.com systemd[1]: Starting release-image-pivot.service - Pivot bootstrap to the OpenShift Release Image...
Oct 22 10:26:51 serverXXX.mydomain.com bootstrap-pivot.sh[12272]: touch: cannot touch '/usr/.test': Read-only file system
Oct 22 10:26:51 serverXXX.mydomain.com systemd[1]: install-to-disk.service: Deactivated successfully.
Oct 22 10:26:51 serverXXX.mydomain.com systemd[1]: Stopped install-to-disk.service - Install to disk.

Oct 22 10:27:17 serverXXX.mydomain.com systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 22 10:27:17 serverXXX.mydomain.com rpm-ostree[2267]: client(id:cli dbus:1.532 unit:release-image-pivot.service uid:0) added; new total=1
Oct 22 10:27:17 serverXXX.mydomain.com rpm-ostree[2267]: Loaded sysroot
Oct 22 10:27:17 serverXXX.mydomain.com rpm-ostree[2267]: client(id:cli dbus:1.532 unit:release-image-pivot.service uid:0) vanished; remaining=0
Oct 22 10:27:17 serverXXX.mydomain.com rpm-ostree[2267]: In idle state; will auto-exit in 60 seconds
Oct 22 10:27:17 serverXXX.mydomain.com bootstrap-pivot.sh[14003]: error: Remounting /sysroot read-write: Permission denied
Oct 22 10:27:17 serverXXX.mydomain.com systemd[1]: release-image-pivot.service: Main process exited, code=exited, status=1/FAILURE
Oct 22 10:27:17 serverXXX.mydomain.com systemd[1]: release-image-pivot.service: Failed with result 'exit-code'.
Oct 22 10:27:17 serverXXX.mydomain.com systemd[1]: Failed to start release-image-pivot.service - Pivot bootstrap to the OpenShift Release Image.

@BeardOverflow

BeardOverflow commented Oct 22, 2024

@alrf, it is not your fault; the SCOS transition is in progress.

I did a little research to clarify the current status of OKD-SCOS; historically, it is related to:

In short, since OKD 4.16, during the initial bootstrap the installer uses FCOS as the base system and rebases it to SCOS using rpm-ostree (because the OKD Working Group does not have enough capacity to build and ship a SCOS live ISO directly). On the other hand, OpenShift uses RHCOS as the base system, but they do not need to pivot the image like we do. This step is critical because FCOS does not include oc/podman/kubelet, but SCOS and RHCOS do.

The blocking issue here is pivoting FCOS->SCOS: release-image-pivot.service. Reviewing the /usr/local/bin/bootstrap-pivot.sh file, rpm-ostree fails to rebase the bootstrap's sysroot under the current implementation.

Until OKD 4.15, the installer pulled the RPM files for oc, kubectl and kubelet and installed them in the classic way.

Since OKD 4.16, the RPM files are replaced by the stream-coreos / stream-coreos-extensions OSTree images:
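
As a side note, the stream-coreos image that a given release pivots to can be looked up directly from the release payload, e.g.:

oc adm release info quay.io/okd/scos-release:4.16.0-0.okd-scos-2024-09-24-151747 --image-for=stream-coreos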

@alrf
Author

alrf commented Oct 22, 2024

@BeardOverflow thank you for the clarifications, but right now it is completely unclear how to deal with all these issues and which OKD version to install if needed.
Version 4.16.0-0.okd-scos-2024-09-24-151747 is marked as Stable (4-scos-stable, here https://origin-release.apps.ci.l2s4.p1.openshiftapps.com/), but in my opinion it is not.

@JaimeMagiera
Contributor

We do not currently have a final 4.16 release. Several nightly builds were dropped into the stable channel a few weeks ago because they passed AWS and vSphere IPI end-to-end tests (which is how OKD used to be released). We need more testing from the community to find issues outside of IPI installs on the above platforms.

@BeardOverflow

@alrf Use the following patch to fix /usr/local/bin/bootstrap-pivot.sh over the bootstrap ignition files:

cat original.ign | jq 'walk(if type == "object" and .path == "/usr/local/bin/bootstrap-pivot.sh" then .contents.source = "data:text/plain;charset=utf-8;base64,IyEvdXNyL2Jpbi9lbnYgYmFzaApzZXQgLWV1byBwaXBlZmFpbAoKIyBFeGl0IGVhcmx5IGlmIHBpdm90IGlzIGF0dGVtcHRlZCBvbiBTQ09TIExpdmUgSVNPCnNvdXJjZSAvZXRjL29zLXJlbGVhc2UKaWYgW1sgISAkKHRvdWNoIC91c3IvLnRlc3QpIF1dICYmIFtbICR7SUR9ID1+IF4oY2VudG9zKSQgXV07IHRoZW4KICB0b3VjaCAvb3B0L29wZW5zaGlmdC8ucGl2b3QtZG9uZQogIGV4aXQgMApmaQoKIyBSZWJhc2UgdG8gT0tEJ3MgT1NUcmVlIGNvbnRhaW5lciBpbWFnZS4KIyBUaGlzIGlzIHJlcXVpcmVkIGluIE9LRCBhcyB0aGUgbm9kZSBpcyBmaXJzdCBwcm92aXNpb25lZCB3aXRoIHBsYWluIEZlZG9yYSBDb3JlT1MuCgojIHNoZWxsY2hlY2sgZGlzYWJsZT1TQzEwOTEKLiAvdXNyL2xvY2FsL2Jpbi9ib290c3RyYXAtc2VydmljZS1yZWNvcmQuc2gKLiAvdXNyL2xvY2FsL2Jpbi9yZWxlYXNlLWltYWdlLnNoCgojIFBpdm90IGJvb3RzdHJhcCBub2RlIHRvIE9LRCdzIE9TVHJlZSBpbWFnZQppZiBbICEgLWYgL29wdC9vcGVuc2hpZnQvLnBpdm90LWRvbmUgXTsgdGhlbgpNQUNISU5FX09TX0lNQUdFPSQoaW1hZ2VfZm9yIHN0cmVhbS1jb3Jlb3MpCmVjaG8gIlB1bGxpbmcgJHtNQUNISU5FX09TX0lNQUdFfS4uLiIKICB3aGlsZSB0cnVlCiAgZG8KICAgIHJlY29yZF9zZXJ2aWNlX3N0YWdlX3N0YXJ0ICJwdWxsLW9rZC1vcy1pbWFnZSIKICAgIGlmIHBvZG1hbiBwdWxsIC0tcXVpZXQgIiR7TUFDSElORV9PU19JTUFHRX0iCiAgICB0aGVuCiAgICAgICAgcmVjb3JkX3NlcnZpY2Vfc3RhZ2Vfc3VjY2VzcwogICAgICAgIGJyZWFrCiAgICBlbHNlCiAgICAgICAgcmVjb3JkX3NlcnZpY2Vfc3RhZ2VfZmFpbHVyZQogICAgICAgIGVjaG8gIlB1bGwgZmFpbGVkLiBSZXRyeWluZyAke01BQ0hJTkVfT1NfSU1BR0V9Li4uIgogICAgZmkKICBkb25lCgogIHJlY29yZF9zZXJ2aWNlX3N0YWdlX3N0YXJ0ICJyZWJhc2UtdG8tb2tkLW9zLWltYWdlIgogIG1udD0iJChwb2RtYW4gaW1hZ2UgbW91bnQgIiR7TUFDSElORV9PU19JTUFHRX0iKSIKICAjIFNOTyBzZXR1cCBib290cyBpbnRvIExpdmUgSVNPIHdoaWNoIGNhbm5vdCBiZSByZWJhc2VkCiAgIyBodHRwczovL2dpdGh1Yi5jb20vY29yZW9zL3JwbS1vc3RyZWUvaXNzdWVzLzQ1NDcKICAjbWtkaXIgL3Zhci9tbnQve3VwcGVyLHdvcmtlcn0KICAjbW91bnQgLXQgb3ZlcmxheSBvdmVybGF5IC1vICJsb3dlcmRpcj0vdXNyOiRtbnQvdXNyIiAvdXNyCiAgI21vdW50IC10IG92ZXJsYXkgb3ZlcmxheSAtbyAibG93ZXJkaXI9L2V0YzokbW50L2V0Yyx1cHBlcmRpcj0vdmFyL21udC91cHBlcix3b3JrZGlyPS92YXIvbW50L3dvcmtlciIgL2V0YwogIHJzeW5jIC1ybHR1ICRtbnQvZXRjLyAvZXRjLwogIGNwIC1hIC91c3IgL2xpYiAvbGliNjQgL3J1bi9lcGhlbWVyYWwKICByc3luYyAtcmx0IC0taWdub3JlLWV4aXN0aW5nICRtbnQvdXNyLyAvcnVuL2VwaGVtZXJhbC91c3IvCiAgcnN5bmMgLXJsdCAtLWlnbm9yZS1leGlzdGluZyAkbW50L2xpYi8gL3J1bi9lcGhlbWVyYWwvbGliLwogIHJzeW5jIC1ybHQgLS1pZ25vcmUtZXhpc3RpbmcgJG1udC9saWI2NC8gL3J1bi9lcGhlbWVyYWwvbGliNjQvCiAgbW91bnQgLS1iaW5kIC9ydW4vZXBoZW1lcmFsL3VzciAvdXNyCiAgbW91bnQgLS1iaW5kIC9ydW4vZXBoZW1lcmFsL2xpYiAvbGliCiAgbW91bnQgLS1iaW5kIC9ydW4vZXBoZW1lcmFsL2xpYjY0IC9saWI2NAogIHN5c3RlbWN0bCBkYWVtb24tcmVsb2FkCgogICMgQXBwbHkgcHJlc2V0cyBmcm9tIE9LRCBNYWNoaW5lIE9TCiAgc3lzdGVtY3RsIHByZXNldC1hbGwKCiAgIyBXb3JrYXJvdW5kIGZvciBTRUxpbnV4IGRlbmlhbHMgd2hlbiBsYXVuY2hpbmcgY3Jpby5zZXJ2aWNlIGZyb20gb3ZlcmxheWZzCiAgc2V0ZW5mb3JjZSBQZXJtaXNzaXZlCgogICMgY3Jpby5zZXJ2aWNlIGlzIG5vdCBwYXJ0IG9mIEZDT1MgYnV0IG9mIE9LRCBNYWNoaW5lIE9TLiBJdCB3aWxsIGxvYWRlZCBhZnRlciBzeXN0ZW1jdGwgZGFlbW9uLXJlbG9hZCBhYm92ZSBidXQgaGFzIHRvIGJlIHN0YXJ0ZWQgbWFudWFsbHkKICBzeXN0ZW1jdGwgcmVzdGFydCBjcmlvLWNvbmZpZ3VyZS5zZXJ2aWNlCiAgc3lzdGVtY3RsIHN0YXJ0IGNyaW8uc2VydmljZQoKICB0b3VjaCAvb3B0L29wZW5zaGlmdC8ucGl2b3QtZG9uZQogIHBvZG1hbiBpbWFnZSB1bW91bnQgIiR7TUFDSElORV9PU19JTUFHRX0iCiAgcmVjb3JkX3NlcnZpY2Vfc3RhZ2Vfc3VjY2VzcwpmaQo=" else . end)' > fix.ign

This patch works for most OKD-SCOS versions I have tested so far (4.16, 4.17 and 4.18), uploaded to the quay.io and OpenShift CI repositories. In detail, it mounts the stream-coreos image as a container and extracts its content onto the FCOS live image to avoid rpm-ostree.
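
If you want to review the replacement script before using it, you can decode it back out of the patched file (assuming bootstrap-pivot.sh sits under .storage.files in the ignition config):

jq -r '.storage.files[] | select(.path == "/usr/local/bin/bootstrap-pivot.sh") | .contents.source' fix.ign | sed 's|^data:text/plain;charset=utf-8;base64,||' | base64 -d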

And please, do not consider the above patch a final solution; it is a temporary fix until the OKD working group releases its own SCOS image.

Also, @alrf, do not use the OpenShift CI repository (registry.ci.openshift.org/origin/release-scos), because its artifacts are pruned periodically and your cluster will not be able to pull them; instead, use the quay.io repository (quay.io/okd/scos-release).

@alrf
Author

alrf commented Oct 23, 2024

@BeardOverflow thank you for the patch and information, I will try it.

do not use openshift ci repository (registry.ci.openshift.org/origin/release-scos), because the artifacts are pruned periodically and your cluster will not be able to pull them; instead, use quay.io repository (quay.io/okd/scos-release).

Does this mean that a cluster already installed from registry.ci.openshift.org/origin/release-scos should be reinstalled using the quay.io repository?

@BeardOverflow

BeardOverflow commented Oct 23, 2024

@alrf Yes, you should reinstall your cluster using the quay.io repository, because registry.ci.openshift.org is just an ephemeral one.

Check the release tags here (not "officially" released yet, but usable):

And get the installation artifacts from the same release path to avoid surprises (e.g.: 4.17.0-0.okd-scos-2024-09-30-101739):

  • openshift-install
    • From quay.io:
      • Method A (from scos-release, purist):
        installer=$(podman run --rm -i quay.io/okd/scos-release:4.17.0-0.okd-scos-2024-09-30-101739 image installer-artifacts)
        mnt=$(podman image mount $installer)
        cp $mnt/usr/share/openshift/linux_amd64/openshift-install . # also: linux_arm64 mac mac_arm64
        podman image umount $installer
      • Method B (from scos-content, fast):
        mnt=$(podman image mount quay.io/okd/scos-content:4.17.0-0.okd-scos-2024-09-30-101739-installer-artifacts)
        cp $mnt/usr/share/openshift/linux_amd64/openshift-install . # also: linux_arm64 mac mac_arm64
        podman image umount quay.io/okd/scos-content:4.17.0-0.okd-scos-2024-09-30-101739-installer-artifacts
    • From github (easier): https://github.com/okd-project/okd-scos/releases
      curl -RL https://github.com/okd-project/okd-scos/releases/download/4.17.0-0.okd-scos-2024-09-30-101739/openshift-install-linux-4.17.0-0.okd-scos-2024-09-30-101739.tar.gz -o- | tar xz
  • oc
    • From quay.io:
      • Method A (from scos-release, purist):
        cli=$(podman run --rm -i quay.io/okd/scos-release:4.17.0-0.okd-scos-2024-09-30-101739 image cli-artifacts)
        mnt=$(podman image mount $cli)
        cp $mnt/usr/share/openshift/linux_amd64/oc . # also: linux_arm64 linux_ppc64le linux_s390x mac mac_arm64 windows
        podman image umount $cli
      • Method B (from scos-content, fast):
        mnt=$(podman image mount quay.io/okd/scos-content:4.17.0-0.okd-scos-2024-09-30-101739-cli-artifacts)
        cp $mnt/usr/share/openshift/linux_amd64/oc . # also: linux_arm64 linux_ppc64le linux_s390x mac mac_arm64 windows
        podman image umount quay.io/okd/scos-content:4.17.0-0.okd-scos-2024-09-30-101739-cli-artifacts
    • From github (easier): https://github.com/okd-project/okd-scos/releases
      curl -RL https://github.com/okd-project/okd-scos/releases/download/4.17.0-0.okd-scos-2024-09-30-101739/openshift-client-linux-4.17.0-0.okd-scos-2024-09-30-101739.tar.gz -o- | tar xz
  • bootstrap iso
    • From quay.io:
      • Method A (from scos-release, purist):
        iso=$(podman run --rm -i quay.io/okd/scos-release:4.17.0-0.okd-scos-2024-09-30-101739 image machine-os-images)
        mnt=$(podman image mount $iso)
        cp $mnt/coreos/coreos-x86_64.iso . # see $mnt/coreos/coreos-stream.json, it is really an FCOS image
        podman image umount $iso
      • Method B (from scos-content, fast):
        mnt=$(podman image mount quay.io/okd/scos-content:4.17.0-0.okd-scos-2024-09-30-101739-machine-os-images)
        cp $mnt/coreos/coreos-x86_64.iso . # see $mnt/coreos/coreos-stream.json, it is really an FCOS image
        podman image umount quay.io/okd/scos-content:4.17.0-0.okd-scos-2024-09-30-101739-machine-os-images
    • From openshift-installer (as documented in docs.okd.io)
      curl -JLOR $(./openshift-install coreos print-stream-json | grep location | grep x86_64 | grep iso | cut -d\" -f4)
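
Once extracted, a quick sanity check that the binaries match the release you expect:

./openshift-install version
./oc version --client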

I spent plenty of hours figuring this out!

For nightly/experimental builds, you can get the installation artifacts from registry.ci.openshift.org using the same procedure as for quay.io (replacing quay.io/okd/scos-release with registry.ci.openshift.org/origin/release-scos), but remember that those builds are pruned. In any case, the https://amd64.origin.releases.ci.openshift.org web page is useful for reading the changelogs of stable/next/nightly builds.

One last note (easiest path): you can also get them using OCP's oc binary (e.g. 4.17.0-0.okd-scos-2024-09-30-101739):

curl -RL https://mirror.openshift.com/pub/openshift-v4/x86_64/clients/ocp/4.17.0/openshift-client-linux-4.17.0.tar.gz -o- | tar zx
./oc adm release extract --tools quay.io/okd/scos-release:4.17.0-0.okd-scos-2024-09-30-101739
tar zxf openshift-client-linux-4.17.0-0.okd-scos-2024-09-30-101739.tar.gz # replaces the previous OCP oc binary with the new OKD oc binary
tar zxf openshift-install-linux-4.17.0-0.okd-scos-2024-09-30-101739.tar.gz
curl -JLOR $(./openshift-install coreos print-stream-json | grep location | grep x86_64 | grep iso | cut -d\" -f4)

@JaimeMagiera
Contributor

Nightlies are pruned every 72 hours. Any installs based on nightlies are for testing only. These details are noted in our Community Testing document.

https://okd.io/docs/community/community-testing/

@alrf
Author

alrf commented Oct 23, 2024

@BeardOverflow I managed to install version 4.17.0-0.okd-scos-2024-09-30-101739 as a single node installation, no RAID configured, by using some tricks.

  1. When I used the metal raw image and the previous installation method (like for v4.15):
curl -L https://builds.coreos.fedoraproject.org/prod/streams/stable/builds/39.20231101.3.0/x86_64/fedora-coreos-39.20231101.3.0-metal.x86_64.raw.xz -o fcos-raw.xz
curl -L https://download-host.mydomain.com/bootstrap-in-place-for-live-iso.ign -o bootstrap-in-place-for-live-iso.ign
coreos-installer install -i bootstrap-in-place-for-live-iso.ign -f fcos-raw.xz --insecure /dev/nvme0n1

I got these issues again:

Oct 23 12:27:36 serverXXX.mydomain.com systemd[1]: var-lib-containers-storage-overlay-579a4a3093ddcdd693c89a75b4515fd1cde63196eb92720e47610cc88d29bdab-merged.mount: Deactivated successfully.
Oct 23 12:27:36 serverXXX.mydomain.com bootstrap-pivot.sh[15857]: Pulling quay.io/okd/scos-content@sha256:9ec5e17d984b248ac6fc7f7bfa64b44e44fcc7fb7beabe6ebd839a893f3291d3...
Oct 23 12:27:38 serverXXX.mydomain.com systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 17.
Oct 23 12:27:38 serverXXX.mydomain.com bootstrap-pivot.sh[15936]: 6630f860b75fcaa26eedfb48632f7d7642399a644b30c80a0f85d0efbb2f8343
Oct 23 12:27:38 serverXXX.mydomain.com podman[15936]: 2024-10-23 12:27:38.629526533 +0000 UTC m=+1.738853585 image pull 6630f860b75fcaa26eedfb48632f7d7642399a644b30c80a0f85d0efbb2f8343 quay.io/okd/scos-content@sha256:9ec5e17d984b248ac6fc7f7bfa64b44e44fcc7fb7beabe6ebd839a893f3291d3
Oct 23 12:27:38 serverXXX.mydomain.com podman[15960]: 2024-10-23 12:27:38.750978625 +0000 UTC m=+0.030941150 image mount 6630f860b75fcaa26eedfb48632f7d7642399a644b30c80a0f85d0efbb2f8343
Oct 23 12:27:38 serverXXX.mydomain.com bootstrap-pivot.sh[15972]: cp: target '/run/ephemeral': No such file or directory
Oct 23 12:27:38 serverXXX.mydomain.com systemd[1]: release-image-pivot.service: Main process exited, code=exited, status=1/FAILURE
Oct 23 12:27:38 serverXXX.mydomain.com systemd[1]: release-image-pivot.service: Failed with result 'exit-code'.
Oct 23 12:27:38 serverXXX.mydomain.com systemd[1]: Failed to start release-image-pivot.service - Pivot bootstrap to the OpenShift Release Image.

Oct 23 12:28:45 serverXXX.mydomain.com bootkube.sh[20499]: /usr/local/bin/bootkube.sh: line 85: oc: command not found
Oct 23 12:28:45 serverXXX.mydomain.com systemd[1]: var-lib-containers-storage-overlay-a9067c43e4b0bc103f3cf108d6b007026bdc6a1ff1d1733fb93c9cf9101adcbc-merged.mount: Deactivated successfully.
Oct 23 12:28:45 serverXXX.mydomain.com systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-425d06e517fc24ffaa5fe04c505abad8fbc628535d46abd0681e6f131a911d8b-userdata-shm.mount: Deactivated successfully.
Oct 23 12:28:45 serverXXX.mydomain.com systemd[1]: bootkube.service: Main process exited, code=exited, status=127/n/a
Oct 23 12:28:45 serverXXX.mydomain.com systemd[1]: bootkube.service: Failed with result 'exit-code'.

The installation failed in this case.

  2. When I used the live ISO image, like:
curl -L https://builds.coreos.fedoraproject.org/prod/streams/stable/builds/39.20231101.3.0/x86_64/fedora-coreos-39.20231101.3.0-live.x86_64.iso -o fcos-live.iso
curl -L https://download-host.mydomain.com/bootstrap-in-place-for-live-iso.ign -o bootstrap-in-place-for-live-iso.ign
coreos-installer iso ignition embed -fi bootstrap-in-place-for-live-iso.ign fcos-live.iso
coreos-installer install -f fcos-live.iso --insecure /dev/nvme0n1

I got:

Oct 23 12:41:18 serverXXX.mydomain.com install-to-disk.sh[7045]: Waiting for /opt/openshift/.bootkube.done
Oct 23 12:41:18 serverXXX.mydomain.com install-to-disk.sh[7045]: Executing coreos-installer with the following options: install -i /opt/openshift/master.ign /dev/nvme0n1
Oct 23 12:41:18 serverXXX.mydomain.com install-to-disk.sh[7075]: Installing Fedora CoreOS 39.20231101.3.0 x86_64 (512-byte sectors)
Oct 23 12:41:18 serverXXX.mydomain.com install-to-disk.sh[7075]: Partitions in use on /dev/nvme0n1:
Oct 23 12:41:18 serverXXX.mydomain.com install-to-disk.sh[7075]:     /dev/nvme0n1p1 mounted on /run/media/iso
Oct 23 12:41:18 serverXXX.mydomain.com install-to-disk.sh[7075]: Error: checking for exclusive access to /dev/nvme0n1
Oct 23 12:41:18 serverXXX.mydomain.com install-to-disk.sh[7075]: Caused by:
Oct 23 12:41:18 serverXXX.mydomain.com install-to-disk.sh[7075]:     found busy partitions
Oct 23 12:41:18 serverXXX.mydomain.com systemd[1]: install-to-disk.service: Main process exited, code=exited, status=1/FAILURE
Oct 23 12:41:18 serverXXX.mydomain.com systemd[1]: install-to-disk.service: Failed with result 'exit-code'.

The fix in this case was:

umount /run/media/iso
coreos-installer install -i /opt/openshift/master.ign /dev/nvme1n1
wipefs -fa /dev/nvme0n1
reboot

Success, I got the server installed.

In both cases I used the patched ignition config and cleared the partition tables in advance with wipefs (before installation).
Some technical info, hwdata output from the server:

 CPU1: Intel(R) Core(TM) i7-7700 CPU @ 3.60GHz (Cores 8)
 Memory:  64106 MB
 Disk /dev/nvme0n1: 512 GB (=> 476 GiB) doesn't contain a valid partition table
 Disk /dev/nvme1n1: 512 GB (=> 476 GiB) doesn't contain a valid partition table
 Total capacity 953 GiB with 2 Disks

The single-node install-config.yaml that I used:

apiVersion: v1
baseDomain: 'mydomain.com'
metadata:
  name: 'sno-cluster'
compute:
- hyperthreading: Enabled
  name: worker
  replicas: 0
controlPlane:
  hyperthreading: Enabled
  name: master
  replicas: 1
networking:
  clusterNetworks:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  networkType: OVNKubernetes
  serviceNetwork:
  - 172.30.0.0/16
  machineCIDR:
platform:
  none: {}
bootstrapInPlace:
  installationDisk: /dev/nvme0n1
pullSecret: "<my-secret-data>"
sshKey: "<my-ssh-data>"

I will also try the regular RAID-configured installation.

@alrf
Author

alrf commented Oct 28, 2024

With normal, multi-host installation, I can't install the bootstrap node (version 4.17.0-0.okd-scos-2024-09-30-101739):

Oct 28 11:25:42 bootstrap01.my-cluster.mydomain.com bootkube.sh[100885]: /usr/local/bin/bootkube.sh: line 85: oc: command not found
Oct 28 11:25:42 bootstrap01.my-cluster.mydomain.com systemd[1]: bootkube.service: Main process exited, code=exited, status=127/n/a
Oct 28 11:25:42 bootstrap01.my-cluster.mydomain.com systemd[1]: bootkube.service: Failed with result 'exit-code'.

Oct 28 11:26:33 bootstrap01.my-cluster.mydomain.com systemd[1]: bootkube.service: Scheduled restart job, restart counter is at 136.
Oct 28 11:26:33 bootstrap01.my-cluster.mydomain.com systemd[1]: Starting release-image-pivot.service - Pivot bootstrap to the OpenShift Release Image...
Oct 28 11:26:33 bootstrap01.my-cluster.mydomain.com bootstrap-pivot.sh[103831]: touch: cannot touch '/usr/.test': Read-only file system
Oct 28 11:26:48 bootstrap01.my-cluster.mydomain.com systemd[1]: release-image-pivot.service: Failed with result 'exit-code'.
Oct 28 11:26:48 bootstrap01.my-cluster.mydomain.com systemd[1]: Failed to start release-image-pivot.service - Pivot bootstrap to the OpenShift Release Image.

Oct 28 11:27:11 bootstrap01.my-cluster.mydomain.com kubelet.sh[106216]: /usr/local/bin/kubelet.sh: line 7: /usr/bin/kubelet: No such file or directory
Oct 28 11:27:11 bootstrap01.my-cluster.mydomain.com systemd[1]: kubelet.service: Main process exited, code=exited, status=127/n/a
Oct 28 11:27:11 bootstrap01.my-cluster.mydomain.com systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 28 11:27:11 bootstrap01.my-cluster.mydomain.com systemd[1]: Failed to start kubelet.service - Kubernetes Kubelet.

Oct 28 11:27:33 bootstrap01.my-cluster.mydomain.com podman[107578]: 2024-10-28 11:27:33.736288779 +0000 UTC m=+0.028532076 image mount 6630f860b75fcaa26eedfb48632f7d7642399a644b30c80a0f85d0efbb2f8343
Oct 28 11:27:33 bootstrap01.my-cluster.mydomain.com bootstrap-pivot.sh[107590]: cp: target '/run/ephemeral': No such file or directory
Oct 28 11:27:33 bootstrap01.my-cluster.mydomain.com systemd[1]: release-image-pivot.service: Main process exited, code=exited, status=1/FAILURE
Oct 28 11:27:33 bootstrap01.my-cluster.mydomain.com systemd[1]: release-image-pivot.service: Failed with result 'exit-code'.
Oct 28 11:27:33 bootstrap01.my-cluster.mydomain.com systemd[1]: Failed to start release-image-pivot.service - Pivot bootstrap to the OpenShift Release Image.

@BeardOverflow

BeardOverflow commented Oct 28, 2024

@alrf
It looks like some scenarios in your setup (metal.xz and multi-node) are not triggering the ephemeral service in the bootstrap image; again, it is not your fault.

I updated the previous patch with a special condition to verify /run/ephemeral is writable:

cat original.ign | jq 'walk(if type == "object" and .path == "/usr/local/bin/bootstrap-pivot.sh" then .contents.source = "data:text/plain;charset=utf-8;base64,IyEvdXNyL2Jpbi9lbnYgYmFzaApzZXQgLWV1byBwaXBlZmFpbAoKIyBFeGl0IGVhcmx5IGlmIHBpdm90IGlzIGF0dGVtcHRlZCBvbiBTQ09TIExpdmUgSVNPCnNvdXJjZSAvZXRjL29zLXJlbGVhc2UKaWYgW1sgISAkKHRvdWNoIC91c3IvLnRlc3QpIF1dICYmIFtbICR7SUR9ID1+IF4oY2VudG9zKSQgXV07IHRoZW4KICB0b3VjaCAvb3B0L29wZW5zaGlmdC8ucGl2b3QtZG9uZQogIGV4aXQgMApmaQoKIyBSZWJhc2UgdG8gT0tEJ3MgT1NUcmVlIGNvbnRhaW5lciBpbWFnZS4KIyBUaGlzIGlzIHJlcXVpcmVkIGluIE9LRCBhcyB0aGUgbm9kZSBpcyBmaXJzdCBwcm92aXNpb25lZCB3aXRoIHBsYWluIEZlZG9yYSBDb3JlT1MuCgojIHNoZWxsY2hlY2sgZGlzYWJsZT1TQzEwOTEKLiAvdXNyL2xvY2FsL2Jpbi9ib290c3RyYXAtc2VydmljZS1yZWNvcmQuc2gKLiAvdXNyL2xvY2FsL2Jpbi9yZWxlYXNlLWltYWdlLnNoCgojIFBpdm90IGJvb3RzdHJhcCBub2RlIHRvIE9LRCdzIE9TVHJlZSBpbWFnZQppZiBbICEgLWYgL29wdC9vcGVuc2hpZnQvLnBpdm90LWRvbmUgXTsgdGhlbgpNQUNISU5FX09TX0lNQUdFPSQoaW1hZ2VfZm9yIHN0cmVhbS1jb3Jlb3MpCmVjaG8gIlB1bGxpbmcgJHtNQUNISU5FX09TX0lNQUdFfS4uLiIKICB3aGlsZSB0cnVlCiAgZG8KICAgIHJlY29yZF9zZXJ2aWNlX3N0YWdlX3N0YXJ0ICJwdWxsLW9rZC1vcy1pbWFnZSIKICAgIGlmIHBvZG1hbiBwdWxsIC0tcXVpZXQgIiR7TUFDSElORV9PU19JTUFHRX0iCiAgICB0aGVuCiAgICAgICAgcmVjb3JkX3NlcnZpY2Vfc3RhZ2Vfc3VjY2VzcwogICAgICAgIGJyZWFrCiAgICBlbHNlCiAgICAgICAgcmVjb3JkX3NlcnZpY2Vfc3RhZ2VfZmFpbHVyZQogICAgICAgIGVjaG8gIlB1bGwgZmFpbGVkLiBSZXRyeWluZyAke01BQ0hJTkVfT1NfSU1BR0V9Li4uIgogICAgZmkKICBkb25lCgogIHJlY29yZF9zZXJ2aWNlX3N0YWdlX3N0YXJ0ICJyZWJhc2UtdG8tb2tkLW9zLWltYWdlIgogIG1udD0iJChwb2RtYW4gaW1hZ2UgbW91bnQgIiR7TUFDSElORV9PU19JTUFHRX0iKSIKICAjIFNOTyBzZXR1cCBib290cyBpbnRvIExpdmUgSVNPIHdoaWNoIGNhbm5vdCBiZSByZWJhc2VkCiAgIyBodHRwczovL2dpdGh1Yi5jb20vY29yZW9zL3JwbS1vc3RyZWUvaXNzdWVzLzQ1NDcKICByc3luYyAtcmx0dSAkbW50L2V0Yy8gL2V0Yy8KICBpZiBbICEgLWQgL3J1bi9lcGhlbWVyYWwgXQogIHRoZW4KICAgIG1rZGlyIC1wIC9ydW4vZXBoZW1lcmFsCiAgICBtb3VudCAtdCB0bXBmcyAtbyBzaXplPTUwJSBub25lIC9ydW4vZXBoZW1lcmFsCiAgZmkKICBjcCAtYSAvdXNyIC9saWIgL2xpYjY0IC9ydW4vZXBoZW1lcmFsCiAgcnN5bmMgLXJsdCAtLWlnbm9yZS1leGlzdGluZyAkbW50L3Vzci8gL3J1bi9lcGhlbWVyYWwvdXNyLwogIHJzeW5jIC1ybHQgLS1pZ25vcmUtZXhpc3RpbmcgJG1udC9saWIvIC9ydW4vZXBoZW1lcmFsL2xpYi8KICByc3luYyAtcmx0IC0taWdub3JlLWV4aXN0aW5nICRtbnQvbGliNjQvIC9ydW4vZXBoZW1lcmFsL2xpYjY0LwogIG1vdW50IC0tYmluZCAvcnVuL2VwaGVtZXJhbC91c3IgL3VzcgogIG1vdW50IC0tYmluZCAvcnVuL2VwaGVtZXJhbC9saWIgL2xpYgogIG1vdW50IC0tYmluZCAvcnVuL2VwaGVtZXJhbC9saWI2NCAvbGliNjQKICBzeXN0ZW1jdGwgZGFlbW9uLXJlbG9hZAoKICAjIEFwcGx5IHByZXNldHMgZnJvbSBPS0QgTWFjaGluZSBPUwogIHN5c3RlbWN0bCBwcmVzZXQtYWxsCgogICMgV29ya2Fyb3VuZCBmb3IgU0VMaW51eCBkZW5pYWxzIHdoZW4gbGF1bmNoaW5nIGNyaW8uc2VydmljZSBmcm9tIG92ZXJsYXlmcwogIHNldGVuZm9yY2UgUGVybWlzc2l2ZQoKICAjIGNyaW8uc2VydmljZSBpcyBub3QgcGFydCBvZiBGQ09TIGJ1dCBvZiBPS0QgTWFjaGluZSBPUy4gSXQgd2lsbCBsb2FkZWQgYWZ0ZXIgc3lzdGVtY3RsIGRhZW1vbi1yZWxvYWQgYWJvdmUgYnV0IGhhcyB0byBiZSBzdGFydGVkIG1hbnVhbGx5CiAgc3lzdGVtY3RsIHJlc3RhcnQgY3Jpby1jb25maWd1cmUuc2VydmljZQogIHN5c3RlbWN0bCBzdGFydCBjcmlvLnNlcnZpY2UKCiAgdG91Y2ggL29wdC9vcGVuc2hpZnQvLnBpdm90LWRvbmUKICBwb2RtYW4gaW1hZ2UgdW1vdW50ICIke01BQ0hJTkVfT1NfSU1BR0V9IgogIHJlY29yZF9zZXJ2aWNlX3N0YWdlX3N1Y2Nlc3MKZmkK" else . end)' > fix.ign
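
Decoded from the embedded script, the relevant addition before the bind mounts is roughly:

if [ ! -d /run/ephemeral ]
then
  mkdir -p /run/ephemeral
  mount -t tmpfs -o size=50% none /run/ephemeral
fi
cp -a /usr /lib /lib64 /run/ephemeral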

I have not tested it yet, but theoretically it should fix the failure point.

@alrf
Author

alrf commented Oct 28, 2024

@BeardOverflow I managed to install the bootstrap node with the new patch; however, the master node (with RAID1) is only partially installed: rpm-ostree status fails and the machine-config-daemon pod is in CrashLoopBackOff state. The logs are below.

[systemd]
Failed Units: 2
  systemd-fsck@dev-disk-by\x2duuid-56c220da\x2dc032\x2d484f\x2da955\x2d7b76e63e27d2.service
  systemd-sysusers.service
[core@serverXXX ~]$ sudo -s
[systemd]
Failed Units: 2
  systemd-fsck@dev-disk-by\x2duuid-56c220da\x2dc032\x2d484f\x2da955\x2d7b76e63e27d2.service
  systemd-sysusers.service
[root@serverXXX core]# systemctl status systemd-fsck@dev-disk-by\x2duuid-56c220da\x2dc032\x2d484f\x2da955\x2d7b76e63e27d2.service --no-pager
○ systemd-fsck@dev-disk-byx2duuid-56c220dax2dc032x2d484fx2da955x2d7b76e63e27d2.service - File System Check on /dev/disk/byx2duuid/56c220dax2dc032x2d484fx2da955x2d7b76e63e27d2
     Loaded: loaded (/usr/lib/systemd/system/[email protected]; static)
     Active: inactive (dead)
       Docs: man:[email protected](8)

Oct 28 14:59:56 serverXXX.mydomain.com systemd[1]: systemd-fsck@dev-disk-byx2duuid-56c220dax2dc032x2d484fx2da955x2d7b76e63e27d2.service: Bound to unit dev-disk-byx2duuid-56c220dax2dc032x2d484fx2da955x2d7…nit isn't active.
Oct 28 14:59:56 serverXXX.mydomain.com systemd[1]: Dependency failed for File System Check on /dev/disk/byx2duuid/56c220dax2dc032x2d484fx2da955x2d7b76e63e27d2.
Oct 28 14:59:56 serverXXX.mydomain.com systemd[1]: systemd-fsck@dev-disk-byx2duuid-56c220dax2dc032x2d484fx2da955x2d7b76e63e27d2.service: Job systemd-fsck@dev-disk-byx2duuid-56c220dax2dc032x2d484fx2da955x…ult 'dependency'.
[root@serverXXX core]#
[root@serverXXX core]# systemctl status systemd-sysusers.service
× systemd-sysusers.service - Create System Users
     Loaded: loaded (/usr/lib/systemd/system/systemd-sysusers.service; static)
     Active: failed (Result: exit-code) since Mon 2024-10-28 14:44:47 UTC; 7min ago
   Duration: 2.824s
       Docs: man:sysusers.d(5)
             man:systemd-sysusers.service(8)
   Main PID: 1067 (code=exited, status=1/FAILURE)
        CPU: 33ms

Oct 28 14:44:47 localhost systemd[1]: Starting Create System Users...
Oct 28 14:44:47 localhost systemd-sysusers[1067]: Creating group 'sgx' with GID 991.
Oct 28 14:44:47 localhost systemd-sysusers[1067]: /etc/gshadow: Group "sgx" already exists.
Oct 28 14:44:47 localhost systemd[1]: systemd-sysusers.service: Main process exited, code=exited, status=1/FAILURE
Oct 28 14:44:47 localhost systemd[1]: systemd-sysusers.service: Failed with result 'exit-code'.
Oct 28 14:44:47 localhost systemd[1]: Failed to start Create System Users.
[root@serverXXX core]#
[root@serverXXX core]# rpm-ostree status
A dependency job for rpm-ostreed.service failed. See 'journalctl -xe' for details.
○ rpm-ostreed.service - rpm-ostree System Management Daemon
     Loaded: loaded (/usr/lib/systemd/system/rpm-ostreed.service; static)
    Drop-In: /etc/systemd/system/rpm-ostreed.service.d
             └─10-mco-default-env.conf
             /run/systemd/system/rpm-ostreed.service.d
             └─bug2111817.conf
             /etc/systemd/system/rpm-ostreed.service.d
             └─mco-controlplane-nice.conf
     Active: inactive (dead)
       Docs: man:rpm-ostree(1)

Oct 28 14:49:17 serverXXX.mydomain.com systemd[1]: Dependency failed for rpm-ostree System Management Daemon.
Oct 28 14:49:17 serverXXX.mydomain.com systemd[1]: rpm-ostreed.service: Job rpm-ostreed.service/start failed with result 'dependency'.
error: Loading sysroot: exit status: 1
[root@serverXXX core]#
# oc logs machine-config-daemon-w8xmb -n openshift-machine-config-operator
Defaulted container "machine-config-daemon" out of: machine-config-daemon, kube-rbac-proxy
I1028 14:54:06.831293   24263 start.go:68] Version: machine-config-daemon-4.6.0-202006240615.p0-2967-g858c0a2e-dirty (858c0a2e5965fcff48716c1db01ee76bb1eed9f2)
I1028 14:54:06.831346   24263 update.go:2631] Running: mount --rbind /run/secrets /rootfs/run/secrets
I1028 14:54:06.832977   24263 update.go:2631] Running: mount --rbind /usr/bin /rootfs/run/machine-config-daemon-bin
I1028 14:54:06.834368   24263 daemon.go:517] using appropriate binary for source=rhel-9 target=rhel-9
I1028 14:54:06.867152   24263 daemon.go:570] Invoking re-exec /run/bin/machine-config-daemon
I1028 14:54:06.891116   24263 start.go:68] Version: machine-config-daemon-4.6.0-202006240615.p0-2967-g858c0a2e-dirty (858c0a2e5965fcff48716c1db01ee76bb1eed9f2)
E1028 14:54:06.891297   24263 image_manager_helper.go:133] Merged secret file does not exist; defaulting to cluster pull secret
I1028 14:54:06.891317   24263 image_manager_helper.go:194] Linking rpm-ostree authfile to /var/lib/kubelet/config.json
F1028 14:54:06.975161   24263 start.go:106] Failed to initialize single run daemon: error reading osImageURL from rpm-ostree: exit status 1

@BeardOverflow
Copy link

@alrf
Reading the docs, it seems software RAID support is limited to specific cases:
https://docs.okd.io/latest/installing/install_config/installing-customizing.html#installation-special-config-storage_installing-customizing

I have not played enough with software RAID-1 (aka mirrored devices) to tell you how to fix your installation failures, but I can see some differences between your machine config and the reference docs. In your initial manifest files you are using the storage.disks key (supported in regular Ignition files, but not referenced in OKD's docs, so maybe unsupported?) instead of the boot_device key (which is clearly documented):

variant: openshift
version: 4.16.0
metadata: [...]
boot_device:
  mirror:
    devices:
      - /dev/sda
      - /dev/sdb

Even if the boot_device key does not work either, you could build the RAID manually and invoke coreos-installer afterwards (maybe you would need Intel VROC or similar?):

If none of the above helps you, consider using a RAID data volume instead (for the /var or /var/lib/containers mount points only):

@alrf
Author

alrf commented Oct 30, 2024

@BeardOverflow the boot_device key is used in the Butane config file, which Butane converts into the OpenShift/OKD YAML config; you can see it in the document you linked. The content of the generated YAML file is exactly the same as what I use; I have played a lot with the RAID configuration.
I used the same documentation to create these configs and they work: physically, software RAID1 is configured (I can see it in the cat /proc/mdstat output). But something is wrong in v4.17.
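
For reference, a minimal sketch of the Butane source I mean (device names from my setup, spec version matching the release, e.g. 4.16.0); butane then renders it into a MachineConfig very similar to the one I posted above:

variant: openshift
version: 4.16.0
metadata:
  name: 99-master-storage
  labels:
    machineconfiguration.openshift.io/role: master
boot_device:
  mirror:
    devices:
      - /dev/nvme0n1
      - /dev/nvme1n1

butane 99-master-storage.bu -o 99-master-storage.yaml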

@BeardOverflow

@alrf
OK, presumably it is an OKD issue again. To trace it better, I would suggest you open a separate issue (the initial topic, "bootstrap node is not serving ignition files", is pending resolution by the OKD maintainers, and they should have enough information from our answers above).

Also, to confirm your statements, could you test OKD 4.16 with a mirrored disk setup too? https://github.com/okd-project/okd-scos/releases/tag/4.16.0-okd-scos.0

@alrf
Author

alrf commented Oct 30, 2024

@BeardOverflow I tested 4.16.0-okd-scos.0 and 4.16.0-0.okd-scos-2024-09-24-151747; both have the same issue as v4.17: rpm-ostree status fails due to systemd-fsck@dev-disk-by.. (as described here: #2041 (comment))

@alrf
Author

alrf commented Oct 30, 2024

@BeardOverflow I checked the journal more carefully and found FEATURE_C12 again:

Oct 30 13:45:45 serverXXX.mydomain.com systemd-fsck[12127]: /dev/md126 has unsupported feature(s): FEATURE_C12
Oct 30 13:45:45 serverXXX.mydomain.com systemd-fsck[12127]: e2fsck: Get a newer version of e2fsck!
Oct 30 13:45:45 serverXXX.mydomain.com systemd-fsck[12127]: boot: ********** WARNING: Filesystem still has errors **********
Oct 30 13:45:45 serverXXX.mydomain.com systemd-fsck[12125]: fsck failed with exit status 12.
Oct 30 13:45:45 serverXXX.mydomain.com systemd[1]: systemd-fsck@dev-disk-by\x2duuid-340234db\x2dde51\x2d4cd6\x2da39f\x2da6cc6bec3914.service: Main process exited, code=exited, status=1/FAILURE
░░ Subject: Unit process exited
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ An ExecStart= process belonging to unit systemd-fsck@dev-disk-by\x2duuid-340234db\x2dde51\x2d4cd6\x2da39f\x2da6cc6bec3914.service has exited.
░░
░░ The process' exit code is 'exited' and its exit status is 1.
Oct 30 13:45:45 serverXXX.mydomain.com systemd[1]: systemd-fsck@dev-disk-by\x2duuid-340234db\x2dde51\x2d4cd6\x2da39f\x2da6cc6bec3914.service: Failed with result 'exit-code'.
░░ Subject: Unit failed
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit systemd-fsck@dev-disk-by\x2duuid-340234db\x2dde51\x2d4cd6\x2da39f\x2da6cc6bec3914.service has entered the 'failed' state with result 'exit-code'.
Oct 30 13:45:45 serverXXX.mydomain.com systemd[1]: Failed to start File System Check on /dev/disk/by-uuid/340234db-de51-4cd6-a39f-a6cc6bec3914.
░░ Subject: A start job for unit systemd-fsck@dev-disk-by\x2duuid-340234db\x2dde51\x2d4cd6\x2da39f\x2da6cc6bec3914.service has failed
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ A start job for unit systemd-fsck@dev-disk-by\x2duuid-340234db\x2dde51\x2d4cd6\x2da39f\x2da6cc6bec3914.service has finished with a failure.
░░
░░ The job identifier is 4398 and the job result is failed.
Oct 30 13:45:45 serverXXX.mydomain.com systemd[1]: Dependency failed for CoreOS Dynamic Mount for /boot.

The same as previously in my comments:
#2041 (comment)
#2041 (comment)
So, it is the same issue related to the kernel version.

@BeardOverflow

@alrf
It's related to the FCOS -> SCOS pivoting mechanism, again!

  1. FCOS image formats the disk with FEATURE_C12 (aka orphan_file) enabled (e2fsprogs-1.47.0 installed)
  2. FCOS -> SCOS pivot
  3. SCOS image cannot use FEATURE_C12 (e2fsprogs-1.46.0 installed)

However, the SCOS image does implement a fix to ensure an e2fsprogs version equal to or greater than 1.47.0 (coreos/coreos-assembler@1c280e7), but it never takes effect because you are using coreos-installer from the FCOS image instead.

Ideas? A new patch that removes the orphan_file feature from the /etc/mke2fs.conf defaults:

cat original.ign | jq 'walk(if type == "object" and .path == "/usr/local/bin/bootstrap-pivot.sh" then .contents.source = "data:text/plain;charset=utf-8;base64,IyEvdXNyL2Jpbi9lbnYgYmFzaApzZXQgLWV1byBwaXBlZmFpbAoKIyBFeGl0IGVhcmx5IGlmIHBpdm90IGlzIGF0dGVtcHRlZCBvbiBTQ09TIExpdmUgSVNPCnNvdXJjZSAvZXRjL29zLXJlbGVhc2UKaWYgW1sgISAkKHRvdWNoIC91c3IvLnRlc3QpIF1dICYmIFtbICR7SUR9ID1+IF4oY2VudG9zKSQgXV07IHRoZW4KICB0b3VjaCAvb3B0L29wZW5zaGlmdC8ucGl2b3QtZG9uZQogIGV4aXQgMApmaQoKIyBSZWJhc2UgdG8gT0tEJ3MgT1NUcmVlIGNvbnRhaW5lciBpbWFnZS4KIyBUaGlzIGlzIHJlcXVpcmVkIGluIE9LRCBhcyB0aGUgbm9kZSBpcyBmaXJzdCBwcm92aXNpb25lZCB3aXRoIHBsYWluIEZlZG9yYSBDb3JlT1MuCgojIHNoZWxsY2hlY2sgZGlzYWJsZT1TQzEwOTEKLiAvdXNyL2xvY2FsL2Jpbi9ib290c3RyYXAtc2VydmljZS1yZWNvcmQuc2gKLiAvdXNyL2xvY2FsL2Jpbi9yZWxlYXNlLWltYWdlLnNoCgojIFBpdm90IGJvb3RzdHJhcCBub2RlIHRvIE9LRCdzIE9TVHJlZSBpbWFnZQppZiBbICEgLWYgL29wdC9vcGVuc2hpZnQvLnBpdm90LWRvbmUgXTsgdGhlbgpNQUNISU5FX09TX0lNQUdFPSQoaW1hZ2VfZm9yIHN0cmVhbS1jb3Jlb3MpCmVjaG8gIlB1bGxpbmcgJHtNQUNISU5FX09TX0lNQUdFfS4uLiIKICB3aGlsZSB0cnVlCiAgZG8KICAgIHJlY29yZF9zZXJ2aWNlX3N0YWdlX3N0YXJ0ICJwdWxsLW9rZC1vcy1pbWFnZSIKICAgIGlmIHBvZG1hbiBwdWxsIC0tcXVpZXQgIiR7TUFDSElORV9PU19JTUFHRX0iCiAgICB0aGVuCiAgICAgICAgcmVjb3JkX3NlcnZpY2Vfc3RhZ2Vfc3VjY2VzcwogICAgICAgIGJyZWFrCiAgICBlbHNlCiAgICAgICAgcmVjb3JkX3NlcnZpY2Vfc3RhZ2VfZmFpbHVyZQogICAgICAgIGVjaG8gIlB1bGwgZmFpbGVkLiBSZXRyeWluZyAke01BQ0hJTkVfT1NfSU1BR0V9Li4uIgogICAgZmkKICBkb25lCgogIHJlY29yZF9zZXJ2aWNlX3N0YWdlX3N0YXJ0ICJyZWJhc2UtdG8tb2tkLW9zLWltYWdlIgogIG1udD0iJChwb2RtYW4gaW1hZ2UgbW91bnQgIiR7TUFDSElORV9PU19JTUFHRX0iKSIKICAjIFNOTyBzZXR1cCBib290cyBpbnRvIExpdmUgSVNPIHdoaWNoIGNhbm5vdCBiZSByZWJhc2VkCiAgIyBodHRwczovL2dpdGh1Yi5jb20vY29yZW9zL3JwbS1vc3RyZWUvaXNzdWVzLzQ1NDcKICByc3luYyAtcmx0dSAkbW50L2V0Yy8gL2V0Yy8KICBpZiBbICEgLWQgL3J1bi9lcGhlbWVyYWwgXQogIHRoZW4KICAgIG1rZGlyIC1wIC9ydW4vZXBoZW1lcmFsCiAgICBtb3VudCAtdCB0bXBmcyAtbyBzaXplPTUwJSBub25lIC9ydW4vZXBoZW1lcmFsCiAgZmkKICBjcCAtYSAvdXNyIC9saWIgL2xpYjY0IC9ydW4vZXBoZW1lcmFsCiAgcnN5bmMgLXJsdCAtLWlnbm9yZS1leGlzdGluZyAkbW50L3Vzci8gL3J1bi9lcGhlbWVyYWwvdXNyLwogIHJzeW5jIC1ybHQgLS1pZ25vcmUtZXhpc3RpbmcgJG1udC9saWIvIC9ydW4vZXBoZW1lcmFsL2xpYi8KICByc3luYyAtcmx0IC0taWdub3JlLWV4aXN0aW5nICRtbnQvbGliNjQvIC9ydW4vZXBoZW1lcmFsL2xpYjY0LwogIG1vdW50IC0tYmluZCAvcnVuL2VwaGVtZXJhbC91c3IgL3VzcgogIG1vdW50IC0tYmluZCAvcnVuL2VwaGVtZXJhbC9saWIgL2xpYgogIG1vdW50IC0tYmluZCAvcnVuL2VwaGVtZXJhbC9saWI2NCAvbGliNjQKICBzZWQgLWkgLWUgInMvLG9ycGhhbl9maWxlLy9nIiAvZXRjL21rZTJmcy5jb25mCiAgc3lzdGVtY3RsIGRhZW1vbi1yZWxvYWQKCiAgIyBBcHBseSBwcmVzZXRzIGZyb20gT0tEIE1hY2hpbmUgT1MKICBzeXN0ZW1jdGwgcHJlc2V0LWFsbAoKICAjIFdvcmthcm91bmQgZm9yIFNFTGludXggZGVuaWFscyB3aGVuIGxhdW5jaGluZyBjcmlvLnNlcnZpY2UgZnJvbSBvdmVybGF5ZnMKICBzZXRlbmZvcmNlIFBlcm1pc3NpdmUKCiAgIyBjcmlvLnNlcnZpY2UgaXMgbm90IHBhcnQgb2YgRkNPUyBidXQgb2YgT0tEIE1hY2hpbmUgT1MuIEl0IHdpbGwgbG9hZGVkIGFmdGVyIHN5c3RlbWN0bCBkYWVtb24tcmVsb2FkIGFib3ZlIGJ1dCBoYXMgdG8gYmUgc3RhcnRlZCBtYW51YWxseQogIHN5c3RlbWN0bCByZXN0YXJ0IGNyaW8tY29uZmlndXJlLnNlcnZpY2UKICBzeXN0ZW1jdGwgc3RhcnQgY3Jpby5zZXJ2aWNlCgogIHRvdWNoIC9vcHQvb3BlbnNoaWZ0Ly5waXZvdC1kb25lCiAgcG9kbWFuIGltYWdlIHVtb3VudCAiJHtNQUNISU5FX09TX0lNQUdFfSIKICByZWNvcmRfc2VydmljZV9zdGFnZV9zdWNjZXNzCmZpCg==" else . end)' > fix.ign
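
For clarity, the only functional change compared to the previous patch is one extra line in the rebase step (decoded from the script above):

sed -i -e "s/,orphan_file//g" /etc/mke2fs.conf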

@AlexStorm1313

Trying to install on a bare-metal single-node cluster, I am also running into the same systemd-fsck@dev-disk-by issue. I tried versions 4.15, 4.16 and 4.17 (all of them SCOS variants), all resulting in the same error.

For new installations this stream URL can be found via openshift-install coreos print-stream-json, but this currently returns the Fedora CoreOS release stream. I can't seem to find the SCOS release stream anywhere to set it manually.

@alrf
Author

alrf commented Oct 30, 2024

For new installations this stream URL can be found via openshift-install coreos print-stream-json, but this currently returns the Fedora CoreOS release stream. I can't seem to find the SCOS release stream anywhere to set it manually.

@AlexStorm1313 please read this issue carefully, it was already discussed:
#2041 (comment)
#2041 (comment)

@alrf
Author

alrf commented Nov 6, 2024

However, the SCOS image does implement a fix to ensure an e2fsprogs version equal to or greater than 1.47.0 (coreos/coreos-assembler@1c280e7), but it never takes effect because you are using coreos-installer from the FCOS image instead.

@BeardOverflow Is it possible to download the SCOS raw image from somewhere? Or does it have to be built with the CoreOS Assembler (COSA)?

@alrf
Author

alrf commented Nov 11, 2024

@BeardOverflow I've tried the latest patch, no changes, FEATURE_C12 (orphan_file) again :(
However, there is no orphan_file in the /etc/mke2fs.conf file.

# journalctl -t systemd-fsck
Nov 11 13:45:35 serverXXX.mydomain.com systemd-fsck[2239]: boot: clean, 369/98304 files, 144760/393152 blocks
-- Boot 90dd3e7c8bec47b4826720c75309634f --
Nov 11 13:51:41 localhost systemd-fsck[833]: /usr/sbin/fsck.xfs: XFS file system.
Nov 11 13:51:44 localhost systemd-fsck[1047]: /dev/md126 has unsupported feature(s): FEATURE_C12
Nov 11 13:51:44 localhost systemd-fsck[1047]: e2fsck: Get a newer version of e2fsck!
Nov 11 13:51:44 localhost systemd-fsck[1047]: boot: ********** WARNING: Filesystem still has errors **********
Nov 11 13:51:44 localhost systemd-fsck[1042]: fsck failed with exit status 12.
Nov 11 13:55:09 serverXXX.mydomain.com systemd-fsck[3109]: /dev/md126 has unsupported feature(s): FEATURE_C12
Nov 11 13:55:09 serverXXX.mydomain.com systemd-fsck[3109]: e2fsck: Get a newer version of e2fsck!
Nov 11 13:55:09 serverXXX.mydomain.com systemd-fsck[3109]: boot: ********** WARNING: Filesystem still has errors **********
Nov 11 13:55:09 serverXXX.mydomain.com systemd-fsck[3107]: fsck failed with exit status 12.
Nov 11 13:56:57 serverXXX.mydomain.com systemd-fsck[3136]: /dev/md126 has unsupported feature(s): FEATURE_C12
Nov 11 13:56:57 serverXXX.mydomain.com systemd-fsck[3136]: e2fsck: Get a newer version of e2fsck!

[root@serverXXX core]# grep orphan_file /etc/mke2fs.conf
[root@serverXXX core]#

@alrf
Copy link
Author

alrf commented Nov 13, 2024

@BeardOverflow @JaimeMagiera is it expected that the bootstrap node still runs Fedora CoreOS in version 4.16.0-okd-scos.0?

root@bootstrap01:/var/home/core130# cat /etc/os-release
NAME="Fedora Linux"
VERSION="39.20231101.3.0 (CoreOS)"
ID=fedora
VERSION_ID=39
VERSION_CODENAME=""
PLATFORM_ID="platform:f39"
PRETTY_NAME="Fedora CoreOS 39.20231101.3.0"
ANSI_COLOR="0;38;2;60;110;180"
LOGO=fedora-logo-icon
CPE_NAME="cpe:/o:fedoraproject:fedora:39"
HOME_URL="https://getfedora.org/coreos/"
DOCUMENTATION_URL="https://docs.fedoraproject.org/en-US/fedora-coreos/"
SUPPORT_URL="https://github.com/coreos/fedora-coreos-tracker/"
BUG_REPORT_URL="https://github.com/coreos/fedora-coreos-tracker/"
REDHAT_BUGZILLA_PRODUCT="Fedora"
REDHAT_BUGZILLA_PRODUCT_VERSION=39
REDHAT_SUPPORT_PRODUCT="Fedora"
REDHAT_SUPPORT_PRODUCT_VERSION=39
SUPPORT_END=2024-05-14
VARIANT="CoreOS"
VARIANT_ID=coreos
OSTREE_VERSION='39.20231101.3.0'
root@bootstrap01:/var/home/core# ll /opt/openshift/.pivot-done
-rw-r--r--. 1 root root 0 Nov 13 11:00 /opt/openshift/.pivot-done

crictl ps shows running containers.

@BeardOverflow
Copy link

BeardOverflow commented Nov 13, 2024

@alrf

On the SCOS build question: yes, you can build your own SCOS image using COSA (see scos-content:xxx-stream-coreos), but unfortunately no public builds are available for download. It is not excessively difficult to do, but you have to rebuild the SCOS image every time a new tag is released.

On the 4.16.0-okd-scos.0 question: yes, in theory both FCOS and SCOS are currently supported for the installation process, but FCOS is more prone to failure because it is not the target OS. That said, FCOS still has some advantages, such as a more generic setup.

About the latest patch, I think you are showing us the wrong log here: #2041 (comment). Please check which OS is booting, because FCOS does have the FEATURE_C12/orphan_file feature (and the patch must be applied to FCOS). Are you reading the logs after coreos-installer has finished (in other words, after the first reboot)?

@alrf
Copy link
Author

alrf commented Nov 13, 2024

@BeardOverflow

Thank you for clarifications.

About the latest patch, I think you are showing us the wrong log here: #2041 (comment). Please check which OS is booting, because FCOS does have the FEATURE_C12/orphan_file feature (and the patch must be applied to FCOS). Are you reading the logs after coreos-installer has finished (in other words, after the first reboot)?

Yes, it fails after the first reboot. I've also included the output of last below.

[root@serverXXX core]# cat /etc/os-release
NAME="CentOS Stream CoreOS"
ID="scos"
ID_LIKE="rhel fedora"
VERSION="416.9.202410282248-0"
VERSION_ID="4.16"
VARIANT="CoreOS"
VARIANT_ID=coreos
PLATFORM_ID="platform:el9"
PRETTY_NAME="CentOS Stream CoreOS 416.9.202410282248-0"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:9::coreos"
HOME_URL="https://centos.org/"
DOCUMENTATION_URL="https://docs.okd.io/latest/welcome/index.html"
BUG_REPORT_URL="https://access.redhat.com/labs/rhir/"
REDHAT_BUGZILLA_PRODUCT="OpenShift Container Platform"
REDHAT_BUGZILLA_PRODUCT_VERSION="4.16"
REDHAT_SUPPORT_PRODUCT="OpenShift Container Platform"
REDHAT_SUPPORT_PRODUCT_VERSION="4.16"
OPENSHIFT_VERSION="4.16"
OSTREE_VERSION="416.9.202410282248-0"

[root@serverXXX core]# grep orphan_file /etc/mke2fs.conf
[root@serverXXX core]#

[root@serverXXX core]# rpm-ostree status
A dependency job for rpm-ostreed.service failed. See 'journalctl -xe' for details.
○ rpm-ostreed.service - rpm-ostree System Management Daemon
     Loaded: loaded (/usr/lib/systemd/system/rpm-ostreed.service; static)
    Drop-In: /etc/systemd/system/rpm-ostreed.service.d
             └─10-mco-default-env.conf
             /run/systemd/system/rpm-ostreed.service.d
             └─bug2111817.conf
             /etc/systemd/system/rpm-ostreed.service.d
             └─mco-controlplane-nice.conf
     Active: inactive (dead)
       Docs: man:rpm-ostree(1)

Nov 13 14:53:12 serverXXX.mydomain.com systemd[1]: Dependency failed for rpm-ostree System Management Daemon.
Nov 13 14:53:12 serverXXX.mydomain.com systemd[1]: rpm-ostreed.service: Job rpm-ostreed.service/start failed with result 'dependency'.
Nov 13 14:58:20 serverXXX.mydomain.com systemd[1]: Dependency failed for rpm-ostree System Management Daemon.
Nov 13 14:58:20 serverXXX.mydomain.com systemd[1]: rpm-ostreed.service: Job rpm-ostreed.service/start failed with result 'dependency'.
Nov 13 15:03:08 serverXXX.mydomain.com systemd[1]: Dependency failed for rpm-ostree System Management Daemon.
Nov 13 15:03:08 serverXXX.mydomain.com systemd[1]: rpm-ostreed.service: Job rpm-ostreed.service/start failed with result 'dependency'.
Nov 13 15:03:31 serverXXX.mydomain.com systemd[1]: Dependency failed for rpm-ostree System Management Daemon.
Nov 13 15:03:31 serverXXX.mydomain.com systemd[1]: rpm-ostreed.service: Job rpm-ostreed.service/start failed with result 'dependency'.
Nov 13 15:05:27 serverXXX.mydomain.com systemd[1]: Dependency failed for rpm-ostree System Management Daemon.
Nov 13 15:05:27 serverXXX.mydomain.com systemd[1]: rpm-ostreed.service: Job rpm-ostreed.service/start failed with result 'dependency'.
error: Loading sysroot: exit status: 1
[root@serverXXX core]#

[root@serverXXX core]# journalctl -t systemd-fsck
Nov 13 14:16:16 serverXXX.mydomain.com systemd-fsck[2326]: boot: clean, 369/98304 files, 144760/393152 blocks
-- Boot 7013842d222748958c6db3601174020f --
Nov 13 14:19:31 localhost systemd-fsck[844]: /usr/sbin/fsck.xfs: XFS file system.
Nov 13 14:19:35 localhost systemd-fsck[1068]: /dev/md126 has unsupported feature(s): FEATURE_C12
Nov 13 14:19:35 localhost systemd-fsck[1068]: e2fsck: Get a newer version of e2fsck!
Nov 13 14:19:35 localhost systemd-fsck[1068]: boot: ********** WARNING: Filesystem still has errors **********
Nov 13 14:19:35 localhost systemd-fsck[1063]: fsck failed with exit status 12.
Nov 13 14:32:08 serverXXX.mydomain.com systemd-fsck[10806]: /dev/md126 has unsupported feature(s): FEATURE_C12
Nov 13 14:32:08 serverXXX.mydomain.com systemd-fsck[10806]: e2fsck: Get a newer version of e2fsck!
Nov 13 14:32:08 serverXXX.mydomain.com systemd-fsck[10806]: boot: ********** WARNING: Filesystem still has errors **********
Nov 13 14:32:08 serverXXX.mydomain.com systemd-fsck[10804]: fsck failed with exit status 12.
Nov 13 14:32:09 serverXXX.mydomain.com systemd-fsck[10943]: /dev/md126 has unsupported feature(s): FEATURE_C12
Nov 13 14:32:09 serverXXX.mydomain.com systemd-fsck[10943]: e2fsck: Get a newer version of e2fsck!
Nov 13 14:32:09 serverXXX.mydomain.com systemd-fsck[10943]: boot: ********** WARNING: Filesystem still has errors **********
Nov 13 14:32:09 serverXXX.mydomain.com systemd-fsck[10941]: fsck failed with exit status 12.
Nov 13 14:32:24 serverXXX.mydomain.com systemd-fsck[12696]: /dev/md126 has unsupported feature(s): FEATURE_C12
Nov 13 14:32:24 serverXXX.mydomain.com systemd-fsck[12696]: e2fsck: Get a newer version of e2fsck!
Nov 13 14:32:24 serverXXX.mydomain.com systemd-fsck[12696]: boot: ********** WARNING: Filesystem still has errors **********
Nov 13 14:32:24 serverXXX.mydomain.com systemd-fsck[12658]: fsck failed with exit status 12.
Nov 13 14:32:51 serverXXX.mydomain.com systemd-fsck[14633]: /dev/md126 has unsupported feature(s): FEATURE_C12
Nov 13 14:32:51 serverXXX.mydomain.com systemd-fsck[14633]: e2fsck: Get a newer version of e2fsck!
Nov 13 14:32:51 serverXXX.mydomain.com systemd-fsck[14633]: boot: ********** WARNING: Filesystem still has errors **********
Nov 13 14:32:51 serverXXX.mydomain.com systemd-fsck[14604]: fsck failed with exit status 12.
Nov 13 14:33:35 serverXXX.mydomain.com systemd-fsck[16861]: /dev/md126 has unsupported feature(s): FEATURE_C12
Nov 13 14:33:35 serverXXX.mydomain.com systemd-fsck[16861]: e2fsck: Get a newer version of e2fsck!
Nov 13 14:33:35 serverXXX.mydomain.com systemd-fsck[16861]: boot: ********** WARNING: Filesystem still has errors **********
Nov 13 14:33:35 serverXXX.mydomain.com systemd-fsck[16859]: fsck failed with exit status 12.
Nov 13 14:34:59 serverXXX.mydomain.com systemd-fsck[19341]: /dev/md126 has unsupported feature(s): FEATURE_C12
Nov 13 14:34:59 serverXXX.mydomain.com systemd-fsck[19341]: e2fsck: Get a newer version of e2fsck!
Nov 13 14:34:59 serverXXX.mydomain.com systemd-fsck[19341]: boot: ********** WARNING: Filesystem still has errors **********
Nov 13 14:34:59 serverXXX.mydomain.com systemd-fsck[19335]: fsck failed with exit status 12.
Nov 13 14:37:41 serverXXX.mydomain.com systemd-fsck[24454]: /dev/md126 has unsupported feature(s): FEATURE_C12
Nov 13 14:37:41 serverXXX.mydomain.com systemd-fsck[24454]: e2fsck: Get a newer version of e2fsck!


[root@serverXXX core]# last
core     pts/0        xx.xx.xx.xx    Wed Nov 13 14:56   still logged in
reboot   system boot  5.14.0-522.el9.x Wed Nov 13 14:19   still running
reboot   system boot  6.5.9-300.fc39.x Wed Nov 13 14:16 - 14:18  (00:01)

wtmp begins Wed Nov 13 14:16:14 2024
[root@serverXXX core]#

@alrf
Copy link
Author

alrf commented Nov 13, 2024

@BeardOverflow is the /usr/local/bin/bootstrap-pivot.sh script executed only once, before the first reboot?

@alrf
Copy link
Author

alrf commented Nov 20, 2024

@JaimeMagiera @BeardOverflow the /etc/mke2fs.conf trick doesn't work for me.
My workaround is to build e2fsprogs 1.47.1 once on SCOS, upload the resulting tune2fs binary to a web server, and use it during the installation:

curl -L https://webserver.mydomain.com/packages/tune2fs -o /tmp/tune2fs && \
chmod +x /tmp/tune2fs && \
sudo /tmp/tune2fs -O ^orphan_file /dev/md126
sudo systemctl reboot
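
A quick way to confirm the workaround took effect is to list the superblock features again with the freshly built tune2fs (same binary and device as in the commands above):

# orphan_file should no longer appear in the feature list
sudo /tmp/tune2fs -l /dev/md126 | grep 'Filesystem features'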

@BeardOverflow
Copy link

@alrf Sorry, I did not see your previous question. Yes, my patch only runs before the first reboot.

I am curious, are you executing your workaround before or after the first reboot?

@alrf
Copy link
Author

alrf commented Nov 20, 2024

I am curious, are you executing your workaround before or after the first reboot?

@BeardOverflow After the first reboot, already on SCOS. It doesn't make sense to execute it before (on FCOS).

@BeardOverflow
Copy link

BeardOverflow commented Nov 21, 2024

@alrf

Uhm... I feel like there is a piece missing from this puzzle. Shouldn't the RAID (partition table included) be built before the first reboot? If you run tune2fs on the second boot, you are assuming the mdadm/mkfs commands have already run. Let me turn it into a question: who built the RAID (FCOS or SCOS), and when (first or second boot)? To my knowledge, the answers are FCOS on first boot, so I also think the FCOS installer is not wiping the devices before using them [1]

[1] https://docs.fedoraproject.org/en-US/fedora-coreos/storage/

@alrf
Copy link
Author

alrf commented Nov 21, 2024

@BeardOverflow

Shouldn't the RAID (partition table included) be built before the first reboot?

Yes, the RAID is built before the first reboot (i.e. on FCOS); you can check the timestamps in the outputs below.
The RAID was created at 12:29, which matches the first boot (6.5.9-300.fc39.x), i.e. FCOS:

[root@serverXXX core]# journalctl -t ignition --no-pager |grep disk | grep md
Nov 21 12:29:26 serverXXX.mydomain.com ignition[1073]: disks: createRaids: op(b): [started]  creating "md-boot"
Nov 21 12:29:26 serverXXX.mydomain.com ignition[1073]: disks: createRaids: op(b): executing: "mdadm" "--create" "md-boot" "--force" "--run" "--homehost" "any" "--level" "raid1" "--raid-devices" "2" "--metadata=1.0" "/run/ignition/dev_aliases/dev/disk/by-partlabel/boot-1" "/run/ignition/dev_aliases/dev/disk/by-partlabel/boot-2"
Nov 21 12:29:26 serverXXX.mydomain.com ignition[1073]: disks: createRaids: op(b): [finished] creating "md-boot"
Nov 21 12:29:26 serverXXX.mydomain.com ignition[1073]: disks: createRaids: op(c): [started]  creating "md-root"
Nov 21 12:29:26 serverXXX.mydomain.com ignition[1073]: disks: createRaids: op(c): executing: "mdadm" "--create" "md-root" "--force" "--run" "--homehost" "any" "--level" "raid1" "--raid-devices" "2" "/run/ignition/dev_aliases/dev/disk/by-partlabel/root-1" "/run/ignition/dev_aliases/dev/disk/by-partlabel/root-2"
Nov 21 12:29:26 serverXXX.mydomain.com ignition[1073]: disks: createRaids: op(c): [finished] creating "md-root"
Nov 21 12:29:26 serverXXX.mydomain.com ignition[1073]: disks: createFilesystems: op(d): [started]  waiting for devices [/dev/disk/by-partlabel/esp-1 /dev/disk/by-partlabel/esp-2 /dev/md/md-boot /dev/md/md-root]
Nov 21 12:29:26 serverXXX.mydomain.com ignition[1073]: disks: createFilesystems: op(d): [finished] waiting for devices [/dev/disk/by-partlabel/esp-1 /dev/disk/by-partlabel/esp-2 /dev/md/md-boot /dev/md/md-root]
Nov 21 12:29:26 serverXXX.mydomain.com ignition[1073]: disks: createFilesystems: created device alias for "/dev/md/md-boot": "/run/ignition/dev_aliases/dev/md/md-boot" -> "/dev/md127"
Nov 21 12:29:26 serverXXX.mydomain.com ignition[1073]: disks: createFilesystems: created device alias for "/dev/md/md-root": "/run/ignition/dev_aliases/dev/md/md-root" -> "/dev/md126"
Nov 21 12:29:26 serverXXX.mydomain.com ignition[1073]: disks: createFilesystems: op(e): op(f): op(10): [started]  determining filesystem type of "/dev/md/md-boot"
Nov 21 12:29:26 serverXXX.mydomain.com ignition[1073]: disks: createFilesystems: op(e): op(f): op(10): op(11): [started]  determining filesystem type of "/dev/md/md-root"
Nov 21 12:29:26 serverXXX.mydomain.com ignition[1073]: disks: createFilesystems: op(e): op(f): op(10): op(13): [finished] determining filesystem type of "/dev/md/md-root"
Nov 21 12:29:26 serverXXX.mydomain.com ignition[1073]: disks: createFilesystems: op(e): op(f): op(10): found  filesystem at "/dev/md/md-root" with uuid "" and label ""
Nov 21 12:29:26 serverXXX.mydomain.com ignition[1073]: disks: createFilesystems: op(e): op(f): op(10): op(14): [started]  wiping filesystem signatures from "/run/ignition/dev_aliases/dev/md/md-root"
Nov 21 12:29:26 serverXXX.mydomain.com ignition[1073]: disks: createFilesystems: op(e): op(f): op(10): op(14): executing: "wipefs" "-a" "/run/ignition/dev_aliases/dev/md/md-root"
Nov 21 12:29:26 serverXXX.mydomain.com ignition[1073]: disks: createFilesystems: op(e): op(f): op(10): op(14): [finished] determining filesystem type of "/dev/md/md-boot"
Nov 21 12:29:26 serverXXX.mydomain.com ignition[1073]: disks: createFilesystems: op(e): op(f): op(10): found ext4 filesystem at "/dev/md/md-boot" with uuid "4600682a-b11c-4302-9f61-3b503a78716a" and label "boot"
Nov 21 12:29:26 serverXXX.mydomain.com ignition[1073]: disks: createFilesystems: op(e): op(f): op(10): op(15): [started]  wiping filesystem signatures from "/run/ignition/dev_aliases/dev/md/md-boot"
Nov 21 12:29:26 serverXXX.mydomain.com ignition[1073]: disks: createFilesystems: op(e): op(f): op(10): op(15): executing: "wipefs" "-a" "/run/ignition/dev_aliases/dev/md/md-boot"
Nov 21 12:29:26 serverXXX.mydomain.com ignition[1073]: disks: createFilesystems: op(e): op(f): op(10): op(16): [finished] wiping filesystem signatures from "/run/ignition/dev_aliases/dev/md/md-root"
Nov 21 12:29:26 serverXXX.mydomain.com ignition[1073]: disks: createFilesystems: op(e): op(f): op(10): op(17): [started]  creating "xfs" filesystem on "/run/ignition/dev_aliases/dev/md/md-root"
Nov 21 12:29:26 serverXXX.mydomain.com ignition[1073]: disks: createFilesystems: op(e): op(f): op(10): op(17): executing: "mkfs.xfs" "-f" "-L" "root" "/run/ignition/dev_aliases/dev/md/md-root"
Nov 21 12:29:26 serverXXX.mydomain.com ignition[1073]: disks: createFilesystems: op(e): op(f): op(10): [finished] wiping filesystem signatures from "/run/ignition/dev_aliases/dev/md/md-boot"
Nov 21 12:29:26 serverXXX.mydomain.com ignition[1073]: disks: createFilesystems: op(e): op(f): op(19): [started]  creating "ext4" filesystem on "/run/ignition/dev_aliases/dev/md/md-boot"
Nov 21 12:29:26 serverXXX.mydomain.com ignition[1073]: disks: createFilesystems: op(e): op(f): op(19): executing: "mkfs.ext4" "-F" "-L" "boot" "/run/ignition/dev_aliases/dev/md/md-boot"
Nov 21 12:29:26 serverXXX.mydomain.com ignition[1073]: disks: createFilesystems: op(e): op(f): [finished] creating "ext4" filesystem on "/run/ignition/dev_aliases/dev/md/md-boot"
Nov 21 12:29:29 serverXXX.mydomain.com ignition[1073]: disks: createFilesystems: op(e): [finished] creating "xfs" filesystem on "/run/ignition/dev_aliases/dev/md/md-root"
[root@serverXXX core]#
[root@serverXXX core]# last
core   pts/0        xx.xx.xx.xx   Thu Nov 21 12:53   still logged in
reboot   system boot  5.14.0-522.el9.x Thu Nov 21 12:38   still running
reboot   system boot  5.14.0-522.el9.x Thu Nov 21 12:34 - 12:37  (00:03)
reboot   system boot  5.14.0-522.el9.x Thu Nov 21 12:32 - 12:33  (00:00)
reboot   system boot  6.5.9-300.fc39.x Thu Nov 21 12:29 - 12:31  (00:02)

Let me turn it into a question: who built the RAID (FCOS or SCOS), and when (first or second boot)? To my knowledge, the answers are FCOS on first boot, so I also think the FCOS installer is not wiping the devices before using them

From the logs above, it is FCOS before the first boot.

How I see the issue: the RAID is built before the first boot on FCOS, but BEFORE the /usr/local/bin/bootstrap-pivot.sh script runs.
Check the times here: the pivot started at 12:29:38, but Ignition (i.e. the RAID creation) had already finished by then (at 12:29:29, per the output above):

[root@serverXXX core]# journalctl | grep pivot
Nov 21 12:29:38 serverXXX.mydomain.com ignition[1905]: files: op(5c): [started]  processing unit "pivot.service"
Nov 21 12:29:38 serverXXX.mydomain.com ignition[1905]: files: op(5c): op(5d): [started]  writing systemd drop-in "10-mco-default-env.conf" at "/sysroot/etc/systemd/system/pivot.service.d/10-mco-default-env.conf"
Nov 21 12:29:38 serverXXX.mydomain.com ignition[1905]: files: op(5c): op(5d): [finished] writing systemd drop-in "10-mco-default-env.conf" at "/sysroot/etc/systemd/system/pivot.service.d/10-mco-default-env.conf"
Nov 21 12:29:38 serverXXX.mydomain.com ignition[1905]: files: op(5c): [finished] processing unit "pivot.service"
Nov 21 12:29:39 serverXXX.mydomain.com systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Nov 21 12:29:39 serverXXX.mydomain.com dracut-pre-pivot[1974]: 38.981061 | /etc/multipath.conf does not exist, blacklisting all devices.
Nov 21 12:29:39 serverXXX.mydomain.com dracut-pre-pivot[1974]: 38.981105 | You can run "/sbin/mpathconf --enable" to create
Nov 21 12:29:39 serverXXX.mydomain.com dracut-pre-pivot[1974]: 38.981120 | /etc/multipath.conf. See man mpathconf(8) for more details
Nov 21 12:29:39 serverXXX.mydomain.com systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 21 12:29:39 serverXXX.mydomain.com systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 21 12:29:39 serverXXX.mydomain.com systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 21 12:30:14 serverXXX.mydomain.com silly_keller[2845]: I1121 12:30:14.753022    2847 rpm-ostree.go:308] Running captured: podman create --net=none --annotation=org.openshift.machineconfigoperator.pivot=true --name ostree-container-pivot-4cf4399d-ae96-407e-8f9c-e502c2e1fa66 quay.io/okd/scos-content@sha256:f9e85a6d1f779ad178a4268062a8d72f4027c31281f43eb61dc6334a53eb41a1
Nov 21 12:30:14 serverXXX.mydomain.com podman[2834]: I1121 12:30:14.753022    2847 rpm-ostree.go:308] Running captured: podman create --net=none --annotation=org.openshift.machineconfigoperator.pivot=true --name ostree-container-pivot-4cf4399d-ae96-407e-8f9c-e502c2e1fa66 quay.io/okd/scos-content@sha256:f9e85a6d1f779ad178a4268062a8d72f4027c31281f43eb61dc6334a53eb41a1
Nov 21 12:30:14 serverXXX.mydomain.com podman[3168]: 2024-11-21 12:30:14.831928451 +0100 CET m=+0.072881915 container create 1176d0ceea1d6a6db6a43f5f994e6111ff1f5b2def86a2a1c3d34e3a0dadeb00 (image=quay.io/okd/scos-content@sha256:f9e85a6d1f779ad178a4268062a8d72f4027c31281f43eb61dc6334a53eb41a1, name=ostree-container-pivot-4cf4399d-ae96-407e-8f9c-e502c2e1fa66, io.buildah.version=1.37.3, org.label-schema.build-date=20241022, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, version=416.9.202410282248-0)
Nov 21 12:30:16 serverXXX.mydomain.com podman[3267]: 2024-11-21 12:30:16.582830384 +0100 CET m=+0.064156637 container remove 1176d0ceea1d6a6db6a43f5f994e6111ff1f5b2def86a2a1c3d34e3a0dadeb00 (image=quay.io/okd/scos-content@sha256:f9e85a6d1f779ad178a4268062a8d72f4027c31281f43eb61dc6334a53eb41a1, name=ostree-container-pivot-4cf4399d-ae96-407e-8f9c-e502c2e1fa66, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, version=416.9.202410282248-0, io.buildah.version=1.37.3, org.label-schema.build-date=20241022, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)

That explains why the RAID has the FEATURE_C12/orphan_file feature enabled on the second boot (on SCOS): the filesystem was already created with it.
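
A quick way to double-check that from the FCOS side (first boot, before the pivot) is to dump the superblock of the freshly created boot array and to look at FCOS's mke2fs defaults; the md-boot device name below is taken from the Ignition log above and can differ between boots:

# run on the FCOS first boot, before the pivot
sudo dumpe2fs -h /dev/md127 | grep 'Filesystem features'   # should list orphan_file
grep orphan_file /etc/mke2fs.conf                          # present in FCOS's ext4 defaults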

@BeardOverflow
Copy link

How I see the issue: the RAID is built before the first boot on FCOS, but BEFORE the /usr/local/bin/bootstrap-pivot.sh script runs. Check the times here: the pivot started at 12:29:38, but Ignition (i.e. the RAID creation) had already finished by then (at 12:29:29, per the output above):

@alrf Sorry for the late answer. Please try the following patch:

cat original.ign | jq 'walk(if type == "object" and .path == "/usr/local/bin/bootstrap-pivot.sh" then .contents.source = "data:text/plain;charset=utf-8;base64,IyEvdXNyL2Jpbi9lbnYgYmFzaApzZXQgLWV1byBwaXBlZmFpbAoKIyBFeGl0IGVhcmx5IGlmIHBpdm90IGlzIGF0dGVtcHRlZCBvbiBTQ09TIExpdmUgSVNPCnNvdXJjZSAvZXRjL29zLXJlbGVhc2UKaWYgW1sgISAkKHRvdWNoIC91c3IvLnRlc3QpIF1dICYmIFtbICR7SUR9ID1+IF4oY2VudG9zKSQgXV07IHRoZW4KICB0b3VjaCAvb3B0L29wZW5zaGlmdC8ucGl2b3QtZG9uZQogIGV4aXQgMApmaQoKIyBSZWJhc2UgdG8gT0tEJ3MgT1NUcmVlIGNvbnRhaW5lciBpbWFnZS4KIyBUaGlzIGlzIHJlcXVpcmVkIGluIE9LRCBhcyB0aGUgbm9kZSBpcyBmaXJzdCBwcm92aXNpb25lZCB3aXRoIHBsYWluIEZlZG9yYSBDb3JlT1MuCgojIHNoZWxsY2hlY2sgZGlzYWJsZT1TQzEwOTEKLiAvdXNyL2xvY2FsL2Jpbi9ib290c3RyYXAtc2VydmljZS1yZWNvcmQuc2gKLiAvdXNyL2xvY2FsL2Jpbi9yZWxlYXNlLWltYWdlLnNoCgojIFBpdm90IGJvb3RzdHJhcCBub2RlIHRvIE9LRCdzIE9TVHJlZSBpbWFnZQppZiBbICEgLWYgL29wdC9vcGVuc2hpZnQvLnBpdm90LWRvbmUgXTsgdGhlbgpNQUNISU5FX09TX0lNQUdFPSQoaW1hZ2VfZm9yIHN0cmVhbS1jb3Jlb3MpCmVjaG8gIlB1bGxpbmcgJHtNQUNISU5FX09TX0lNQUdFfS4uLiIKICB3aGlsZSB0cnVlCiAgZG8KICAgIHJlY29yZF9zZXJ2aWNlX3N0YWdlX3N0YXJ0ICJwdWxsLW9rZC1vcy1pbWFnZSIKICAgIGlmIHBvZG1hbiBwdWxsIC0tcXVpZXQgIiR7TUFDSElORV9PU19JTUFHRX0iCiAgICB0aGVuCiAgICAgICAgcmVjb3JkX3NlcnZpY2Vfc3RhZ2Vfc3VjY2VzcwogICAgICAgIGJyZWFrCiAgICBlbHNlCiAgICAgICAgcmVjb3JkX3NlcnZpY2Vfc3RhZ2VfZmFpbHVyZQogICAgICAgIGVjaG8gIlB1bGwgZmFpbGVkLiBSZXRyeWluZyAke01BQ0hJTkVfT1NfSU1BR0V9Li4uIgogICAgZmkKICBkb25lCgogIHJlY29yZF9zZXJ2aWNlX3N0YWdlX3N0YXJ0ICJyZWJhc2UtdG8tb2tkLW9zLWltYWdlIgogIG1udD0iJChwb2RtYW4gaW1hZ2UgbW91bnQgIiR7TUFDSElORV9PU19JTUFHRX0iKSIKICAjIFNOTyBzZXR1cCBib290cyBpbnRvIExpdmUgSVNPIHdoaWNoIGNhbm5vdCBiZSByZWJhc2VkCiAgIyBodHRwczovL2dpdGh1Yi5jb20vY29yZW9zL3JwbS1vc3RyZWUvaXNzdWVzLzQ1NDcKICByc3luYyAtcmx0dSAkbW50L2V0Yy8gL2V0Yy8KICBpZiBbICEgLWQgL3J1bi9lcGhlbWVyYWwgXQogIHRoZW4KICAgIG1rZGlyIC1wIC9ydW4vZXBoZW1lcmFsCiAgICBtb3VudCAtdCB0bXBmcyAtbyBzaXplPTUwJSBub25lIC9ydW4vZXBoZW1lcmFsCiAgZmkKICBjcCAtYSAvdXNyIC9saWIgL2xpYjY0IC9ydW4vZXBoZW1lcmFsCiAgcnN5bmMgLXJsdCAtLWlnbm9yZS1leGlzdGluZyAkbW50L3Vzci8gL3J1bi9lcGhlbWVyYWwvdXNyLwogIHJzeW5jIC1ybHQgLS1pZ25vcmUtZXhpc3RpbmcgJG1udC9saWIvIC9ydW4vZXBoZW1lcmFsL2xpYi8KICByc3luYyAtcmx0IC0taWdub3JlLWV4aXN0aW5nICRtbnQvbGliNjQvIC9ydW4vZXBoZW1lcmFsL2xpYjY0LwogIG1vdW50IC0tYmluZCAvcnVuL2VwaGVtZXJhbC91c3IgL3VzcgogIG1vdW50IC0tYmluZCAvcnVuL2VwaGVtZXJhbC9saWIgL2xpYgogIG1vdW50IC0tYmluZCAvcnVuL2VwaGVtZXJhbC9saWI2NCAvbGliNjQKICBzZWQgLWkgLWUgInMvLG9ycGhhbl9maWxlLy9nIiAvZXRjL21rZTJmcy5jb25mCiAgc3lzdGVtY3RsIGRhZW1vbi1yZWxvYWQKCiAgIyBBcHBseSBwcmVzZXRzIGZyb20gT0tEIE1hY2hpbmUgT1MKICBzeXN0ZW1jdGwgcHJlc2V0LWFsbAoKICAjIFdvcmthcm91bmQgZm9yIFNFTGludXggZGVuaWFscyB3aGVuIGxhdW5jaGluZyBjcmlvLnNlcnZpY2UgZnJvbSBvdmVybGF5ZnMKICBzZXRlbmZvcmNlIFBlcm1pc3NpdmUKCiAgIyBjcmlvLnNlcnZpY2UgaXMgbm90IHBhcnQgb2YgRkNPUyBidXQgb2YgT0tEIE1hY2hpbmUgT1MuIEl0IHdpbGwgbG9hZGVkIGFmdGVyIHN5c3RlbWN0bCBkYWVtb24tcmVsb2FkIGFib3ZlIGJ1dCBoYXMgdG8gYmUgc3RhcnRlZCBtYW51YWxseQogIHN5c3RlbWN0bCByZXN0YXJ0IGNyaW8tY29uZmlndXJlLnNlcnZpY2UKICBzeXN0ZW1jdGwgc3RhcnQgY3Jpby5zZXJ2aWNlCgogIHRvdWNoIC9vcHQvb3BlbnNoaWZ0Ly5waXZvdC1kb25lCiAgcG9kbWFuIGltYWdlIHVtb3VudCAiJHtNQUNISU5FX09TX0lNQUdFfSIKICByZWNvcmRfc2VydmljZV9zdGFnZV9zdWNjZXNzCmZpCg==" else . 
end) | .storage.files += [{"overwrite":true,"path":"/etc/mke2fs.conf","user":{"name":"root"},"contents":{"source":"data:text/plain;charset=utf-8;base64,W2RlZmF1bHRzXQoJYmFzZV9mZWF0dXJlcyA9IHNwYXJzZV9zdXBlcixsYXJnZV9maWxlLGZpbGV0eXBlLHJlc2l6ZV9pbm9kZSxkaXJfaW5kZXgsZXh0X2F0dHIKCWRlZmF1bHRfbW50b3B0cyA9IGFjbCx1c2VyX3hhdHRyCgllbmFibGVfcGVyaW9kaWNfZnNjayA9IDAKCWJsb2Nrc2l6ZSA9IDQwOTYKCWlub2RlX3NpemUgPSAyNTYKCWlub2RlX3JhdGlvID0gMTYzODQKCltmc190eXBlc10KCWV4dDMgPSB7CgkJZmVhdHVyZXMgPSBoYXNfam91cm5hbAoJfQoJZXh0NCA9IHsKCQlmZWF0dXJlcyA9IGhhc19qb3VybmFsLGV4dGVudCxodWdlX2ZpbGUsZmxleF9iZyxtZXRhZGF0YV9jc3VtLDY0Yml0LGRpcl9ubGluayxleHRyYV9pc2l6ZQoJfQoJcmhlbDZfZXh0NCA9IHsKCQlmZWF0dXJlcyA9IGhhc19qb3VybmFsLGV4dGVudCxodWdlX2ZpbGUsZmxleF9iZyx1bmluaXRfYmcsZGlyX25saW5rLGV4dHJhX2lzaXplCgkJaW5vZGVfc2l6ZSA9IDI1NgoJCWVuYWJsZV9wZXJpb2RpY19mc2NrID0gMQoJCWRlZmF1bHRfbW50b3B0cyA9ICIiCgl9CglyaGVsN19leHQ0ID0gewoJCWZlYXR1cmVzID0gaGFzX2pvdXJuYWwsZXh0ZW50LGh1Z2VfZmlsZSxmbGV4X2JnLHVuaW5pdF9iZyxkaXJfbmxpbmssZXh0cmFfaXNpemUsNjRiaXQKCQlpbm9kZV9zaXplID0gMjU2Cgl9CglyaGVsOF9leHQ0ID0gewoJCWZlYXR1cmVzID0gaGFzX2pvdXJuYWwsZXh0ZW50LGh1Z2VfZmlsZSxmbGV4X2JnLG1ldGFkYXRhX2NzdW0sNjRiaXQsZGlyX25saW5rLGV4dHJhX2lzaXplCgkJaW5vZGVfc2l6ZSA9IDI1NgoJfQoJc21hbGwgPSB7CgkJYmxvY2tzaXplID0gMTAyNAoJCWlub2RlX3JhdGlvID0gNDA5NgoJfQoJZmxvcHB5ID0gewoJCWJsb2Nrc2l6ZSA9IDEwMjQKCQlpbm9kZV9yYXRpbyA9IDgxOTIKCX0KCWJpZyA9IHsKCQlpbm9kZV9yYXRpbyA9IDMyNzY4Cgl9CglodWdlID0gewoJCWlub2RlX3JhdGlvID0gNjU1MzYKCX0KCW5ld3MgPSB7CgkJaW5vZGVfcmF0aW8gPSA0MDk2Cgl9CglsYXJnZWZpbGUgPSB7CgkJaW5vZGVfcmF0aW8gPSAxMDQ4NTc2CgkJYmxvY2tzaXplID0gLTEKCX0KCWxhcmdlZmlsZTQgPSB7CgkJaW5vZGVfcmF0aW8gPSA0MTk0MzA0CgkJYmxvY2tzaXplID0gLTEKCX0KCWh1cmQgPSB7CgkgICAgIGJsb2Nrc2l6ZSA9IDQwOTYKCSAgICAgaW5vZGVfc2l6ZSA9IDEyOAoJICAgICB3YXJuX3kyMDM4X2RhdGVzID0gMAoJfQo="},"mode":384}]' > fix.ign

It writes /etc/mke2fs.conf as a step preceding release-image-pivot.service.
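
For readability, the base64 payload in that storage.files entry decodes to a stock mke2fs.conf whose ext4 stanza omits orphan_file; the relevant part is:

[fs_types]
        ext4 = {
                features = has_journal,extent,huge_file,flex_bg,metadata_csum,64bit,dir_nlink,extra_isize
        }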

@alrf
Copy link
Author

alrf commented Dec 4, 2024

How I see the issue: the RAID is built before the first boot on FCOS, but BEFORE the /usr/local/bin/bootstrap-pivot.sh script runs. Check the times here: the pivot started at 12:29:38, but Ignition (i.e. the RAID creation) had already finished by then (at 12:29:29, per the output above):

@alrf Sorry for the late answer. Please try the following patch:

[... the same jq patch as in the previous comment ...]

It writes /etc/mke2fs.conf as a step preceding release-image-pivot.service.

@BeardOverflow The system doesn’t boot, I've attached the screenshot.
Screenshot 2024-12-04 at 12 55 25

@BeardOverflow
Copy link

@alrf
First or second boot? OKD version?

I have tested a fresh SNO install of 4.17.0-okd-scos.0 + patch with no problem on a bare-metal machine.

Also, in your screenshot I see a hybrid system (BIOS+EFI). As a test, could you set up your server as EFI only (disable legacy/CSM support)?

@alrf
Copy link
Author

alrf commented Dec 4, 2024

@BeardOverflow I tend to think this is the first boot (but I'm not sure). This is OKD 4.16.0-okd-scos.1.

I have tested a fresh SNO install of 4.17.0-okd-scos.0 + patch with no problem on a bare-metal machine.

Have you tested a software RAID or just a single-disk installation?

As a test, could you set up your server as EFI only?

No, not really.

@BeardOverflow
Copy link

Have you tested a software RAID or just a single-disk installation?

I am sorry, I did a single-disk installation. I am not familiar with software RAID installations; can you share your install-config.yaml (without sshKey/pullSecret)?

However, your screenshot is puzzling; it looks like a detection error between EFI-SYSTEM and BIOS-BOOT on FCOS. I find it hard to imagine that the latest patch (which only writes /etc/mke2fs.conf) caused it. Another alternative is to supply the e2fsprogs package via Ignition.

@alrf
Copy link
Author

alrf commented Dec 5, 2024

@BeardOverflow My install-config.yaml:

apiVersion: v1
baseDomain: 'mydomain.com'
metadata:
  name: 'test-env'
compute:
- hyperthreading: Enabled
  name: worker
  replicas: 0
controlPlane:
  hyperthreading: Enabled
  name: master
  replicas: 3
networking:
  clusterNetworks:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  networkType: OVNKubernetes
  serviceNetwork:
  - 172.30.0.0/16
  machineCIDR:
platform:
  none: {}
pullSecret: "..."
sshKey: "..."

But the RAID configuration is not in install-config.yaml; you can find it here: #2041 (comment). It is created from a Butane config (#2041 (comment)) as described in the documentation.

Another alternative is to supply the e2fsprogs package via Ignition.

Could you please share an ignition example?

Another question: what is the right way to install additional packages on OKD 4 nodes (e.g. automation or configuration management tools)? How can it impact OKD 4 updates?

@BeardOverflow
Copy link

@alrf


My install-config.yaml

Sorry, I have not had time to test your setup.
However, I have good news: I found an issue in openshift/installer that tracks our problem: openshift/installer#8742

I have reworked my patch to adapt it to the above pull request, and it works better than the previous one. Please try it out with your setup.

cat original.ign | jq 'walk(if type == "object" and .path == "/usr/local/bin/bootstrap-pivot.sh" then .contents.source = "data:text/plain;charset=utf-8;base64,#!/usr/bin/env bash
set -euo pipefail

# Exit early if pivot is attempted on SCOS Live ISO
source /etc/os-release
if [[ ! $(touch /usr/.test) ]] && [[ ${ID} =~ ^(centos)$ ]]; then
  touch /opt/openshift/.pivot-done
  exit 0
fi
# Rebase to OKD's OSTree container image.
# This is required in OKD as the node is first provisioned with plain Fedora CoreOS.

# shellcheck disable=SC1091
. /usr/local/bin/bootstrap-service-record.sh
. /usr/local/bin/release-image.sh

# Pivot bootstrap node to OKD's OSTree image
if [ ! -f /opt/openshift/.pivot-done ]; then
MACHINE_OS_IMAGE=$(image_for stream-coreos)
echo "Pulling ${MACHINE_OS_IMAGE}..."
  while true
  do
    record_service_stage_start "pull-okd-os-image"
    if podman pull --quiet "${MACHINE_OS_IMAGE}"
    then
        record_service_stage_success
        break
    else
        record_service_stage_failure
        echo "Pull failed. Retrying ${MACHINE_OS_IMAGE}..."
    fi
  done

  record_service_stage_start "rebase-to-okd-os-image"
  chmod 0644 /etc/containers/registries.conf

  # First mention: https://github.com/openshift/enhancements/pull/1637#issuecomment-2231865281
  # BUG: https://github.com/openshift/installer/pull/8742
  # Discussion:
  #   - https://github.com/coreos/rpm-ostree/issues/4547
  #   - https://github.com/okd-project/okd/issues/2041

  ### BEGIN: https://github.com/openshift/installer/blob/c0912cae4f83f6d8851d41604ccb15eb5256e5e1/data/data/bootstrap/files/usr/local/bin/bootstrap-pivot.sh.template
    # Workaround for SELinux denials when launching crio.service from overlayfs
    setenforce 0
  ### END: https://github.com/openshift/installer/blob/c0912cae4f83f6d8851d41604ccb15eb5256e5e1/data/data/bootstrap/files/usr/local/bin/bootstrap-pivot.sh.template

  ### BEGIN: https://github.com/openshift/installer/blob/4c29e7f0be51e625415cfe9519141a0605058f26/data/data/bootstrap/files/etc/systemd/system/node-image-pull.service
    # we need to call ostree container (i.e. rpm-ostree), which has install_exec_t,
    # but by default, we'll run as unconfined_service_t, which is not allowed that
    # transition. Relabel the script itself.
    # secon -t -f /usr/bin/ostree
    # secon -t -f /usr/local/bin/bootstrap-pivot.sh
    #
    # TODO: Relocate the following line to /etc/systemd/system/release-image-pivot.service as ExecStartPre=
    chcon --reference=/usr/bin/ostree /usr/local/bin/bootstrap-pivot.sh
  ### END: https://github.com/openshift/installer/blob/4c29e7f0be51e625415cfe9519141a0605058f26/data/data/bootstrap/files/etc/systemd/system/node-image-pull.service

  ### BEGIN: https://github.com/openshift/installer/blob/4c29e7f0be51e625415cfe9519141a0605058f26/data/data/bootstrap/files/usr/local/bin/node-image-pull.sh.template
    # try to do this in the system repo so we get hardlinks and the checkout is
    # read-only, but fallback to using /var if we're in the live environment since
    # that's truly read-only
    ostree_repo=/ostree/repo
    ostree_checkout="${ostree_repo}/tmp/node-image"
    hardlink='-H'
    if grep -q coreos.liveiso= /proc/cmdline; then
        ostree_repo=/var/ostree-container/repo
        ostree_checkout=/var/ostree-container/checkout
        mkdir -p "${ostree_repo}"
        ostree init --mode=bare --repo="${ostree_repo}"
        # if there are layers, import all the content in the system repo for
        # layer-level deduping
        if [ -d /ostree/repo/refs/heads/ostree/container ]; then
            ostree pull-local --repo="${ostree_repo}" /ostree/repo ostree/1/1/0
        fi
        # but we won't be able to force hardlinks cross-device
        hardlink=''
    else
        # (remember, we're MountFlags=slave)
        mount -o rw,remount /sysroot
    fi

    # Use ostree stack to pull the container here. This gives us efficient
    # downloading with layers we already have, and also handles SELinux.
    while ! ostree container image pull --authfile "/root/.docker/config.json" \
      "${ostree_repo}" ostree-unverified-image:docker://"${MACHINE_OS_IMAGE}"; do
        echo 'Failed to fetch release image; retrying...'
        sleep 10
    done

    # ideally, `ostree container image pull` would support `--write-ref` or a
    # command to escape a pullspec, but for now it's pretty easy to tell which ref
    # it is since it's the only docker one
    ref=$(ostree refs --repo "${ostree_repo}" | grep ^ostree/container/image/docker)
    if [ $(echo "$ref" | wc -l) != 1 ]; then
        echo "Expected single docker ref, found:"
        echo "$ref"
        exit 1
    fi
    ostree refs --repo "${ostree_repo}" "$ref" --create coreos/node-image

    # massive hack to make ostree admin config-diff work in live ISO where /etc
    # is actually on a separate mount and not the deployment root proper... should
    # enhance libostree for this (remember, we're MountFlags=slave)
    if grep -q coreos.liveiso= /proc/cmdline; then
        mount -o bind,ro /etc /ostree/deploy/*/deploy/*/etc
    fi

    # get all state files in /etc; this is a cheap way to get "3-way /etc merge" semantics
    etc_keep=$(ostree admin config-diff | cut -f5 -d' ' | sed -e 's,^,/usr/etc/,')

    # check out the commit
    ostree checkout --repo "${ostree_repo}" ${hardlink} coreos/node-image "${ostree_checkout}" --skip-list=<(cat <<< "$etc_keep")

    # in the assisted-installer case, nuke the temporary repo to save RAM
    if grep -q coreos.liveiso= /proc/cmdline; then
        rm -rf "${ostree_repo}"
    fi
  ### END: https://github.com/openshift/installer/blob/4c29e7f0be51e625415cfe9519141a0605058f26/data/data/bootstrap/files/usr/local/bin/node-image-pull.sh.template

  ### BEGIN: https://github.com/openshift/installer/blob/4c29e7f0be51e625415cfe9519141a0605058f26/data/data/bootstrap/files/usr/local/bin/node-image-overlay.sh
    ostree_checkout=/ostree/repo/tmp/node-image
    if [ ! -d "${ostree_checkout}" ]; then
        ostree_checkout=/var/ostree-container/checkout
    fi

    # keep /usr/lib/modules from the booted deployment for kernel modules
    mount -o bind,ro "/usr/lib/modules" "${ostree_checkout}/usr/lib/modules"
    mount -o rbind,ro "${ostree_checkout}/usr" /usr
    rsync -a "${ostree_checkout}/usr/etc/" /etc

    # reload the new policy
    semodule -R
  ### END: https://github.com/openshift/installer/blob/4c29e7f0be51e625415cfe9519141a0605058f26/data/data/bootstrap/files/usr/local/bin/node-image-overlay.sh

  touch /opt/openshift/.pivot-done
  record_service_stage_success
fi
" else . end) | .storage.files += [{"overwrite":true,"path":"/etc/mke2fs.conf","user":{"name":"root"},"contents":{"source":"data:text/plain;charset=utf-8;base64,W2RlZmF1bHRzXQoJYmFzZV9mZWF0dXJlcyA9IHNwYXJzZV9zdXBlcixsYXJnZV9maWxlLGZpbGV0eXBlLHJlc2l6ZV9pbm9kZSxkaXJfaW5kZXgsZXh0X2F0dHIKCWRlZmF1bHRfbW50b3B0cyA9IGFjbCx1c2VyX3hhdHRyCgllbmFibGVfcGVyaW9kaWNfZnNjayA9IDAKCWJsb2Nrc2l6ZSA9IDQwOTYKCWlub2RlX3NpemUgPSAyNTYKCWlub2RlX3JhdGlvID0gMTYzODQKCltmc190eXBlc10KCWV4dDMgPSB7CgkJZmVhdHVyZXMgPSBoYXNfam91cm5hbAoJfQoJZXh0NCA9IHsKCQlmZWF0dXJlcyA9IGhhc19qb3VybmFsLGV4dGVudCxodWdlX2ZpbGUsZmxleF9iZyxtZXRhZGF0YV9jc3VtLDY0Yml0LGRpcl9ubGluayxleHRyYV9pc2l6ZQoJfQoJcmhlbDZfZXh0NCA9IHsKCQlmZWF0dXJlcyA9IGhhc19qb3VybmFsLGV4dGVudCxodWdlX2ZpbGUsZmxleF9iZyx1bmluaXRfYmcsZGlyX25saW5rLGV4dHJhX2lzaXplCgkJaW5vZGVfc2l6ZSA9IDI1NgoJCWVuYWJsZV9wZXJpb2RpY19mc2NrID0gMQoJCWRlZmF1bHRfbW50b3B0cyA9ICIiCgl9CglyaGVsN19leHQ0ID0gewoJCWZlYXR1cmVzID0gaGFzX2pvdXJuYWwsZXh0ZW50LGh1Z2VfZmlsZSxmbGV4X2JnLHVuaW5pdF9iZyxkaXJfbmxpbmssZXh0cmFfaXNpemUsNjRiaXQKCQlpbm9kZV9zaXplID0gMjU2Cgl9CglyaGVsOF9leHQ0ID0gewoJCWZlYXR1cmVzID0gaGFzX2pvdXJuYWwsZXh0ZW50LGh1Z2VfZmlsZSxmbGV4X2JnLG1ldGFkYXRhX2NzdW0sNjRiaXQsZGlyX25saW5rLGV4dHJhX2lzaXplCgkJaW5vZGVfc2l6ZSA9IDI1NgoJfQoJc21hbGwgPSB7CgkJYmxvY2tzaXplID0gMTAyNAoJCWlub2RlX3JhdGlvID0gNDA5NgoJfQoJZmxvcHB5ID0gewoJCWJsb2Nrc2l6ZSA9IDEwMjQKCQlpbm9kZV9yYXRpbyA9IDgxOTIKCX0KCWJpZyA9IHsKCQlpbm9kZV9yYXRpbyA9IDMyNzY4Cgl9CglodWdlID0gewoJCWlub2RlX3JhdGlvID0gNjU1MzYKCX0KCW5ld3MgPSB7CgkJaW5vZGVfcmF0aW8gPSA0MDk2Cgl9CglsYXJnZWZpbGUgPSB7CgkJaW5vZGVfcmF0aW8gPSAxMDQ4NTc2CgkJYmxvY2tzaXplID0gLTEKCX0KCWxhcmdlZmlsZTQgPSB7CgkJaW5vZGVfcmF0aW8gPSA0MTk0MzA0CgkJYmxvY2tzaXplID0gLTEKCX0KCWh1cmQgPSB7CgkgICAgIGJsb2Nrc2l6ZSA9IDQwOTYKCSAgICAgaW5vZGVfc2l6ZSA9IDEyOAoJICAgICB3YXJuX3kyMDM4X2RhdGVzID0gMAoJfQo="},"mode":384}]' > fix.ign

But the RAID configuration is not in install-config.yaml; you can find it here: #2041 (comment). It is created from a Butane config (#2041 (comment)) as described in the documentation.

Another alternative is to supply the e2fsprogs package via Ignition.

Could you please share an ignition example?

Create a second Ignition file and prepare an installation service for the e2fsprogs package [1]:

variant: fcos
version: 1.4.0
ignition:
  config:
    merge:
      - local: original.ign
storage:
  files:
  - path: /usr/local/bin/e2fsprog.sh
    mode: 0755
    overwrite: true
    contents:
      inline: |
        #!/usr/bin/env bash
        set -euo pipefail
        mkdir -p /tmp/rpm
        # TODO: Download rpms in /tmp/rpm folder
        rpm -Uvh /tmp/rpm/*
systemd:
  units:
  - contents: |
      [Unit]
      Wants=network-online.target
      After=network-online.target

      [Service]
      Type=oneshot
      ExecStart=/usr/local/bin/e2fsprog.sh

      [Install]
      WantedBy=multi-user.target
    enabled: true
    name: e2fsprog.service
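
A rough way to turn that snippet into an Ignition file (assuming it is saved as e2fsprog.bu next to original.ign; the local merge requires --files-dir):

butane --files-dir . e2fsprog.bu -o e2fsprog.ign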

Another question: what is the right way to install additional packages on OKD 4 nodes (e.g. automation or configuration management tools)? How can it impact OKD 4 updates?

The best path may be a MachineConfig object that installs a service [1][2] with ExecStart=rpm-ostree rebase ... to apply your changes [3]. If rpm-ostree is too advanced, you can also use a plain shell script.

[1] https://upstreamwithoutapaddle.com/blog%20post/2023/05/21/Pull-Youself-Up-By-Your-Bootstraps.html
[2] https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/machine_configuration/machine-configs-configure#installation-special-config-chrony_machine-configs-configure
[3] https://coreos.github.io/rpm-ostree/layering/
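
As a rough illustration of the rpm-ostree layering referenced in [3] (run manually on one node rather than via a MachineConfig; the package name is only an example and the node needs reachable RPM repositories):

sudo rpm-ostree install htop
sudo systemctl reboot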

@GingerGeek changed the title from "Bootstrap node is not serving ignition files" to "Pivots to SCOS fail due to newer ext4 features enabled in FCOS" on Dec 15, 2024
@GingerGeek
Copy link
Member

This will be included as a known issue for the OKD 4.16/4.17 releases.

The recommended short-term workaround is to start from an older version of FCOS (<39) if you need the RAID configuration.

In the medium term, SCOS boot artifacts should be available shortly.
