OL8: LVM volume not mounted on reboot after systemd-239-78.0.3 #136
Comments
Thanks for submitting this issue and providing the comprehensive info. We will take a look at this internally.
Would it be possible to provide an update on this issue? If any additional information is needed, please let me know.
Please tell me if the following modification temporarily addresses the issue. On my end, starting the LVM volume group within the service file, before "systemctl daemon-reload" is invoked, worked.

A) With OL8 using systemd-252-78.0.3 or greater (as root)
B) Remember to back up the service file (just in case), then change /usr/lib/systemd/system/systemd-fstab-generator-reload-targets.service to the following: [Unit] [Service] [Install]
C) systemctl enable systemd-fstab-generator-reload-targets.service
D) Try rebooting multiple times to ensure it works
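The body of the modified unit file was not preserved in the comment above. The following is only a rough sketch of what such a change might look like, assuming the intent is to scan for and activate the volume groups before the daemon-reload step runs; the unit directives and ExecStart lines are illustrative, not the commenter's exact file:

# Hypothetical reconstruction only; back up the original unit first (as root).
cp -p /usr/lib/systemd/system/systemd-fstab-generator-reload-targets.service \
      /usr/lib/systemd/system/systemd-fstab-generator-reload-targets.service.bak
cat > /usr/lib/systemd/system/systemd-fstab-generator-reload-targets.service <<'EOF'
[Unit]
Description=Activate LVM volume groups, then reload generated units
DefaultDependencies=no
After=systemd-udev-settle.service
Before=local-fs.target

[Service]
Type=oneshot
# Assumption: activate all VGs before the daemon-reload this service performs
ExecStart=/usr/bin/sh -c '/usr/sbin/vgscan && /usr/sbin/vgchange -a y'
ExecStart=/usr/bin/systemctl daemon-reload

[Install]
WantedBy=local-fs.target
EOF

After editing, enable the service and reboot a few times, as in steps C) and D) above.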
It is Oracle Linux 8.9.
Yes, it does work fine. I bounced the system 6 times and the LV was mounted after each reboot:

[ec2-user@ip-172-31-34-92 ~]$ last reboot
reboot system boot 5.15.0-200.131.2 Tue May 21 17:55 still running
reboot system boot 5.15.0-200.131.2 Tue May 21 17:54 - 17:55 (00:00)
reboot system boot 5.15.0-200.131.2 Tue May 21 17:54 - 17:54 (00:00)
reboot system boot 5.15.0-200.131.2 Tue May 21 17:53 - 17:53 (00:00)
reboot system boot 5.15.0-200.131.2 Tue May 21 17:50 - 17:52 (00:02)
reboot system boot 5.15.0-200.131.2 Tue May 21 17:46 - 17:50 (00:03)
wtmp begins Tue May 21 17:46:26 2024
[ec2-user@ip-172-31-34-92 ~]$ journalctl -u systemd-fstab-generator-reload-targets.service
-- Logs begin at Tue 2024-05-21 17:55:45 GMT, end at Tue 2024-05-21 17:56:19 GMT. --
May 21 17:55:45 ip-172-31-34-92.ec2.internal systemd[1]: Starting systemd-fstab-generator-reload-targets.service...
May 21 17:55:45 ip-172-31-34-92.ec2.internal sh[479]: Found volume group "testvg" using metadata type lvm2
May 21 17:55:45 ip-172-31-34-92.ec2.internal sh[491]: 1 logical volume(s) in volume group "testvg" now active
May 21 17:55:46 ip-172-31-34-92.ec2.internal systemd[1]: Started systemd-fstab-generator-reload-targets.service.
Could you please provide an update about the fix and its timeline? We are still hitting the same issue even with Oracle Linux 8.10.
We found that although the problem did not happen in simple scenarios, it still happens in more complex tests even when
Hi, I'm afraid we can't provide an ETA, but I have followed up with the developer to see if they need anything else. |
Hi, any updates on this issue? We appear to have encountered the same issue on a few OL 8.10 servers today.
LVM volumes are not always mounted after reboot after applying systemd-239-78.0.3 and above. I constructed several test cases to demonstrate the issue using an Oracle-provided AMI ami-076b18946a12c27d6 on AWS.

Here is a sample CloudFormation template that is used to demonstrate the issue: non-working-standard.yml.txt
User data:
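The user-data script itself is not reproduced here. The following is only an illustrative sketch of the kind of user data that sets up such a volume, assuming a secondary EBS device at /dev/nvme1n1 and reusing the testvg name and /u01 mount point that appear in the logs; the LV name and device path are assumptions:

#!/bin/bash
# Illustrative only: create a PV/VG/LV on the secondary disk, make a filesystem,
# add it to /etc/fstab, and mount it at /u01.
pvcreate /dev/nvme1n1
vgcreate testvg /dev/nvme1n1
lvcreate -n u01 -l 100%FREE testvg
mkfs.xfs /dev/testvg/u01
mkdir -p /u01
echo '/dev/testvg/u01 /u01 xfs defaults 0 0' >> /etc/fstab
mount /u01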
Once the template is deployed, confirm that cloud-init completed without errors and /u01 is mounted. Then reboot the EC2 instance, e.g. via reboot. When it comes back, /u01 is not mounted anymore; /var/log/messages contains:

I created several CloudFormation templates: test-cases.zip
non-working-standard: the deployment where systemd is updated to the currently available latest version, 239-78.0.4, and multipathd is disabled. /u01 is not mounted on reboot.
non-working-systemd: the deployment to demonstrate that /u01 is not mounted on reboot if systemd is updated to 239-78.0.3, the version that introduced this problem.
working-fstab-generator-reload-targets-disabled: the deployment where systemd-fstab-generator-reload-targets.service is disabled. It is the service that Oracle introduced in systemd-239-78.0.3; there is no such service upstream. /u01 is mounted after reboot.
working-multipathd-enabled: the deployment where multipathd.service is enabled. /u01 is mounted after reboot.
working-systemd: the deployment that uses systemd-239-78.0.1, the version shipped with the AMI, which does not have the issue. /u01 is mounted on reboot.

For each of the deployments above, I ran the following commands after deployment and after reboot:

date
uptime
df -h
journalctl -b -o short-precise > /tmp/journalctl.txt
sudo cp /var/log/messages /tmp/messages.txt
sudo chmod o+r /tmp/messages.txt
The logs of the command executions are in the commands.txt files inside the archive, along with journalctl.txt and messages.txt.

Thus, the issue happens when all of the following conditions are true:
systemd >= 239-78.0.3
multipathd disabled

The following workarounds are known to prevent the issue, so that an LVM volume /u01 is mounted after reboot (example commands follow the list):
systemd < 239-78.0.3
multipathd enabled
systemd-fstab-generator-reload-targets disabled
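As a sketch only, the last two workarounds could be applied with commands along these lines (run as root; the unit names are taken from this report, so verify them on the affected system before changing anything):

# Keep multipathd enabled, which is one of the working configurations above.
systemctl enable --now multipathd.service

# Or disable the service that Oracle introduced in systemd-239-78.0.3.
systemctl disable systemd-fstab-generator-reload-targets.service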
I have been able to reproduce this issue only on AWS, with different instance types (AMD- and Intel-based). I was not able to reproduce the issue on Azure with either NVMe or non-NVMe based VM sizes.
What is really happening here is that lvm2-pvscan@.service is sometimes not invoked after applying systemd-239-78.0.3, so LVM auto-activation is not performed. If I reboot the EC2 instance and find that an LVM volume is not mounted, I can manually activate the problem volume groups via vgchange -a y, or I can run sudo /usr/sbin/lvm pvscan --cache --activate ay 259:1 for a problem device, as demonstrated below (this is the command used by lvm2-pvscan@.service):
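As a sketch, the manual recovery after an affected boot might look as follows; testvg and the 259:1 device numbers come from this reproduction and will differ on other systems:

# Activate the volume group that failed to auto-activate, then remount.
sudo vgchange -a y testvg
sudo mount /u01

# Alternatively, per device, the command normally run by lvm2-pvscan@.service:
sudo /usr/sbin/lvm pvscan --cache --activate ay 259:1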