
[Question] LVM for VMs #24

Open
uumas opened this issue Jul 2, 2019 · 7 comments
Comments

@uumas

uumas commented Jul 2, 2019

I have a volume group (vg0) on a VM host, which holds all VM root filesystems as logical volumes (/dev/vg0/testvm-root). The VM sees the volume as /dev/vda. I've been trying for a while now, but can't figure out a way to handle this with this role. Is it possible without modifying the role too much, or do I basically need to reimplement the handling of partitions?

@nilsmeyer
Owner

Do you want to bootstrap the VM from the host system, without booting into the VM? I think this may be possible; you would specify the layout with the logical volume as the underlying device, e.g.:

layout:
  - device: '/dev/vg0/testvm-root'
    partitions:
      - num: 1
        size: 1M
        type: ef02
      - num: 2
        size: 1022M
        type: 8300
      - num: 3
        type: 8300

What could be difficult is that the partition table may not be detected automatically (one would need to add a step to call partprobe), and the detection of device names may be off: devices would probably be named /dev/mapper/vg0-testvm--root--part1 etc. I think I already wrote some code to better determine the device names when putting in nvme support.
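For reference, those /dev/mapper names come from device-mapper's escaping rule: any hyphen inside a VG or LV name is doubled, and the VG and LV parts are then joined with a single hyphen. A small sketch of that mapping (the helper name is mine, not from the role):

```python
def dm_name(vg: str, lv: str) -> str:
    """Map a VG/LV pair to its /dev/mapper device name.

    device-mapper escapes hyphens inside VG and LV names by
    doubling them, then joins the two parts with a single hyphen,
    so vg0 / testvm-root becomes vg0-testvm--root.
    """
    return f"/dev/mapper/{vg.replace('-', '--')}-{lv.replace('-', '--')}"

print(dm_name("vg0", "testvm-root"))  # /dev/mapper/vg0-testvm--root
```

This is also why a partition suffix on such a volume ends up looking doubled-up and awkward, as in the example above.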

Creating the fstab and finding the boot device should work already since it uses UUID.

Does this help at all?

@uumas
Author

uumas commented Jul 2, 2019

I should have been clearer. Yes, bootstrapping from the host would be ideal. I don't want multiple partitions, just one filesystem with no partition table, so the root filesystem would be just /dev/vg0/testvm-root (/dev/vda on the VM, not /dev/vda1). I use direct kernel boot, so a bootloader isn't needed at all.
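For context, direct kernel boot with the whole LV as the root disk might look roughly like this with plain QEMU (the kernel/initrd paths and memory size are placeholders; libvirt's `<kernel>`, `<initrd>` and `<cmdline>` elements express the same thing):

```shell
# Boot the guest directly from a kernel stored on the host.
# The LV is passed as the whole virtio disk, so the guest's
# root is /dev/vda with no partition table and no bootloader.
qemu-system-x86_64 \
  -kernel /boot/vmlinuz \
  -initrd /boot/initrd.img \
  -append "root=/dev/vda rw console=ttyS0" \
  -drive file=/dev/vg0/testvm-root,format=raw,if=virtio \
  -nographic -m 1024
```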

@nilsmeyer
Owner

I see, that would require a few modifications, since I hadn't thought about that particular use case. For example:

layout:
  - device: '/dev/vg0/testvm-root'
    fs: ext4
    mount: /

This doesn't work currently but shouldn't be too difficult to add.

@uumas
Author

uumas commented Jul 2, 2019

Would it be possible to implement it so that the LVM volume could be created automatically too? Something like this, maybe:

layout:
  - lvm:
      lv: testvm-root
      vg: vg0
      size: 5G
      mount: /
      fs: ext4

This isn't a great syntax, but you get the point.

@nilsmeyer
Owner

I think that would be somewhat out of scope for this Ansible role; you can easily create the volume in your playbook beforehand. There are two additional caveats that I just now remembered:

  • You can only install one VM at a time
  • The role may not fully clean up after itself.
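One way to create the volume beforehand in the playbook, as suggested above, is the lvol module from the community.general collection (the VG/LV names below just follow this thread's example):

```yaml
- name: Create the root LV for the new VM on the host
  community.general.lvol:
    vg: vg0
    lv: testvm-root
    size: 5G

# A later task (or the role itself) can then put a filesystem on it:
- name: Format the LV
  community.general.filesystem:
    fstype: ext4
    dev: /dev/vg0/testvm-root
```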

@uumas
Author

uumas commented Jul 3, 2019

Yeah, you're right.
Installing one VM at a time is okay.
What do you mean by "may not fully clean up after itself"?

@nilsmeyer
Owner

I mean that it may create things on the host system that don't get removed. I've just looked at the cleanup task, and it seems that at the very least I forgot to remove the bootstrap user and its SSH configuration from the host system. I haven't yet tried running this twice from the same box.
