
QEMU driver doesn't support IPv6 network #3832

Open
Shmillerov opened this issue Dec 11, 2024 · 5 comments
Labels
bug, needs triage (Issue needs to be triaged)

Comments

@Shmillerov

Describe the bug
I want to migrate from the LXD driver to the QEMU driver, since Multipass gained support for bridged networks on QEMU in 1.15. However, I get different behavior between the LXD and QEMU drivers.

A VM launched with the LXD driver has, by default, a global inet6 address on its default interface:

2: enp5s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 52:54:00:dd:ba:5d brd ff:ff:ff:ff:ff:ff
    inet 10.174.43.244/24 metric 100 brd 10.174.43.255 scope global dynamic enp5s0
       valid_lft 2063sec preferred_lft 2063sec
    inet6 fd42:51c9:7f9d:c15b:5054:ff:fedd:ba5d/64 scope global mngtmpaddr noprefixroute
       valid_lft forever preferred_lft forever
    inet6 fe80::5054:ff:fedd:ba5d/64 scope link
       valid_lft forever preferred_lft forever

A VM launched with the QEMU driver does not:

2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 52:54:00:5e:19:2f brd ff:ff:ff:ff:ff:ff
    altname enp0s3
    inet 10.59.1.96/24 metric 100 brd 10.59.1.255 scope global ens3
       valid_lft forever preferred_lft forever
    inet6 fe80::5054:ff:fe5e:192f/64 scope link
       valid_lft forever preferred_lft forever

As far as I understand, the problem is in the bridge Multipass creates by default.
With the LXD driver, mpbr0 looks like this:

4: mpbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:3f:66:55 brd ff:ff:ff:ff:ff:ff
    inet 10.174.43.1/24 scope global mpbr0
       valid_lft forever preferred_lft forever
    inet6 fd42:51c9:7f9d:c15b::1/64 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::216:3eff:fe3f:6655/64 scope link
       valid_lft forever preferred_lft forever

With QEMU, the bridge doesn't have a global inet6 address:

3: mpqemubr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 52:54:00:a7:9a:f3 brd ff:ff:ff:ff:ff:ff
    inet 10.59.1.1/24 brd 10.59.1.255 scope global mpqemubr0
       valid_lft forever preferred_lft forever
    inet6 fe80::5054:ff:fea7:9af3/64 scope link
       valid_lft forever preferred_lft forever

To Reproduce
How, and what happened?

  1. snap install multipass --beta
  2. multipass launch --name test
  3. multipass exec test -- ip -c a (the ens3 interface has no global inet6 address)

Expected behavior

  • I expect the same behavior across drivers, so I can migrate between them without any issues.
  • I expect IPv6 to be enabled by default, or a configuration flag to enable it.

Additional info

  • OS: Ubuntu 22.04.5 LTS
  • multipass version: multipass 1.15.0
  • multipass info:
Name:           master
State:          Running
Snapshots:      0
IPv4:           10.59.1.96
Release:        Ubuntu 24.04.1 LTS
Image hash:     b63f266fa4bd (Ubuntu 24.04 LTS)
CPU(s):         2
Load:           0.16 0.10 0.02
Disk usage:     2.1GiB out of 9.6GiB
Memory usage:   333.9MiB out of 1.9GiB
Mounts:         --
  • multipass get local.driver: qemu
@Shmillerov added the bug and needs triage labels on Dec 11, 2024
@georgeliao
Contributor

@Shmillerov
Thanks for reporting this.
Currently, the multipass QEMU driver only supports IPv4. While enabling IPv6 support is something we want to do, it may not happen in the near future. Does this limitation really pose a problem for VM migration? Specifically, would the VMs be unable to fall back to the IPv4 network for communication?

@Shmillerov
Author

Shmillerov commented Dec 11, 2024

Hello, thanks for the quick answer. No, the basic functionality works fine; the issue is that I don't have an IPv6 network on a virtual machine with QEMU, whereas before (with LXD) I did.

It's good that you have a plan to add this support. But is there any workaround at the moment? Maybe I can add IPv6 support manually? I don't need multipass <-> VM communication over IPv6; I just want an IPv6 network inside the VM.

Thanks

@georgeliao
Contributor

georgeliao commented Dec 12, 2024

@Shmillerov
Unfortunately, we do not have a tested workaround. If you really need this badly, you would have to manipulate the mpqemubr0 bridge via the ip command, change the dnsmasq configuration accordingly, and restart. Additional steps may also be required.
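
For what it's worth, a very rough, untested sketch of the bridge part of that idea might look like the following (the fd00:... prefix is just a placeholder ULA; the dnsmasq / router-advertisement side is not shown here because that instance is managed by the multipass daemon):

# untested sketch: give the multipass QEMU bridge a global-scope IPv6 address
sudo ip -6 addr add fd00:1234:5678::1/64 dev mpqemubr0
# let the host forward IPv6 for the guests
sudo sysctl -w net.ipv6.conf.all.forwarding=1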

@Lucky112

If you need IPv6 connectivity between VMs, you can set up an IPv6-only bridge as follows:

# create a bridge with static ipv6 
sudo ip link add br0 type bridge
sudo ip link set br0 up
sudo ip -6 addr add 2001:db8::1/64 dev br0

# enable ipv6 forwarding
sudo sysctl -w net.ipv6.conf.all.forwarding=1

# launch multipass VMs with this bridge connected
multipass launch VM-1 --network br0

# assign a static IPv6 address and configure a default route for the VM
multipass exec VM-1 -- sudo ip -6 addr add 2001:db8::2/64 dev ens4 # note ens4, this is the interface for br0, while ens3 is for mpqemubr0
multipass exec VM-1 -- sudo ip -6 route add default via 2001:db8::1 dev ens4

# enable ipv6 masquerade to provide internet access to VM
sudo ip6tables -t nat -A POSTROUTING -o ens3 -j MASQUERADE  # note ens3, this is your host machine's internet access interface

After doing this you should have IPv6 connectivity:

multipass exec VM-1 -- ping6 -c 4 google.com

For multiple VMs, you can repeat the process for each VM by assigning unique IPv6 addresses within the same subnet (e.g., 2001:db8::3, 2001:db8::4, etc.).
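
For example, a second VM (the name VM-2 is just for illustration; this assumes its br0-facing interface also comes up as ens4) would be:

multipass launch VM-2 --network br0
multipass exec VM-2 -- sudo ip -6 addr add 2001:db8::3/64 dev ens4
multipass exec VM-2 -- sudo ip -6 route add default via 2001:db8::1 dev ens4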

Please note, this configuration is not persistent and will disappear after system reboot.

@Lucky112

Persistent configuration is a whole other story.

One option is to use netplan for the network settings and dnsmasq for DHCP. ip6tables should also be replaced with nftables.

Create the bridge using netplan by adding a file /etc/netplan/10-br0.yaml

network:
  version: 2
  renderer: networkd
  bridges:
    br0:
      dhcp4: no
      addresses:
        - 2001:db8::1/64

and applying it

sudo netplan apply

The new bridge should now show up in the ip device list with the proper address:

ip -c address show dev br0
8: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 26:98:66:85:64:e3 brd ff:ff:ff:ff:ff:ff
    inet6 2001:db8::1/64 scope global
       valid_lft forever preferred_lft forever

The bridge will stay DOWN as long as no VM is connected to it yet.
Launch a multipass VM with this bridge connected:

multipass launch VM-1 --network br0

Start a dnsmasq service to provide DHCP over br0. If you really don't want to install a separate dnsmasq, you can try to set up another instance of the dnsmasq shipped in the multipass snap as a service, as follows:

[Unit]
Description=DHCP server for br0
BindsTo=sys-devices-virtual-net-br0.device
After=sys-devices-virtual-net-br0.device

[Service]
ExecStart=/snap/multipass/current/usr/sbin/dnsmasq -k \
    --interface=br0 \
    --except-interface=lo \
    --strict-order --bind-interfaces --enable-ra \
    --dhcp-range=::,constructor:br0,ra-names,ra-stateless \
    --listen-address=2001:db8::1 \
    --dhcp-leasefile=/var/dnsmasq/br0/dnsmasq.leases
Restart=always
RestartSec=5s
User=root
Group=root

[Install]
WantedBy=multi-user.target

Note: the folder for the lease file, /var/dnsmasq/br0/, must exist!
This is the dnsmasq service for br0; it waits for br0 to be UP before starting. To make it a system service, save the config above into /etc/systemd/system/dnsmasq-br0.service and use systemctl:
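
For example, the lease directory (the path used in the unit file above) can be created in advance with:

sudo mkdir -p /var/dnsmasq/br0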

sudo systemctl daemon-reload
sudo systemctl enable dnsmasq-br0.service
sudo systemctl restart dnsmasq-br0.service

This service should provide all the VMs connected to br0 with an IP from the 2001:db8::/64 subnet and a default route via ens4.
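
To double-check from inside a guest (ens4 being the br0-facing interface noted earlier), something like:

multipass exec VM-1 -- ip -6 addr show dev ens4
multipass exec VM-1 -- ip -6 route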

The final step is to persist IPv6 forwarding and masquerading:

# persist ipv6 forwarding
echo net.ipv6.conf.all.forwarding=1 | sudo tee "/etc/sysctl.d/br0.conf" > /dev/null
sudo sysctl -f "/etc/sysctl.d/br0.conf"

# Install nftables and enable it as a service to load the configuration after restarts
sudo apt install nftables
sudo systemctl enable nftables

# establish ipv6 masquerading and save it into nftables.conf
sudo nft add table ip6 nat
sudo nft add chain ip6 nat POSTROUTING { type nat hook postrouting priority 100 \; }
sudo nft add rule ip6 nat POSTROUTING oif ens3 masquerade
sudo nft list ruleset | sudo tee /etc/nftables.conf > /dev/null

Check your IPv6 connectivity from VM-1:

multipass exec VM-1 -- ping6 -c 4 google.com

This configuration should happily survive host system reboot.
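
A quick way to confirm everything came back after a reboot is to re-run the earlier checks:

ip -c address show dev br0
sudo nft list ruleset
multipass exec VM-1 -- ping6 -c 4 google.com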
