With a new server finally installed and running, it was time to retire the Raspberry Pi running the Unifi controller managing the two WAPs. The server runs FreeBSD and the controller needs Linux. That meant a VM was needed.

I originally intended to run CentOS as the base OS, because I am vastly more familiar with using it as a VM host. That plan ran into all sorts of issues: I couldn’t get any install ISO, FreeBSD or Debian, to actually boot. I saw that libvirt supports bhyve, albeit in a limited form, but the basics are all I needed.

Searching the web turned up limited and old information about using libvirt with bhyve. Craig Rodrigues wrote about the combination in a way that was initially useful but is out of date now: he built libvirt from ports with both the BHYVE and QEMU options enabled, while the package ships with BHYVE enabled but not QEMU. I really hoped I would not need to bring in QEMU, as that would involve a lot of building. Because I had so much difficulty setting up the VM, this post exists so hopefully someone else won’t have to repeat what I learned.

I wiped everything clean and installed FreeBSD 10.3. The server has two SSDs meant to run in a RAID1 configuration, and it was quite nice to hand that job to a ZFS mirror instead of needing multiple partitions with RAID running on top of each one.
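
For anyone curious, creating a mirrored pool by hand is a one-liner; this is a minimal sketch, with ada0, ada1, and tank as placeholder names rather than what actually ended up on this machine (the installer does the equivalent when two disks are selected for a ZFS mirror):

zpool create tank mirror ada0 ada1   # one mirrored vdev, no separate RAID layer
zpool status tank                    # shows both disks under mirror-0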

First, a quick digression about the hardware setup. The motherboard has two 1 gigabit Ethernet ports (and two 10 gigabit SFP+ ports that I can’t use yet). I connected both ports to the switch but only run DHCP on one, igb0. The other, igb1, is a member of a bridge, bridge0. I followed the FreeBSD handbook’s page on using FreeBSD as a VM host with bhyve, but only the first section; the rest happens a bit differently with libvirt. This is what I added to /etc/rc.conf to persist the changes:

cloned_interfaces="bridge0"
ifconfig_bridge0="addm igb1"
ifconfig_igb1="up"
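
The same setup can be brought up immediately without a reboot; this is roughly the by-hand equivalent, assuming bridge0 doesn’t already exist:

ifconfig bridge0 create      # clone the bridge interface
ifconfig bridge0 addm igb1   # add the physical port as a member
ifconfig bridge0 up
ifconfig igb1 up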

These lines were added to /boot/loader.conf:

vmm_load="YES"
nmdm_load="YES"
if_bridge_load="YES"
bridgestp_load="YES"
if_tap_load="YES"
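
To avoid a reboot, the same modules can also be loaded right away (kldload will complain about any that are already loaded):

kldload vmm nmdm if_bridge bridgestp if_tap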

After installing libvirt from packages, I enabled it in rc.conf and started it. I used the example XML configuration for a Linux guest from libvirt’s page on the bhyve driver and edited it lightly to suit my needs. Checking the Raspberry Pi showed the controller only needed 512 MiB of RAM, and I didn’t need the CD-ROM device. The disk was pointed at a raw image, an enlarged copy of Debian’s cloud image: it defaults to 2 GiB and I needed at least that, so I expanded it to 4 GiB. The network bridge was changed to the newly created bridge0. Below the example config is a snippet about connecting to the guest console; fortuitously predicting problems, I added that snippet to the devices tree as well. Finally, the name was changed to unifi.
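
For reference, the shell side of that paragraph looks roughly like this; it assumes the package’s libvirtd rc script and uses the image path that appears later in this post, and downloading the cloud image itself isn’t shown:

pkg install libvirt
sysrc libvirtd_enable="YES"
service libvirtd start

# Grow the raw cloud image from its default 2 GiB to 4 GiB. The filesystem
# inside still has to grow to match; Debian's cloud images normally take
# care of that on first boot.
truncate -s 4G /vm/unifi/debian-unifi.raw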

Enter another few days of debugging. Defining the VM in libvirt worked as expected (virsh define unifi.xml). Starting it was another matter. I was used to running virsh start unifi and having the VM come up; here, the command hung and wouldn’t return to the shell. It took me a while to figure out that grub wasn’t able to boot the image automatically. I connected to the serial console with cu (cu -l /dev/nmdm0B), pressed enter, and was greeted with the dubious grub> prompt.
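
Collected in one place, the commands from this paragraph look like this (type ~. to disconnect cu):

virsh define unifi.xml
virsh start unifi       # hangs until grub is told what to boot
cu -l /dev/nmdm0B       # attach to the guest's serial console, press enter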

After loading the kernel and initramfs and instructing grub to boot, nothing happened. I tweaked the domain XML a few times to no avail. Checking the logs showed these three lines repeated on each boot attempt:

/usr/sbin/bhyve -c 1 -m 512 -A -I -u -H -P -s 0:0,hostbridge -s 2:0,virtio-net,tap0,mac=52:54:00:ac:ad:a0 -s 0:0,ahci-hd,/vm/unifi/debian-unifi.raw -s 1,lpc -l com1,/dev/nmdm0A unifi
/usr/local/sbin/grub-bhyve --root hd0,msdos1 --device-map /var/run/libvirt/bhyve/grub_bhyve-unifi-device.map --memory 512 --cons-dev /dev/nmdm0A unifi
pci slot 0:0 already occupied!

It seemed clear from the bhyve invocation that the hostbridge and disk devices were clashing over PCI slot 0:0, so bhyve wasn’t running anything after grub finished. However, I never specified PCI slots in the domain’s XML and could not force them to be different. At this point I noticed the network interface used virtio-net. The libvirt docs say only the SATA bus is supported for disks, but it seemed unusual to support virtio-net and not virtio-blk. I switched the disk bus to virtio, started the domain, told grub how to boot the disk, and Debian booted after a few seconds.
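
For completeness, this is roughly what gets typed at the grub> prompt on each boot. grub-bhyve is already rooted at hd0,msdos1 (see its invocation in the log above), so paths are relative to the Debian root partition; the kernel and initrd names below are placeholders that depend on the kernel installed in the image:

grub> ls /boot
grub> linux /boot/vmlinuz-<version> root=/dev/vda1 console=ttyS0
grub> initrd /boot/initrd.img-<version>
grub> boot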

This is the domain XML currently being used:

<domain type='bhyve'>
  <name>unifi</name>
  <uuid>9dd0222a-8359-4f9d-b46c-3b283de81cf5</uuid>
  <memory unit='KiB'>524288</memory>
  <currentMemory unit='KiB'>524288</currentMemory>
  <vcpu placement='static'>1</vcpu>
  <bootloader>/usr/local/sbin/grub-bhyve</bootloader>
  <os>
    <type arch='x86_64'>hvm</type>
  </os>
  <features>
    <acpi/>
    <apic/>
  </features>
  <clock offset='utc'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <disk type='file' device='disk'>
      <driver name='file' type='raw'/>
      <source file='/vm/unifi/debian-unifi.raw'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    <interface type='bridge'>
      <source bridge='bridge0'/>
      <model type='virtio'/>
    </interface>
    <serial type='nmdm'>
      <source master='/dev/nmdm0A' slave='/dev/nmdm0B'/>
    </serial>
  </devices>
</domain>

Needing to manually tell grub how to boot the VM is frustrating, but being able to use virtio for both the disk and the network is good enough.