OpenBSD as a VM host
This post was published on 22 Mar 2025
Firstly, just as a general website update, I have added a new section to my website simply titled “Tech”, where you can find some more detailed information about the tech I use and that which powers my websites! With that out of the way, let’s start talking about VM stuff on OpenBSD!
I am sure there are other ways of doing this, but, once again, I wanted to use only the tools that natively come with OpenBSD (`vmm`/`vmd`), even if they’re perhaps a bit underdeveloped compared to other, more established tools. The major drawbacks are that only one core can be assigned to a VM (no multi-core guests are possible so far) and that running anything but OpenBSD seems to be very hit or miss, unfortunately. I briefly tried getting Debian 12 and EndeavourOS to run, but had trouble getting either to even boot. I will, however, be looking into virtualising some other operating systems soon and will (probably) write another blog post about that. I believe the main issue is that you can only connect to a VM through a serial console on OpenBSD, and I doubt most modern Linux distributions ship with a serial console enabled by default.
Nevertheless, OpenBSD’s built-in virtualisation does work and works decently well, though you should absolutely not expect anything full-fledged yet. For a lot of what I tend to use VMs for, however, it works perfectly fine! Networking also works decently well with your being able to create virtual switches / networks to which the VMs connect (if you wish) or you can have your VMs connect to the network the host is connected to as well.
To get up and running, I would recommend reading through the OpenBSD FAQ on virtualisation. I will not go over setting up OpenBSD VMs in much detail, as the FAQ already covers that thoroughly; instead, I’ll be going through my own setup. And be sure to read the setup process thoroughly! I have the bad tendency to skip paragraphs my brain deems unimportant, and I ended up skipping the following:
> In some cases, virtualization capabilities must be manually enabled in the system’s BIOS. Be sure to run the fw_update(8) command after doing so to get the required vmm-firmware package.
My skipping this part meant that none of my VMs would start; running `vmctl start <vm_name>` gave me the following error: `vmctl: vmm bios firmware file not found`. It took me a while to troubleshoot until I finally realised I had just… accidentally skipped one step!
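For reference, the missing step looks roughly like this on the host (a sketch, assuming `vmd` has not been enabled yet; `fw_update` with no arguments picks the firmware packages the system needs, including vmm-firmware):

```shell
# Fetch any missing firmware packages, including vmm-firmware
fw_update

# Enable and start the VM daemon so it comes up on every boot
rcctl enable vmd
rcctl start vmd
```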
My own VM configuration
First of all (and as also mentioned in the previous blog post), I have a separate drive in my OpenBSD machine mounted at `/mnt/ssd1`, which is where I store all of my VM-related files (well, not all of them, but at any rate the VM images and ISO files). The FAQ mentions that you can start a VM using `vmctl`, but I wanted to save my VM configs in `/etc/vm.conf` so that my VMs actually start automatically after the host has finished booting up. I also created two virtual switches, one for my own VMs and one for the VMs of one of my partners, Aely.
Network configuration (on the host)
All devices in my homelab are in the `10.0.0.0/8` range, mostly just because I like it. My first PVE node is in `10.10.10.0/24`, my second node is in `10.10.20.0/24`, and so, naturally, my third server would have its VMs in `10.10.30.0/24` and (for Aely) in `10.10.31.0/24`. Therefore, I created two new interfaces. Here’s what the (trimmed) output of `ifconfig` looks like now:
`ifconfig` output:

```
em0: flags=a48843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST,AUTOCONF6TEMP,AUTOCONF6,AUTOCONF4> mtu 1500
	lladdr <redacted>
	index 1 priority 0 llprio 3
	groups: egress
	media: Ethernet autoselect (1000baseT full-duplex,master,rxpause,txpause)
	status: active
	inet6 fe80::<redacted>%em0 prefixlen 64 scopeid 0x1
	inet 192.168.178.138 netmask 0xffffff00 broadcast 192.168.178.255
enc0: flags=0<>
	index 2 priority 0 llprio 3
	groups: enc
	status: active
veb0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST>
	description: switch4-vswitch1
	index 4 llprio 3
	groups: veb
	vport0 flags=3<LEARNING,DISCOVER>
		port 5 ifpriority 0 ifcost 0
	tap0 flags=3<LEARNING,DISCOVER>
		port 19 ifpriority 0 ifcost 0
	tap1 flags=3<LEARNING,DISCOVER>
		port 20 ifpriority 0 ifcost 0
	tap2 flags=3<LEARNING,DISCOVER>
		port 21 ifpriority 0 ifcost 0
vport0: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> mtu 1500
	lladdr <redacted>
	index 5 priority 0 llprio 3
	groups: vport
	inet 10.10.30.1 netmask 0xffffff00 broadcast 10.10.30.255
tap0: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> mtu 1500
	lladdr <redacted>
	description: vm1-if0-dokuwiki
	index 19 priority 0 llprio 3
	groups: tap
	status: active
tap1: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> mtu 1500
	lladdr <redacted>
	description: vm2-if0-websites
	index 20 priority 0 llprio 3
	groups: tap
	status: active
tap2: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> mtu 1500
	lladdr <redacted>
	description: vm3-if0-ruby
	index 21 priority 0 llprio 3
	groups: tap
	status: active
vport1: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> mtu 1500
	lladdr <redacted>
	index 23 priority 0 llprio 3
	groups: vport
	inet 10.10.31.1 netmask 0xffffff00 broadcast 10.10.31.255
veb1: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST>
	description: switch5-vswitch2
	index 24 llprio 3
	groups: veb
	vport1 flags=3<LEARNING,DISCOVER>
		port 23 ifpriority 0 ifcost 0
	tap3 flags=3<LEARNING,DISCOVER>
		port 27 ifpriority 0 ifcost 0
tap3: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> mtu 1500
	lladdr <redacted>
	description: vm4-if0-aely
	index 27 priority 0 llprio 3
	groups: tap
	status: active
```
As you can see, it looks a tad confusing. `em0` is the physical NIC connected to my router, from which it gets a (static) DHCP lease. `veb0` and `veb1` are basically, as I understand it, virtual switches that I created, to which you can connect either your VMs or the virtual network adapters (`vportX`) on your host. I am not entirely sure if this is an accurate description, but thinking of it this way makes sense to me, at any rate. `veb0`, for example, has four things connected to it: `vport0`, which is the virtual interface belonging to the host with an IP of `10.10.30.1`, and `tap0` through `tap2`, which are the VMs connected to this switch (you connect the VMs to the virtual switch using the `/etc/vm.conf` file I will talk about soon). If you look at the description of these tap devices, you can see that they bear the name of the VM they are attached to, such as `description: vm2-if0-websites`. The same goes for the second virtual switch and interface (`veb1` and `vport1`), except that this time the only VM connected to the switch is the one belonging to Aely.
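From a guest’s point of view, all of this just looks like an ordinary network with the host’s `vport` address as the gateway. As a sketch of what one of my OpenBSD guests might use (the interface name `vio0` and the `.10` address are illustrative assumptions, not taken from my actual configs):

```
# /etc/hostname.vio0  (inside the guest)
inet 10.10.30.10 255.255.255.0

# /etc/mygate  (inside the guest; points at vport0 on the host)
10.10.30.1
```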
The creation of these interfaces is rather easy: you simply create two files for each pair (one for the virtual interface of the host and one for the switch), namely `/etc/hostname.vportX` (host’s virtual interface) and `/etc/hostname.vebX` (virtual switch). You then give the `vportX` interface an IP and add the `vportX` interface to the `vebX` switch. The two configs look as follows in the case of `vport0` and `veb0` on my machine:
```
# /etc/hostname.vport0
inet 10.10.30.1 255.255.255.0
up

# /etc/hostname.veb0
add vport0
up
```
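To bring these up without rebooting, the standard netstart script can be pointed at just the new interfaces:

```shell
# Read the new hostname.if files and configure the interfaces
sh /etc/netstart vport0 veb0
```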
Once we have done that, we still have to change a few things in the firewall so that traffic from the VM networks gets NATed out of the host. I am still not entirely used to `pf`, so I am sure this isn’t perfect, but it works for now. I have added the following lines, as mentioned in the FAQ:
```
match out on egress from vport0:network to any nat-to (egress)
match out on egress from vport1:network to any nat-to (egress)
```
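The rules alone are not quite enough: the host also needs IP forwarding enabled to route between the `vport` interfaces and egress, and `pf` has to load the new ruleset. A sketch:

```shell
# Let the host route packets between interfaces
# (add net.inet.ip.forwarding=1 to /etc/sysctl.conf to persist it)
sysctl net.inet.ip.forwarding=1

# Check the edited ruleset for syntax errors, then load it
pfctl -n -f /etc/pf.conf
pfctl -f /etc/pf.conf
```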
/etc/vm.conf
Now that we have the networking set up on the host, we can move to the VM configuration itself. Below is my current config file.
```
# /etc/vm.conf
socket owner :vmusers

switch "vswitch1" {
	interface veb0
}

switch "vswitch2" {
	interface veb1
}

vm "dokuwiki" {
	memory 512M
	enable
	disk /mnt/ssd1/dokuwiki.qcow2
	owner hex
	interface {
		switch "vswitch1"
	}
}

vm "websites" {
	memory 512M
	enable
	disk /mnt/ssd1/websites.qcow2
	owner hex
	interface {
		switch "vswitch1"
	}
}

vm "ruby" {
	memory 512M
	enable
	disk /mnt/ssd1/ruby.qcow2
	owner hex
	interface {
		switch "vswitch1"
	}
}

vm "aely" {
	memory 512M
	enable
	disk /mnt/ssd1/aely.qcow2
	owner aely
	interface {
		switch "vswitch2"
	}
}
```
The first thing you can see is that I have changed the owner of the `vmd.sock` socket to the group `vmusers`, and I added both my own user account and Aely’s to that group so that we can each access our own VMs without needing to use `doas` or both being in the `wheel` group. Next come the two virtual switches, which are attached to the `vebX` interfaces we talked about earlier. Afterwards, we get the configuration for each VM.
The VM configuration is mostly pretty straightforward. You can assign a certain amount of memory, `enable` the VM so that it starts automatically, provide a path to the disk image the VM will use for its storage, assign an owner to the VM, and attach an interface. The core count can, as of yet, not be adjusted, so every VM is limited to a single vCPU. Obviously, if you still need to actually install an operating system onto the attached disk, you need to add the ISO and force `vmm` to boot from it. This can be done by adding the following two lines to the VM’s block in the config:
```
boot device cdrom
cdrom /path/to/iso.iso
```
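Alternatively, for a one-off installation you should not need to touch `vm.conf` at all, since `vmctl` can attach the ISO at start time. A sketch (the VM name and ISO path are placeholders):

```shell
# Boot the VM once from the ISO (-B cdrom, -r path) and
# attach to its serial console straight away (-c)
vmctl start -c -B cdrom -r /path/to/install.iso myvm
```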
And that’s about it! You can now start your VMs with `vmctl start <vm_name>` and connect to a VM’s serial console with `vmctl console <vm_name>`. To exit the serial console, you have to use the rather strange escape sequence `~~.` (i.e. two tildes and one dot).
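A couple of other `vmctl` subcommands are handy for day-to-day use (the VM name here is just one of mine as an example):

```shell
vmctl status           # list all configured VMs, their state, memory and owner
vmctl stop websites    # gracefully stop a VM
vmctl stop -f websites # forcefully stop it if it hangs
vmctl reload           # re-read /etc/vm.conf after editing it
```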
Conclusion
Honestly, I am quite happy with how easy it was to set everything up! Despite what I read online, virtualisation with `vmm` and `vmd` is definitely possible, though it is, obviously, very limited compared to other options out there. However, for my use cases, it is more than enough and works perfectly (at least so far). So much so, in fact, that I have moved my websites from an OpenBSD VM running on Proxmox over to an OpenBSD VM running on… well, OpenBSD! Nevertheless, there are still plenty of things you cannot yet feasibly do with these tools, so anyone who needs more than the bare minimum will probably have to use something else.