This short guide is for VFIO newcomers. If you are reading this, then you probably want the benefits of a passthrough VM – namely the comfort and flexibility of a Linux distribution alongside the ability to run both software and games chained to Windows.

Also Read: Binding a GPU to vfio-pci in Debian

Read the Wiki

Before beginning, it should be noted that most (if not all) of this information can be found on the Arch Wiki. If you have not already, it would be a good idea to spend some time reading through the linked page and becoming familiar with some of the core concepts used with VFIO. This guide will assume that you have already enabled IOMMU via AMD-Vi or Intel VT-d in your BIOS, and that you have already enabled IOMMU through a bootloader kernel option (e.g. intel_iommu=on or amd_iommu=on). With that being said, let’s get started:
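If you want to confirm that IOMMU is actually working, and see how your PCI devices are grouped, a small script along the lines of the one on the Arch Wiki will do the job. This is only a sketch; it reads from /sys and is safe to run:

```shell
#!/bin/bash
# List every IOMMU group and the PCI devices it contains.
# If /sys/kernel/iommu_groups has no entries, IOMMU is not enabled.
shopt -s nullglob
for group in /sys/kernel/iommu_groups/*; do
    echo "IOMMU Group ${group##*/}:"
    for device in "$group"/devices/*; do
        # ${device##*/} strips the leading path, leaving the PCI address
        echo -e "\t$(lspci -nns "${device##*/}")"
    done
done
```

Ideally, the GPU you intend to pass through sits in its own group (together with its audio function); if unrelated devices share its group, you may need to pass those through as well.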

Isolating the GPU

One of the first things you will want to do is isolate your GPU. The goal of this is to prevent the Linux kernel from loading drivers that would take control of the GPU. Because of this, it is necessary to have two GPUs installed and functional within your system. One will be used for interacting with your Linux host (just like normal), and the other will be passed through to your Windows guest. In the past, this had to be achieved using a driver called pci-stub. While that is still possible, pci-stub is older and holds no advantage over its successor, vfio-pci.

First, we need to find the device ID of the GPU that will be passed through. Simply run the command:

lspci -nn

and look through the output until you find your desired GPU. So that this guide stays consistent with the more comprehensive one given on the Arch Wiki, I will be using the same GPU and ID numbers provided there:

06:00.0 VGA compatible controller: NVIDIA Corporation GM204 [GeForce GTX 970] [10de:13c2] (rev a1)
06:00.1 Audio device: NVIDIA Corporation GM204 High Definition Audio Controller [10de:0fbb] (rev a1)

You will need to isolate both the GPU and its attached audio device. The ID numbers we are looking for are those contained in brackets at the end of the lines listed above. So:

10de:13c2 10de:0fbb
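If the full lspci -nn listing is long, you can narrow it down with grep. For example, assuming an NVIDIA card:

```shell
# Show only NVIDIA devices; the IDs we need are in the final brackets
lspci -nn | grep -i nvidia
```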

Configuring vfio-pci and Regenerating your Initramfs

Next, we need to instruct vfio-pci to target the devices in question using the ID numbers gathered above. This can be done by editing your /etc/modprobe.d/vfio.conf file and adding the following line:

options vfio-pci ids=10de:13c2,10de:0fbb

Of course, it goes without saying that the device IDs listed above will need to be substituted with your own.

Next, we will need to ensure that vfio-pci is loaded before other graphics drivers. This is accomplished by editing another config file: /etc/mkinitcpio.conf. At the very top of the file you should see a section titled MODULES. Towards the bottom of this section you should see the uncommented line MODULES=. Add the following modules, in this order, before any other graphics drivers (nouveau, radeon, nvidia, etc.) which may be listed: vfio vfio_iommu_type1 vfio_pci vfio_virqfd. A sample line may look like the following:

MODULES="vfio vfio_iommu_type1 vfio_pci vfio_virqfd nouveau"

In the same file, also make sure that modconf is present in the HOOKS line (it is included in the default configuration; add it back if you have removed it):

HOOKS="base udev autodetect modconf block filesystems keyboard fsck"

Now that we have vfio-pci configured to both load before other drivers and target our desired GPU, we need to rebuild our initramfs. Without going into too much detail, the initramfs is used to ensure that our system boots correctly and that the root filesystem is mounted. You can read more about it on the Arch Wiki.

Arch provides a simple and easy way for us to do this:

mkinitcpio -g /boot/linux-custom.img

If you use the stock kernel and want to regenerate its default image instead, run mkinitcpio -p linux.
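After regenerating, you can sanity-check that the vfio modules actually made it into the image (adjust the path to match the image you generated):

```shell
# lsinitcpio lists the contents of an initramfs image;
# each vfio module should appear in the output
lsinitcpio /boot/linux-custom.img | grep vfio
```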


Does it Work?

Now for the moment of truth – reboot and make sure that vfio-pci has properly loaded and is bound to your desired GPU. Run:

lspci -nnk

Find your GPU and ensure that under “Kernel driver in use:” vfio-pci is displayed:

06:00.1 Audio device: NVIDIA Corporation GM204 High Definition Audio Controller [10de:0fbb] (rev a1)
Kernel driver in use: vfio-pci
Kernel modules: snd_hda_intel
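Rather than scanning the whole listing, you can also query just the devices you isolated by their vendor:device IDs (substitute your own):

```shell
# -d filters by [vendor]:[device]; -k shows the kernel driver in use
lspci -nnk -d 10de:13c2
lspci -nnk -d 10de:0fbb
```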

If your GPU is bound correctly then congratulations – you are nearly ready to create a Windows guest and enjoy the sweet, sweet fruits of your labor.


Configuring OVMF and Running libvirt

If you have not already, install libvirt, virt-manager, ovmf, and qemu (all of which are available in the official repositories). OVMF is an open-source UEFI firmware designed for KVM and QEMU virtual machines. ovmf may be omitted if your hardware does not support UEFI, or if you would prefer to use SeaBIOS. However, configuring it is very simple and typically worth the effort. Simply open up /etc/libvirt/qemu.conf and add the path to your OVMF firmware image:

nvram = ["/usr/share/ovmf/ovmf_code_x64.bin:/usr/share/ovmf/ovmf_vars_x64.bin"]

Note that the exact filenames and paths vary between ovmf package versions; check the contents of /usr/share/ovmf (or /usr/share/edk2-ovmf) if libvirt cannot find the firmware image.

From here, you need to start both libvirtd and its logger, virtlogd.socket. This can be accomplished by running (with root permissions):

systemctl start libvirtd.service
systemctl start virtlogd.socket

If you want to avoid having to start both whenever you plan on using your virtual machine, simply set both to start at bootup:

systemctl enable libvirtd.service
systemctl enable virtlogd.socket

With libvirt running and your GPU bound, you are now prepared to open up virt-manager and begin configuring your virtual machine. virt-manager has a fairly comprehensive and intuitive GUI, so you should have little trouble getting your Windows guest up and running. When configuring your guest, navigate to the “Add Hardware” section and select both the GPU and its audio device that were isolated previously. From the same “Add Hardware” menu, you will be able to add USB devices, storage drives, and make other changes to your virtual machine.
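Under the hood, selecting those PCI devices in virt-manager adds <hostdev> entries to the guest's libvirt XML. For reference, the entries for the example GPU at 06:00.0 and 06:00.1 look roughly like this (a sketch; your bus, slot, and function numbers will differ):

```xml
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <!-- PCI address of the GPU function on the host -->
    <address domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
  </source>
</hostdev>
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <!-- PCI address of the GPU's audio function -->
    <address domain='0x0000' bus='0x06' slot='0x00' function='0x1'/>
  </source>
</hostdev>
```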


The Next Step

To get the best possible performance, consider some of the options outlined in the “Performance Tuning” section of the Arch Wiki. As comprehensive as virt-manager is, some things, like CPU pinning, need to be done manually. Before the NPT fix, users of AMD processors were particularly reliant upon the performance gains offered by CPU pinning. Even now, manually allocating threads to your virtual machine can increase performance significantly, and time invested in doing so will be well rewarded.
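CPU pinning, for instance, is done by editing the guest XML directly (virsh edit <name>) and adding a <cputune> block. A minimal sketch pinning four vCPUs to host cores 2 through 5 might look like this (the core numbers here are an assumption; match them to your own CPU topology, e.g. as reported by lscpu -e):

```xml
<vcpu placement='static'>4</vcpu>
<cputune>
  <!-- Pin each virtual CPU to a dedicated host core -->
  <vcpupin vcpu='0' cpuset='2'/>
  <vcpupin vcpu='1' cpuset='3'/>
  <vcpupin vcpu='2' cpuset='4'/>
  <vcpupin vcpu='3' cpuset='5'/>
</cputune>
```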

Although this short tutorial is far from comprehensive, I hope that it proves helpful to those looking to take their first steps towards creating a functional and usable virtual machine featuring PCIe passthrough. For those who already have such a machine set up, check back within the following weeks for more advanced guides and tutorials.

Join our Discord for support and to chat with our readers and writers!

Images courtesy Arch Linux