FAQ 2018-01-10T22:48:09+00:00

What is PCI passthrough? 2017-12-14T03:39:17+00:00

In short: PCI passthrough (sometimes called VFIO or GPU passthrough) is the act of passing a PCI device directly into a virtual machine.
This allows the virtual machine to directly access the PCI device without having to go through any overhead-causing virtualization layers.
Essentially, it allows the virtual machine to use the PCI device with near-native performance.

What is PCI passthrough used for? 2017-12-14T03:13:56+00:00

PCI Passthrough is most commonly used to pass a dedicated video card from a Linux host to a Windows guest.
This way, the Windows virtual machine can handle any graphically intensive content, like games, rendering, and media-editing software, at close-to-bare-metal performance.

Many people want to switch to Linux as their main operating system, but don’t want to give up their games or other Windows-only software.
With PCI passthrough you can run Linux as your main desktop and fire up your Windows VM whenever you need any Windows-only software.

PCI Passthrough can also be used to pass other devices, like USB controllers, PCIe SSDs and network cards; you can even use it to pass an entire SATA controller.
This way you can, for instance, set up a pfSense VM on Linux, pass a dedicated network card to it, and use it as a high-performance, low-energy router.

What do I need for PCI passthrough? 2017-12-14T03:13:23+00:00

The requirements for PCI passthrough are as follows:

  • A CPU with support for hardware virtualization (Intel VT-x and VT-d, or AMD-V)
  • A motherboard (and CPU) with support for IOMMU
  • If passing a GPU, it must have a UEFI ROM.

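A quick way to check the CPU requirement on a running Linux system is to look for the virtualization flags in /proc/cpuinfo (vmx for Intel VT-x, svm for AMD-V). The helper below is just an illustrative sketch; `check_virt` is not a standard tool:

```shell
# Hypothetical helper: report whether a cpuinfo dump advertises
# hardware virtualization (vmx = Intel VT-x, svm = AMD-V).
check_virt() {
    if printf '%s' "$1" | grep -E -q 'vmx|svm'; then
        echo supported
    else
        echo unsupported
    fi
}

# On a real system, feed it the actual cpuinfo:
#   check_virt "$(cat /proc/cpuinfo)"
check_virt "flags : fpu mmx sse2 vmx aes"   # prints: supported
```

For the IOMMU requirement, `dmesg | grep -i -e DMAR -e IOMMU` after boot should show the IOMMU being detected and enabled.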
How does performance compare to bare metal? 2017-12-14T03:38:10+00:00

Nearly identical. For performance data comparisons, refer to this spreadsheet, compiled by user GrayBoltWolf on the Level1Techs forums. Link to the full thread here.

How do I enable Looking Glass? 2017-12-17T23:21:50+00:00

To use Looking Glass, you need to enable IVSHMEM as follows:

  1. Open your guest's XML in an editor:
    # virsh edit {guest}
  2. Include qemu's XML namespace declaration in the 'domain' root element:
    <domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
  3. In the domain element, add IVSHMEM:
    <qemu:commandline>
      <qemu:arg value='-device'/>
      <qemu:arg value='ivshmem-doorbell,chardev=ivshmem,vectors=1'/>
      <qemu:arg value='-chardev'/>
      <qemu:arg value='socket,path=/tmp/ivshmem_socket,id=ivshmem'/>
    </qemu:commandline>
  4. Start the IVSHMEM server on the host (adjust values as necessary):
    ivshmem-server -p /tmp/ivshmem.pid -S /tmp/ivshmem_socket -l 16M -n 8
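Before starting the guest, you can sanity-check that the server actually created its socket (the path matching the -S flag above). `check_socket` below is an illustrative helper, not part of Looking Glass or QEMU:

```shell
# Illustrative helper: verify that a path exists and is a Unix socket,
# as created by ivshmem-server's -S flag.
check_socket() {
    if [ -S "$1" ]; then
        echo "ready: $1"
    else
        echo "missing: $1"
    fi
}

check_socket /tmp/ivshmem_socket
```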
What’s the best distro to use as host? 2017-12-14T04:07:09+00:00

The best distro to use as a host is the one that you are familiar with. As long as the distro runs kernel version 2.6.20 or higher, it will be able to run guests via KVM.
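You can check your kernel against that minimum with `uname -r`; the comparison helper below is a sketch (`kvm_capable` is not a standard command):

```shell
# Hypothetical helper: check whether a kernel version string is at
# least 2.6.20, the first mainline release that shipped KVM.
kvm_capable() {
    # sort -V orders version strings; if 2.6.20 sorts first (or ties),
    # the given kernel is recent enough.
    if [ "$(printf '%s\n' 2.6.20 "$1" | sort -V | head -n 1)" = "2.6.20" ]; then
        echo yes
    else
        echo no
    fi
}

kvm_capable "$(uname -r)"   # any modern kernel: yes
kvm_capable 2.6.19          # prints: no
```

In practice, also confirm that /dev/kvm exists and that the kvm module for your vendor (kvm_intel or kvm_amd) is loaded.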

What hardware is recommended? 2017-12-14T08:40:08+00:00

For IOMMU groupings, modern (X99 and newer) Intel chipsets work nearly flawlessly. Modern (300-series and newer) AMD chipsets work similarly well.

If you are planning on passing a GPU to a guest, and are using a CPU without integrated graphics, the common recommendation is an Nvidia GT 710, which is single-slot, passively cooled, and can easily be picked up for less than $60 USD.

For KVM switches, the only model that is really recommended comes in both single-monitor and dual-monitor versions.

How can I get vfio-pci to bind before my GPU drivers? 2017-12-14T14:11:44+00:00

There are two methods for this, described below starting with the preferred one; follow the instructions for the tools that your distribution uses.

Method one, module order (preferred):

mkinitcpio:
In your base configuration (/etc/mkinitcpio.conf), make sure that the vfio modules precede the GPU driver module, and that modconf is present in your hooks, for example:

MODULES=(… vfio vfio_iommu_type1 vfio_pci vfio_virqfd amdgpu …)

HOOKS=(… modconf …)

Afterwards, regenerate the initramfs for all presets:

# mkinitcpio -P
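The ordering requirement can be sanity-checked with a small script. `vfio_first` is a hypothetical helper for illustration, assuming the MODULES list is passed in as a single space-separated string:

```shell
# Hypothetical helper: check that vfio_pci appears before the GPU
# driver module in a space-separated module list.
vfio_first() {
    list=$1
    gpu=$2
    before=${list%%"$gpu"*}      # everything before the GPU module
    case " $before " in
        *" vfio_pci "*) echo ok ;;
        *) echo "vfio_pci must precede $gpu" ;;
    esac
}

vfio_first "vfio vfio_iommu_type1 vfio_pci vfio_virqfd amdgpu" amdgpu   # prints: ok
```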

initramfs-tools:
In the configuration file (/etc/initramfs-tools/modules), set the vfio modules as soft dependencies of the GPU driver module, for example:

softdep amdgpu pre: vfio vfio_pci

Then regenerate the initramfs:

# update-initramfs -u

In some cases, the GPU driver module still binds before vfio. If that happens, put the same line in the modprobe rule that binds your GPU to vfio-pci (usually /etc/modprobe.d/vfio.conf), for example:

softdep amdgpu pre: vfio vfio_pci
options vfio-pci ids=8086:1c02


Method two, blacklisting:

mkinitcpio:
Create a modprobe rule blacklisting the GPU driver module, for example (using /etc/modprobe.d/blacklist.conf):

blacklist amdgpu

Now, add the file to your mkinitcpio configuration (usually /etc/mkinitcpio.conf), for example:

FILES="… /etc/modprobe.d/blacklist.conf"

initramfs-tools:
Create a modprobe rule blacklisting the GPU driver module, for example (using /etc/modprobe.d/blacklist.conf):

blacklist amdgpu

Then regenerate the initramfs:

# update-initramfs -u

dracut:
Add the vfio modules and omit the GPU driver module in the dracut.conf.d file used for vfio (usually /etc/dracut.conf.d/vfio.conf), for example:

add_drivers+=" vfio vfio_iommu_type1 vfio_pci vfio_virqfd "
omit_drivers+=" amdgpu "

Then rebuild the initramfs:

# dracut -f

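Whichever method you use, you can confirm the result after boot with `lspci -nnk`: the "Kernel driver in use" line for the GPU should read vfio-pci. The parsing helper below is illustrative (`bound_driver` is not a real tool, and the 01:00.0 slot is just an example):

```shell
# Illustrative helper: extract the bound driver from `lspci -nnk`
# output for a single device.
bound_driver() {
    printf '%s\n' "$1" | sed -n 's/^[[:space:]]*Kernel driver in use: //p'
}

# On a real system:  bound_driver "$(lspci -nnk -s 01:00.0)"
sample='01:00.0 VGA compatible controller [0300]: Advanced Micro Devices
        Kernel driver in use: vfio-pci
        Kernel modules: amdgpu'
bound_driver "$sample"    # prints: vfio-pci
```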
Do you use affiliate links? 2017-12-19T00:04:21+00:00

Yes. We keep the site ad-free via donations and through affiliate links in our reviews and articles. We will always link to the retailer with the best price at the time of writing, but some of those links will be provided by sales affiliates.

We will never recommend a product without first vetting and reviewing it, or list irrelevant or inferior products on our affiliates’ request. We will link to any articles containing affiliate links here, as a disclaimer.

Currently we’re active in the Amazon Associates Program and as such need to declare the following:

“We are a participant in the Amazon Services LLC Associates Program, an affiliate advertising program designed to provide a means for us to earn fees by linking to Amazon.com and affiliated sites.”