For quite some time now, various guides on the internet have told us to use modprobe rules for most devices when doing PCI and VGA (GPU) passthrough. In most use cases you will need to pre-bind your device to vfio-pci, or blacklist its kernel module beforehand, because libvirt does not handle dynamic re-binding of GPU/PCI devices well — but sometimes this is not needed.
Commonly, due to the size and complexity of GPU/PCI device drivers, a fair portion of these devices do not support dynamic re-binding well. Because of this, it is generally advised to bind the device to a stub driver such as vfio-pci, which has fewer caveats than re-binding each device's own driver during the start-up and shutdown of a guest OS.
In a few edge cases, all that is needed for GPU/PCI device passthrough to work is to add the device in virt-manager's "Add Hardware" menu, select the PCI device you want to pass through, and libvirt will automagically do the rest of the work for you with its Node Device API and managed mode attribute.
Managed Mode Overview
In 2011 the developers of libvirt added the managed mode attribute to bind and unbind the vfio-pci kernel module gracefully. To do this, libvirt uses its Node Device API, which can detach and re-attach the PCI device in question.
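The same detach/re-attach operations are exposed through virsh, so you can see what managed mode does under the hood by hand. A rough illustration — the PCI address here is a stand-in, substitute your own device's address from lspci (these commands require a working libvirt host and will re-bind real hardware, so treat them as a sketch, not something to paste blindly):

```shell
# Detach the device from its host driver (binds it to vfio-pci)
virsh nodedev-detach pci_0000_02_00_0

# ... run the guest ...

# Re-attach the device to its original host driver
virsh nodedev-reattach pci_0000_02_00_0
```

With managed='yes' in the domain XML, libvirt performs these two steps for you at guest start and shutdown.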
With managed mode, the device being passed through is detached from the host's driver when the guest OS starts, and re-attached to the host's driver when you are done using the guest OS.
When using managed mode, libvirt is also allowed to reset the devices being passed through when the guest OS starts, and to reset them again when the guest OS is shut down. This is critical to ensure proper isolation of the PCI devices shared between the host and guest OS.
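In the domain XML, managed mode is just the managed='yes' attribute on the hostdev element. A minimal sketch, assuming a hypothetical guest GPU at PCI address 0000:03:00.0 (replace with your own device's address):

```xml
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <!-- hypothetical address; find yours with lspci -->
    <address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
  </source>
</hostdev>
```

This is exactly what virt-manager's "Add Hardware" menu generates for a PCI device; with managed='no' you would be responsible for the detach and re-attach yourself.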
In short, manually binding to vfio-pci and adding modprobe rules may not always be necessary.
Modprobe-Free GPU Passthrough Is Possible
This has been tested with an AMD RX 550 and an HD 7790 GPU on the host, and an Nvidia 1070 FE as the guest GPU, on Arch Linux using the FOSS GPU drivers for both cards. I urge others in the VFIO community to test this on Intel + Nvidia and/or all-AMD setups (with the radeon kernel module, not the amdgpu kernel module, due to the re-initialization bug and other issues) on any other distribution of choice to see what works and what does not.
Note: at the time of writing, this was not tested with the Nvidia proprietary drivers, nor with two Nvidia GPUs.
All that I needed in my case was a Xorg configuration file setting the BusID of the specific GPU I wanted to use for the host, which we parse from "lspci -k":
02:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Polaris12 (rev c7)
        Subsystem: XFX Pine Group Inc. Polaris12
        Kernel driver in use: amdgpu
        Kernel modules: amdgpu
In /etc/xorg.conf.d/10-radeon.conf this translates to:
Section "Device"
    Identifier "AMD"       # Set this to whatever name you want
    Driver "amdgpu"
    BusID "PCI:2:0:0"      # lspci's 02:00.0 becomes PCI:2:0:0 (leading zero dropped)
EndSection
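One caveat worth knowing: lspci prints the bus/slot/function in hexadecimal, while the Xorg BusID expects decimal, so for buses above 09 simply dropping the leading zero is not enough. A small sketch of the conversion (the helper name is my own, not from any tool):

```python
def lspci_to_xorg_busid(slot: str) -> str:
    """Convert an lspci slot like '02:00.0' (hex fields) to an
    Xorg BusID like 'PCI:2:0:0' (decimal fields)."""
    bus, rest = slot.split(":")
    device, function = rest.split(".")
    return f"PCI:{int(bus, 16)}:{int(device, 16)}:{int(function, 16)}"

print(lspci_to_xorg_busid("02:00.0"))  # PCI:2:0:0
print(lspci_to_xorg_busid("0a:00.0"))  # PCI:10:0:0
```

For the 02:00.0 device above the decimal and hex forms happen to coincide, which is why the simple "drop the zero" rule works here.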