KVM/QEMU – Windows Guest with GPU Hardware Acceleration

This guide is currently a draft and is probably neither complete nor necessarily correct! It should really only be used if you know exactly what you are doing and what the consequences are!

Prerequisites

  • dedicated graphics card
    • the GPU ROM must support UEFI
  • the CPU must support hardware virtualization and IOMMU
    • Intel: VT-x and VT-d
    • AMD: AMD-V and AMD-Vi
  • the mainboard (both chipset and BIOS) must support IOMMU
    • the graphics cards should each be in their own IOMMU group
  • Windows 7/8/10 (license + installation medium)
After successful activation, Windows 7/8 can be permanently upgraded to Windows 10 with the Windows 10 Upgrade Assistant.
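
Whether the CPU flags and the IOMMU are actually available can be checked from a shell; a minimal sketch (vmx = Intel VT-x, svm = AMD-V; the dmesg filter only shows output once the IOMMU has been enabled as described below):

grep -E -o -m1 'vmx|svm' /proc/cpuinfo
sudo dmesg | grep -i -e DMAR -e IOMMU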

IOMMU

The current IOMMU groups can be listed with the following Bash script.

for g in $(find /sys/kernel/iommu_groups/* -maxdepth 0 -type d | sort -V); do
  echo "IOMMU Group ${g##*/}:";
  for d in $g/devices/*; do
    echo -e "\t$(lspci -nns ${d##*/})";
  done;
done;

An alternative script:

for d in /sys/kernel/iommu_groups/*/devices/*; do
  n=${d#*/iommu_groups/*};
  n=${n%%/*};
  printf 'IOMMU Group %s ' "$n";
  lspci -nns "${d##*/}";
done;
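
If the second loop is saved as a script (the file name iommu-groups.sh is only an example), each device appears on one line together with its group number, so the graphics cards can be located quickly:

bash iommu-groups.sh | grep -E 'VGA|Audio'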

Enable IOMMU in the BIOS

Add the matching IOMMU option to the GRUB kernel command line:

  • for Intel: intel_iommu=on
  • for AMD: amd_iommu=on

Also enable the IOMMU passthrough option:

iommu=pt

The following option specifies which devices the vfio-pci driver should claim:

vfio-pci.ids=10de:13c2,10de:0fbb
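
The vendor and device IDs expected here appear in the square brackets of lspci -nn output (compare the [1002:67df] and [1002:aaf0] pairs in the group listing below); a quick way to read them off:

lspci -nn | grep -E 'VGA|Audio'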

Alternatively, the IDs can be added to a modprobe conf file. Since these conf files are embedded in the initramfs image, a new image must be generated after every change:

/etc/modprobe.d/vfio.conf

options vfio-pci ids=10de:13c2,10de:0fbb
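
After changing the file, rebuild the initramfs, on Fedora for example with dracut (the same command appears again later in this guide):

sudo dracut -f --kver $(uname -r)

Example output of the IOMMU group script on this system: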
IOMMU Group 12:
	01:00.0 USB controller [0c03]: Advanced Micro Devices, Inc. [AMD] Device [1022:43d0] (rev 01)
	01:00.1 SATA controller [0106]: Advanced Micro Devices, Inc. [AMD] 400 Series Chipset SATA Controller [1022:43c8] (rev 01)
	01:00.2 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] 400 Series Chipset PCIe Bridge [1022:43c6] (rev 01)
	02:00.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] 400 Series Chipset PCIe Port [1022:43c7] (rev 01)
	02:01.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] 400 Series Chipset PCIe Port [1022:43c7] (rev 01)
	02:02.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] 400 Series Chipset PCIe Port [1022:43c7] (rev 01)
	02:03.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] 400 Series Chipset PCIe Port [1022:43c7] (rev 01)
	02:04.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] 400 Series Chipset PCIe Port [1022:43c7] (rev 01)
	02:09.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] 400 Series Chipset PCIe Port [1022:43c7] (rev 01)
	03:00.0 Ethernet controller [0200]: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller [10ec:8168] (rev 15)
	07:00.0 VGA compatible controller [0300]: Advanced Micro Devices, Inc. [AMD/ATI] Ellesmere [Radeon RX 470/480/570/570X/580/580X/590] [1002:67df] (rev e7)
	07:00.1 Audio device [0403]: Advanced Micro Devices, Inc. [AMD/ATI] Ellesmere HDMI Audio [Radeon RX 470/480 / 570/580/590] [1002:aaf0]
IOMMU Group 13:
	09:00.0 VGA compatible controller [0300]: Advanced Micro Devices, Inc. [AMD/ATI] Ellesmere [Radeon RX 470/480/570/570X/580/580X/590] [1002:67df] (rev e7)
	09:00.1 Audio device [0403]: Advanced Micro Devices, Inc. [AMD/ATI] Ellesmere HDMI Audio [Radeon RX 470/480 / 570/580/590] [1002:aaf0]

These IOMMU groups force me to use the graphics card (0000:09:00.0, 0000:09:00.1) in IOMMU group 13 for GPU passthrough, because the graphics card in group 12 shares its group with devices that I do not want to, and should not, pass through.

Alternatively, the ACS override patch can be used to obtain smaller groups, which ideally contain exactly one device each.

ACS override Patch

The patch should really only be used if you know exactly what you are doing and what the consequences are!
  1. Compile the kernel with the patch

RPM Fusion Repository

sudo dnf install \
https://download1.rpmfusion.org/free/fedora/rpmfusion-free-release-$(rpm -E %fedora).noarch.rpm \
https://download1.rpmfusion.org/nonfree/fedora/rpmfusion-nonfree-release-$(rpm -E %fedora).noarch.rpm
sudo dnf install fedpkg fedora-packager fedora-review rpmdevtools \
ncurses-devel numactl-devel pesign
sudo dnf groupinstall "Development Tools"
sudo dnf builddep kernel

Set up the build tree (only needed if this account has never built RPMs before):

rpmdev-setuptree

Install the kernel sources and dependencies:

cd ~/rpmbuild/SOURCES
# download the kernel source RPM
sudo dnf download --source kernel
# unpack its contents (spec, patches, config files, tarball) into SOURCES
rpm2cpio kernel-* | cpio -i --make-directories
mv kernel-*.src.rpm ../SRPMS
cd ../SRPMS
# install the source RPM, which places the spec file in ~/rpmbuild/SPECS
rpm -Uvh kernel-*.src.rpm
vim ~/rpmbuild/SPECS/kernel.spec

The kernel.spec must be changed in two places. First, the buildid must be defined fairly close to the top of the file. Second, the patch must be added in the patch section:

# Set buildid
%define buildid .acs

# ACS override patch
Patch1000: add-acs-override.patch
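
A quick grep confirms that both changes are in place:

grep -n -e 'buildid .acs' -e 'add-acs-override.patch' ~/rpmbuild/SPECS/kernel.spec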

Download the ACS patch:

cd ~/rpmbuild/SOURCES/
wget https://smart-tux.de/files/2021/2/add-acs-override.patch

Contents of the patch:

From 29ba13919fb1c57cd96724ab2199ac685a34dd5f Mon Sep 17 00:00:00 2001
From: Fedora Kernel Team <kernel-team@fedoraproject.org>
Date: Thu, 14 May 2020 15:06:48 -0600
Subject: [PATCH 86/86] Add ACS overrides for Fedora at 5.6.x

This an updated version of Alex Williamson's patch from:
https://lkml.org/lkml/2013/5/30/513

Original commit message follows:
---
PCIe ACS (Access Control Services) is the PCIe 2.0+ feature that
allows us to control whether transactions are allowed to be redirected
in various subnodes of a PCIe topology.  For instance, if two
endpoints are below a root port or downsteam switch port, the
downstream port may optionally redirect transactions between the
devices, bypassing upstream devices.  The same can happen internally
on multifunction devices.  The transaction may never be visible to the
upstream devices.

One upstream device that we particularly care about is the IOMMU.  If
a redirection occurs in the topology below the IOMMU, then the IOMMU
cannot provide isolation between devices.  This is why the PCIe spec
encourages topologies to include ACS support.  Without it, we have to
assume peer-to-peer DMA within a hierarchy can bypass IOMMU isolation.

Unfortunately, far too many topologies do not support ACS to make this
a steadfast requirement.  Even the latest chipsets from Intel are only
sporadically supporting ACS.  We have trouble getting interconnect
vendors to include the PCIe spec required PCIe capability, let alone
suggested features.

Therefore, we need to add some flexibility.  The pcie_acs_override=
boot option lets users opt-in specific devices or sets of devices to
assume ACS support.  The "downstream" option assumes full ACS support
on root ports and downstream switch ports.  The "multifunction"
option assumes the subset of ACS features available on multifunction
endpoints and upstream switch ports are supported.  The "id:nnnn:nnnn"
option enables ACS support on devices matching the provided vendor
and device IDs, allowing more strategic ACS overrides.  These options
may be combined in any order.  A maximum of 16 id specific overrides
are available.  It's suggested to use the most limited set of options
necessary to avoid completely disabling ACS across the topology.
Note to hardware vendors, we have facilities to permanently quirk
specific devices which enforce isolation but not provide an ACS
capability.  Please contact me to have your devices added and save
your customers the hassle of this boot option.
---
 .../admin-guide/kernel-parameters.txt         |   8 ++
 drivers/pci/quirks.c                          | 103 ++++++++++++++++++
 2 files changed, 111 insertions(+)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 20aac80..e625ef8 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -3472,6 +3472,14 @@
 		nomsi		[MSI] If the PCI_MSI kernel config parameter is
 				enabled, this kernel boot option can be used to
 				disable the use of MSI interrupts system-wide.
+		pci_acs_override [PCIE] Override missing PCIe ACS support for:
+				downstream
+					All downstream ports - full ACS capabilities
+				multifunction
+					Add multifunction devices - multifunction ACS subset
+				id:nnnn:nnnn
+					Specific device - full ACS capabilities
+					Specified as vid:did (vendor/device ID) in hex
 		noioapicquirk	[APIC] Disable all boot interrupt quirks.
 				Safety option to keep boot IRQs enabled. This
 				should never be necessary.
diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
index ca9ed57..2a9bc81 100644
--- a/drivers/pci/quirks.c
+++ b/drivers/pci/quirks.c
@@ -192,6 +192,107 @@ static int __init pci_apply_final_quirks(void)
 }
 fs_initcall_sync(pci_apply_final_quirks);
 
+static bool acs_on_downstream;
+static bool acs_on_multifunction;
+
+#define NUM_ACS_IDS 16
+struct acs_on_id {
+	unsigned short vendor;
+	unsigned short device;
+};
+static struct acs_on_id acs_on_ids[NUM_ACS_IDS];
+static u8 max_acs_id;
+
+static __init int pcie_acs_override_setup(char *p)
+{
+	if (!p)
+		return -EINVAL;
+
+	while (*p) {
+		if (!strncmp(p, "downstream", 10))
+			acs_on_downstream = true;
+		if (!strncmp(p, "multifunction", 13))
+			acs_on_multifunction = true;
+		if (!strncmp(p, "id:", 3)) {
+			char opt[5];
+			int ret;
+			long val;
+
+			if (max_acs_id >= NUM_ACS_IDS - 1) {
+				pr_warn("Out of PCIe ACS override slots (%d)\n",
+						NUM_ACS_IDS);
+				goto next;
+			}
+
+			p += 3;
+			snprintf(opt, 5, "%s", p);
+			ret = kstrtol(opt, 16, &val);
+			if (ret) {
+				pr_warn("PCIe ACS ID parse error %d\n", ret);
+				goto next;
+			}
+			acs_on_ids[max_acs_id].vendor = val;
+		p += strcspn(p, ":");
+		if (*p != ':') {
+			pr_warn("PCIe ACS invalid ID\n");
+			goto next;
+			}
+
+			p++;
+			snprintf(opt, 5, "%s", p);
+			ret = kstrtol(opt, 16, &val);
+			if (ret) {
+				pr_warn("PCIe ACS ID parse error %d\n", ret);
+				goto next;
+			}
+			acs_on_ids[max_acs_id].device = val;
+			max_acs_id++;
+		}
+next:
+		p += strcspn(p, ",");
+		if (*p == ',')
+			p++;
+	}
+
+	if (acs_on_downstream || acs_on_multifunction || max_acs_id)
+		pr_warn("Warning: PCIe ACS overrides enabled; This may allow non-IOMMU protected peer-to-peer DMA\n");
+
+	return 0;
+}
+early_param("pcie_acs_override", pcie_acs_override_setup);
+
+static int pcie_acs_overrides(struct pci_dev *dev, u16 acs_flags)
+{
+	int i;
+
+	/* Never override ACS for legacy devices or devices with ACS caps */
+	if (!pci_is_pcie(dev) ||
+		pci_find_ext_capability(dev, PCI_EXT_CAP_ID_ACS))
+			return -ENOTTY;
+
+	for (i = 0; i < max_acs_id; i++)
+		if (acs_on_ids[i].vendor == dev->vendor &&
+			acs_on_ids[i].device == dev->device)
+				return 1;
+
+switch (pci_pcie_type(dev)) {
+	case PCI_EXP_TYPE_DOWNSTREAM:
+	case PCI_EXP_TYPE_ROOT_PORT:
+		if (acs_on_downstream)
+			return 1;
+		break;
+	case PCI_EXP_TYPE_ENDPOINT:
+	case PCI_EXP_TYPE_UPSTREAM:
+	case PCI_EXP_TYPE_LEG_END:
+	case PCI_EXP_TYPE_RC_END:
+		if (acs_on_multifunction && dev->multifunction)
+			return 1;
+	}
+
+	return -ENOTTY;
+}
+
+
 /*
  * Decoding should be disabled for a PCI device during BAR sizing to avoid
  * conflict. But doing so may cause problems on host bridge and perhaps other
@@ -4796,6 +4897,8 @@ static const struct pci_dev_acs_enabled {
 	{ PCI_VENDOR_ID_ZHAOXIN, 0x9083, pci_quirk_mf_endpoint_acs },
 	/* Zhaoxin Root/Downstream Ports */
 	{ PCI_VENDOR_ID_ZHAOXIN, PCI_ANY_ID, pci_quirk_zhaoxin_pcie_ports_acs },
+	/* allow acs for any */
+	{ PCI_ANY_ID, PCI_ANY_ID, pcie_acs_overrides },
 	{ 0 }
 };
 
-- 
2.26.2

Build the new source package (.src.rpm):

rpmbuild -bs ~/rpmbuild/SPECS/kernel.spec
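
rpmbuild -bs only creates the source package; the result should now exist under ~/rpmbuild/SRPMS:

ls -lh ~/rpmbuild/SRPMS/kernel-*.src.rpm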

  2. Adjust GRUB

GRUB: /etc/default/grub

pcie_acs_override=
        [PCIE] Override missing PCIe ACS support for:
    downstream
        All downstream ports - full ACS capabilities
    multifunction
        All multifunction devices - multifunction ACS subset
    id:nnnn:nnnn
        Specific device - full ACS capabilities
        Specified as vid:did (vendor/device ID) in hex

pcie_acs_override=downstream should be sufficient.

pcie_acs_override=downstream,multifunction should split as many devices as possible into their own IOMMU groups. (On AMD systems, use amd_iommu=on instead of the intel_iommu=on shown in the example below, as listed earlier.)

GRUB_CMDLINE_LINUX="rd.driver.pre=vfio-pci \
rd.driver.blacklist=nouveau modprobe.blacklist=nouveau \
rhgb quiet intel_iommu=on iommu=pt pcie_acs_override=downstream"

Update the GRUB configuration:

sudo grub2-mkconfig -o /boot/efi/EFI/fedora/grub.cfg

Create or edit /etc/modprobe.d/local.conf and add the following line:

install vfio-pci /sbin/vfio-pci-override.sh

Create or edit /etc/dracut.conf.d/local.conf and add the following lines:

add_drivers+="vfio vfio_iommu_type1 vfio_pci vfio_virqfd"
install_items+="/sbin/vfio-pci-override.sh /usr/bin/find /usr/bin/dirname"

Create the file /sbin/vfio-pci-override.sh with permissions 755:

#!/bin/sh

# This script overrides the default driver to be the vfio-pci driver (similar
# to the pci-stub driver) for the devices listed. In this case these are the
# two functions (graphics, audio) of the graphics card in IOMMU group 13.

# Located at /sbin/vfio-pci-override.sh

DEVS="0000:09:00.0 0000:09:00.1"

if [ ! -z "$(ls -A /sys/class/iommu)" ] ; then
  for DEV in $DEVS; do
    echo "vfio-pci" > /sys/bus/pci/devices/$DEV/driver_override
  done
fi

modprobe -i vfio-pci

Rebuild the initramfs image with dracut:

sudo dracut -f --kver `uname -r`
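
After a reboot it is worth verifying that both functions of the card are now bound to vfio-pci (the PCI addresses of the group 13 card are used here):

lspci -nnk -s 09:00.0
lspci -nnk -s 09:00.1

Each device should report "Kernel driver in use: vfio-pci".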

If the system contains two NVIDIA graphics cards

The proprietary NVIDIA drivers must be installed, and the open-source nouveau drivers must be uninstalled or blacklisted.

sudo su -
dnf install xorg-x11-drv-nvidia akmod-nvidia "kernel-devel-uname-r == $(uname -r)" xorg-x11-drv-nvidia-cuda vulkan vdpauinfo libva-vdpau-driver libva-utils
dnf remove "*nouveau*"
echo "blacklist nouveau" >> /etc/modprobe.d/blacklist.conf

  3. Copr repo

Create a Fedora account.

Upload the source RPM from ~/rpmbuild/SRPMS/kernel-*.src.rpm (a Fedora account is required).

(Building the RPMs from the src.rpm can take several hours.)

Enable the new repository:

dnf copr enable user/pkg-name

Install the new kernel:

sudo dnf update \
kernel-*.acs.* kernel-devel-*.acs.* \
--disableexcludes all --refresh

Update and reboot.

OVMF

The Open Virtual Machine Firmware (OVMF) is a project that brings UEFI support to virtual machines. As of Linux 3.9 and recent versions of QEMU, it is possible to pass a graphics card through to the VM, giving it near-native graphics performance.
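
On Fedora, the firmware referenced in the VM definition below (/usr/share/edk2/ovmf/OVMF_CODE.fd) is provided by the edk2-ovmf package; a minimal sketch of the host-side installation:

sudo dnf install edk2-ovmf qemu-kvm libvirt virt-manager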

VFIO, OVMF, GPU, and You – The state of GPU assignment in QEMU/KVM, a presentation by Alex Williamson (Red Hat).

PCI passthrough via OVMF

Passing through all graphics cards except the primary one

Create the script /usr/local/bin/vfio-pci-override.sh to bind vfio-pci to every GPU except the primary one.

#!/bin/sh

# Bind vfio-pci to every GPU whose boot_vga flag is 0, i.e. every card
# except the one the firmware used for boot output.
for i in /sys/bus/pci/devices/*/boot_vga; do
    if [ "$(cat "$i")" -eq 0 ]; then
        GPU="${i%/boot_vga}"
        # Derive the sibling functions (audio, USB) from the GPU address
        AUDIO="$(echo "$GPU" | sed -e "s/0$/1/")"
        USB="$(echo "$GPU" | sed -e "s/0$/2/")"
        echo "vfio-pci" > "$GPU/driver_override"
        if [ -d "$AUDIO" ]; then
            echo "vfio-pci" > "$AUDIO/driver_override"
        fi
        if [ -d "$USB" ]; then
            echo "vfio-pci" > "$USB/driver_override"
        fi
    fi
done

modprobe -i vfio-pci

Passing through a specific GPU

In this case, the GPU to be passed through must be defined explicitly.

#!/bin/sh

DEVS="0000:03:00.0 0000:03:00.1"

if [ ! -z "$(ls -A /sys/class/iommu)" ]; then
    for DEV in $DEVS; do
        echo "vfio-pci" > /sys/bus/pci/devices/$DEV/driver_override
    done
fi

modprobe -i vfio-pci

Performance Tuning

Arch Linux Wiki – Performance Tuning

  • CPU pinning (see the virsh sketch after this list)
  • CPU topology
    • Sockets: 1
    • Cores: 6
    • Threads: 2
  • host-model is now almost as fast as host-passthrough
  • virtio for disk and network
  • raw is faster than qcow2
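
CPU pinning can be configured persistently with virsh; a sketch, assuming the VM is called windows and vCPU 0 should run on host CPU 2:

sudo virsh vcpupin windows 0 2 --config
sudo virsh vcpupin windows

The first command pins one vCPU (repeat per vCPU); the second lists the current pinning.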

NVIDIA

The libvirt configuration must be adjusted so that the graphics driver in Windows does not later detect that it is running in a virtual machine and stop working with error code 43.

  <features>
    <acpi/>
    <apic/>
    <hyperv>
      <vendor_id state="on" value="smart-tux"/>
    </hyperv>
    <kvm>
      <hidden state="on"/>
    </kvm>
    <vmport state="off"/>
    <ioapic driver="kvm"/>
  </features>
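
These elements can be added to an existing VM with virsh edit, which validates the XML when saving:

sudo virsh edit windows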

Preparation (build dependencies, needed later to compile the Looking Glass client)

dnf install \
binutils-devel \
cmake \
egl-wayland-devel \
fontconfig-devel \
libXfixes-devel \
libXi-devel \
libX11-devel \
make \
mesa-libGLU-devel \
mesa-libGLES-devel \
mesa-libGL-devel \
mesa-libEGL-devel \
nettle-devel \
SDL2-devel \
SDL2_ttf-devel \
spice-protocol \
wayland-devel \
wayland-protocols-devel

Single GPU Passthrough

In this variant the machine has exactly one dedicated GPU. It is used by the host system until the guest system is started. Before the guest starts, the GPU is removed from the host and assigned to the guest for as long as the VM is running. Once the guest has been shut down, the GPU can be handed back to the host system. Both systems thus run with GPU hardware acceleration, but never at the same time.

A script is added as /etc/libvirt/hooks/qemu; libvirt runs it with prepare before the VM starts and with release after the VM shuts down for the three virtual machines windows, macOS and VMname, and it takes care of handing the GPU back and forth.

#!/bin/bash
if [[ $1 == "windows" ]] || [[ $1 == "macOS" ]] || [[ $1 == "VMname" ]]; then
  # Load the config file with our environmental variables
  source "/etc/libvirt/hooks/kvm.conf"

  if [[ $2 == "prepare" ]]; then
    # Stop GNOME Display Manager
    systemctl stop gdm.service;
    # Unbind the GPU from display driver
    virsh nodedev-detach $VIRSH_GPU_VIDEO
    virsh nodedev-detach $VIRSH_GPU_AUDIO
  fi

  if [[ $2 == "release" ]]; then
    # Re-Bind GPU to our display drivers
    virsh nodedev-reattach $VIRSH_GPU_VIDEO
    virsh nodedev-reattach $VIRSH_GPU_AUDIO
    # Start GNOME Display Manager
    systemctl start gdm.service;
  fi
fi
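
libvirt only executes the hook if the script is executable, and a newly created hook file is only picked up after the daemon restarts:

sudo chmod +x /etc/libvirt/hooks/qemu
sudo systemctl restart libvirtd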

For a better overview, the device numbers of the graphics card that is to be handed to the VM are stored as variables in /etc/libvirt/hooks/kvm.conf.

# Virsh devices
VIRSH_GPU_VIDEO=pci_0000_09_00_0
VIRSH_GPU_AUDIO=pci_0000_09_00_1
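
The virsh device names follow the pattern pci_<domain>_<bus>_<slot>_<function>; they can be read directly from libvirt:

virsh nodedev-list --cap pci | grep 09_00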

Dual GPU Passthrough

KVM/QEMU

libvirt configuration of the Windows VM.

<domain type="kvm">
  <name>windows</name>
  <uuid>12345678-abcd-1234-abcd-12345678abcd</uuid>
  <title>Windows</title>
  <metadata>
    <libosinfo:libosinfo xmlns:libosinfo="http://libosinfo.org/xmlns/libvirt/domain/1.0">
      <libosinfo:os id="http://microsoft.com/win/10"/>
    </libosinfo:libosinfo>
  </metadata>
  <memory unit="KiB">8388608</memory>
  <currentMemory unit="KiB">8388608</currentMemory>
  <vcpu placement="static">12</vcpu>
  <os>
    <type arch="x86_64" machine="pc-q35-5.1">hvm</type>
    <loader readonly="yes" type="pflash">/usr/share/edk2/ovmf/OVMF_CODE.fd</loader>
    <nvram>/var/lib/libvirt/qemu/nvram/windows_VARS.fd</nvram>
  </os>
  <features>
    <acpi/>
    <apic/>
    <hyperv>
      <vendor_id state="on" value="smart-tux"/>
    </hyperv>
    <kvm>
      <hidden state="on"/>
    </kvm>
    <vmport state="off"/>
    <ioapic driver="kvm"/>
  </features>
  <cpu mode="custom" match="exact" check="partial">
    <model fallback="allow">EPYC-IBPB</model>
    <topology sockets="1" dies="1" cores="6" threads="2"/>
  </cpu>
  <clock offset="localtime">
    <timer name="rtc" tickpolicy="catchup"/>
    <timer name="pit" tickpolicy="delay"/>
    <timer name="hpet" present="no"/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <pm>
    <suspend-to-mem enabled="no"/>
    <suspend-to-disk enabled="no"/>
  </pm>
  <devices>
    <emulator>/usr/bin/qemu-system-x86_64</emulator>
    <disk type="file" device="cdrom">
      <driver name="qemu" type="raw"/>
      <target dev="sdb" bus="sata"/>
      <readonly/>
      <address type="drive" controller="0" bus="0" target="0" unit="1"/>
    </disk>
    <disk type="file" device="disk">
      <driver name="qemu" type="raw"/>
      <source file="/Windows.img"/>
      <target dev="vda" bus="virtio"/>
      <boot order="1"/>
      <address type="pci" domain="0x0000" bus="0x07" slot="0x00" function="0x0"/>
    </disk>
    <controller type="usb" index="0" model="qemu-xhci" ports="15">
      <address type="pci" domain="0x0000" bus="0x02" slot="0x00" function="0x0"/>
    </controller>
    <controller type="sata" index="0">
      <address type="pci" domain="0x0000" bus="0x00" slot="0x1f" function="0x2"/>
    </controller>
    <controller type="pci" index="0" model="pcie-root"/>
    <controller type="pci" index="1" model="pcie-root-port">
      <model name="pcie-root-port"/>
      <target chassis="1" port="0x10"/>
      <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x0" multifunction="on"/>
    </controller>
    <controller type="pci" index="2" model="pcie-root-port">
      <model name="pcie-root-port"/>
      <target chassis="2" port="0x11"/>
      <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x1"/>
    </controller>
    <controller type="pci" index="3" model="pcie-root-port">
      <model name="pcie-root-port"/>
      <target chassis="3" port="0x12"/>
      <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x2"/>
    </controller>
    <controller type="pci" index="4" model="pcie-root-port">
      <model name="pcie-root-port"/>
      <target chassis="4" port="0x13"/>
      <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x3"/>
    </controller>
    <controller type="pci" index="5" model="pcie-root-port">
      <model name="pcie-root-port"/>
      <target chassis="5" port="0x14"/>
      <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x4"/>
    </controller>
    <controller type="pci" index="6" model="pcie-root-port">
      <model name="pcie-root-port"/>
      <target chassis="6" port="0x15"/>
      <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x5"/>
    </controller>
    <controller type="pci" index="7" model="pcie-root-port">
      <model name="pcie-root-port"/>
      <target chassis="7" port="0x16"/>
      <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x6"/>
    </controller>
    <controller type="pci" index="8" model="pcie-root-port">
      <model name="pcie-root-port"/>
      <target chassis="8" port="0x17"/>
      <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x7"/>
    </controller>
    <controller type="pci" index="9" model="pcie-to-pci-bridge">
      <model name="pcie-pci-bridge"/>
      <address type="pci" domain="0x0000" bus="0x08" slot="0x00" function="0x0"/>
    </controller>
    <controller type="pci" index="10" model="pcie-root-port">
      <model name="pcie-root-port"/>
      <target chassis="10" port="0x18"/>
      <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x0" multifunction="on"/>
    </controller>
    <controller type="pci" index="11" model="pcie-root-port">
      <model name="pcie-root-port"/>
      <target chassis="11" port="0x19"/>
      <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x1"/>
    </controller>
    <controller type="pci" index="12" model="pcie-root-port">
      <model name="pcie-root-port"/>
      <target chassis="12" port="0x1a"/>
      <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x2"/>
    </controller>
    <controller type="virtio-serial" index="0">
      <address type="pci" domain="0x0000" bus="0x03" slot="0x00" function="0x0"/>
    </controller>
    <interface type="network">
      <mac address="12:23:34:45:56:78"/>
      <source network="default"/>
      <model type="e1000e"/>
      <link state="down"/>
      <address type="pci" domain="0x0000" bus="0x01" slot="0x00" function="0x0"/>
    </interface>
    <channel type="spicevmc">
      <target type="virtio" name="com.redhat.spice.0"/>
      <address type="virtio-serial" controller="0" bus="0" port="1"/>
    </channel>
    <input type="mouse" bus="ps2"/>
    <input type="keyboard" bus="ps2"/>
    <input type="keyboard" bus="virtio">
      <address type="pci" domain="0x0000" bus="0x0a" slot="0x00" function="0x0"/>
    </input>
    <graphics type="spice" autoport="yes">
      <listen type="address"/>
      <image compression="off"/>
      <gl enable="no"/>
    </graphics>
    <sound model="ich9">
      <address type="pci" domain="0x0000" bus="0x00" slot="0x1b" function="0x0"/>
    </sound>
    <video>
      <model type="none"/>
    </video>
    <hostdev mode="subsystem" type="pci" managed="yes">
      <source>
        <address domain="0x0000" bus="0x09" slot="0x00" function="0x0"/>
      </source>
      <address type="pci" domain="0x0000" bus="0x05" slot="0x00" function="0x0"/>
    </hostdev>
    <hostdev mode="subsystem" type="pci" managed="yes">
      <source>
        <address domain="0x0000" bus="0x09" slot="0x00" function="0x1"/>
      </source>
      <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
    </hostdev>
    <redirdev bus="usb" type="spicevmc">
      <address type="usb" bus="0" port="2"/>
    </redirdev>
    <redirdev bus="usb" type="spicevmc">
      <address type="usb" bus="0" port="3"/>
    </redirdev>
    <memballoon model="virtio">
      <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
    </memballoon>
    <shmem name="looking-glass">
      <model type="ivshmem-plain"/>
      <size unit="M">32</size>
      <address type="pci" domain="0x0000" bus="0x09" slot="0x01" function="0x0"/>
    </shmem>
  </devices>
</domain>
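
Saved as, for example, windows.xml (the file name is arbitrary), the definition can be loaded and the VM started:

sudo virsh define windows.xml
sudo virsh start windows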

Looking Glass

https://looking-glass.io

Installation as described in the Looking Glass wiki.

Download the source code and the Windows host application (64-bit) of the same Looking Glass version.

Compiling the client application:

mkdir client/build
cd client/build
cmake ../
make
sudo ln -s $(pwd)/looking-glass-client /usr/local/bin/
user=$(whoami);
sudo touch /dev/shm/looking-glass && \
sudo chown ${user}:kvm /dev/shm/looking-glass && \
sudo chmod 660 /dev/shm/looking-glass
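
The shared-memory file created with touch does not survive a reboot; the Looking Glass documentation suggests a systemd tmpfiles rule instead (replace user with the actual user name):

/etc/tmpfiles.d/10-looking-glass.conf

f /dev/shm/looking-glass 0660 user kvm -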

Installing the IVSHMEM driver

VirtIO

Starting the Windows VM

  1. Connect a monitor or an HDMI dummy plug to the graphics card. (This step was necessary with every graphics card tested.)
  2. Start the Windows VM
  3. looking-glass-client [SETTINGS] [OPTIONS]
  • Settings
    • win:minimizeOnFocusLoss=no prevents the window from being minimized when it loses focus
  • Options
    • -s disables Spice
    • -S disables the screensaver
    • -F starts directly in full-screen mode
  • Example: looking-glass-client win:minimizeOnFocusLoss=no -F

SR-IOV

Currently only server graphics cards support SR-IOV. AMD calls these multiuser GPUs (MxGPU).

Benchmark

See the video

Sources