https://anadoxin.org/blog

Fixing: `Could not access KVM kernel module`

Mon, 16 March 2026 :: #linux :: #qemu :: #libvirt

Maybe you've seen this error when using virt-manager:

Error starting domain: internal error: process exited while connecting to monitor:
qemu-system-x86_64: -accel kvm: Could not access KVM kernel module: No such file or directory
qemu-system-x86_64: -accel kvm: failed to initialize kvm: No such file or directory

It sounds like something is pretty broken. Potential causes include:

  • the KVM module is not loaded,
  • virtualization is disabled in the BIOS/UEFI,
  • or /dev/kvm does not exist because the kernel/OS lacks KVM support.

But there is another, subtler cause: /dev/kvm is created with the wrong ownership/permissions because udev does not recognize the kvm group as a system group.

This short blog post walks through a solution to this problem.

The problem

On an affected host, checks often look like this:

  • lsmod | grep kvm shows kvm and kvm_intel/kvm_amd loaded.
  • CPU supports virtualization (vmx or svm exists in /proc/cpuinfo).
  • /dev/kvm exists, but is owned by root:root and often mode 0600:
crw------- 1 root root ... /dev/kvm
  • journalctl -u systemd-udevd may contain:
Failed to resolve group 'kvm' ... Not a system group

That line is the key. "Not a system group" tells you exactly what is happening.

Root cause

udev rules (for example in /usr/lib/udev/rules.d/50-udev-default.rules) typically expect:

KERNEL=="kvm", GROUP="kvm", MODE="0666"

If your kvm group has a GID outside the system range (commonly above SYS_GID_MAX from /etc/login.defs), then group resolution can fail during device setup.

When that happens, /dev/kvm may fall back to root:root and restrictive permissions, and QEMU launched by libvirt fails to initialize KVM.

So the wrong permissions on /dev/kvm are only a symptom; the real problem is that the chosen group has the wrong GID (outside of the SYS_GID range).
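To make the failure mode concrete, the check udev effectively performs can be modeled as a comparison of the group's GID against SYS_GID_MAX. This is a simplified sketch, not udev's actual code: the helper name is mine, and 999 is only the common SYS_GID_MAX default.

```shell
# Simplified model: a group is acceptable as a "system group" only if
# its GID is at or below SYS_GID_MAX. (Hypothetical helper; real udev
# resolves groups through NSS internally.)
is_system_gid() {
    [ "$1" -le "$2" ]   # args: GID, SYS_GID_MAX
}

# On a real host you would feed it live values, for example:
#   max=$(awk '$1 == "SYS_GID_MAX" {print $2}' /etc/login.defs)
#   gid=$(getent group kvm | cut -d: -f3)
is_system_gid 78 999 && echo "GID 78: system group, udev is happy"
is_system_gid 1003 999 || echo "GID 1003: 'Not a system group'"
```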

Verification

# 1) Device ownership/mode
ls -l /dev/kvm

# 2) KVM modules
lsmod | grep -E 'kvm|kvm_intel|kvm_amd'

# 3) Group + GID
getent group kvm

# 4) System group range
grep -E '^(SYS_GID_MIN|SYS_GID_MAX)' /etc/login.defs

# 5) udev hint
journalctl -u systemd-udevd --no-pager | grep -i 'Failed to resolve group.*kvm'

If /dev/kvm is root:root and the logs mention "Not a system group", this guide applies.

Temporary workaround (until reboot)

sudo chgrp kvm /dev/kvm
sudo chmod 0660 /dev/kvm

Then retry VM start.

This simply grants access to /dev/kvm to everyone in the kvm group (make sure your user is a member).
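Checking membership is a one-liner; the sketch below wraps it in a small helper (the function name is mine) so the membership test itself is explicit:

```shell
# Hypothetical helper: succeed when NAME appears in the whitespace-
# separated LIST (the format produced by `id -nG`).
in_group() {
    printf '%s\n' $2 | grep -qx -- "$1"
}

# Typical usage on a live system:
if in_group kvm "$(id -nG)"; then
    echo "already in the kvm group"
else
    echo "run: sudo usermod -aG kvm \$USER   # then log in again"
fi
```

Note that group changes from usermod only take effect after logging in again.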

Permanent fix

  1. Move kvm to a free system GID (inside SYS_GID_MIN..SYS_GID_MAX).
  2. Reload/trigger udev (or reboot).
  3. Restart libvirt.

Example:

sudo groupmod -g 78 kvm    # be sure that 78 is not allocated to a group on your system
sudo udevadm control --reload
sudo udevadm trigger --name-match=/dev/kvm
sudo systemctl restart libvirtd

After that:

ls -l /dev/kvm
# expected: group kvm, e.g. crw-rw-rw- 1 root kvm (mode as set by your udev rule)

A bash script that fixes the problem

I prepared a script that:

  • validates prerequisites,
  • finds a free system GID,
  • optionally updates kvm group GID,
  • reloads udev,
  • fixes /dev/kvm,
  • restarts libvirt,
  • prints final diagnostics.
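The "finds a free system GID" step is the only non-obvious part, so here is how it can be sketched (the function name is mine, not necessarily what the script uses): walk the SYS_GID range and take the first GID that no existing group occupies.

```shell
# Sketch: print the first unused GID in MIN..MAX, or fail if the
# whole range is taken. Relies on `getent group GID` failing for
# GIDs that no group uses.
find_free_system_gid() {
    gid=$1
    while [ "$gid" -le "$2" ]; do
        if ! getent group "$gid" >/dev/null; then
            echo "$gid"
            return 0
        fi
        gid=$((gid + 1))
    done
    return 1   # no free GID in the range
}

# Example: pick a GID to hand to `groupmod -g`
# free=$(find_free_system_gid 100 999)
```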

See: fix-kvm-group.sh. Remember to inspect the script yourself before running it, and make sure it's safe.

Run it as root:

sudo bash ./fix-kvm-group.sh

Optional dry run:

sudo bash ./fix-kvm-group.sh --dry-run

Takeaway

  • Always check systemd-udevd logs when /dev/* ownership looks wrong.

If you maintain a podman image or something similar, ensure kvm is provisioned as a system group from the start.
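In an image build, that provisioning step can be as simple as creating the group with the --system flag, which makes groupadd allocate a GID from the SYS_GID_MIN..SYS_GID_MAX range. A sketch, to be run as root at build time (e.g. in a Containerfile RUN step):

```shell
# Create kvm as a system group so its GID lands inside the system range:
groupadd --system kvm
# or pin an explicit system GID up front:
# groupadd --system -g 78 kvm
```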