Linux Kernel Analysis using QEMU and GDB
Abstract
The Linux kernel has evolved tremendously since Linus Torvalds created it in 1991 (see Linux History). As with all significant software projects, the kernel code base has been extended with a variety of mechanisms for analysis, profiling and debugging. All are (more or less) documented at kernel docs, which is generated from structured text files in the $K/Documentation subtree. These include the following:
- kernel probe (kprobe)
- tracing technologies including: tracepoints, ftrace, kprobe tracing
- dev tools: coccinelle, gcov, kasan, kgdb, kmemleak, sparse, etc.
- User-space filesystems exposing kernel operations: procfs, sysfs, debugfs
- QEMU and GDB
This article delves into the final mechanism: setting up a QEMU environment for analyzing/running a Linux kernel under GDB.
The goal is accomplished by the following steps:
- Create a QEMU VM running a Linux/GNU Debian system.
- Build and install a custom Linux kernel from source into the VM.
- Start the QEMU VM and attach to the kernel from the host GDB through the QEMU gdbstub. Then use the host GDB interface to run, break, monitor the kernel running in the VM guest.
While this may sound like a lengthy process, once you understand the steps it takes about 20 minutes of hands-on work and another 20–60 minutes (depending on download and compile power) to set up. Also, you only need to do all the steps the first time you build the VM; subsequent re-builds are much quicker.
Glossary and Definitions
- D-I: Debian Installer
- EDK2: EFI Development Kit version 2
- ISO 9660: A standard format for optical disc media
- $K: the Linux kernel top of source tree
- OVMF: Open Virtual Machine Firmware; a very nice port of the EDK2 to QEMU.
- $Q_P: the QEMU executable, in my case: /usr/local/bin/qemu-system-x86_64
- VM: Virtual Machine
Create a QEMU VM Running a Linux/GNU Debian System
Build and install a contemporary QEMU
I use Ubuntu 18.04, which is distributed with a very old QEMU executable (2.12, I think). In order to pick up A LOT of new features, download, build and install QEMU 5.1.0 with the following configuration:
host> cd <installation location>
host> mkdir -p ./bin/debug/native
host> cd bin/debug/native
host> ../../../configure --prefix=/opt/qemu \
--target-list=x86_64-softmmu \
--enable-debug \
--enable-spice \
--enable-virtfs \
--enable-curses \
--enable-libusb
host> time make -j8 > MAKE.QEMU_X86_64 2>&1
host> sudo make install # installs QEMU bin/, include/, libexec/, var/ under /opt/qemu
Then I define a set of environment variables for directories, documentation and executables, most importantly $Q_P, set to the installed location of the QEMU executable.
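For reference, here is a minimal sketch of the variables used throughout this article; the values are purely illustrative, so adjust them to your own layout:
host> # illustrative values only -- adjust to your own layout
host> export Q_P=/opt/qemu/bin/qemu-system-x86_64  # installed QEMU executable
host> export Q_ISO=/opt/ISO                        # downloaded installation ISOs
host> export Q_VM=$HOME/qemu.work                  # VM disk images and OVMF varstores
host> export K=/opt/distros/K/ksrc/linux-stable    # Linux kernel top of source tree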
Download an Installation ISO
There are a number of Linux/GNU vendors that provide system installation support as a series of ISO 9660 (ISO) files. Each ISO file in the distribution is structured so it will fit on a target medium (CD, DVD, USB flash drive). Typically the distribution files reside on a public server accessible from the internet.
For the most part, I use Debian and Ubuntu (based on Debian) Linux/GNU distros. Occasionally, on a bad day, I need a Microsoft Windows distro but it requires a unique installation key to run properly, even in a VM.
The public installation sites for each are easy to find (and there are many mirror sites).
For this project, I chose the most current (at the time) AMD 64-bit Debian Buster installation disk: debian-10.5.0-amd64-netinst.iso. The net installation is a smallish ISO containing just enough software to install the essential programs and fetch the rest from the network (using a fast network connection!) See Debian Distros for more information:
host> cd /opt/ISO
host> wget -v https://cdimage.debian.org/debian-cd/current/amd64/iso-cd/debian-10.5.0-amd64-netinst.iso
Note that the current release has since been updated to debian-10.9.0-amd64-netinst.iso. Any recent Debian or Ubuntu release will be fine because we are building a custom kernel separately.
Run QEMU D-I Installation
Now that we have a suitable installation ISO, run it in a QEMU VM to install the distro. This requires some tedious manual responses to set up the QEMU VM disk ($Q_DISK), but it should be an infrequent step.
However, if you find the manual install steps become tiresome, look at the D-I Preseed capability. For a good way to add your preseed.cfg, see the instructions at D-I Preseed Edit ISO. For frequent or novice installs this is invaluable: it saves time and increases reliability. It takes some preparation, so we will not discuss it further here.
Here are the steps to create a $Q_DISK VM and run the installation ISO:
host> cd $Q_VM
host> Q_DISK=d10q35.raw
host> ISO=$Q_ISO/debian-10.5.0-amd64-netinst.iso
host> FW_OVMFVARS=d10_ovmfvars.fd
host> FW_OVMFCODE=/opt/distros/qemu-5.1.0/pc-bios/edk2-x86_64-code.fd
host> qemu-img create -f raw $Q_DISK 4G
host> $Q_P -m 1024 -machine q35,accel=kvm -cpu host \
-drive if=pflash,format=raw,readonly=on,file=${FW_OVMFCODE} \
-drive if=pflash,format=raw,file=${FW_OVMFVARS} \
-drive format=raw,file=$Q_DISK \
-cdrom $ISO \
-vga qxl \
-nic user,hostfwd=tcp::10022-:22 \
-monitor stdio
Now let’s document each runtime argument to figure out what the heck is going on. Note that the QEMU command-line help has solid descriptions of the various options (e.g. $Q_P -help, $Q_P -machine help, etc.)
- -m 1024 : 1G of memory for the VM. Make this as small as possible because the D-I sizes the $Q_DISK swap partition to be the same as this, which takes disk space away from the root filesystem.
- -machine q35,accel=kvm : The Q35 chipset is the most recent x86 machine type supported by QEMU. accel=kvm uses the host kernel kvm/kvm_intel modules, which increases performance. We will need to set accel=tcg for the final GDB session.
- -cpu host : Provide all host features to the VM. This can only be used with accel=kvm.
- -drive if=pflash,format=raw,readonly=on,file=$FW_OVMFCODE : Use the OVMF bootloader distributed in the QEMU package. See EDK2 for more info on this. This is pure code (text), so mark it read-only.
- -drive if=pflash,format=raw,file=$FW_OVMFVARS : The OVMF configuration area. This can be modified by going into the bootloader or using man:efibootmgr in the guest.
- -drive format=raw,file=$Q_DISK : the VM image created using qemu-img create. This is the persistent storage for the VM and any changes to the runtime system will be saved here. I use the raw format rather than the qcow2 format because it is a little faster.
- -cdrom $ISO : the downloaded D-I ISO file. This contains the network installation software that will set up a Debian 10.5.0 rootfs on $Q_DISK.
- -vga qxl : the most recent VGA graphics driver implementation based on QXL.
- -nic user,hostfwd=tcp::10022-:22 : a QEMU shortcut for networking settings with default values. For more info, run ip addr in the guest and info network in the QEMU Monitor (mon). The hostfwd option maps host port 10022 to guest port 22 for SSH access.
- -monitor stdio: Start the QEMU Monitor on stdio. You will see a (qemu) prompt almost immediately, from which you can control and monitor the QEMU VM.
After entering the $Q_P command line, the VM will start and you will see the D-I menu on the VGA console. The D-I questions should be self-explanatory, but here are some notes that may help:
- This is a private VM so make a simple root password and user name/password. Let’s assume the two accounts are root:root and user:user.
- The hostname and domain can be the same but make it descriptive so later you can identify it. For example, I used d10q35 for the Debian 10 (Buster) release and the QEMU --machine q35 type.
- Make sure you enable the SSH server during installation. This will be essential for managing the running VM. Do not enable any desktop environment for this exercise; that just grows the start time and disk size.
GRUB will be important later for installing new kernels, so a little explanation is in order for how it gets invoked from the UEFI (via the QEMU OVMF implementation). For Debian, the UEFI uses small programs in /boot/efi/EFI/debian identified by a .efi extension. These are built as Portable Executable files that can be run by the EDK2/OVMF modules. The D-I default configuration has OVMF loading GRUB as a “second” bootloader that manages loading the desired kernel/initramfs, etc. This is all transparent to you, but it gives some context for later, when you will use update-grub to add your custom kernel to /boot/grub/grub.cfg.
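For orientation, here is a quick way to peek at those pieces from inside the guest; a sketch only, since the exact .efi file names vary between Debian releases:
d10q35> ls /boot/efi/EFI/debian/*.efi           # EFI programs OVMF can run (shim/GRUB)
d10q35> sudo grep menuentry /boot/grub/grub.cfg # GRUB menu entries the UEFI chain-loads into
d10q35> sudo update-grub                        # regenerate grub.cfg from /etc/default/grub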
The nice thing about using a VM is: if you screw up, remove $Q_DISK and start over (which is why preseed files are helpful!)
Post Installation Configuration
After the installation is completed, $Q_DISK holds a fully running Debian system. Try to SSH to it from the host: ssh user@localhost -p 10022. This should connect and drop you into the guest /home/user home directory.
The SSH connection will be our main way to run the VM and copy the new kernel to it. To make this more useful I copy an SSH pubkey and .bashrc to the VM user:
host> ssh user@localhost -p 10022 mkdir -p ./.ssh
host> scp -P 10022 ~/.ssh/id_rsa.qemu.pub \
user@localhost:./.ssh/authorized_keys
host> ssh user@localhost -p 10022 chmod 600 ./.ssh/authorized_keys
host> scp -P 10022 $GK/qemukvm/bashrc.qemu user@localhost:./.bashrc
Now re-start the VM using the $Q_P installation command. This works because GRUB will boot from $Q_DISK before the $ISO.
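A minimal sketch of the restart, reusing the variables defined earlier (the -cdrom option can simply be dropped since GRUB on $Q_DISK boots first):
host> $Q_P -m 1024 -machine q35,accel=kvm -cpu host \
 -drive if=pflash,format=raw,readonly=on,file=${FW_OVMFCODE} \
 -drive if=pflash,format=raw,file=${FW_OVMFVARS} \
 -drive format=raw,file=$Q_DISK \
 -vga qxl \
 -nic user,hostfwd=tcp::10022-:22 \
 -monitor stdio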
Build and Install A Linux Kernel into the VM
This is the easy part!
Download and Branch Kernel Source
It seems like most distros have a team that adds patches to the “vanilla” kernel. Some are for tuning, but most are security or driver patches. QEMU is a sandbox with very limited driver support, so don’t worry about getting the latest Debian or Ubuntu source. The “vanilla” tree direct from Greg KH is fine.
To pull the source, use git and put it wherever you have defined $K. There is lots of literature and plenty of stackoverflow questions on using git. Then check out your desired branch or tag on a local branch (to reduce future confusion):
host> cd $K/..
host> git clone \
 git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
host> cd linux-stable
host> git branch -a
host> git tag --list
host> git checkout v5.0.21 -b my5.0.21
Configure and Build Kernel
Now you have the v5.0.21 source. The goal is to build the Debian installation packages for the kernel and support libraries AND the kernel debug package.
First generate a .config to be consumed by the kernel Makefile. There are a lot of ways to do this, but I prefer make olddefconfig. Then, optionally, use make menuconfig to tune the kernel build.
Pro tip: if you want to change a single config option but don’t want to step through the menus, you can use scripts/config --help to see how to change .config values from the command line. For example, to disable module signing:
scripts/config --disable MODULE_SIG
Don’t spend too much time on this; the kernel build is VERY forgiving and efficient.
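As a sketch of the kind of tuning I mean, here are a few scripts/config calls worth considering for this exercise. CONFIG_DEBUG_INFO is what makes the -dbg package useful and CONFIG_GDB_SCRIPTS builds the lx-* GDB helpers used later; verify the option names against your kernel version:
host> cd $K
host> make olddefconfig
host> scripts/config --enable DEBUG_INFO    # debug symbols for the -dbg package
host> scripts/config --enable GDB_SCRIPTS   # build scripts/gdb helpers (lx-symbols, etc.)
host> scripts/config --disable MODULE_SIG   # avoid module signing hassles in a private VM
host> make olddefconfig                     # resolve any new dependencies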
Now build it as a set of Debian packages, writing the build output to a file referred to as $logbuild:
make -j8 LOCALVERSION=-MY5 deb-pkg > $logbuild 2>&1
LOCALVERSION contributes to uniquely naming the resulting packages. For each rebuild I bump the LOCALVERSION number to reduce future confusion about the kernel config. For example, MY5 is the fifth build cycle, mostly to enable debugging and remove unnecessary components (e.g. filesystems, drivers, etc.)
If the build fails, see $logbuild. If it succeeds, you’ll see the following dpkgs in the parent directory ($K/..):
- linux-image-5.0.21-MY5_5.0.21-MY5_amd64.deb: the package containing the linux kernel and /lib/modules. This is THE most important!
- linux-libc-dev_5.0.21-MY5_amd64.deb: the package containing the user-space libs to access kernel services. This is the second most important; without it you may never figure out why things don’t run.
- linux-headers-5.0.21-MY5_5.0.21-MY5_amd64.deb: the package containing the /usr/src/linux-headers-5.0.21-* files. Not really important unless you’re building kernel source or modules in the VM, but still worth installing.
- linux-image-5.0.21-MY5-dbg_5.0.21-MY5_amd64.deb: the HUGE package matching the original kernel package but with debug symbols included in the executables. This is roughly 15x larger with the debug info.
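If you want to sanity-check the artifacts before copying them, dpkg can inspect a package on the host; a quick example, assuming the .deb files landed next to the source tree:
host> cd $K/..
host> ls -lh linux-*MY5*.deb
host> dpkg -I linux-image-5.0.21-MY5_5.0.21-MY5_amd64.deb        # package metadata
host> dpkg -c linux-image-5.0.21-MY5_5.0.21-MY5_amd64.deb | head # files it will install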
Copy and Install Custom Kernel in VM
Now that the kernel .deb packages are created, let’s install them in the VM.
First, scp the three necessary packages into the user home directory on the guest VM:
host> scp -P 10022 \
 linux-image-5.0.21-MY5_5.0.21-MY5_amd64.deb user@localhost:.
host> scp -P 10022 \
 linux-headers-5.0.21-MY5_5.0.21-MY5_amd64.deb user@localhost:.
host> scp -P 10022 \
 linux-libc-dev_5.0.21-MY5_amd64.deb user@localhost:.
Then SSH to the guest, install the packages and update GRUB:
d10q35> sudo dpkg -i linux-*.deb
d10q35> F=/etc/default/grub
d10q35> sudo grub-mkconfig | grep 'menuentry'
d10q35> sudo sed -i -E \
's/GRUB_DEFAULT=0/GRUB_DEFAULT="Debian GNU\/Linux, with Linux 5.0.21-MY5"/' $F
d10q35> sudo sed -i -E \
 's/GRUB_CMDLINE_LINUX_DEFAULT=""/GRUB_CMDLINE_LINUX_DEFAULT="nokaslr"/' $F
d10q35> sudo sed -i -E \
 's/GRUB_TIMEOUT=5/GRUB_TIMEOUT=2/' $F
d10q35> sudo update-grub
- Step 1: Install the recently copied kernel packages.
- Step 2: Use grub-mkconfig to find the menu entry name of the new kernel, then change GRUB_DEFAULT to it.
- Step 3: Add nokaslr to the boot command line. This disables KASLR symbol randomization, which allows the debug symbol table addresses to match the runtime code.
- Step 4: Not necessary, but lower GRUB_TIMEOUT for quicker booting.
Finally, reboot and verify everything is as expected:
d10q35> uname -a
Linux d10q35 5.0.21-MY5 #1 SMP Tue Dec 15 11:53:40 EST 2020 x86_64 GNU/Linux
d10q35> cat /proc/cmdline
BOOT_IMAGE=/boot/vmlinuz-5.0.21-MY5 \
root=UUID=571a582d-d97b-40c5-85f5-536fe1450b2d \
ro console=tty1 console=ttyS0,9600n8 nokaslr
d10q35> file /boot/vmlinuz-5.0.21-MY5
/boot/vmlinuz-5.0.21-MY5: Linux kernel x86 boot executable bzImage, \
version 5.0.21-MY5 (dturvene@linger) #1 SMP Tue Dec 15 11:53:40 EST 2020, \
RO-rootFS, swap_dev 0x8, Normal VGA
Also make sure the disk partitions are all there and reasonable:
d10q35> sudo fdisk -l
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 5EBAFBB2-3529-4950-8E23-A15A32B1D330
Device Start End Sectors Size Type
/dev/sda1 2048 1050623 1048576 512M EFI System
/dev/sda2 1050624 6299647 5249024 2.5G Linux filesystem
/dev/sda3 6299648 8386559 2086912 1019M Linux swap
I see the standard 512M UEFI boot partition (a.k.a. the ESP), the 2.5G rootfs and the 1G swap. That’s good for this exercise.
Shutdown again to start kernel debugging!
Start the QEMU VM and Attach to the Kernel
This final step is the goal of this paper. Essentially:
- Start the VM but freeze it before the guest starts (-S).
- Attach to the QEMU gdbstub using gdb from the host (-s).
- Load the kernel debug symbols in the host gdb session.
- Manage the guest kernel from the host gdb session.
For step 1, start the VM with -S and -s added to the $Q_P command line, and change accel=kvm to accel=tcg (also drop -cpu host, which requires KVM). See $Q_P -help for an explanation of these options. accel=tcg runs slower but executes all kernel code in the QEMU Tiny Code Generator without calling into the host kvm module.
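A sketch of the adjusted command line, reusing the earlier variables and bumping the memory a bit:
host> $Q_P -S -s -m 4096 -machine q35,accel=tcg \
 -drive if=pflash,format=raw,readonly=on,file=${FW_OVMFCODE} \
 -drive if=pflash,format=raw,file=${FW_OVMFVARS} \
 -drive format=raw,file=$Q_DISK \
 -vga qxl \
 -nic user,hostfwd=tcp::10022-:22 \
 -monitor stdio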
Check the guest state from the monitor to make sure it is paused, using the QEMU TCG (not the host KVM module) and waiting for gdb:
(qemu) info status
VM status: paused
(qemu) info kvm
kvm support: disabled
(qemu) info char
gdb: filename=disconnected:tcp:0.0.0.0:1234,server
compat_monitor0: filename=stdio
In another window, start GDB using a command file: gdb -x <cmdfile>. The command file is desirable for simple and repeatable testing.
Here is a simple GDB command file with explanatory comments. I use this to prepare for kernel debugging:
# GDB setup for qemu kernel debug
# must run GDB in $K where vmlinux is built
# See kerndebug.md#qemu, tutorial.gdb for reference
# export X_GDB=$GK/qemukvm/kern.gdb
# gdb -x $X_GDB
# after gdb script is loaded, start the remote with 'gdb> c'
# Set session logging to file
set logging file /home/dturvene/qemu.work/gdb.kern.log
set logging on
# run in emacs cmdline so turn paging off
set pagination off
# attach to qemu gdb_stub, which will show executable warnings
target remote localhost:1234
# load symbols from vmlinux debug build
file ./vmlinux
# software break code not set up initially so use a
# hardware breakpoint for early code.
# i addr start_kernel
hb start_kernel
# Now break on init/main.c:kernel_init kthread and
# add loadable kernel module symbols as each is loaded.
# See $K/scripts/gdb/linux/symbols.py for lx-symbols
b kernel_init
commands 2
lx-symbols
end
The one slightly obscure line is lx-symbols. This is one of the Python 3 helpers located in $K/scripts/gdb/linux that support GDB access to the kernel. These are well worth your time to learn. See QEMU and GDB for setup and usage of the GDB lx-* commands.
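As a taste of what the helpers provide, here are a few I use regularly; this assumes CONFIG_GDB_SCRIPTS was enabled and the vmlinux-gdb.py helpers are loaded (see the kernel's Documentation/dev-tools/gdb-kernel-debugging.rst):
(gdb) lx-version   # print the kernel version banner
(gdb) lx-dmesg     # dump the kernel log buffer
(gdb) lx-ps        # list tasks with their task_struct addresses
(gdb) lx-lsmod     # list loaded modules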
Now you have GDB access to a running kernel. In the attached GDB session do:
...
Hardware assisted breakpoint 1 at 0xffffffff82883cbe: file init/main.c, line 538.
Breakpoint 2 at 0xffffffff81a3b570: file init/main.c, line 1051.
(gdb) c
Continuing.
Breakpoint 1, start_kernel () at init/main.c:538
538 {
(gdb) c
Continuing.
Breakpoint 2, kernel_init (unused=0x0 <irq_stack_union>) at init/main.c:1051
1051 {
loading vmlinux
(gdb) c
scanning for modules in /opt/distros/K/ksrc/linux-stable
loading @0xffffffffc0010000: /opt/distros/K/ksrc/linux-stable/drivers/mfd/lpc_ich.ko
loading @0xffffffffc001d000: /opt/distros/K/ksrc/linux-stable/drivers/ata/libahci.ko
loading @0xffffffffc0002000: /opt/distros/K/ksrc/linux-stable/drivers/i2c/busses/i2c-i801.ko
loading @0xffffffffc002a000: /opt/distros/K/ksrc/linux-stable/drivers/net/ethernet/intel/e1000/e1000.ko
...
You are now running the VM kernel under GDB! SSH to the guest to confirm it’s running and all is good. You can now press Ctrl-C in the GDB session to stop the guest kernel. Set a breakpoint in the scheduler to see which tasks are switching:
(gdb) b finish_task_switch
Breakpoint 3 at 0xffffffff810c9e70: file kernel/sched/core.c, line 2657.
(gdb) commands 3
>printf "prev=%s next=%s\n", prev->comm, $lx_current().comm
>bt
>end
(gdb) c
(gdb) c
...
(gdb) dis 3
Notice the use of the $lx_current() method. This is necessary to recover the kernel’s current task.
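For example, with the guest stopped you can inspect the current task directly; a couple of one-liners from the same GDB session:
(gdb) p $lx_current().pid
(gdb) p $lx_current().comm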
This one example just scratches the surface of what can be achieved with this framework. I have used it to study drivers, the TCP/IP stack, even hardware interrupts (for which I wrote a separate paper).
More Efficient QEMU
You can use the $Q_P command line above (increasing the memory to more than 1G), but restarting the VM becomes easier and more flexible using a QEMU configuration file. There is very limited documentation about the -writeconfig and -readconfig command-line options, so they are a bit of a mystery. But I find collecting most of the VM attributes into a version-controlled file very helpful, even more so when using the QEMU gdbstub facility to connect to a VM.
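One way to bootstrap such a file (in QEMU 5.1) is to append -writeconfig to a known-good command line and let QEMU dump what it parsed; a sketch only, since not every option survives the round trip, so expect to edit the result by hand:
host> $Q_P -S -m 4096 -machine q35,accel=tcg \
 -drive format=raw,file=$Q_DISK \
 -writeconfig d10q35.cfg \
 -monitor stdio
(qemu) quit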
Here’s the command I use to start the VM with the -readconfig option:
${Q_P} -S -s -nodefaults -readconfig ${CFG} -monitor stdio
And here is the ${CFG} file. Notice the comments!
# qemu config file to debug kernel using gdb
# See qemu.sh:d10_r for commandline

[drive "uefi-binary"]
  if = "pflash"
  format = "raw"
  file = "/opt/distros/qemu-5.1.0/pc-bios/edk2-x86_64-code.fd"

[drive "uefi-varstore"]
  if = "pflash"
  format = "raw"
  file = "d10_ovmfvars.fd"

[drive "disk"]
  format = "raw"
  file = "d10q35.raw"

# Enable QXL video, use $Q_P "-display none" to prevent window display
[device "video"]
  driver = "qxl-vga"
  bus = "pcie.0"
  addr = "01.0"

# Create the n1 netdevice and map SSH to host 10022 port
[netdev "n1"]
  type = "user"
  hostfwd = "tcp::10022-:22"

# Use an e1000 driver for n1 net
[device "net"]
  driver = "e1000"
  netdev = "n1"
  bus = "pcie.0"
  addr = "02.0"

# accel = "kvm" or "tcg", must use "tcg" for kernel gdb
# type = "q35"
[machine]
  type = "q35"
  accel = "tcg"

[memory]
  size = "4096"

# set number of cores,
# 1 is best for GDB debug for a single CPU thread
# >1 is better for SMP work
# the qemu executable has several other threads
[smp-opts]
  cpus = "1"
Originally published at http://www.dahetral.com.