diff --git a/README.md b/README.md
index 84fd68f0..9fd45c37 100644
--- a/README.md
+++ b/README.md
@@ -202,34 +202,6 @@
 After a couple of seconds the client will start live migration. The source
 server will exit and the destination server will start, watch the client
 terminal for destination server messages.
-gpio
-----
-
-A [gpio](./samples/gpio-pci-idio-16.c) server implements a very simple GPIO
-device that can be used with a Linux VM.
-
-Start the `gpio` server process:
-
-```
-rm /tmp/vfio-user.sock
-./build/samples/gpio-pci-idio-16 -v /tmp/vfio-user.sock &
-```
-
-Next, build `qemu` and start a VM, as described below.
-
-Log in to your guest VM. You'll probably need to build the `gpio-pci-idio-16`
-kernel module yourself - it's part of the standard Linux kernel, but not usually
-built and shipped on x86.
-
-Once built, you should be able to load the module and observe the emulated GPIO
-device's pins:
-
-```
-insmod gpio-pci-idio-16.ko
-cat /sys/class/gpio/gpiochip480/base > /sys/class/gpio/export
-for ((i=0;i<12;i++)); do cat /sys/class/gpio/OUT0/value; done
-```
-
 shadow_ioeventfd_server
 -----------------------
@@ -241,34 +213,11 @@ demonstrate the benefits of shadow ioeventfd, see
 Other usage notes
 =================
 
-Live migration
---------------
-
-The `master` branch of `libvfio-user` implements live migration with a protocol
-based on vfio's v2 protocol. Currently, there is no support for this in any qemu
-client. For current use cases that support live migration, such as SPDK, you
-should refer to the [migration-v1 branch](https://github.com/nutanix/libvfio-user/tree/migration-v1).
-
 qemu
 ----
 
-`vfio-user` client support is not yet merged into `qemu`. Instead, download and
-build [this branch of qemu](https://github.com/oracle/qemu/tree/vfio-user-6.2).
-
-Create a Linux install image, or use a pre-made one.
-
-Then, presuming you have a `libvfio-user` server listening on the UNIX socket
-`/tmp/vfio-user.sock`, you can start your guest VM with something like this:
-
-```
-./x86_64-softmmu/qemu-system-x86_64 -mem-prealloc -m 256 \
--object memory-backend-file,id=ram-node0,prealloc=yes,mem-path=/dev/hugepages/gpio,share=yes,size=256M \
--numa node,memdev=ram-node0 \
--kernel ~/vmlinuz -initrd ~/initrd -nographic \
--append "console=ttyS0 root=/dev/sda1 single" \
--hda ~/bionic-server-cloudimg-amd64-0.raw \
--device vfio-user-pci,socket=/tmp/vfio-user.sock
-```
+Step-by-step instructions for using `libvfio-user` with `qemu` can be [found
+here](docs/qemu.md).
 
 SPDK
 ----
@@ -302,6 +251,14 @@ You can configure `vfio-user` devices in a `libvirt` domain configuration:
 ```
 
+Live migration
+--------------
+
+The `master` branch of `libvfio-user` implements live migration with a protocol
+based on vfio's v2 protocol. Currently, there is no support for this in any qemu
+client. For current use cases that support live migration, such as SPDK, you
+should refer to the [migration-v1 branch](https://github.com/nutanix/libvfio-user/tree/migration-v1).
+
 History
 =======
diff --git a/docs/qemu.md b/docs/qemu.md
new file mode 100644
index 00000000..e17e33ad
--- /dev/null
+++ b/docs/qemu.md
@@ -0,0 +1,112 @@
+qemu usage walkthrough
+======================
+
+In this walk-through, we'll use an Ubuntu cloudimg along with the
+[gpio sample server](../samples/gpio-pci-idio-16.c) to emulate a very simple
+GPIO device.
+
+Building qemu
+-------------
+
+`vfio-user` client support is not yet merged into `qemu`.
+Instead, download and build
+[jlevon's master.vfio-user branch of
+qemu](https://github.com/jlevon/qemu/tree/master.vfio-user); for example:
+
+```
+git clone -b master.vfio-user git@github.com:jlevon/qemu.git
+cd qemu
+
+./configure --prefix=/usr --enable-kvm --enable-vnc --target-list=x86_64-softmmu --enable-debug --enable-vfio-user-client
+make -j
+```
+
+Configuring the cloudimg
+------------------------
+
+Set up the necessary metadata files:
+
+```
+sudo apt install cloud-image-utils
+
+$ cat metadata.yaml
+instance-id: iid-local01
+local-hostname: cloudimg
+
+$ cat user-data.yaml
+#cloud-config
+ssh_import_id:
+  - gh:jlevon
+
+cloud-localds seed.img user-data.yaml metadata.yaml
+```
+
+Don't forget to replace `jlevon` with *your* GitHub user name.
+
+Starting the server
+-------------------
+
+Start the `gpio` server process:
+
+```
+rm -f /tmp/vfio-user.sock
+./build/samples/gpio-pci-idio-16 -v /tmp/vfio-user.sock &
+```
+
+Booting the guest OS
+--------------------
+
+Make sure your system has hugepages available:
+
+```
+$ cat /proc/sys/vm/nr_hugepages
+1024
+```
+
+Now you should be able to start qemu:
+
+```
+$ imgpath=/path/to/bionic-server-cloudimg-amd64.img
+$ sudo ~/src/build/qemu-system-x86_64 \
+    -machine accel=kvm,type=q35 -cpu host -m 2G \
+    -mem-prealloc -object memory-backend-file,id=ram-node0,prealloc=yes,mem-path=/dev/hugepages/gpio,share=yes,size=2G \
+    -nographic \
+    -device virtio-net-pci,netdev=net0 \
+    -netdev user,id=net0,hostfwd=tcp::2222-:22 \
+    -drive if=virtio,format=qcow2,file=$imgpath \
+    -drive if=virtio,format=raw,file=seed.img \
+    -device vfio-user-pci,socket=/tmp/vfio-user.sock
+```
+
+Log in to your VM and load the kernel driver:
+
+```
+$ ssh -p 2222 ubuntu@localhost
+...
+$ sudo apt install linux-modules-extra-$(uname -r)
+$ sudo modprobe gpio-pci-idio-16
+```
+
+Now we should be able to observe the emulated GPIO device's pins:
+
+```
+$ sudo su -
+# cat /sys/class/gpio/gpiochip480/base > /sys/class/gpio/export
+# for ((i=0;i<12;i++)); do cat /sys/class/gpio/OUT0/value; done
+```
+
+and the server should output something like:
+
+```
+gpio: region2: read 0 from (0:1)
+gpio: region2: read 0 from (0:1)
+gpio: region2: read 0 from (0:1)
+gpio: region2: read 0x1 from (0:1)
+gpio: region2: read 0x1 from (0:1)
+gpio: region2: read 0x1 from (0:1)
+gpio: region2: read 0x2 from (0:1)
+gpio: region2: read 0x2 from (0:1)
+gpio: region2: read 0x2 from (0:1)
+gpio: region2: read 0x3 from (0:1)
+gpio: region2: read 0x3 from (0:1)
+gpio: region2: read 0x3 from (0:1)
+```
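As a footnote to the hugepage setup in the walkthrough: the `1024` figure is not arbitrary. Assuming the default x86 hugepage size of 2 MiB (check `Hugepagesize` in `/proc/meminfo` if unsure), it follows directly from the guest's 2 GiB of RAM:

```shell
# Sanity-check the hugepage reservation: the guest gets 2 GiB of RAM (-m 2G),
# and the default x86 hugepage size is 2 MiB, so 1024 pages are needed.
guest_mib=$((2 * 1024))   # guest RAM expressed in MiB
page_mib=2                # default hugepage size in MiB; see /proc/meminfo
echo $((guest_mib / page_mib))
```

If `/proc/sys/vm/nr_hugepages` reads back as 0, `sudo sysctl vm.nr_hugepages=1024` reserves them.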