Failed: Console Output

Skipping 10,594 KB of earlier output.
++ [[ 192.168.122.34 =~ ^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$ ]]
++ echo 192.168.122.34
+ ip=192.168.122.34
+ [[ 0 == 127 ]]
+ lxc exec testkvm-xenial-noupd -- ssh ubuntu@192.168.122.34 uptime
Warning: Permanently added '192.168.122.34' (ECDSA) to the list of known hosts.
 04:50:30 up 0 min,  0 users,  load average: 0.00, 0.00, 0.00
+ aliverc=0
+ [[ 0 != 0 ]]
+ success
+ logmsg 0 '  => Pass'
+ local lvl=0
+ local 'msg=  => Pass'
+ local sameline=0
+ [[ 0 -ne 0 ]]
+ sameline=1
+ [[ 1 -ne 1 ]]
+ printf '\n'

+ [[ 1 -ne 1 ]]
+ printf %s '  => Pass'
+ tee -a qemu-libvirt-test.status
  => Pass+ break
+ return 0
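
# For reference, the checkalive helper traced above (and again further down)
# reduces to roughly the following. The variable names, the 7x10s retry bound
# and the ssh probe are taken from the xtrace; the surrounding control flow is
# a reconstruction, not the verbatim script:
#
#   checkalive () {
#       local container=$1 guestname=$2
#       local rc=0 aliverc=0 ip=0
#       local n=1 max=7
#       logmsg 3 "Check if guest ${guestname} on ${container} is alive"
#       sleep 10s
#       while true; do
#           ip=$(getguestip "${container}" "${guestname}")
#           rc=$?
#           [[ ${rc} == 127 ]] && break          # helper unavailable, give up
#           lxc exec "${container}" -- ssh "ubuntu@${ip}" uptime
#           aliverc=$?
#           if [[ ${aliverc} != 0 ]]; then       # guest not reachable yet, retry
#               (( n+=1 ))
#               [[ ${n} -gt ${max} ]] && break
#               sleep 10s
#           else
#               success                          # logs '  => Pass'
#               break
#           fi
#       done
#       return 0
#   }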
+ getkvminfo testkvm-xenial-noupd kvmguest-xenial-normal
+ local containername=testkvm-xenial-noupd
+ local guestname=kvmguest-xenial-normal
+ local machinetype=
+ local osversion=
+ local qemu=
+ lxc exec testkvm-xenial-noupd -- systemctl status libvirtd --lines 200 --full --no-pager
● libvirt-bin.service - Virtualization daemon
   Loaded: loaded (/lib/systemd/system/libvirt-bin.service; enabled; vendor preset: enabled)
   Active: active (running) since Wed 2017-11-29 03:28:21 UTC; 1h 23min ago
     Docs: man:libvirtd(8)
           http://libvirt.org
 Main PID: 5352 (libvirtd)
   CGroup: /system.slice/libvirt-bin.service
           ├─ 4324 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/lib/libvirt/libvirt_leaseshelper
           ├─ 4325 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/lib/libvirt/libvirt_leaseshelper
           ├─ 5352 /usr/sbin/libvirtd
           ├─ 6518 /usr/bin/qemu-system-s390x -name kvmguest-xenial-normal -S -machine s390-ccw-virtio-xenial,accel=kvm,usb=off -m 512 -realtime mlock=off -smp 1,sockets=1,cores=1,threads=1 -uuid 3ac13a2d-8601-4af8-8f72-61fcd203b6d2 -nographic -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-kvmguest-xenial-normal/monitor.sock,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -boot strict=on -drive file=/var/lib/uvtool/libvirt/images/kvmguest-xenial-normal.qcow,format=qcow2,if=none,id=drive-virtio-disk0 -device virtio-blk-ccw,scsi=off,devno=fe.0.0000,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -drive file=/var/lib/uvtool/libvirt/images/kvmguest-xenial-normal-ds.qcow,format=raw,if=none,id=drive-virtio-disk1 -device virtio-blk-ccw,scsi=off,devno=fe.0.0001,drive=drive-virtio-disk1,id=virtio-disk1 -netdev tap,fd=27,id=hostnet0 -device virtio-net-ccw,netdev=hostnet0,id=net0,mac=52:54:00:23:7e:44,devno=fe.0.0002 -chardev pty,id=charconsole0 -device sclpconsole,chardev=charconsole0,id=console0 -device virtio-balloon-ccw,id=balloon0,devno=fe.0.0003 -msg timestamp=on
           ├─ 6823 /usr/bin/qemu-system-s390x -name kvmguest-xenial-postcopy -S -machine s390-ccw-virtio-xenial,accel=kvm,usb=off -m 512 -realtime mlock=off -smp 1,sockets=1,cores=1,threads=1 -uuid a95fb259-f1f9-48f1-8246-4dba4b1dc34c -nographic -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-kvmguest-xenial-postcopy/monitor.sock,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -boot strict=on -drive file=/var/lib/uvtool/libvirt/images/kvmguest-xenial-postcopy.qcow,format=qcow2,if=none,id=drive-virtio-disk0 -device virtio-blk-ccw,scsi=off,devno=fe.0.0000,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -drive file=/var/lib/uvtool/libvirt/images/kvmguest-xenial-postcopy-ds.qcow,format=raw,if=none,id=drive-virtio-disk1 -device virtio-blk-ccw,scsi=off,devno=fe.0.0001,drive=drive-virtio-disk1,id=virtio-disk1 -netdev tap,fd=29,id=hostnet0 -device virtio-net-ccw,netdev=hostnet0,id=net0,mac=52:54:00:c6:b4:2d,devno=fe.0.0002 -chardev pty,id=charconsole0 -device sclpconsole,chardev=charconsole0,id=console0 -device virtio-balloon-ccw,id=balloon0,devno=fe.0.0003 -msg timestamp=on
           └─12180 /usr/bin/qemu-system-s390x -name kvmguest-xenial-saverestore -S -machine s390-ccw-virtio-xenial,accel=kvm,usb=off -m 512 -realtime mlock=off -smp 1,sockets=1,cores=1,threads=1 -uuid c4da2a75-a509-4939-bdb6-86244bbf6824 -nographic -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-kvmguest-xenial-saverestore/monitor.sock,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -boot strict=on -drive file=/var/lib/uvtool/libvirt/images/kvmguest-xenial-saverestore.qcow,format=qcow2,if=none,id=drive-virtio-disk0 -device virtio-blk-ccw,scsi=off,devno=fe.0.0000,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -drive file=/var/lib/uvtool/libvirt/images/kvmguest-xenial-saverestore-ds.qcow,format=raw,if=none,id=drive-virtio-disk1 -device virtio-blk-ccw,scsi=off,devno=fe.0.0001,drive=drive-virtio-disk1,id=virtio-disk1 -netdev tap,fd=25,id=hostnet0 -device virtio-net-ccw,netdev=hostnet0,id=net0,mac=52:54:00:40:b2:94,devno=fe.0.0002 -chardev pty,id=charconsole0 -device sclpconsole,chardev=charconsole0,id=console0 -device virtio-balloon-ccw,id=balloon0,devno=fe.0.0003 -msg timestamp=on

Nov 29 03:28:21 testkvm-xenial-noupd systemd[1]: Starting Virtualization daemon...
Nov 29 03:28:21 testkvm-xenial-noupd systemd[1]: Started Virtualization daemon.
Nov 29 03:28:27 testkvm-xenial-noupd dnsmasq[4324]: read /etc/hosts - 7 addresses
Nov 29 03:28:27 testkvm-xenial-noupd dnsmasq[4324]: read /var/lib/libvirt/dnsmasq/default.addnhosts - 0 addresses
Nov 29 03:28:27 testkvm-xenial-noupd dnsmasq-dhcp[4324]: read /var/lib/libvirt/dnsmasq/default.hostsfile
Nov 29 03:28:27 testkvm-xenial-noupd libvirtd[5352]: libvirt version: 1.3.1, package: 1ubuntu10.15 (Christian Ehrhardt <christian.ehrhardt@canonical.com> Mon, 06 Nov 2017 16:36:11 +0100)
Nov 29 03:28:27 testkvm-xenial-noupd libvirtd[5352]: hostname: testkvm-xenial-noupd.lxd
Nov 29 03:28:27 testkvm-xenial-noupd libvirtd[5352]: Failed to open file '/sys/class/fc_host//host1/fabric_name': No such file or directory
Nov 29 03:28:27 testkvm-xenial-noupd libvirtd[5352]: Failed to read fabric WWN for host1
Nov 29 03:28:27 testkvm-xenial-noupd libvirtd[5352]: Failed to open file '/sys/class/fc_host//host0/fabric_name': No such file or directory
Nov 29 03:28:27 testkvm-xenial-noupd libvirtd[5352]: Failed to read fabric WWN for host0
Nov 29 04:28:11 testkvm-xenial-noupd libvirtd[5352]: Unable to open vhost-net. Opened so far 0, requested 1
Nov 29 04:28:21 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPDISCOVER(virbr0) 52:54:00:69:13:ad
Nov 29 04:28:21 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPOFFER(virbr0) 192.168.122.127 52:54:00:69:13:ad
Nov 29 04:28:21 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPREQUEST(virbr0) 192.168.122.127 52:54:00:69:13:ad
Nov 29 04:28:21 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPACK(virbr0) 192.168.122.127 52:54:00:69:13:ad ubuntu
Nov 29 04:28:32 testkvm-xenial-noupd libvirtd[5352]: Unable to open vhost-net. Opened so far 0, requested 1
Nov 29 04:28:41 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPDISCOVER(virbr0) 52:54:00:05:b1:3c
Nov 29 04:28:41 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPOFFER(virbr0) 192.168.122.166 52:54:00:05:b1:3c
Nov 29 04:28:41 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPREQUEST(virbr0) 192.168.122.166 52:54:00:05:b1:3c
Nov 29 04:28:41 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPACK(virbr0) 192.168.122.166 52:54:00:05:b1:3c ubuntu
Nov 29 04:30:36 testkvm-xenial-noupd libvirtd[5352]: Unable to open vhost-net. Opened so far 0, requested 1
Nov 29 04:30:46 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPDISCOVER(virbr0) 52:54:00:6a:30:5b
Nov 29 04:30:46 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPOFFER(virbr0) 192.168.122.73 52:54:00:6a:30:5b
Nov 29 04:30:46 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPDISCOVER(virbr0) 52:54:00:6a:30:5b
Nov 29 04:30:46 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPOFFER(virbr0) 192.168.122.73 52:54:00:6a:30:5b
Nov 29 04:30:46 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPREQUEST(virbr0) 192.168.122.73 52:54:00:6a:30:5b
Nov 29 04:30:46 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPACK(virbr0) 192.168.122.73 52:54:00:6a:30:5b ubuntu
Nov 29 04:32:45 testkvm-xenial-noupd libvirtd[5352]: Cannot open log file: '/var/log/libvirt/qemu/kvmguest-xenial-normal.log': Device or resource busy
Nov 29 04:41:51 testkvm-xenial-noupd libvirtd[5352]: could not find path for descriptor /proc/self/fd/24, skipping
Nov 29 04:41:54 testkvm-xenial-noupd libvirtd[5352]: iohelper reports: 
Nov 29 04:41:54 testkvm-xenial-noupd libvirtd[5352]: Cannot open log file: '/var/log/libvirt/qemu/kvmguest-xenial-saverestore.log': Device or resource busy
Nov 29 04:42:31 testkvm-xenial-noupd libvirtd[5352]: Unable to open vhost-net. Opened so far 0, requested 1
Nov 29 04:43:25 testkvm-xenial-noupd libvirtd[5352]: Cannot open log file: '/var/log/libvirt/qemu/kvmguest-xenial-saverestore.log': Device or resource busy
Nov 29 04:44:07 testkvm-xenial-noupd libvirtd[5352]: Unable to open vhost-net. Opened so far 0, requested 1
Nov 29 04:44:17 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPDISCOVER(virbr0) 52:54:00:23:7e:44
Nov 29 04:44:17 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPOFFER(virbr0) 192.168.122.67 52:54:00:23:7e:44
Nov 29 04:44:17 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPDISCOVER(virbr0) 52:54:00:23:7e:44
Nov 29 04:44:17 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPOFFER(virbr0) 192.168.122.67 52:54:00:23:7e:44
Nov 29 04:44:17 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPREQUEST(virbr0) 192.168.122.67 52:54:00:23:7e:44
Nov 29 04:44:17 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPACK(virbr0) 192.168.122.67 52:54:00:23:7e:44 ubuntu
Nov 29 04:46:11 testkvm-xenial-noupd libvirtd[5352]: Unable to open vhost-net. Opened so far 0, requested 1
Nov 29 04:46:21 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPDISCOVER(virbr0) 52:54:00:40:b2:94
Nov 29 04:46:21 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPOFFER(virbr0) 192.168.122.34 52:54:00:40:b2:94
Nov 29 04:46:21 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPREQUEST(virbr0) 192.168.122.34 52:54:00:40:b2:94
Nov 29 04:46:21 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPACK(virbr0) 192.168.122.34 52:54:00:40:b2:94 ubuntu
Nov 29 04:48:16 testkvm-xenial-noupd libvirtd[5352]: Unable to open vhost-net. Opened so far 0, requested 1
Nov 29 04:48:25 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPDISCOVER(virbr0) 52:54:00:c6:b4:2d
Nov 29 04:48:25 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPOFFER(virbr0) 192.168.122.141 52:54:00:c6:b4:2d
Nov 29 04:48:25 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPREQUEST(virbr0) 192.168.122.141 52:54:00:c6:b4:2d
Nov 29 04:48:25 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPACK(virbr0) 192.168.122.141 52:54:00:c6:b4:2d ubuntu
Nov 29 04:50:23 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPRELEASE(virbr0) 192.168.122.34 52:54:00:40:b2:94
Nov 29 04:50:23 testkvm-xenial-noupd libvirtd[5352]: internal error: End of file from monitor
Nov 29 04:51:10 testkvm-xenial-noupd libvirtd[5352]: Unable to open vhost-net. Opened so far 0, requested 1
Nov 29 04:51:18 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPDISCOVER(virbr0) 192.168.122.34 52:54:00:40:b2:94
Nov 29 04:51:18 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPOFFER(virbr0) 192.168.122.34 52:54:00:40:b2:94
Nov 29 04:51:18 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPREQUEST(virbr0) 192.168.122.34 52:54:00:40:b2:94
Nov 29 04:51:18 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPACK(virbr0) 192.168.122.34 52:54:00:40:b2:94 kvmguest-xenial-saverestore
+ lxc exec testkvm-xenial-noupd -- cat /var/log/libvirt/qemu/kvmguest-xenial-normal.log
2017-11-29 04:28:12.440+0000: starting up libvirt version: 1.3.1, package: 1ubuntu10.15 (Christian Ehrhardt <christian.ehrhardt@canonical.com> Mon, 06 Nov 2017 16:36:11 +0100), qemu version: 2.5.0 (Debian 1:2.5+dfsg-5ubuntu10.16), hostname: testkvm-xenial-noupd
LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin QEMU_AUDIO_DRV=none /usr/bin/qemu-system-s390x -name kvmguest-xenial-normal -S -machine s390-ccw-virtio-xenial,accel=kvm,usb=off -m 512 -realtime mlock=off -smp 1,sockets=1,cores=1,threads=1 -uuid 28c1d2fb-c62b-4059-8e72-5d1936fbf5d0 -nographic -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-kvmguest-xenial-normal/monitor.sock,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -boot strict=on -drive file=/var/lib/uvtool/libvirt/images/kvmguest-xenial-normal.qcow,format=qcow2,if=none,id=drive-virtio-disk0 -device virtio-blk-ccw,scsi=off,devno=fe.0.0000,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -drive file=/var/lib/uvtool/libvirt/images/kvmguest-xenial-normal-ds.qcow,format=raw,if=none,id=drive-virtio-disk1 -device virtio-blk-ccw,scsi=off,devno=fe.0.0001,drive=drive-virtio-disk1,id=virtio-disk1 -netdev tap,fd=27,id=hostnet0 -device virtio-net-ccw,netdev=hostnet0,id=net0,mac=52:54:00:69:13:ad,devno=fe.0.0002 -chardev pty,id=charconsole0 -device sclpconsole,chardev=charconsole0,id=console0 -device virtio-balloon-ccw,id=balloon0,devno=fe.0.0003 -msg timestamp=on
char device redirected to /dev/pts/0 (label charconsole0)
2017-11-29T04:32:46.489936Z qemu-system-s390x: terminating on signal 15 from pid 5352
2017-11-29 04:44:08.149+0000: starting up libvirt version: 1.3.1, package: 1ubuntu10.15 (Christian Ehrhardt <christian.ehrhardt@canonical.com> Mon, 06 Nov 2017 16:36:11 +0100), qemu version: 2.5.0 (Debian 1:2.5+dfsg-5ubuntu10.16), hostname: testkvm-xenial-noupd
LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin QEMU_AUDIO_DRV=none /usr/bin/qemu-system-s390x -name kvmguest-xenial-normal -S -machine s390-ccw-virtio-xenial,accel=kvm,usb=off -m 512 -realtime mlock=off -smp 1,sockets=1,cores=1,threads=1 -uuid 3ac13a2d-8601-4af8-8f72-61fcd203b6d2 -nographic -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-kvmguest-xenial-normal/monitor.sock,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -boot strict=on -drive file=/var/lib/uvtool/libvirt/images/kvmguest-xenial-normal.qcow,format=qcow2,if=none,id=drive-virtio-disk0 -device virtio-blk-ccw,scsi=off,devno=fe.0.0000,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -drive file=/var/lib/uvtool/libvirt/images/kvmguest-xenial-normal-ds.qcow,format=raw,if=none,id=drive-virtio-disk1 -device virtio-blk-ccw,scsi=off,devno=fe.0.0001,drive=drive-virtio-disk1,id=virtio-disk1 -netdev tap,fd=27,id=hostnet0 -device virtio-net-ccw,netdev=hostnet0,id=net0,mac=52:54:00:23:7e:44,devno=fe.0.0002 -chardev pty,id=charconsole0 -device sclpconsole,chardev=charconsole0,id=console0 -device virtio-balloon-ccw,id=balloon0,devno=fe.0.0003 -msg timestamp=on
char device redirected to /dev/pts/0 (label charconsole0)
+ lxc exec testkvm-xenial-noupd -- virsh dominfo kvmguest-xenial-normal
Id:             5
Name:           kvmguest-xenial-normal
UUID:           3ac13a2d-8601-4af8-8f72-61fcd203b6d2
OS Type:        hvm
State:          running
CPU(s):         1
CPU time:       8.5s
Max memory:     524288 KiB
Used memory:    524288 KiB
Persistent:     yes
Autostart:      disable
Managed save:   no
Security model: apparmor
Security DOI:   0
Security label: libvirt-3ac13a2d-8601-4af8-8f72-61fcd203b6d2 (enforcing)

+ [[ false == \t\r\u\e ]]
++ getkvmmt testkvm-xenial-noupd kvmguest-xenial-normal
++ local containername=testkvm-xenial-noupd
++ local guestname=kvmguest-xenial-normal
++ local mt=not-found
++ lxc exec testkvm-xenial-noupd -- virsh dumpxml kvmguest-xenial-normal
+++ lxc exec testkvm-xenial-noupd -- virsh dumpxml kvmguest-xenial-normal
++ xml='<domain type='\''kvm'\'' id='\''5'\''>
  <name>kvmguest-xenial-normal</name>
  <uuid>3ac13a2d-8601-4af8-8f72-61fcd203b6d2</uuid>
  <memory unit='\''KiB'\''>524288</memory>
  <currentMemory unit='\''KiB'\''>524288</currentMemory>
  <vcpu placement='\''static'\''>1</vcpu>
  <resource>
    <partition>/machine</partition>
  </resource>
  <os>
    <type arch='\''s390x'\'' machine='\''s390-ccw-virtio-xenial'\''>hvm</type>
    <boot dev='\''hd'\''/>
  </os>
  <features>
    <acpi/>
    <apic/>
    <pae/>
  </features>
  <clock offset='\''utc'\''/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <emulator>/usr/bin/qemu-system-s390x</emulator>
    <disk type='\''file'\'' device='\''disk'\''>
      <driver name='\''qemu'\'' type='\''qcow2'\''/>
      <source file='\''/var/lib/uvtool/libvirt/images/kvmguest-xenial-normal.qcow'\''/>
      <backingStore type='\''file'\'' index='\''1'\''>
        <format type='\''qcow2'\''/>
        <source file='\''/var/lib/uvtool/libvirt/images/x-uvt-b64-Y29tLnVidW50dS5jbG91ZC5kYWlseTpzZXJ2ZXI6MTYuMDQ6czM5MHggMjAxNzExMjI='\''/>
        <backingStore/>
      </backingStore>
      <target dev='\''vda'\'' bus='\''virtio'\''/>
      <alias name='\''virtio-disk0'\''/>
      <address type='\''ccw'\'' cssid='\''0xfe'\'' ssid='\''0x0'\'' devno='\''0x0000'\''/>
    </disk>
    <disk type='\''file'\'' device='\''disk'\''>
      <driver name='\''qemu'\'' type='\''raw'\''/>
      <source file='\''/var/lib/uvtool/libvirt/images/kvmguest-xenial-normal-ds.qcow'\''/>
      <backingStore/>
      <target dev='\''vdb'\'' bus='\''virtio'\''/>
      <alias name='\''virtio-disk1'\''/>
      <address type='\''ccw'\'' cssid='\''0xfe'\'' ssid='\''0x0'\'' devno='\''0x0001'\''/>
    </disk>
    <interface type='\''network'\''>
      <mac address='\''52:54:00:23:7e:44'\''/>
      <source network='\''default'\'' bridge='\''virbr0'\''/>
      <target dev='\''vnet0'\''/>
      <model type='\''virtio'\''/>
      <alias name='\''net0'\''/>
      <address type='\''ccw'\'' cssid='\''0xfe'\'' ssid='\''0x0'\'' devno='\''0x0002'\''/>
    </interface>
    <console type='\''pty'\'' tty='\''/dev/pts/0'\''>
      <source path='\''/dev/pts/0'\''/>
      <target type='\''sclp'\'' port='\''0'\''/>
      <alias name='\''console0'\''/>
    </console>
    <memballoon model='\''virtio'\''>
      <alias name='\''balloon0'\''/>
      <address type='\''ccw'\'' cssid='\''0xfe'\'' ssid='\''0x0'\'' devno='\''0x0003'\''/>
    </memballoon>
  </devices>
  <seclabel type='\''dynamic'\'' model='\''apparmor'\'' relabel='\''yes'\''>
    <label>libvirt-3ac13a2d-8601-4af8-8f72-61fcd203b6d2</label>
    <imagelabel>libvirt-3ac13a2d-8601-4af8-8f72-61fcd203b6d2</imagelabel>
  </seclabel>
</domain>'
+++ echo '<domain type='\''kvm'\'' id='\''5'\''>
  <name>kvmguest-xenial-normal</name>
  <uuid>3ac13a2d-8601-4af8-8f72-61fcd203b6d2</uuid>
  <memory unit='\''KiB'\''>524288</memory>
  <currentMemory unit='\''KiB'\''>524288</currentMemory>
  <vcpu placement='\''static'\''>1</vcpu>
  <resource>
    <partition>/machine</partition>
  </resource>
  <os>
    <type arch='\''s390x'\'' machine='\''s390-ccw-virtio-xenial'\''>hvm</type>
    <boot dev='\''hd'\''/>
  </os>
  <features>
    <acpi/>
    <apic/>
    <pae/>
  </features>
  <clock offset='\''utc'\''/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <emulator>/usr/bin/qemu-system-s390x</emulator>
    <disk type='\''file'\'' device='\''disk'\''>
      <driver name='\''qemu'\'' type='\''qcow2'\''/>
      <source file='\''/var/lib/uvtool/libvirt/images/kvmguest-xenial-normal.qcow'\''/>
      <backingStore type='\''file'\'' index='\''1'\''>
        <format type='\''qcow2'\''/>
        <source file='\''/var/lib/uvtool/libvirt/images/x-uvt-b64-Y29tLnVidW50dS5jbG91ZC5kYWlseTpzZXJ2ZXI6MTYuMDQ6czM5MHggMjAxNzExMjI='\''/>
        <backingStore/>
      </backingStore>
      <target dev='\''vda'\'' bus='\''virtio'\''/>
      <alias name='\''virtio-disk0'\''/>
      <address type='\''ccw'\'' cssid='\''0xfe'\'' ssid='\''0x0'\'' devno='\''0x0000'\''/>
    </disk>
    <disk type='\''file'\'' device='\''disk'\''>
      <driver name='\''qemu'\'' type='\''raw'\''/>
      <source file='\''/var/lib/uvtool/libvirt/images/kvmguest-xenial-normal-ds.qcow'\''/>
      <backingStore/>
      <target dev='\''vdb'\'' bus='\''virtio'\''/>
      <alias name='\''virtio-disk1'\''/>
      <address type='\''ccw'\'' cssid='\''0xfe'\'' ssid='\''0x0'\'' devno='\''0x0001'\''/>
    </disk>
    <interface type='\''network'\''>
      <mac address='\''52:54:00:23:7e:44'\''/>
      <source network='\''default'\'' bridge='\''virbr0'\''/>
      <target dev='\''vnet0'\''/>
      <model type='\''virtio'\''/>
      <alias name='\''net0'\''/>
      <address type='\''ccw'\'' cssid='\''0xfe'\'' ssid='\''0x0'\'' devno='\''0x0002'\''/>
    </interface>
    <console type='\''pty'\'' tty='\''/dev/pts/0'\''>
      <source path='\''/dev/pts/0'\''/>
      <target type='\''sclp'\'' port='\''0'\''/>
      <alias name='\''console0'\''/>
    </console>
    <memballoon model='\''virtio'\''>
      <alias name='\''balloon0'\''/>
      <address type='\''ccw'\'' cssid='\''0xfe'\'' ssid='\''0x0'\'' devno='\''0x0003'\''/>
    </memballoon>
  </devices>
  <seclabel type='\''dynamic'\'' model='\''apparmor'\'' relabel='\''yes'\''>
    <label>libvirt-3ac13a2d-8601-4af8-8f72-61fcd203b6d2</label>
    <imagelabel>libvirt-3ac13a2d-8601-4af8-8f72-61fcd203b6d2</imagelabel>
  </seclabel>
</domain>'
+++ xmllint --xpath 'string(//domain/os/type/@machine)' -
++ mt=s390-ccw-virtio-xenial
++ '[' -z s390-ccw-virtio-xenial ']'
++ echo s390-ccw-virtio-xenial
+ machinetype=s390-ccw-virtio-xenial
+ echo 'Machine Type s390-ccw-virtio-xenial'
Machine Type s390-ccw-virtio-xenial
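
# The machine-type lookup traced above is a virsh dumpxml piped through an
# XPath query. A sketch; the xmllint expression is verbatim from the trace,
# the empty-result fallback is an assumption:
#
#   getkvmmt () {
#       local containername=$1 guestname=$2
#       local mt="not-found"
#       local xml
#       xml=$(lxc exec "${containername}" -- virsh dumpxml "${guestname}")
#       mt=$(echo "${xml}" | xmllint --xpath 'string(//domain/os/type/@machine)' -)
#       [ -z "${mt}" ] && mt="not-found"     # guard against an empty XPath result
#       echo "${mt}"
#   }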
++ getcontaineros testkvm-xenial-noupd
++ local containername=testkvm-xenial-noupd
++ lxc exec testkvm-xenial-noupd -- grep '^VERSION=' /etc/os-release
+ osversion='VERSION="16.04.3 LTS (Xenial Xerus)"'
+ echo 'KVM Host OS is on VERSION="16.04.3 LTS (Xenial Xerus)"'
KVM Host OS is on VERSION="16.04.3 LTS (Xenial Xerus)"
++ getcontainerqemu testkvm-xenial-noupd
++ local containername=testkvm-xenial-noupd
++ lxc exec testkvm-xenial-noupd -- dpkg-query --show qemu-kvm
+ qemu='qemu-kvm	1:2.5+dfsg-5ubuntu10.16'
+ echo 'qemu-kvm	1:2.5+dfsg-5ubuntu10.16'
qemu-kvm	1:2.5+dfsg-5ubuntu10.16
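
# The two host-info helpers traced above are thin wrappers around the
# commands visible in the log (sketch; only the wrapping is assumed):
#
#   getcontaineros () {
#       # e.g. VERSION="16.04.3 LTS (Xenial Xerus)"
#       lxc exec "${1}" -- grep '^VERSION=' /etc/os-release
#   }
#
#   getcontainerqemu () {
#       # e.g. qemu-kvm  1:2.5+dfsg-5ubuntu10.16
#       lxc exec "${1}" -- dpkg-query --show qemu-kvm
#   }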
+ checkalive testkvm-xenial-noupd kvmguest-xenial-normal
+ local container=testkvm-xenial-noupd
+ local guestname=kvmguest-xenial-normal
+ local rc=0
+ local aliverc=0
+ local ip=0
+ logmsg 3 'Check if guest kvmguest-xenial-normal on testkvm-xenial-noupd is alive'
+ local lvl=3
+ local 'msg=Check if guest kvmguest-xenial-normal on testkvm-xenial-noupd is alive'
+ local sameline=0
+ [[ 3 -ne 0 ]]
+ level[${1}]=3
+ for i in '{1..3}'
+ [[ 3 -lt 1 ]]
+ for i in '{1..3}'
+ [[ 3 -lt 2 ]]
+ for i in '{1..3}'
+ [[ 3 -lt 3 ]]
+ [[ 0 -ne 1 ]]
+ printf '\n'
+ tee -a qemu-libvirt-test.status

++ seq 2 3
+ for i in '$(seq 2 "${lvl}")'
+ printf '  '
+ for i in '$(seq 2 "${lvl}")'
+ printf '  '
+ [[ 0 -ne 1 ]]
+ printf '%d.%d.%d ' 5 2 3
+ tee -a qemu-libvirt-test.status
5.2.3 + tee -a qemu-libvirt-test.status
++ date +%H:%M:%S
+ printf '(%s): ' 04:51:24
(04:51:24): + printf %s 'Check if guest kvmguest-xenial-normal on testkvm-xenial-noupd is alive'
+ tee -a qemu-libvirt-test.status
Check if guest kvmguest-xenial-normal on testkvm-xenial-noupd is alive+ local n=1
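
# The logmsg calls traced throughout this log produce the hierarchical
# "5.2.3"-style numbering in qemu-libvirt-test.status. A rough reconstruction;
# the counter and indent logic follow the trace, while the lvl-0 "sameline"
# handling (used for '  => Pass') and the exact tee plumbing are assumptions:
#
#   declare -a level=(0 0 0 0)
#
#   logmsg () {
#       local lvl=$1
#       local msg=$2
#       local sameline=0
#       local i
#       if [[ ${lvl} -ne 0 ]]; then
#           level[${lvl}]=$(( level[lvl] + 1 ))
#           for i in {1..3}; do              # counters below this level restart
#               [[ ${lvl} -lt ${i} ]] && level[${i}]=0
#           done
#           printf '\n' | tee -a qemu-libvirt-test.status
#           for i in $(seq 2 "${lvl}"); do printf '  '; done
#           printf '%d.%d.%d ' "${level[1]}" "${level[2]}" "${level[3]}" | tee -a qemu-libvirt-test.status
#           printf '(%s): ' "$(date +%H:%M:%S)"
#       else
#           sameline=1                       # append to the current line
#       fi
#       printf '%s' "${msg}" | tee -a qemu-libvirt-test.status
#   }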
+ local max=7
+ sleep 10s
+ true
++ getguestip testkvm-xenial-noupd kvmguest-xenial-normal
++ local container=testkvm-xenial-noupd
++ local guestname=kvmguest-xenial-normal
++ local ip=unset
+++ lxc exec testkvm-xenial-noupd -- python -c 'import uvtool.libvirt.kvm; print uvtool.libvirt.kvm.name_to_ips('\''kvmguest-xenial-normal'\'')[0]'
++ ip=192.168.122.67
++ [[ 0 != 0 ]]
++ [[ 192.168.122.67 =~ ^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$ ]]
++ [[ 192.168.122.67 =~ ^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$ ]]
++ echo 192.168.122.67
+ ip=192.168.122.67
+ [[ 0 == 127 ]]
+ lxc exec testkvm-xenial-noupd -- ssh ubuntu@192.168.122.67 uptime
Warning: Permanently added '192.168.122.67' (ECDSA) to the list of known hosts.
 04:51:35 up 7 min,  0 users,  load average: 0.00, 0.00, 0.00
+ aliverc=0
+ [[ 0 != 0 ]]
+ success
+ logmsg 0 '  => Pass'
+ local lvl=0
+ local 'msg=  => Pass'
+ local sameline=0
+ [[ 0 -ne 0 ]]
+ sameline=1
+ [[ 1 -ne 1 ]]
+ printf '\n'

+ [[ 1 -ne 1 ]]
+ printf %s '  => Pass'
+ tee -a qemu-libvirt-test.status
  => Pass+ break
+ return 0
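
# getguestip, traced twice in this section, resolves the guest's DHCP lease
# via uvtool's Python API (python 2 on xenial). The lxc/python invocation and
# the IPv4 sanity regex are verbatim from the trace; the error paths are
# assumptions:
#
#   getguestip () {
#       local container=$1
#       local guestname=$2
#       local ip=unset
#       # uvtool looks the guest name up in libvirt's DHCP leases
#       ip=$(lxc exec "${container}" -- python -c \
#           "import uvtool.libvirt.kvm; print uvtool.libvirt.kvm.name_to_ips('${guestname}')[0]")
#       [[ $? != 0 ]] && return 1
#       [[ ${ip} =~ ^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$ ]] || return 1
#       echo "${ip}"
#   }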
+ removeguest xenial testkvm-xenial-noupd
+ local guestrelease=xenial
+ local container=testkvm-xenial-noupd
+ BashBacktrace
+ [[ true != \t\r\u\e ]]
+ set +x
Backtrace:main:1658 -> removeguest:821
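
# The Backtrace line above comes from a small helper built on bash's FUNCNAME
# and BASH_LINENO arrays; it disables xtrace while printing. A sketch; the
# guard variable name is hypothetical:
#
#   BashBacktrace () {
#       [[ ${BACKTRACE:-true} != "true" ]] && return 0
#       set +x                               # keep the xtrace noise out
#       local i out="Backtrace:"
#       # walk outermost (main) to innermost caller, e.g. main:1658 -> removeguest:821
#       for (( i=${#FUNCNAME[@]}-1; i>=1; i-- )); do
#           out+="${FUNCNAME[${i}]}:${BASH_LINENO[$((i-1))]}"
#           (( i > 1 )) && out+=" -> "
#       done
#       echo "${out}"
#       set -x
#   }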
+ [[ false == \t\r\u\e ]]
+ [[ false == \t\r\u\e ]]
+ logmsg 3 'Remove xenial guest on testkvm-xenial-noupd'
+ local lvl=3
+ local 'msg=Remove xenial guest on testkvm-xenial-noupd'
+ local sameline=0
+ [[ 3 -ne 0 ]]
+ level[${1}]=4
+ for i in '{1..3}'
+ [[ 3 -lt 1 ]]
+ for i in '{1..3}'
+ [[ 3 -lt 2 ]]
+ for i in '{1..3}'
+ [[ 3 -lt 3 ]]
+ [[ 0 -ne 1 ]]
+ printf '\n'
+ tee -a qemu-libvirt-test.status

++ seq 2 3
+ for i in '$(seq 2 "${lvl}")'
+ printf '  '
+ for i in '$(seq 2 "${lvl}")'
+ printf '  '
+ [[ 0 -ne 1 ]]
+ printf '%d.%d.%d ' 5 2 4
+ tee -a qemu-libvirt-test.status
5.2.4 ++ date +%H:%M:%S
+ tee -a qemu-libvirt-test.status
+ printf '(%s): ' 04:51:35
(04:51:35): + printf %s 'Remove xenial guest on testkvm-xenial-noupd'
+ tee -a qemu-libvirt-test.status
Remove xenial guest on testkvm-xenial-noupd+ local release=
+ local direction=
+ local suffix=
+ for suffix in normal saverestore postcopy
+ getkvminfo testkvm-xenial-noupd kvmguest-xenial-normal
+ local containername=testkvm-xenial-noupd
+ local guestname=kvmguest-xenial-normal
+ local machinetype=
+ local osversion=
+ local qemu=
+ lxc exec testkvm-xenial-noupd -- systemctl status libvirtd --lines 200 --full --no-pager
● libvirt-bin.service - Virtualization daemon
   Loaded: loaded (/lib/systemd/system/libvirt-bin.service; enabled; vendor preset: enabled)
   Active: active (running) since Wed 2017-11-29 03:28:21 UTC; 1h 23min ago
     Docs: man:libvirtd(8)
           http://libvirt.org
 Main PID: 5352 (libvirtd)
   CGroup: /system.slice/libvirt-bin.service
           ├─ 4324 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/lib/libvirt/libvirt_leaseshelper
           ├─ 4325 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/lib/libvirt/libvirt_leaseshelper
           ├─ 5352 /usr/sbin/libvirtd
           ├─ 6518 /usr/bin/qemu-system-s390x -name kvmguest-xenial-normal -S -machine s390-ccw-virtio-xenial,accel=kvm,usb=off -m 512 -realtime mlock=off -smp 1,sockets=1,cores=1,threads=1 -uuid 3ac13a2d-8601-4af8-8f72-61fcd203b6d2 -nographic -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-kvmguest-xenial-normal/monitor.sock,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -boot strict=on -drive file=/var/lib/uvtool/libvirt/images/kvmguest-xenial-normal.qcow,format=qcow2,if=none,id=drive-virtio-disk0 -device virtio-blk-ccw,scsi=off,devno=fe.0.0000,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -drive file=/var/lib/uvtool/libvirt/images/kvmguest-xenial-normal-ds.qcow,format=raw,if=none,id=drive-virtio-disk1 -device virtio-blk-ccw,scsi=off,devno=fe.0.0001,drive=drive-virtio-disk1,id=virtio-disk1 -netdev tap,fd=27,id=hostnet0 -device virtio-net-ccw,netdev=hostnet0,id=net0,mac=52:54:00:23:7e:44,devno=fe.0.0002 -chardev pty,id=charconsole0 -device sclpconsole,chardev=charconsole0,id=console0 -device virtio-balloon-ccw,id=balloon0,devno=fe.0.0003 -msg timestamp=on
           ├─ 6823 /usr/bin/qemu-system-s390x -name kvmguest-xenial-postcopy -S -machine s390-ccw-virtio-xenial,accel=kvm,usb=off -m 512 -realtime mlock=off -smp 1,sockets=1,cores=1,threads=1 -uuid a95fb259-f1f9-48f1-8246-4dba4b1dc34c -nographic -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-kvmguest-xenial-postcopy/monitor.sock,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -boot strict=on -drive file=/var/lib/uvtool/libvirt/images/kvmguest-xenial-postcopy.qcow,format=qcow2,if=none,id=drive-virtio-disk0 -device virtio-blk-ccw,scsi=off,devno=fe.0.0000,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -drive file=/var/lib/uvtool/libvirt/images/kvmguest-xenial-postcopy-ds.qcow,format=raw,if=none,id=drive-virtio-disk1 -device virtio-blk-ccw,scsi=off,devno=fe.0.0001,drive=drive-virtio-disk1,id=virtio-disk1 -netdev tap,fd=29,id=hostnet0 -device virtio-net-ccw,netdev=hostnet0,id=net0,mac=52:54:00:c6:b4:2d,devno=fe.0.0002 -chardev pty,id=charconsole0 -device sclpconsole,chardev=charconsole0,id=console0 -device virtio-balloon-ccw,id=balloon0,devno=fe.0.0003 -msg timestamp=on
           └─12180 /usr/bin/qemu-system-s390x -name kvmguest-xenial-saverestore -S -machine s390-ccw-virtio-xenial,accel=kvm,usb=off -m 512 -realtime mlock=off -smp 1,sockets=1,cores=1,threads=1 -uuid c4da2a75-a509-4939-bdb6-86244bbf6824 -nographic -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-kvmguest-xenial-saverestore/monitor.sock,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -boot strict=on -drive file=/var/lib/uvtool/libvirt/images/kvmguest-xenial-saverestore.qcow,format=qcow2,if=none,id=drive-virtio-disk0 -device virtio-blk-ccw,scsi=off,devno=fe.0.0000,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -drive file=/var/lib/uvtool/libvirt/images/kvmguest-xenial-saverestore-ds.qcow,format=raw,if=none,id=drive-virtio-disk1 -device virtio-blk-ccw,scsi=off,devno=fe.0.0001,drive=drive-virtio-disk1,id=virtio-disk1 -netdev tap,fd=25,id=hostnet0 -device virtio-net-ccw,netdev=hostnet0,id=net0,mac=52:54:00:40:b2:94,devno=fe.0.0002 -chardev pty,id=charconsole0 -device sclpconsole,chardev=charconsole0,id=console0 -device virtio-balloon-ccw,id=balloon0,devno=fe.0.0003 -msg timestamp=on

Nov 29 03:28:21 testkvm-xenial-noupd systemd[1]: Starting Virtualization daemon...
Nov 29 03:28:21 testkvm-xenial-noupd systemd[1]: Started Virtualization daemon.
Nov 29 03:28:27 testkvm-xenial-noupd dnsmasq[4324]: read /etc/hosts - 7 addresses
Nov 29 03:28:27 testkvm-xenial-noupd dnsmasq[4324]: read /var/lib/libvirt/dnsmasq/default.addnhosts - 0 addresses
Nov 29 03:28:27 testkvm-xenial-noupd dnsmasq-dhcp[4324]: read /var/lib/libvirt/dnsmasq/default.hostsfile
Nov 29 03:28:27 testkvm-xenial-noupd libvirtd[5352]: libvirt version: 1.3.1, package: 1ubuntu10.15 (Christian Ehrhardt <christian.ehrhardt@canonical.com> Mon, 06 Nov 2017 16:36:11 +0100)
Nov 29 03:28:27 testkvm-xenial-noupd libvirtd[5352]: hostname: testkvm-xenial-noupd.lxd
Nov 29 03:28:27 testkvm-xenial-noupd libvirtd[5352]: Failed to open file '/sys/class/fc_host//host1/fabric_name': No such file or directory
Nov 29 03:28:27 testkvm-xenial-noupd libvirtd[5352]: Failed to read fabric WWN for host1
Nov 29 03:28:27 testkvm-xenial-noupd libvirtd[5352]: Failed to open file '/sys/class/fc_host//host0/fabric_name': No such file or directory
Nov 29 03:28:27 testkvm-xenial-noupd libvirtd[5352]: Failed to read fabric WWN for host0
Nov 29 04:28:11 testkvm-xenial-noupd libvirtd[5352]: Unable to open vhost-net. Opened so far 0, requested 1
Nov 29 04:28:21 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPDISCOVER(virbr0) 52:54:00:69:13:ad
Nov 29 04:28:21 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPOFFER(virbr0) 192.168.122.127 52:54:00:69:13:ad
Nov 29 04:28:21 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPREQUEST(virbr0) 192.168.122.127 52:54:00:69:13:ad
Nov 29 04:28:21 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPACK(virbr0) 192.168.122.127 52:54:00:69:13:ad ubuntu
Nov 29 04:28:32 testkvm-xenial-noupd libvirtd[5352]: Unable to open vhost-net. Opened so far 0, requested 1
Nov 29 04:28:41 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPDISCOVER(virbr0) 52:54:00:05:b1:3c
Nov 29 04:28:41 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPOFFER(virbr0) 192.168.122.166 52:54:00:05:b1:3c
Nov 29 04:28:41 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPREQUEST(virbr0) 192.168.122.166 52:54:00:05:b1:3c
Nov 29 04:28:41 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPACK(virbr0) 192.168.122.166 52:54:00:05:b1:3c ubuntu
Nov 29 04:30:36 testkvm-xenial-noupd libvirtd[5352]: Unable to open vhost-net. Opened so far 0, requested 1
Nov 29 04:30:46 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPDISCOVER(virbr0) 52:54:00:6a:30:5b
Nov 29 04:30:46 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPOFFER(virbr0) 192.168.122.73 52:54:00:6a:30:5b
Nov 29 04:30:46 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPDISCOVER(virbr0) 52:54:00:6a:30:5b
Nov 29 04:30:46 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPOFFER(virbr0) 192.168.122.73 52:54:00:6a:30:5b
Nov 29 04:30:46 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPREQUEST(virbr0) 192.168.122.73 52:54:00:6a:30:5b
Nov 29 04:30:46 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPACK(virbr0) 192.168.122.73 52:54:00:6a:30:5b ubuntu
Nov 29 04:32:45 testkvm-xenial-noupd libvirtd[5352]: Cannot open log file: '/var/log/libvirt/qemu/kvmguest-xenial-normal.log': Device or resource busy
Nov 29 04:41:51 testkvm-xenial-noupd libvirtd[5352]: could not find path for descriptor /proc/self/fd/24, skipping
Nov 29 04:41:54 testkvm-xenial-noupd libvirtd[5352]: iohelper reports: 
Nov 29 04:41:54 testkvm-xenial-noupd libvirtd[5352]: Cannot open log file: '/var/log/libvirt/qemu/kvmguest-xenial-saverestore.log': Device or resource busy
Nov 29 04:42:31 testkvm-xenial-noupd libvirtd[5352]: Unable to open vhost-net. Opened so far 0, requested 1
Nov 29 04:43:25 testkvm-xenial-noupd libvirtd[5352]: Cannot open log file: '/var/log/libvirt/qemu/kvmguest-xenial-saverestore.log': Device or resource busy
Nov 29 04:44:07 testkvm-xenial-noupd libvirtd[5352]: Unable to open vhost-net. Opened so far 0, requested 1
Nov 29 04:44:17 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPDISCOVER(virbr0) 52:54:00:23:7e:44
Nov 29 04:44:17 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPOFFER(virbr0) 192.168.122.67 52:54:00:23:7e:44
Nov 29 04:44:17 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPDISCOVER(virbr0) 52:54:00:23:7e:44
Nov 29 04:44:17 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPOFFER(virbr0) 192.168.122.67 52:54:00:23:7e:44
Nov 29 04:44:17 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPREQUEST(virbr0) 192.168.122.67 52:54:00:23:7e:44
Nov 29 04:44:17 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPACK(virbr0) 192.168.122.67 52:54:00:23:7e:44 ubuntu
Nov 29 04:46:11 testkvm-xenial-noupd libvirtd[5352]: Unable to open vhost-net. Opened so far 0, requested 1
Nov 29 04:46:21 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPDISCOVER(virbr0) 52:54:00:40:b2:94
Nov 29 04:46:21 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPOFFER(virbr0) 192.168.122.34 52:54:00:40:b2:94
Nov 29 04:46:21 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPREQUEST(virbr0) 192.168.122.34 52:54:00:40:b2:94
Nov 29 04:46:21 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPACK(virbr0) 192.168.122.34 52:54:00:40:b2:94 ubuntu
Nov 29 04:48:16 testkvm-xenial-noupd libvirtd[5352]: Unable to open vhost-net. Opened so far 0, requested 1
Nov 29 04:48:25 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPDISCOVER(virbr0) 52:54:00:c6:b4:2d
Nov 29 04:48:25 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPOFFER(virbr0) 192.168.122.141 52:54:00:c6:b4:2d
Nov 29 04:48:25 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPREQUEST(virbr0) 192.168.122.141 52:54:00:c6:b4:2d
Nov 29 04:48:25 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPACK(virbr0) 192.168.122.141 52:54:00:c6:b4:2d ubuntu
Nov 29 04:50:23 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPRELEASE(virbr0) 192.168.122.34 52:54:00:40:b2:94
Nov 29 04:50:23 testkvm-xenial-noupd libvirtd[5352]: internal error: End of file from monitor
Nov 29 04:51:10 testkvm-xenial-noupd libvirtd[5352]: Unable to open vhost-net. Opened so far 0, requested 1
Nov 29 04:51:18 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPDISCOVER(virbr0) 192.168.122.34 52:54:00:40:b2:94
Nov 29 04:51:18 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPOFFER(virbr0) 192.168.122.34 52:54:00:40:b2:94
Nov 29 04:51:18 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPREQUEST(virbr0) 192.168.122.34 52:54:00:40:b2:94
Nov 29 04:51:18 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPACK(virbr0) 192.168.122.34 52:54:00:40:b2:94 kvmguest-xenial-saverestore
+ lxc exec testkvm-xenial-noupd -- cat /var/log/libvirt/qemu/kvmguest-xenial-normal.log
2017-11-29 04:28:12.440+0000: starting up libvirt version: 1.3.1, package: 1ubuntu10.15 (Christian Ehrhardt <christian.ehrhardt@canonical.com> Mon, 06 Nov 2017 16:36:11 +0100), qemu version: 2.5.0 (Debian 1:2.5+dfsg-5ubuntu10.16), hostname: testkvm-xenial-noupd
LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin QEMU_AUDIO_DRV=none /usr/bin/qemu-system-s390x -name kvmguest-xenial-normal -S -machine s390-ccw-virtio-xenial,accel=kvm,usb=off -m 512 -realtime mlock=off -smp 1,sockets=1,cores=1,threads=1 -uuid 28c1d2fb-c62b-4059-8e72-5d1936fbf5d0 -nographic -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-kvmguest-xenial-normal/monitor.sock,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -boot strict=on -drive file=/var/lib/uvtool/libvirt/images/kvmguest-xenial-normal.qcow,format=qcow2,if=none,id=drive-virtio-disk0 -device virtio-blk-ccw,scsi=off,devno=fe.0.0000,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -drive file=/var/lib/uvtool/libvirt/images/kvmguest-xenial-normal-ds.qcow,format=raw,if=none,id=drive-virtio-disk1 -device virtio-blk-ccw,scsi=off,devno=fe.0.0001,drive=drive-virtio-disk1,id=virtio-disk1 -netdev tap,fd=27,id=hostnet0 -device virtio-net-ccw,netdev=hostnet0,id=net0,mac=52:54:00:69:13:ad,devno=fe.0.0002 -chardev pty,id=charconsole0 -device sclpconsole,chardev=charconsole0,id=console0 -device virtio-balloon-ccw,id=balloon0,devno=fe.0.0003 -msg timestamp=on
char device redirected to /dev/pts/0 (label charconsole0)
2017-11-29T04:32:46.489936Z qemu-system-s390x: terminating on signal 15 from pid 5352
2017-11-29 04:44:08.149+0000: starting up libvirt version: 1.3.1, package: 1ubuntu10.15 (Christian Ehrhardt <christian.ehrhardt@canonical.com> Mon, 06 Nov 2017 16:36:11 +0100), qemu version: 2.5.0 (Debian 1:2.5+dfsg-5ubuntu10.16), hostname: testkvm-xenial-noupd
LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin QEMU_AUDIO_DRV=none /usr/bin/qemu-system-s390x -name kvmguest-xenial-normal -S -machine s390-ccw-virtio-xenial,accel=kvm,usb=off -m 512 -realtime mlock=off -smp 1,sockets=1,cores=1,threads=1 -uuid 3ac13a2d-8601-4af8-8f72-61fcd203b6d2 -nographic -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-kvmguest-xenial-normal/monitor.sock,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -boot strict=on -drive file=/var/lib/uvtool/libvirt/images/kvmguest-xenial-normal.qcow,format=qcow2,if=none,id=drive-virtio-disk0 -device virtio-blk-ccw,scsi=off,devno=fe.0.0000,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -drive file=/var/lib/uvtool/libvirt/images/kvmguest-xenial-normal-ds.qcow,format=raw,if=none,id=drive-virtio-disk1 -device virtio-blk-ccw,scsi=off,devno=fe.0.0001,drive=drive-virtio-disk1,id=virtio-disk1 -netdev tap,fd=27,id=hostnet0 -device virtio-net-ccw,netdev=hostnet0,id=net0,mac=52:54:00:23:7e:44,devno=fe.0.0002 -chardev pty,id=charconsole0 -device sclpconsole,chardev=charconsole0,id=console0 -device virtio-balloon-ccw,id=balloon0,devno=fe.0.0003 -msg timestamp=on
char device redirected to /dev/pts/0 (label charconsole0)
+ lxc exec testkvm-xenial-noupd -- virsh dominfo kvmguest-xenial-normal
Id:             5
Name:           kvmguest-xenial-normal
UUID:           3ac13a2d-8601-4af8-8f72-61fcd203b6d2
OS Type:        hvm
State:          running
CPU(s):         1
CPU time:       8.6s
Max memory:     524288 KiB
Used memory:    524288 KiB
Persistent:     yes
Autostart:      disable
Managed save:   no
Security model: apparmor
Security DOI:   0
Security label: libvirt-3ac13a2d-8601-4af8-8f72-61fcd203b6d2 (enforcing)

+ [[ false == \t\r\u\e ]]
++ getkvmmt testkvm-xenial-noupd kvmguest-xenial-normal
++ local containername=testkvm-xenial-noupd
++ local guestname=kvmguest-xenial-normal
++ local mt=not-found
++ lxc exec testkvm-xenial-noupd -- virsh dumpxml kvmguest-xenial-normal
+++ lxc exec testkvm-xenial-noupd -- virsh dumpxml kvmguest-xenial-normal
++ xml='<domain type='\''kvm'\'' id='\''5'\''>
  <name>kvmguest-xenial-normal</name>
  <uuid>3ac13a2d-8601-4af8-8f72-61fcd203b6d2</uuid>
  <memory unit='\''KiB'\''>524288</memory>
  <currentMemory unit='\''KiB'\''>524288</currentMemory>
  <vcpu placement='\''static'\''>1</vcpu>
  <resource>
    <partition>/machine</partition>
  </resource>
  <os>
    <type arch='\''s390x'\'' machine='\''s390-ccw-virtio-xenial'\''>hvm</type>
    <boot dev='\''hd'\''/>
  </os>
  <features>
    <acpi/>
    <apic/>
    <pae/>
  </features>
  <clock offset='\''utc'\''/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <emulator>/usr/bin/qemu-system-s390x</emulator>
    <disk type='\''file'\'' device='\''disk'\''>
      <driver name='\''qemu'\'' type='\''qcow2'\''/>
      <source file='\''/var/lib/uvtool/libvirt/images/kvmguest-xenial-normal.qcow'\''/>
      <backingStore type='\''file'\'' index='\''1'\''>
        <format type='\''qcow2'\''/>
        <source file='\''/var/lib/uvtool/libvirt/images/x-uvt-b64-Y29tLnVidW50dS5jbG91ZC5kYWlseTpzZXJ2ZXI6MTYuMDQ6czM5MHggMjAxNzExMjI='\''/>
        <backingStore/>
      </backingStore>
      <target dev='\''vda'\'' bus='\''virtio'\''/>
      <alias name='\''virtio-disk0'\''/>
      <address type='\''ccw'\'' cssid='\''0xfe'\'' ssid='\''0x0'\'' devno='\''0x0000'\''/>
    </disk>
    <disk type='\''file'\'' device='\''disk'\''>
      <driver name='\''qemu'\'' type='\''raw'\''/>
      <source file='\''/var/lib/uvtool/libvirt/images/kvmguest-xenial-normal-ds.qcow'\''/>
      <backingStore/>
      <target dev='\''vdb'\'' bus='\''virtio'\''/>
      <alias name='\''virtio-disk1'\''/>
      <address type='\''ccw'\'' cssid='\''0xfe'\'' ssid='\''0x0'\'' devno='\''0x0001'\''/>
    </disk>
    <interface type='\''network'\''>
      <mac address='\''52:54:00:23:7e:44'\''/>
      <source network='\''default'\'' bridge='\''virbr0'\''/>
      <target dev='\''vnet0'\''/>
      <model type='\''virtio'\''/>
      <alias name='\''net0'\''/>
      <address type='\''ccw'\'' cssid='\''0xfe'\'' ssid='\''0x0'\'' devno='\''0x0002'\''/>
    </interface>
    <console type='\''pty'\'' tty='\''/dev/pts/0'\''>
      <source path='\''/dev/pts/0'\''/>
      <target type='\''sclp'\'' port='\''0'\''/>
      <alias name='\''console0'\''/>
    </console>
    <memballoon model='\''virtio'\''>
      <alias name='\''balloon0'\''/>
      <address type='\''ccw'\'' cssid='\''0xfe'\'' ssid='\''0x0'\'' devno='\''0x0003'\''/>
    </memballoon>
  </devices>
  <seclabel type='\''dynamic'\'' model='\''apparmor'\'' relabel='\''yes'\''>
    <label>libvirt-3ac13a2d-8601-4af8-8f72-61fcd203b6d2</label>
    <imagelabel>libvirt-3ac13a2d-8601-4af8-8f72-61fcd203b6d2</imagelabel>
  </seclabel>
</domain>'
+++ echo '<domain type='\''kvm'\'' id='\''5'\''>
  <name>kvmguest-xenial-normal</name>
  <uuid>3ac13a2d-8601-4af8-8f72-61fcd203b6d2</uuid>
  <memory unit='\''KiB'\''>524288</memory>
  <currentMemory unit='\''KiB'\''>524288</currentMemory>
  <vcpu placement='\''static'\''>1</vcpu>
  <resource>
    <partition>/machine</partition>
  </resource>
  <os>
    <type arch='\''s390x'\'' machine='\''s390-ccw-virtio-xenial'\''>hvm</type>
    <boot dev='\''hd'\''/>
  </os>
  <features>
    <acpi/>
    <apic/>
    <pae/>
  </features>
  <clock offset='\''utc'\''/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <emulator>/usr/bin/qemu-system-s390x</emulator>
    <disk type='\''file'\'' device='\''disk'\''>
      <driver name='\''qemu'\'' type='\''qcow2'\''/>
      <source file='\''/var/lib/uvtool/libvirt/images/kvmguest-xenial-normal.qcow'\''/>
      <backingStore type='\''file'\'' index='\''1'\''>
        <format type='\''qcow2'\''/>
        <source file='\''/var/lib/uvtool/libvirt/images/x-uvt-b64-Y29tLnVidW50dS5jbG91ZC5kYWlseTpzZXJ2ZXI6MTYuMDQ6czM5MHggMjAxNzExMjI='\''/>
        <backingStore/>
      </backingStore>
      <target dev='\''vda'\'' bus='\''virtio'\''/>
      <alias name='\''virtio-disk0'\''/>
      <address type='\''ccw'\'' cssid='\''0xfe'\'' ssid='\''0x0'\'' devno='\''0x0000'\''/>
    </disk>
    <disk type='\''file'\'' device='\''disk'\''>
      <driver name='\''qemu'\'' type='\''raw'\''/>
      <source file='\''/var/lib/uvtool/libvirt/images/kvmguest-xenial-normal-ds.qcow'\''/>
      <backingStore/>
      <target dev='\''vdb'\'' bus='\''virtio'\''/>
      <alias name='\''virtio-disk1'\''/>
      <address type='\''ccw'\'' cssid='\''0xfe'\'' ssid='\''0x0'\'' devno='\''0x0001'\''/>
    </disk>
    <interface type='\''network'\''>
      <mac address='\''52:54:00:23:7e:44'\''/>
      <source network='\''default'\'' bridge='\''virbr0'\''/>
      <target dev='\''vnet0'\''/>
      <model type='\''virtio'\''/>
      <alias name='\''net0'\''/>
      <address type='\''ccw'\'' cssid='\''0xfe'\'' ssid='\''0x0'\'' devno='\''0x0002'\''/>
    </interface>
    <console type='\''pty'\'' tty='\''/dev/pts/0'\''>
      <source path='\''/dev/pts/0'\''/>
      <target type='\''sclp'\'' port='\''0'\''/>
      <alias name='\''console0'\''/>
    </console>
    <memballoon model='\''virtio'\''>
      <alias name='\''balloon0'\''/>
      <address type='\''ccw'\'' cssid='\''0xfe'\'' ssid='\''0x0'\'' devno='\''0x0003'\''/>
    </memballoon>
  </devices>
  <seclabel type='\''dynamic'\'' model='\''apparmor'\'' relabel='\''yes'\''>
    <label>libvirt-3ac13a2d-8601-4af8-8f72-61fcd203b6d2</label>
    <imagelabel>libvirt-3ac13a2d-8601-4af8-8f72-61fcd203b6d2</imagelabel>
  </seclabel>
</domain>'
+++ xmllint --xpath 'string(//domain/os/type/@machine)' -
++ mt=s390-ccw-virtio-xenial
++ '[' -z s390-ccw-virtio-xenial ']'
++ echo s390-ccw-virtio-xenial
+ machinetype=s390-ccw-virtio-xenial
+ echo 'Machine Type s390-ccw-virtio-xenial'
Machine Type s390-ccw-virtio-xenial
++ getcontaineros testkvm-xenial-noupd
++ local containername=testkvm-xenial-noupd
++ lxc exec testkvm-xenial-noupd -- grep '^VERSION=' /etc/os-release
+ osversion='VERSION="16.04.3 LTS (Xenial Xerus)"'
+ echo 'KVM Host OS is on VERSION="16.04.3 LTS (Xenial Xerus)"'
KVM Host OS is on VERSION="16.04.3 LTS (Xenial Xerus)"
++ getcontainerqemu testkvm-xenial-noupd
++ local containername=testkvm-xenial-noupd
++ lxc exec testkvm-xenial-noupd -- dpkg-query --show qemu-kvm
+ qemu='qemu-kvm	1:2.5+dfsg-5ubuntu10.16'
+ echo 'qemu-kvm	1:2.5+dfsg-5ubuntu10.16'
qemu-kvm	1:2.5+dfsg-5ubuntu10.16
+ lxc exec testkvm-xenial-noupd -- uvt-kvm destroy kvmguest-xenial-normal
+ for release in '${SPAWNRELEASES}'
+ for direction in '${MIGRATIONPEERS}'
+ lxc exec testkvm-xenial-from -- uvt-kvm destroy kvmguest-xenial-normal
uvt-kvm: error: domain 'kvmguest-xenial-normal' not found.
+ true
+ lxc exec testkvm-xenial-from -- virsh destroy kvmguest-xenial-normal
error: failed to get domain 'kvmguest-xenial-normal'
error: Domain not found: no domain with matching name 'kvmguest-xenial-normal'

+ true
+ lxc exec testkvm-xenial-from -- virsh undefine kvmguest-xenial-normal
error: failed to get domain 'kvmguest-xenial-normal'
error: Domain not found: no domain with matching name 'kvmguest-xenial-normal'

+ true
+ lxc exec testkvm-xenial-from -- virsh vol-delete --pool uvtool kvmguest-xenial-normal.qcow
error: failed to get vol 'kvmguest-xenial-normal.qcow'
error: Storage volume not found: no storage vol with matching path 'kvmguest-xenial-normal.qcow'

+ true
+ lxc exec testkvm-xenial-from -- virsh vol-delete --pool uvtool kvmguest-xenial-normal-ds.qcow
error: failed to get vol 'kvmguest-xenial-normal-ds.qcow'
error: Storage volume not found: no storage vol with matching path 'kvmguest-xenial-normal-ds.qcow'

+ true
+ for direction in '${MIGRATIONPEERS}'
+ lxc exec testkvm-xenial-to -- uvt-kvm destroy kvmguest-xenial-normal
uvt-kvm: error: domain 'kvmguest-xenial-normal' not found.
+ true
+ lxc exec testkvm-xenial-to -- virsh destroy kvmguest-xenial-normal
error: failed to get domain 'kvmguest-xenial-normal'
error: Domain not found: no domain with matching name 'kvmguest-xenial-normal'

+ true
+ lxc exec testkvm-xenial-to -- virsh undefine kvmguest-xenial-normal
error: failed to get domain 'kvmguest-xenial-normal'
error: Domain not found: no domain with matching name 'kvmguest-xenial-normal'

+ true
+ lxc exec testkvm-xenial-to -- virsh vol-delete --pool uvtool kvmguest-xenial-normal.qcow
error: failed to get vol 'kvmguest-xenial-normal.qcow'
error: Storage volume not found: no storage vol with matching path 'kvmguest-xenial-normal.qcow'

+ true
+ lxc exec testkvm-xenial-to -- virsh vol-delete --pool uvtool kvmguest-xenial-normal-ds.qcow
error: failed to get vol 'kvmguest-xenial-normal-ds.qcow'
error: Storage volume not found: no storage vol with matching path 'kvmguest-xenial-normal-ds.qcow'

+ true
+ for direction in '${MIGRATIONPEERS}'
+ lxc exec testkvm-xenial-noupd -- uvt-kvm destroy kvmguest-xenial-normal
uvt-kvm: error: domain 'kvmguest-xenial-normal' not found.
+ true
+ lxc exec testkvm-xenial-noupd -- virsh destroy kvmguest-xenial-normal
error: failed to get domain 'kvmguest-xenial-normal'
error: Domain not found: no domain with matching name 'kvmguest-xenial-normal'

+ true
+ lxc exec testkvm-xenial-noupd -- virsh undefine kvmguest-xenial-normal
error: failed to get domain 'kvmguest-xenial-normal'
error: Domain not found: no domain with matching name 'kvmguest-xenial-normal'

+ true
+ lxc exec testkvm-xenial-noupd -- virsh vol-delete --pool uvtool kvmguest-xenial-normal.qcow
error: failed to get vol 'kvmguest-xenial-normal.qcow'
error: Storage volume not found: no storage vol with matching path 'kvmguest-xenial-normal.qcow'

+ true
+ lxc exec testkvm-xenial-noupd -- virsh vol-delete --pool uvtool kvmguest-xenial-normal-ds.qcow
error: failed to get vol 'kvmguest-xenial-normal-ds.qcow'
error: Storage volume not found: no storage vol with matching path 'kvmguest-xenial-normal-ds.qcow'

+ true
+ for direction in '${MIGRATIONPEERS}'
+ lxc exec testkvm-xenial-tononshared -- uvt-kvm destroy kvmguest-xenial-normal
uvt-kvm: error: domain 'kvmguest-xenial-normal' not found.
+ true
+ lxc exec testkvm-xenial-tononshared -- virsh destroy kvmguest-xenial-normal
error: failed to get domain 'kvmguest-xenial-normal'
error: Domain not found: no domain with matching name 'kvmguest-xenial-normal'

+ true
+ lxc exec testkvm-xenial-tononshared -- virsh undefine kvmguest-xenial-normal
error: failed to get domain 'kvmguest-xenial-normal'
error: Domain not found: no domain with matching name 'kvmguest-xenial-normal'

+ true
+ lxc exec testkvm-xenial-tononshared -- virsh vol-delete --pool uvtool kvmguest-xenial-normal.qcow
error: failed to get vol 'kvmguest-xenial-normal.qcow'
error: Storage volume not found: no storage vol with matching path 'kvmguest-xenial-normal.qcow'

+ true
+ lxc exec testkvm-xenial-tononshared -- virsh vol-delete --pool uvtool kvmguest-xenial-normal-ds.qcow
error: failed to get vol 'kvmguest-xenial-normal-ds.qcow'
error: Storage volume not found: no storage vol with matching path 'kvmguest-xenial-normal-ds.qcow'

+ true
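
# The cleanup pass above (repeated below for the saverestore and postcopy
# guests) is deliberately idempotent: every destroy/undefine/vol-delete is
# ||-chained with true, which is why each "not found" error is followed by
# '+ true' instead of aborting the run. In sketch form; the container list
# and guest variable names are assumptions:
#
#   for container in ${MIGRATIONPEERS}; do
#       lxc exec "${container}" -- uvt-kvm destroy "${guest}" || true
#       lxc exec "${container}" -- virsh destroy "${guest}" || true
#       lxc exec "${container}" -- virsh undefine "${guest}" || true
#       lxc exec "${container}" -- virsh vol-delete --pool uvtool "${guest}.qcow" || true
#       lxc exec "${container}" -- virsh vol-delete --pool uvtool "${guest}-ds.qcow" || true
#   done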
+ for suffix in normal saverestore postcopy
+ getkvminfo testkvm-xenial-noupd kvmguest-xenial-saverestore
+ local containername=testkvm-xenial-noupd
+ local guestname=kvmguest-xenial-saverestore
+ local machinetype=
+ local osversion=
+ local qemu=
+ lxc exec testkvm-xenial-noupd -- systemctl status libvirtd --lines 200 --full --no-pager
● libvirt-bin.service - Virtualization daemon
   Loaded: loaded (/lib/systemd/system/libvirt-bin.service; enabled; vendor preset: enabled)
   Active: active (running) since Wed 2017-11-29 03:28:21 UTC; 1h 23min ago
     Docs: man:libvirtd(8)
           http://libvirt.org
 Main PID: 5352 (libvirtd)
   CGroup: /system.slice/libvirt-bin.service
           ├─ 4324 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/lib/libvirt/libvirt_leaseshelper
           ├─ 4325 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/lib/libvirt/libvirt_leaseshelper
           ├─ 5352 /usr/sbin/libvirtd
           ├─ 6823 /usr/bin/qemu-system-s390x -name kvmguest-xenial-postcopy -S -machine s390-ccw-virtio-xenial,accel=kvm,usb=off -m 512 -realtime mlock=off -smp 1,sockets=1,cores=1,threads=1 -uuid a95fb259-f1f9-48f1-8246-4dba4b1dc34c -nographic -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-kvmguest-xenial-postcopy/monitor.sock,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -boot strict=on -drive file=/var/lib/uvtool/libvirt/images/kvmguest-xenial-postcopy.qcow,format=qcow2,if=none,id=drive-virtio-disk0 -device virtio-blk-ccw,scsi=off,devno=fe.0.0000,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -drive file=/var/lib/uvtool/libvirt/images/kvmguest-xenial-postcopy-ds.qcow,format=raw,if=none,id=drive-virtio-disk1 -device virtio-blk-ccw,scsi=off,devno=fe.0.0001,drive=drive-virtio-disk1,id=virtio-disk1 -netdev tap,fd=29,id=hostnet0 -device virtio-net-ccw,netdev=hostnet0,id=net0,mac=52:54:00:c6:b4:2d,devno=fe.0.0002 -chardev pty,id=charconsole0 -device sclpconsole,chardev=charconsole0,id=console0 -device virtio-balloon-ccw,id=balloon0,devno=fe.0.0003 -msg timestamp=on
           └─12180 /usr/bin/qemu-system-s390x -name kvmguest-xenial-saverestore -S -machine s390-ccw-virtio-xenial,accel=kvm,usb=off -m 512 -realtime mlock=off -smp 1,sockets=1,cores=1,threads=1 -uuid c4da2a75-a509-4939-bdb6-86244bbf6824 -nographic -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-kvmguest-xenial-saverestore/monitor.sock,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -boot strict=on -drive file=/var/lib/uvtool/libvirt/images/kvmguest-xenial-saverestore.qcow,format=qcow2,if=none,id=drive-virtio-disk0 -device virtio-blk-ccw,scsi=off,devno=fe.0.0000,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -drive file=/var/lib/uvtool/libvirt/images/kvmguest-xenial-saverestore-ds.qcow,format=raw,if=none,id=drive-virtio-disk1 -device virtio-blk-ccw,scsi=off,devno=fe.0.0001,drive=drive-virtio-disk1,id=virtio-disk1 -netdev tap,fd=25,id=hostnet0 -device virtio-net-ccw,netdev=hostnet0,id=net0,mac=52:54:00:40:b2:94,devno=fe.0.0002 -chardev pty,id=charconsole0 -device sclpconsole,chardev=charconsole0,id=console0 -device virtio-balloon-ccw,id=balloon0,devno=fe.0.0003 -msg timestamp=on

Nov 29 03:28:21 testkvm-xenial-noupd systemd[1]: Starting Virtualization daemon...
Nov 29 03:28:21 testkvm-xenial-noupd systemd[1]: Started Virtualization daemon.
Nov 29 03:28:27 testkvm-xenial-noupd dnsmasq[4324]: read /etc/hosts - 7 addresses
Nov 29 03:28:27 testkvm-xenial-noupd dnsmasq[4324]: read /var/lib/libvirt/dnsmasq/default.addnhosts - 0 addresses
Nov 29 03:28:27 testkvm-xenial-noupd dnsmasq-dhcp[4324]: read /var/lib/libvirt/dnsmasq/default.hostsfile
Nov 29 03:28:27 testkvm-xenial-noupd libvirtd[5352]: libvirt version: 1.3.1, package: 1ubuntu10.15 (Christian Ehrhardt <christian.ehrhardt@canonical.com> Mon, 06 Nov 2017 16:36:11 +0100)
Nov 29 03:28:27 testkvm-xenial-noupd libvirtd[5352]: hostname: testkvm-xenial-noupd.lxd
Nov 29 03:28:27 testkvm-xenial-noupd libvirtd[5352]: Failed to open file '/sys/class/fc_host//host1/fabric_name': No such file or directory
Nov 29 03:28:27 testkvm-xenial-noupd libvirtd[5352]: Failed to read fabric WWN for host1
Nov 29 03:28:27 testkvm-xenial-noupd libvirtd[5352]: Failed to open file '/sys/class/fc_host//host0/fabric_name': No such file or directory
Nov 29 03:28:27 testkvm-xenial-noupd libvirtd[5352]: Failed to read fabric WWN for host0
Nov 29 04:28:11 testkvm-xenial-noupd libvirtd[5352]: Unable to open vhost-net. Opened so far 0, requested 1
Nov 29 04:28:21 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPDISCOVER(virbr0) 52:54:00:69:13:ad
Nov 29 04:28:21 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPOFFER(virbr0) 192.168.122.127 52:54:00:69:13:ad
Nov 29 04:28:21 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPREQUEST(virbr0) 192.168.122.127 52:54:00:69:13:ad
Nov 29 04:28:21 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPACK(virbr0) 192.168.122.127 52:54:00:69:13:ad ubuntu
Nov 29 04:28:32 testkvm-xenial-noupd libvirtd[5352]: Unable to open vhost-net. Opened so far 0, requested 1
Nov 29 04:28:41 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPDISCOVER(virbr0) 52:54:00:05:b1:3c
Nov 29 04:28:41 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPOFFER(virbr0) 192.168.122.166 52:54:00:05:b1:3c
Nov 29 04:28:41 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPREQUEST(virbr0) 192.168.122.166 52:54:00:05:b1:3c
Nov 29 04:28:41 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPACK(virbr0) 192.168.122.166 52:54:00:05:b1:3c ubuntu
Nov 29 04:30:36 testkvm-xenial-noupd libvirtd[5352]: Unable to open vhost-net. Opened so far 0, requested 1
Nov 29 04:30:46 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPDISCOVER(virbr0) 52:54:00:6a:30:5b
Nov 29 04:30:46 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPOFFER(virbr0) 192.168.122.73 52:54:00:6a:30:5b
Nov 29 04:30:46 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPDISCOVER(virbr0) 52:54:00:6a:30:5b
Nov 29 04:30:46 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPOFFER(virbr0) 192.168.122.73 52:54:00:6a:30:5b
Nov 29 04:30:46 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPREQUEST(virbr0) 192.168.122.73 52:54:00:6a:30:5b
Nov 29 04:30:46 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPACK(virbr0) 192.168.122.73 52:54:00:6a:30:5b ubuntu
Nov 29 04:32:45 testkvm-xenial-noupd libvirtd[5352]: Cannot open log file: '/var/log/libvirt/qemu/kvmguest-xenial-normal.log': Device or resource busy
Nov 29 04:41:51 testkvm-xenial-noupd libvirtd[5352]: could not find path for descriptor /proc/self/fd/24, skipping
Nov 29 04:41:54 testkvm-xenial-noupd libvirtd[5352]: iohelper reports: 
Nov 29 04:41:54 testkvm-xenial-noupd libvirtd[5352]: Cannot open log file: '/var/log/libvirt/qemu/kvmguest-xenial-saverestore.log': Device or resource busy
Nov 29 04:42:31 testkvm-xenial-noupd libvirtd[5352]: Unable to open vhost-net. Opened so far 0, requested 1
Nov 29 04:43:25 testkvm-xenial-noupd libvirtd[5352]: Cannot open log file: '/var/log/libvirt/qemu/kvmguest-xenial-saverestore.log': Device or resource busy
Nov 29 04:44:07 testkvm-xenial-noupd libvirtd[5352]: Unable to open vhost-net. Opened so far 0, requested 1
Nov 29 04:44:17 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPDISCOVER(virbr0) 52:54:00:23:7e:44
Nov 29 04:44:17 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPOFFER(virbr0) 192.168.122.67 52:54:00:23:7e:44
Nov 29 04:44:17 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPDISCOVER(virbr0) 52:54:00:23:7e:44
Nov 29 04:44:17 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPOFFER(virbr0) 192.168.122.67 52:54:00:23:7e:44
Nov 29 04:44:17 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPREQUEST(virbr0) 192.168.122.67 52:54:00:23:7e:44
Nov 29 04:44:17 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPACK(virbr0) 192.168.122.67 52:54:00:23:7e:44 ubuntu
Nov 29 04:46:11 testkvm-xenial-noupd libvirtd[5352]: Unable to open vhost-net. Opened so far 0, requested 1
Nov 29 04:46:21 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPDISCOVER(virbr0) 52:54:00:40:b2:94
Nov 29 04:46:21 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPOFFER(virbr0) 192.168.122.34 52:54:00:40:b2:94
Nov 29 04:46:21 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPREQUEST(virbr0) 192.168.122.34 52:54:00:40:b2:94
Nov 29 04:46:21 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPACK(virbr0) 192.168.122.34 52:54:00:40:b2:94 ubuntu
Nov 29 04:48:16 testkvm-xenial-noupd libvirtd[5352]: Unable to open vhost-net. Opened so far 0, requested 1
Nov 29 04:48:25 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPDISCOVER(virbr0) 52:54:00:c6:b4:2d
Nov 29 04:48:25 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPOFFER(virbr0) 192.168.122.141 52:54:00:c6:b4:2d
Nov 29 04:48:25 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPREQUEST(virbr0) 192.168.122.141 52:54:00:c6:b4:2d
Nov 29 04:48:25 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPACK(virbr0) 192.168.122.141 52:54:00:c6:b4:2d ubuntu
Nov 29 04:50:23 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPRELEASE(virbr0) 192.168.122.34 52:54:00:40:b2:94
Nov 29 04:50:23 testkvm-xenial-noupd libvirtd[5352]: internal error: End of file from monitor
Nov 29 04:51:10 testkvm-xenial-noupd libvirtd[5352]: Unable to open vhost-net. Opened so far 0, requested 1
Nov 29 04:51:18 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPDISCOVER(virbr0) 192.168.122.34 52:54:00:40:b2:94
Nov 29 04:51:18 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPOFFER(virbr0) 192.168.122.34 52:54:00:40:b2:94
Nov 29 04:51:18 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPREQUEST(virbr0) 192.168.122.34 52:54:00:40:b2:94
Nov 29 04:51:18 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPACK(virbr0) 192.168.122.34 52:54:00:40:b2:94 kvmguest-xenial-saverestore
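The excerpt above is what "systemctl status libvirtd --lines 200" returns; assuming the container's journald retains the unit log (the unit is libvirt-bin.service, per the status header above), the same messages could be pulled from the journal directly:

# Sketch, not part of the test script: query the unit journal directly.
lxc exec testkvm-xenial-noupd -- journalctl -u libvirt-bin --no-pager -n 200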
+ lxc exec testkvm-xenial-noupd -- cat /var/log/libvirt/qemu/kvmguest-xenial-saverestore.log
2017-11-29 04:28:33.049+0000: starting up libvirt version: 1.3.1, package: 1ubuntu10.15 (Christian Ehrhardt <christian.ehrhardt@canonical.com> Mon, 06 Nov 2017 16:36:11 +0100), qemu version: 2.5.0 (Debian 1:2.5+dfsg-5ubuntu10.16), hostname: testkvm-xenial-noupd
LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin QEMU_AUDIO_DRV=none /usr/bin/qemu-system-s390x -name kvmguest-xenial-saverestore -S -machine s390-ccw-virtio-xenial,accel=kvm,usb=off -m 512 -realtime mlock=off -smp 1,sockets=1,cores=1,threads=1 -uuid 3627b261-41d6-422e-a8f6-5d800a426b6c -nographic -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-kvmguest-xenial-saverestore/monitor.sock,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -boot strict=on -drive file=/var/lib/uvtool/libvirt/images/kvmguest-xenial-saverestore.qcow,format=qcow2,if=none,id=drive-virtio-disk0 -device virtio-blk-ccw,scsi=off,devno=fe.0.0000,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -drive file=/var/lib/uvtool/libvirt/images/kvmguest-xenial-saverestore-ds.qcow,format=raw,if=none,id=drive-virtio-disk1 -device virtio-blk-ccw,scsi=off,devno=fe.0.0001,drive=drive-virtio-disk1,id=virtio-disk1 -netdev tap,fd=28,id=hostnet0 -device virtio-net-ccw,netdev=hostnet0,id=net0,mac=52:54:00:05:b1:3c,devno=fe.0.0002 -chardev pty,id=charconsole0 -device sclpconsole,chardev=charconsole0,id=console0 -device virtio-balloon-ccw,id=balloon0,devno=fe.0.0003 -msg timestamp=on
char device redirected to /dev/pts/1 (label charconsole0)
2017-11-29T04:41:54.281901Z qemu-system-s390x: terminating on signal 15 from pid 5352
2017-11-29 04:42:32.639+0000: starting up libvirt version: 1.3.1, package: 1ubuntu10.15 (Christian Ehrhardt <christian.ehrhardt@canonical.com> Mon, 06 Nov 2017 16:36:11 +0100), qemu version: 2.5.0 (Debian 1:2.5+dfsg-5ubuntu10.16), hostname: testkvm-xenial-noupd
LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin QEMU_AUDIO_DRV=none /usr/bin/qemu-system-s390x -name kvmguest-xenial-saverestore -S -machine s390-ccw-virtio-xenial,accel=kvm,usb=off -m 512 -realtime mlock=off -smp 1,sockets=1,cores=1,threads=1 -uuid 3627b261-41d6-422e-a8f6-5d800a426b6c -nographic -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-kvmguest-xenial-saverestore/monitor.sock,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -boot strict=on -drive file=/var/lib/uvtool/libvirt/images/kvmguest-xenial-saverestore.qcow,format=qcow2,if=none,id=drive-virtio-disk0 -device virtio-blk-ccw,scsi=off,devno=fe.0.0000,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -drive file=/var/lib/uvtool/libvirt/images/kvmguest-xenial-saverestore-ds.qcow,format=raw,if=none,id=drive-virtio-disk1 -device virtio-blk-ccw,scsi=off,devno=fe.0.0001,drive=drive-virtio-disk1,id=virtio-disk1 -netdev tap,fd=26,id=hostnet0 -device virtio-net-ccw,netdev=hostnet0,id=net0,mac=52:54:00:05:b1:3c,devno=fe.0.0002 -chardev pty,id=charconsole0 -device sclpconsole,chardev=charconsole0,id=console0 -incoming defer -device virtio-balloon-ccw,id=balloon0,devno=fe.0.0003 -msg timestamp=on
char device redirected to /dev/pts/0 (label charconsole0)
2017-11-29T04:43:24.813978Z qemu-system-s390x: terminating on signal 15 from pid 5352
2017-11-29 04:46:12.489+0000: starting up libvirt version: 1.3.1, package: 1ubuntu10.15 (Christian Ehrhardt <christian.ehrhardt@canonical.com> Mon, 06 Nov 2017 16:36:11 +0100), qemu version: 2.5.0 (Debian 1:2.5+dfsg-5ubuntu10.16), hostname: testkvm-xenial-noupd
LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin QEMU_AUDIO_DRV=none /usr/bin/qemu-system-s390x -name kvmguest-xenial-saverestore -S -machine s390-ccw-virtio-xenial,accel=kvm,usb=off -m 512 -realtime mlock=off -smp 1,sockets=1,cores=1,threads=1 -uuid c4da2a75-a509-4939-bdb6-86244bbf6824 -nographic -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-kvmguest-xenial-saverestore/monitor.sock,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -boot strict=on -drive file=/var/lib/uvtool/libvirt/images/kvmguest-xenial-saverestore.qcow,format=qcow2,if=none,id=drive-virtio-disk0 -device virtio-blk-ccw,scsi=off,devno=fe.0.0000,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -drive file=/var/lib/uvtool/libvirt/images/kvmguest-xenial-saverestore-ds.qcow,format=raw,if=none,id=drive-virtio-disk1 -device virtio-blk-ccw,scsi=off,devno=fe.0.0001,drive=drive-virtio-disk1,id=virtio-disk1 -netdev tap,fd=28,id=hostnet0 -device virtio-net-ccw,netdev=hostnet0,id=net0,mac=52:54:00:40:b2:94,devno=fe.0.0002 -chardev pty,id=charconsole0 -device sclpconsole,chardev=charconsole0,id=console0 -device virtio-balloon-ccw,id=balloon0,devno=fe.0.0003 -msg timestamp=on
char device redirected to /dev/pts/1 (label charconsole0)
2017-11-29T04:50:23.777361Z qemu-system-s390x: terminating on signal 15 from pid 5352
2017-11-29 04:50:23.904+0000: shutting down
2017-11-29 04:51:11.378+0000: starting up libvirt version: 1.3.1, package: 1ubuntu10.15 (Christian Ehrhardt <christian.ehrhardt@canonical.com> Mon, 06 Nov 2017 16:36:11 +0100), qemu version: 2.5.0 (Debian 1:2.5+dfsg-5ubuntu10.16), hostname: testkvm-xenial-noupd
LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin QEMU_AUDIO_DRV=none /usr/bin/qemu-system-s390x -name kvmguest-xenial-saverestore -S -machine s390-ccw-virtio-xenial,accel=kvm,usb=off -m 512 -realtime mlock=off -smp 1,sockets=1,cores=1,threads=1 -uuid c4da2a75-a509-4939-bdb6-86244bbf6824 -nographic -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-kvmguest-xenial-saverestore/monitor.sock,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -boot strict=on -drive file=/var/lib/uvtool/libvirt/images/kvmguest-xenial-saverestore.qcow,format=qcow2,if=none,id=drive-virtio-disk0 -device virtio-blk-ccw,scsi=off,devno=fe.0.0000,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -drive file=/var/lib/uvtool/libvirt/images/kvmguest-xenial-saverestore-ds.qcow,format=raw,if=none,id=drive-virtio-disk1 -device virtio-blk-ccw,scsi=off,devno=fe.0.0001,drive=drive-virtio-disk1,id=virtio-disk1 -netdev tap,fd=25,id=hostnet0 -device virtio-net-ccw,netdev=hostnet0,id=net0,mac=52:54:00:40:b2:94,devno=fe.0.0002 -chardev pty,id=charconsole0 -device sclpconsole,chardev=charconsole0,id=console0 -device virtio-balloon-ccw,id=balloon0,devno=fe.0.0003 -msg timestamp=on
char device redirected to /dev/pts/1 (label charconsole0)
+ lxc exec testkvm-xenial-noupd -- virsh dominfo kvmguest-xenial-saverestore
Id:             8
Name:           kvmguest-xenial-saverestore
UUID:           c4da2a75-a509-4939-bdb6-86244bbf6824
OS Type:        hvm
State:          running
CPU(s):         1
CPU time:       5.0s
Max memory:     524288 KiB
Used memory:    524288 KiB
Persistent:     yes
Autostart:      disable
Managed save:   no
Security model: apparmor
Security DOI:   0
Security label: libvirt-c4da2a75-a509-4939-bdb6-86244bbf6824 (enforcing)
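A consumer of this dominfo output could isolate a single field with awk; a minimal sketch (the variable and field choice are illustrative, not from the test script):

# Illustrative only: extract the State field from virsh dominfo output.
state=$(lxc exec testkvm-xenial-noupd -- virsh dominfo kvmguest-xenial-saverestore \
    | awk -F': *' '/^State:/ {print $2}')
echo "guest state: ${state}"   # "running" for the domain shown above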

+ [[ false == \t\r\u\e ]]
++ getkvmmt testkvm-xenial-noupd kvmguest-xenial-saverestore
++ local containername=testkvm-xenial-noupd
++ local guestname=kvmguest-xenial-saverestore
++ local mt=not-found
++ lxc exec testkvm-xenial-noupd -- virsh dumpxml kvmguest-xenial-saverestore
+++ lxc exec testkvm-xenial-noupd -- virsh dumpxml kvmguest-xenial-saverestore
++ xml='<domain type='\''kvm'\'' id='\''8'\''>
  <name>kvmguest-xenial-saverestore</name>
  <uuid>c4da2a75-a509-4939-bdb6-86244bbf6824</uuid>
  <memory unit='\''KiB'\''>524288</memory>
  <currentMemory unit='\''KiB'\''>524288</currentMemory>
  <vcpu placement='\''static'\''>1</vcpu>
  <resource>
    <partition>/machine</partition>
  </resource>
  <os>
    <type arch='\''s390x'\'' machine='\''s390-ccw-virtio-xenial'\''>hvm</type>
    <boot dev='\''hd'\''/>
  </os>
  <features>
    <acpi/>
    <apic/>
    <pae/>
  </features>
  <clock offset='\''utc'\''/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <emulator>/usr/bin/qemu-system-s390x</emulator>
    <disk type='\''file'\'' device='\''disk'\''>
      <driver name='\''qemu'\'' type='\''qcow2'\''/>
      <source file='\''/var/lib/uvtool/libvirt/images/kvmguest-xenial-saverestore.qcow'\''/>
      <backingStore type='\''file'\'' index='\''1'\''>
        <format type='\''qcow2'\''/>
        <source file='\''/var/lib/uvtool/libvirt/images/x-uvt-b64-Y29tLnVidW50dS5jbG91ZC5kYWlseTpzZXJ2ZXI6MTYuMDQ6czM5MHggMjAxNzExMjI='\''/>
        <backingStore/>
      </backingStore>
      <target dev='\''vda'\'' bus='\''virtio'\''/>
      <alias name='\''virtio-disk0'\''/>
      <address type='\''ccw'\'' cssid='\''0xfe'\'' ssid='\''0x0'\'' devno='\''0x0000'\''/>
    </disk>
    <disk type='\''file'\'' device='\''disk'\''>
      <driver name='\''qemu'\'' type='\''raw'\''/>
      <source file='\''/var/lib/uvtool/libvirt/images/kvmguest-xenial-saverestore-ds.qcow'\''/>
      <backingStore/>
      <target dev='\''vdb'\'' bus='\''virtio'\''/>
      <alias name='\''virtio-disk1'\''/>
      <address type='\''ccw'\'' cssid='\''0xfe'\'' ssid='\''0x0'\'' devno='\''0x0001'\''/>
    </disk>
    <interface type='\''network'\''>
      <mac address='\''52:54:00:40:b2:94'\''/>
      <source network='\''default'\'' bridge='\''virbr0'\''/>
      <target dev='\''vnet1'\''/>
      <model type='\''virtio'\''/>
      <alias name='\''net0'\''/>
      <address type='\''ccw'\'' cssid='\''0xfe'\'' ssid='\''0x0'\'' devno='\''0x0002'\''/>
    </interface>
    <console type='\''pty'\'' tty='\''/dev/pts/1'\''>
      <source path='\''/dev/pts/1'\''/>
      <target type='\''sclp'\'' port='\''0'\''/>
      <alias name='\''console0'\''/>
    </console>
    <memballoon model='\''virtio'\''>
      <alias name='\''balloon0'\''/>
      <address type='\''ccw'\'' cssid='\''0xfe'\'' ssid='\''0x0'\'' devno='\''0x0003'\''/>
    </memballoon>
  </devices>
  <seclabel type='\''dynamic'\'' model='\''apparmor'\'' relabel='\''yes'\''>
    <label>libvirt-c4da2a75-a509-4939-bdb6-86244bbf6824</label>
    <imagelabel>libvirt-c4da2a75-a509-4939-bdb6-86244bbf6824</imagelabel>
  </seclabel>
</domain>'
+++ echo '<domain type='\''kvm'\'' id='\''8'\''>
  <name>kvmguest-xenial-saverestore</name>
  <uuid>c4da2a75-a509-4939-bdb6-86244bbf6824</uuid>
  <memory unit='\''KiB'\''>524288</memory>
  <currentMemory unit='\''KiB'\''>524288</currentMemory>
  <vcpu placement='\''static'\''>1</vcpu>
  <resource>
    <partition>/machine</partition>
  </resource>
  <os>
    <type arch='\''s390x'\'' machine='\''s390-ccw-virtio-xenial'\''>hvm</type>
    <boot dev='\''hd'\''/>
  </os>
  <features>
    <acpi/>
    <apic/>
    <pae/>
  </features>
  <clock offset='\''utc'\''/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <emulator>/usr/bin/qemu-system-s390x</emulator>
    <disk type='\''file'\'' device='\''disk'\''>
      <driver name='\''qemu'\'' type='\''qcow2'\''/>
      <source file='\''/var/lib/uvtool/libvirt/images/kvmguest-xenial-saverestore.qcow'\''/>
      <backingStore type='\''file'\'' index='\''1'\''>
        <format type='\''qcow2'\''/>
        <source file='\''/var/lib/uvtool/libvirt/images/x-uvt-b64-Y29tLnVidW50dS5jbG91ZC5kYWlseTpzZXJ2ZXI6MTYuMDQ6czM5MHggMjAxNzExMjI='\''/>
        <backingStore/>
      </backingStore>
      <target dev='\''vda'\'' bus='\''virtio'\''/>
      <alias name='\''virtio-disk0'\''/>
      <address type='\''ccw'\'' cssid='\''0xfe'\'' ssid='\''0x0'\'' devno='\''0x0000'\''/>
    </disk>
    <disk type='\''file'\'' device='\''disk'\''>
      <driver name='\''qemu'\'' type='\''raw'\''/>
      <source file='\''/var/lib/uvtool/libvirt/images/kvmguest-xenial-saverestore-ds.qcow'\''/>
      <backingStore/>
      <target dev='\''vdb'\'' bus='\''virtio'\''/>
      <alias name='\''virtio-disk1'\''/>
      <address type='\''ccw'\'' cssid='\''0xfe'\'' ssid='\''0x0'\'' devno='\''0x0001'\''/>
    </disk>
    <interface type='\''network'\''>
      <mac address='\''52:54:00:40:b2:94'\''/>
      <source network='\''default'\'' bridge='\''virbr0'\''/>
      <target dev='\''vnet1'\''/>
      <model type='\''virtio'\''/>
      <alias name='\''net0'\''/>
      <address type='\''ccw'\'' cssid='\''0xfe'\'' ssid='\''0x0'\'' devno='\''0x0002'\''/>
    </interface>
    <console type='\''pty'\'' tty='\''/dev/pts/1'\''>
      <source path='\''/dev/pts/1'\''/>
      <target type='\''sclp'\'' port='\''0'\''/>
      <alias name='\''console0'\''/>
    </console>
    <memballoon model='\''virtio'\''>
      <alias name='\''balloon0'\''/>
      <address type='\''ccw'\'' cssid='\''0xfe'\'' ssid='\''0x0'\'' devno='\''0x0003'\''/>
    </memballoon>
  </devices>
  <seclabel type='\''dynamic'\'' model='\''apparmor'\'' relabel='\''yes'\''>
    <label>libvirt-c4da2a75-a509-4939-bdb6-86244bbf6824</label>
    <imagelabel>libvirt-c4da2a75-a509-4939-bdb6-86244bbf6824</imagelabel>
  </seclabel>
</domain>'
+++ xmllint --xpath 'string(//domain/os/type/@machine)' -
++ mt=s390-ccw-virtio-xenial
++ '[' -z s390-ccw-virtio-xenial ']'
++ echo s390-ccw-virtio-xenial
+ machinetype=s390-ccw-virtio-xenial
+ echo 'Machine Type s390-ccw-virtio-xenial'
Machine Type s390-ccw-virtio-xenial
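Condensed, the getkvmmt probe traced above is a single pipeline: dump the live domain XML and evaluate an XPath expression against it, falling back to the script's not-found default when the attribute is empty.

# The machine-type probe, condensed from the trace above.
mt=$(lxc exec testkvm-xenial-noupd -- virsh dumpxml kvmguest-xenial-saverestore \
    | xmllint --xpath 'string(//domain/os/type/@machine)' -)
[ -n "${mt}" ] || mt=not-found
echo "Machine Type ${mt}"   # Machine Type s390-ccw-virtio-xenial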
++ getcontaineros testkvm-xenial-noupd
++ local containername=testkvm-xenial-noupd
++ lxc exec testkvm-xenial-noupd -- grep '^VERSION=' /etc/os-release
+ osversion='VERSION="16.04.3 LTS (Xenial Xerus)"'
+ echo 'KVM Host OS is on VERSION="16.04.3 LTS (Xenial Xerus)"'
KVM Host OS is on VERSION="16.04.3 LTS (Xenial Xerus)"
++ getcontainerqemu testkvm-xenial-noupd
++ local containername=testkvm-xenial-noupd
++ lxc exec testkvm-xenial-noupd -- dpkg-query --show qemu-kvm
+ qemu='qemu-kvm	1:2.5+dfsg-5ubuntu10.16'
+ echo 'qemu-kvm	1:2.5+dfsg-5ubuntu10.16'
qemu-kvm	1:2.5+dfsg-5ubuntu10.16
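The two version probes above are plain grep and dpkg-query calls; if only the bare version string were needed, dpkg-query's --showformat would trim it (illustrative variant, not what the script runs):

# Variant of the probe above that prints only the package version.
lxc exec testkvm-xenial-noupd -- dpkg-query --show --showformat='${Version}\n' qemu-kvm
# 1:2.5+dfsg-5ubuntu10.16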
+ lxc exec testkvm-xenial-noupd -- uvt-kvm destroy kvmguest-xenial-saverestore
+ for release in '${SPAWNRELEASES}'
+ for direction in '${MIGRATIONPEERS}'
+ lxc exec testkvm-xenial-from -- uvt-kvm destroy kvmguest-xenial-saverestore
uvt-kvm: error: domain 'kvmguest-xenial-saverestore' not found.
+ true
+ lxc exec testkvm-xenial-from -- virsh destroy kvmguest-xenial-saverestore
error: failed to get domain 'kvmguest-xenial-saverestore'
error: Domain not found: no domain with matching name 'kvmguest-xenial-saverestore'

+ true
+ lxc exec testkvm-xenial-from -- virsh undefine kvmguest-xenial-saverestore
error: failed to get domain 'kvmguest-xenial-saverestore'
error: Domain not found: no domain with matching name 'kvmguest-xenial-saverestore'

+ true
+ lxc exec testkvm-xenial-from -- virsh vol-delete --pool uvtool kvmguest-xenial-saverestore.qcow
error: failed to get vol 'kvmguest-xenial-saverestore.qcow'
error: Storage volume not found: no storage vol with matching path 'kvmguest-xenial-saverestore.qcow'

+ true
+ lxc exec testkvm-xenial-from -- virsh vol-delete --pool uvtool kvmguest-xenial-saverestore-ds.qcow
error: failed to get vol 'kvmguest-xenial-saverestore-ds.qcow'
error: Storage volume not found: no storage vol with matching path 'kvmguest-xenial-saverestore-ds.qcow'

+ true
+ for direction in '${MIGRATIONPEERS}'
+ lxc exec testkvm-xenial-to -- uvt-kvm destroy kvmguest-xenial-saverestore
uvt-kvm: error: domain 'kvmguest-xenial-saverestore' not found.
+ true
+ lxc exec testkvm-xenial-to -- virsh destroy kvmguest-xenial-saverestore
error: failed to get domain 'kvmguest-xenial-saverestore'
error: Domain not found: no domain with matching name 'kvmguest-xenial-saverestore'

+ true
+ lxc exec testkvm-xenial-to -- virsh undefine kvmguest-xenial-saverestore
error: failed to get domain 'kvmguest-xenial-saverestore'
error: Domain not found: no domain with matching name 'kvmguest-xenial-saverestore'

+ true
+ lxc exec testkvm-xenial-to -- virsh vol-delete --pool uvtool kvmguest-xenial-saverestore.qcow
error: failed to get vol 'kvmguest-xenial-saverestore.qcow'
error: Storage volume not found: no storage vol with matching path 'kvmguest-xenial-saverestore.qcow'

+ true
+ lxc exec testkvm-xenial-to -- virsh vol-delete --pool uvtool kvmguest-xenial-saverestore-ds.qcow
error: failed to get vol 'kvmguest-xenial-saverestore-ds.qcow'
error: Storage volume not found: no storage vol with matching path 'kvmguest-xenial-saverestore-ds.qcow'

+ true
+ for direction in '${MIGRATIONPEERS}'
+ lxc exec testkvm-xenial-noupd -- uvt-kvm destroy kvmguest-xenial-saverestore
uvt-kvm: error: domain 'kvmguest-xenial-saverestore' not found.
+ true
+ lxc exec testkvm-xenial-noupd -- virsh destroy kvmguest-xenial-saverestore
error: failed to get domain 'kvmguest-xenial-saverestore'
error: Domain not found: no domain with matching name 'kvmguest-xenial-saverestore'

+ true
+ lxc exec testkvm-xenial-noupd -- virsh undefine kvmguest-xenial-saverestore
error: failed to get domain 'kvmguest-xenial-saverestore'
error: Domain not found: no domain with matching name 'kvmguest-xenial-saverestore'

+ true
+ lxc exec testkvm-xenial-noupd -- virsh vol-delete --pool uvtool kvmguest-xenial-saverestore.qcow
error: failed to get vol 'kvmguest-xenial-saverestore.qcow'
error: Storage volume not found: no storage vol with matching path 'kvmguest-xenial-saverestore.qcow'

+ true
+ lxc exec testkvm-xenial-noupd -- virsh vol-delete --pool uvtool kvmguest-xenial-saverestore-ds.qcow
error: failed to get vol 'kvmguest-xenial-saverestore-ds.qcow'
error: Storage volume not found: no storage vol with matching path 'kvmguest-xenial-saverestore-ds.qcow'

+ true
+ for direction in '${MIGRATIONPEERS}'
+ lxc exec testkvm-xenial-tononshared -- uvt-kvm destroy kvmguest-xenial-saverestore
uvt-kvm: error: domain 'kvmguest-xenial-saverestore' not found.
+ true
+ lxc exec testkvm-xenial-tononshared -- virsh destroy kvmguest-xenial-saverestore
error: failed to get domain 'kvmguest-xenial-saverestore'
error: Domain not found: no domain with matching name 'kvmguest-xenial-saverestore'

+ true
+ lxc exec testkvm-xenial-tononshared -- virsh undefine kvmguest-xenial-saverestore
error: failed to get domain 'kvmguest-xenial-saverestore'
error: Domain not found: no domain with matching name 'kvmguest-xenial-saverestore'

+ true
+ lxc exec testkvm-xenial-tononshared -- virsh vol-delete --pool uvtool kvmguest-xenial-saverestore.qcow
error: failed to get vol 'kvmguest-xenial-saverestore.qcow'
error: Storage volume not found: no storage vol with matching path 'kvmguest-xenial-saverestore.qcow'

+ true
+ lxc exec testkvm-xenial-tononshared -- virsh vol-delete --pool uvtool kvmguest-xenial-saverestore-ds.qcow
error: failed to get vol 'kvmguest-xenial-saverestore-ds.qcow'
error: Storage volume not found: no storage vol with matching path 'kvmguest-xenial-saverestore-ds.qcow'

+ true
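Each per-host block above repeats the same five teardown commands and tolerates not-found errors, so the cleanup stays idempotent no matter where the guest last ran; collapsed into a helper it looks roughly like this (the function name is illustrative; the real script inlines the commands per ${MIGRATIONPEERS}):

# Sketch of the idempotent guest-cleanup pattern traced above.
cleanup_guest() {
    local host=$1 guest=$2
    lxc exec "${host}" -- uvt-kvm destroy "${guest}" || true
    lxc exec "${host}" -- virsh destroy "${guest}" || true
    lxc exec "${host}" -- virsh undefine "${guest}" || true
    lxc exec "${host}" -- virsh vol-delete --pool uvtool "${guest}.qcow" || true
    lxc exec "${host}" -- virsh vol-delete --pool uvtool "${guest}-ds.qcow" || true
}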
+ for suffix in normal saverestore postcopy
+ getkvminfo testkvm-xenial-noupd kvmguest-xenial-postcopy
+ local containername=testkvm-xenial-noupd
+ local guestname=kvmguest-xenial-postcopy
+ local machinetype=
+ local osversion=
+ local qemu=
+ lxc exec testkvm-xenial-noupd -- systemctl status libvirtd --lines 200 --full --no-pager
● libvirt-bin.service - Virtualization daemon
   Loaded: loaded (/lib/systemd/system/libvirt-bin.service; enabled; vendor preset: enabled)
   Active: active (running) since Wed 2017-11-29 03:28:21 UTC; 1h 23min ago
     Docs: man:libvirtd(8)
           http://libvirt.org
 Main PID: 5352 (libvirtd)
   CGroup: /system.slice/libvirt-bin.service
           ├─4324 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/lib/libvirt/libvirt_leaseshelper
           ├─4325 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/lib/libvirt/libvirt_leaseshelper
           ├─5352 /usr/sbin/libvirtd
           └─6823 /usr/bin/qemu-system-s390x -name kvmguest-xenial-postcopy -S -machine s390-ccw-virtio-xenial,accel=kvm,usb=off -m 512 -realtime mlock=off -smp 1,sockets=1,cores=1,threads=1 -uuid a95fb259-f1f9-48f1-8246-4dba4b1dc34c -nographic -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-kvmguest-xenial-postcopy/monitor.sock,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -boot strict=on -drive file=/var/lib/uvtool/libvirt/images/kvmguest-xenial-postcopy.qcow,format=qcow2,if=none,id=drive-virtio-disk0 -device virtio-blk-ccw,scsi=off,devno=fe.0.0000,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -drive file=/var/lib/uvtool/libvirt/images/kvmguest-xenial-postcopy-ds.qcow,format=raw,if=none,id=drive-virtio-disk1 -device virtio-blk-ccw,scsi=off,devno=fe.0.0001,drive=drive-virtio-disk1,id=virtio-disk1 -netdev tap,fd=29,id=hostnet0 -device virtio-net-ccw,netdev=hostnet0,id=net0,mac=52:54:00:c6:b4:2d,devno=fe.0.0002 -chardev pty,id=charconsole0 -device sclpconsole,chardev=charconsole0,id=console0 -device virtio-balloon-ccw,id=balloon0,devno=fe.0.0003 -msg timestamp=on

Nov 29 03:28:21 testkvm-xenial-noupd systemd[1]: Starting Virtualization daemon...
Nov 29 03:28:21 testkvm-xenial-noupd systemd[1]: Started Virtualization daemon.
Nov 29 03:28:27 testkvm-xenial-noupd dnsmasq[4324]: read /etc/hosts - 7 addresses
Nov 29 03:28:27 testkvm-xenial-noupd dnsmasq[4324]: read /var/lib/libvirt/dnsmasq/default.addnhosts - 0 addresses
Nov 29 03:28:27 testkvm-xenial-noupd dnsmasq-dhcp[4324]: read /var/lib/libvirt/dnsmasq/default.hostsfile
Nov 29 03:28:27 testkvm-xenial-noupd libvirtd[5352]: libvirt version: 1.3.1, package: 1ubuntu10.15 (Christian Ehrhardt <christian.ehrhardt@canonical.com> Mon, 06 Nov 2017 16:36:11 +0100)
Nov 29 03:28:27 testkvm-xenial-noupd libvirtd[5352]: hostname: testkvm-xenial-noupd.lxd
Nov 29 03:28:27 testkvm-xenial-noupd libvirtd[5352]: Failed to open file '/sys/class/fc_host//host1/fabric_name': No such file or directory
Nov 29 03:28:27 testkvm-xenial-noupd libvirtd[5352]: Failed to read fabric WWN for host1
Nov 29 03:28:27 testkvm-xenial-noupd libvirtd[5352]: Failed to open file '/sys/class/fc_host//host0/fabric_name': No such file or directory
Nov 29 03:28:27 testkvm-xenial-noupd libvirtd[5352]: Failed to read fabric WWN for host0
Nov 29 04:28:11 testkvm-xenial-noupd libvirtd[5352]: Unable to open vhost-net. Opened so far 0, requested 1
Nov 29 04:28:21 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPDISCOVER(virbr0) 52:54:00:69:13:ad
Nov 29 04:28:21 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPOFFER(virbr0) 192.168.122.127 52:54:00:69:13:ad
Nov 29 04:28:21 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPREQUEST(virbr0) 192.168.122.127 52:54:00:69:13:ad
Nov 29 04:28:21 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPACK(virbr0) 192.168.122.127 52:54:00:69:13:ad ubuntu
Nov 29 04:28:32 testkvm-xenial-noupd libvirtd[5352]: Unable to open vhost-net. Opened so far 0, requested 1
Nov 29 04:28:41 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPDISCOVER(virbr0) 52:54:00:05:b1:3c
Nov 29 04:28:41 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPOFFER(virbr0) 192.168.122.166 52:54:00:05:b1:3c
Nov 29 04:28:41 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPREQUEST(virbr0) 192.168.122.166 52:54:00:05:b1:3c
Nov 29 04:28:41 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPACK(virbr0) 192.168.122.166 52:54:00:05:b1:3c ubuntu
Nov 29 04:30:36 testkvm-xenial-noupd libvirtd[5352]: Unable to open vhost-net. Opened so far 0, requested 1
Nov 29 04:30:46 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPDISCOVER(virbr0) 52:54:00:6a:30:5b
Nov 29 04:30:46 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPOFFER(virbr0) 192.168.122.73 52:54:00:6a:30:5b
Nov 29 04:30:46 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPDISCOVER(virbr0) 52:54:00:6a:30:5b
Nov 29 04:30:46 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPOFFER(virbr0) 192.168.122.73 52:54:00:6a:30:5b
Nov 29 04:30:46 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPREQUEST(virbr0) 192.168.122.73 52:54:00:6a:30:5b
Nov 29 04:30:46 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPACK(virbr0) 192.168.122.73 52:54:00:6a:30:5b ubuntu
Nov 29 04:32:45 testkvm-xenial-noupd libvirtd[5352]: Cannot open log file: '/var/log/libvirt/qemu/kvmguest-xenial-normal.log': Device or resource busy
Nov 29 04:41:51 testkvm-xenial-noupd libvirtd[5352]: could not find path for descriptor /proc/self/fd/24, skipping
Nov 29 04:41:54 testkvm-xenial-noupd libvirtd[5352]: iohelper reports: 
Nov 29 04:41:54 testkvm-xenial-noupd libvirtd[5352]: Cannot open log file: '/var/log/libvirt/qemu/kvmguest-xenial-saverestore.log': Device or resource busy
Nov 29 04:42:31 testkvm-xenial-noupd libvirtd[5352]: Unable to open vhost-net. Opened so far 0, requested 1
Nov 29 04:43:25 testkvm-xenial-noupd libvirtd[5352]: Cannot open log file: '/var/log/libvirt/qemu/kvmguest-xenial-saverestore.log': Device or resource busy
Nov 29 04:44:07 testkvm-xenial-noupd libvirtd[5352]: Unable to open vhost-net. Opened so far 0, requested 1
Nov 29 04:44:17 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPDISCOVER(virbr0) 52:54:00:23:7e:44
Nov 29 04:44:17 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPOFFER(virbr0) 192.168.122.67 52:54:00:23:7e:44
Nov 29 04:44:17 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPDISCOVER(virbr0) 52:54:00:23:7e:44
Nov 29 04:44:17 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPOFFER(virbr0) 192.168.122.67 52:54:00:23:7e:44
Nov 29 04:44:17 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPREQUEST(virbr0) 192.168.122.67 52:54:00:23:7e:44
Nov 29 04:44:17 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPACK(virbr0) 192.168.122.67 52:54:00:23:7e:44 ubuntu
Nov 29 04:46:11 testkvm-xenial-noupd libvirtd[5352]: Unable to open vhost-net. Opened so far 0, requested 1
Nov 29 04:46:21 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPDISCOVER(virbr0) 52:54:00:40:b2:94
Nov 29 04:46:21 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPOFFER(virbr0) 192.168.122.34 52:54:00:40:b2:94
Nov 29 04:46:21 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPREQUEST(virbr0) 192.168.122.34 52:54:00:40:b2:94
Nov 29 04:46:21 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPACK(virbr0) 192.168.122.34 52:54:00:40:b2:94 ubuntu
Nov 29 04:48:16 testkvm-xenial-noupd libvirtd[5352]: Unable to open vhost-net. Opened so far 0, requested 1
Nov 29 04:48:25 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPDISCOVER(virbr0) 52:54:00:c6:b4:2d
Nov 29 04:48:25 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPOFFER(virbr0) 192.168.122.141 52:54:00:c6:b4:2d
Nov 29 04:48:25 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPREQUEST(virbr0) 192.168.122.141 52:54:00:c6:b4:2d
Nov 29 04:48:25 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPACK(virbr0) 192.168.122.141 52:54:00:c6:b4:2d ubuntu
Nov 29 04:50:23 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPRELEASE(virbr0) 192.168.122.34 52:54:00:40:b2:94
Nov 29 04:50:23 testkvm-xenial-noupd libvirtd[5352]: internal error: End of file from monitor
Nov 29 04:51:10 testkvm-xenial-noupd libvirtd[5352]: Unable to open vhost-net. Opened so far 0, requested 1
Nov 29 04:51:18 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPDISCOVER(virbr0) 192.168.122.34 52:54:00:40:b2:94
Nov 29 04:51:18 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPOFFER(virbr0) 192.168.122.34 52:54:00:40:b2:94
Nov 29 04:51:18 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPREQUEST(virbr0) 192.168.122.34 52:54:00:40:b2:94
Nov 29 04:51:18 testkvm-xenial-noupd dnsmasq-dhcp[4324]: DHCPACK(virbr0) 192.168.122.34 52:54:00:40:b2:94 kvmguest-xenial-saverestore
+ lxc exec testkvm-xenial-noupd -- cat /var/log/libvirt/qemu/kvmguest-xenial-postcopy.log
2017-11-29 04:30:37.349+0000: starting up libvirt version: 1.3.1, package: 1ubuntu10.15 (Christian Ehrhardt <christian.ehrhardt@canonical.com> Mon, 06 Nov 2017 16:36:11 +0100), qemu version: 2.5.0 (Debian 1:2.5+dfsg-5ubuntu10.16), hostname: testkvm-xenial-noupd
LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin QEMU_AUDIO_DRV=none /usr/bin/qemu-system-s390x -name kvmguest-xenial-postcopy -S -machine s390-ccw-virtio-xenial,accel=kvm,usb=off -m 512 -realtime mlock=off -smp 1,sockets=1,cores=1,threads=1 -uuid 8cfa5d9a-e701-42c1-9b1b-32712d00939a -nographic -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-kvmguest-xenial-postcopy/monitor.sock,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -boot strict=on -drive file=/var/lib/uvtool/libvirt/images/kvmguest-xenial-postcopy.qcow,format=qcow2,if=none,id=drive-virtio-disk0 -device virtio-blk-ccw,scsi=off,devno=fe.0.0000,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -drive file=/var/lib/uvtool/libvirt/images/kvmguest-xenial-postcopy-ds.qcow,format=raw,if=none,id=drive-virtio-disk1 -device virtio-blk-ccw,scsi=off,devno=fe.0.0001,drive=drive-virtio-disk1,id=virtio-disk1 -netdev tap,fd=29,id=hostnet0 -device virtio-net-ccw,netdev=hostnet0,id=net0,mac=52:54:00:6a:30:5b,devno=fe.0.0002 -chardev pty,id=charconsole0 -device sclpconsole,chardev=charconsole0,id=console0 -device virtio-balloon-ccw,id=balloon0,devno=fe.0.0003 -msg timestamp=on
char device redirected to /dev/pts/2 (label charconsole0)
2017-11-29T04:43:32.673305Z qemu-system-s390x: terminating on signal 15 from pid 5352
2017-11-29 04:43:32.874+0000: shutting down
2017-11-29 04:48:16.619+0000: starting up libvirt version: 1.3.1, package: 1ubuntu10.15 (Christian Ehrhardt <christian.ehrhardt@canonical.com> Mon, 06 Nov 2017 16:36:11 +0100), qemu version: 2.5.0 (Debian 1:2.5+dfsg-5ubuntu10.16), hostname: testkvm-xenial-noupd
LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin QEMU_AUDIO_DRV=none /usr/bin/qemu-system-s390x -name kvmguest-xenial-postcopy -S -machine s390-ccw-virtio-xenial,accel=kvm,usb=off -m 512 -realtime mlock=off -smp 1,sockets=1,cores=1,threads=1 -uuid a95fb259-f1f9-48f1-8246-4dba4b1dc34c -nographic -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-kvmguest-xenial-postcopy/monitor.sock,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -boot strict=on -drive file=/var/lib/uvtool/libvirt/images/kvmguest-xenial-postcopy.qcow,format=qcow2,if=none,id=drive-virtio-disk0 -device virtio-blk-ccw,scsi=off,devno=fe.0.0000,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -drive file=/var/lib/uvtool/libvirt/images/kvmguest-xenial-postcopy-ds.qcow,format=raw,if=none,id=drive-virtio-disk1 -device virtio-blk-ccw,scsi=off,devno=fe.0.0001,drive=drive-virtio-disk1,id=virtio-disk1 -netdev tap,fd=29,id=hostnet0 -device virtio-net-ccw,netdev=hostnet0,id=net0,mac=52:54:00:c6:b4:2d,devno=fe.0.0002 -chardev pty,id=charconsole0 -device sclpconsole,chardev=charconsole0,id=console0 -device virtio-balloon-ccw,id=balloon0,devno=fe.0.0003 -msg timestamp=on
char device redirected to /dev/pts/2 (label charconsole0)
+ lxc exec testkvm-xenial-noupd -- virsh dominfo kvmguest-xenial-postcopy
Id:             7
Name:           kvmguest-xenial-postcopy
UUID:           a95fb259-f1f9-48f1-8246-4dba4b1dc34c
OS Type:        hvm
State:          running
CPU(s):         1
CPU time:       7.8s
Max memory:     524288 KiB
Used memory:    524288 KiB
Persistent:     yes
Autostart:      disable
Managed save:   no
Security model: apparmor
Security DOI:   0
Security label: libvirt-a95fb259-f1f9-48f1-8246-4dba4b1dc34c (enforcing)

+ [[ false == \t\r\u\e ]]
++ getkvmmt testkvm-xenial-noupd kvmguest-xenial-postcopy
++ local containername=testkvm-xenial-noupd
++ local guestname=kvmguest-xenial-postcopy
++ local mt=not-found
++ lxc exec testkvm-xenial-noupd -- virsh dumpxml kvmguest-xenial-postcopy
+++ lxc exec testkvm-xenial-noupd -- virsh dumpxml kvmguest-xenial-postcopy
++ xml='<domain type='\''kvm'\'' id='\''7'\''>
  <name>kvmguest-xenial-postcopy</name>
  <uuid>a95fb259-f1f9-48f1-8246-4dba4b1dc34c</uuid>
  <memory unit='\''KiB'\''>524288</memory>
  <currentMemory unit='\''KiB'\''>524288</currentMemory>
  <vcpu placement='\''static'\''>1</vcpu>
  <resource>
    <partition>/machine</partition>
  </resource>
  <os>
    <type arch='\''s390x'\'' machine='\''s390-ccw-virtio-xenial'\''>hvm</type>
    <boot dev='\''hd'\''/>
  </os>
  <features>
    <acpi/>
    <apic/>
    <pae/>
  </features>
  <clock offset='\''utc'\''/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <emulator>/usr/bin/qemu-system-s390x</emulator>
    <disk type='\''file'\'' device='\''disk'\''>
      <driver name='\''qemu'\'' type='\''qcow2'\''/>
      <source file='\''/var/lib/uvtool/libvirt/images/kvmguest-xenial-postcopy.qcow'\''/>
      <backingStore type='\''file'\'' index='\''1'\''>
        <format type='\''qcow2'\''/>
        <source file='\''/var/lib/uvtool/libvirt/images/x-uvt-b64-Y29tLnVidW50dS5jbG91ZC5kYWlseTpzZXJ2ZXI6MTYuMDQ6czM5MHggMjAxNzExMjI='\''/>
        <backingStore/>
      </backingStore>
      <target dev='\''vda'\'' bus='\''virtio'\''/>
      <alias name='\''virtio-disk0'\''/>
      <address type='\''ccw'\'' cssid='\''0xfe'\'' ssid='\''0x0'\'' devno='\''0x0000'\''/>
    </disk>
    <disk type='\''file'\'' device='\''disk'\''>
      <driver name='\''qemu'\'' type='\''raw'\''/>
      <source file='\''/var/lib/uvtool/libvirt/images/kvmguest-xenial-postcopy-ds.qcow'\''/>
      <backingStore/>
      <target dev='\''vdb'\'' bus='\''virtio'\''/>
      <alias name='\''virtio-disk1'\''/>
      <address type='\''ccw'\'' cssid='\''0xfe'\'' ssid='\''0x0'\'' devno='\''0x0001'\''/>
    </disk>
    <interface type='\''network'\''>
      <mac address='\''52:54:00:c6:b4:2d'\''/>
      <source network='\''default'\'' bridge='\''virbr0'\''/>
      <target dev='\''vnet2'\''/>
      <model type='\''virtio'\''/>
      <alias name='\''net0'\''/>
      <address type='\''ccw'\'' cssid='\''0xfe'\'' ssid='\''0x0'\'' devno='\''0x0002'\''/>
    </interface>
    <console type='\''pty'\'' tty='\''/dev/pts/2'\''>
      <source path='\''/dev/pts/2'\''/>
      <target type='\''sclp'\'' port='\''0'\''/>
      <alias name='\''console0'\''/>
    </console>
    <memballoon model='\''virtio'\''>
      <alias name='\''balloon0'\''/>
      <address type='\''ccw'\'' cssid='\''0xfe'\'' ssid='\''0x0'\'' devno='\''0x0003'\''/>
    </memballoon>
  </devices>
  <seclabel type='\''dynamic'\'' model='\''apparmor'\'' relabel='\''yes'\''>
    <label>libvirt-a95fb259-f1f9-48f1-8246-4dba4b1dc34c</label>
    <imagelabel>libvirt-a95fb259-f1f9-48f1-8246-4dba4b1dc34c</imagelabel>
  </seclabel>
</domain>'
+++ echo '<domain type='\''kvm'\'' id='\''7'\''>
  <name>kvmguest-xenial-postcopy</name>
  <uuid>a95fb259-f1f9-48f1-8246-4dba4b1dc34c</uuid>
  <memory unit='\''KiB'\''>524288</memory>
  <currentMemory unit='\''KiB'\''>524288</currentMemory>
  <vcpu placement='\''static'\''>1</vcpu>
  <resource>
    <partition>/machine</partition>
  </resource>
  <os>
    <type arch='\''s390x'\'' machine='\''s390-ccw-virtio-xenial'\''>hvm</type>
    <boot dev='\''hd'\''/>
  </os>
  <features>
    <acpi/>
    <apic/>
    <pae/>
  </features>
  <clock offset='\''utc'\''/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <emulator>/usr/bin/qemu-system-s390x</emulator>
    <disk type='\''file'\'' device='\''disk'\''>
      <driver name='\''qemu'\'' type='\''qcow2'\''/>
      <source file='\''/var/lib/uvtool/libvirt/images/kvmguest-xenial-postcopy.qcow'\''/>
      <backingStore type='\''file'\'' index='\''1'\''>
        <format type='\''qcow2'\''/>
        <source file='\''/var/lib/uvtool/libvirt/images/x-uvt-b64-Y29tLnVidW50dS5jbG91ZC5kYWlseTpzZXJ2ZXI6MTYuMDQ6czM5MHggMjAxNzExMjI='\''/>
        <backingStore/>
      </backingStore>
      <target dev='\''vda'\'' bus='\''virtio'\''/>
      <alias name='\''virtio-disk0'\''/>
      <address type='\''ccw'\'' cssid='\''0xfe'\'' ssid='\''0x0'\'' devno='\''0x0000'\''/>
    </disk>
    <disk type='\''file'\'' device='\''disk'\''>
      <driver name='\''qemu'\'' type='\''raw'\''/>
      <source file='\''/var/lib/uvtool/libvirt/images/kvmguest-xenial-postcopy-ds.qcow'\''/>
      <backingStore/>
      <target dev='\''vdb'\'' bus='\''virtio'\''/>
      <alias name='\''virtio-disk1'\''/>
      <address type='\''ccw'\'' cssid='\''0xfe'\'' ssid='\''0x0'\'' devno='\''0x0001'\''/>
    </disk>
    <interface type='\''network'\''>
      <mac address='\''52:54:00:c6:b4:2d'\''/>
      <source network='\''default'\'' bridge='\''virbr0'\''/>
      <target dev='\''vnet2'\''/>
      <model type='\''virtio'\''/>
      <alias name='\''net0'\''/>
      <address type='\''ccw'\'' cssid='\''0xfe'\'' ssid='\''0x0'\'' devno='\''0x0002'\''/>
    </interface>
    <console type='\''pty'\'' tty='\''/dev/pts/2'\''>
      <source path='\''/dev/pts/2'\''/>
      <target type='\''sclp'\'' port='\''0'\''/>
      <alias name='\''console0'\''/>
    </console>
    <memballoon model='\''virtio'\''>
      <alias name='\''balloon0'\''/>
      <address type='\''ccw'\'' cssid='\''0xfe'\'' ssid='\''0x0'\'' devno='\''0x0003'\''/>
    </memballoon>
  </devices>
  <seclabel type='\''dynamic'\'' model='\''apparmor'\'' relabel='\''yes'\''>
    <label>libvirt-a95fb259-f1f9-48f1-8246-4dba4b1dc34c</label>
    <imagelabel>libvirt-a95fb259-f1f9-48f1-8246-4dba4b1dc34c</imagelabel>
  </seclabel>
</domain>'
+++ xmllint --xpath 'string(//domain/os/type/@machine)' -
++ mt=s390-ccw-virtio-xenial
++ '[' -z s390-ccw-virtio-xenial ']'
++ echo s390-ccw-virtio-xenial
+ machinetype=s390-ccw-virtio-xenial
+ echo 'Machine Type s390-ccw-virtio-xenial'
Machine Type s390-ccw-virtio-xenial
++ getcontaineros testkvm-xenial-noupd
++ local containername=testkvm-xenial-noupd
++ lxc exec testkvm-xenial-noupd -- grep '^VERSION=' /etc/os-release
+ osversion='VERSION="16.04.3 LTS (Xenial Xerus)"'
+ echo 'KVM Host OS is on VERSION="16.04.3 LTS (Xenial Xerus)"'
KVM Host OS is on VERSION="16.04.3 LTS (Xenial Xerus)"
++ getcontainerqemu testkvm-xenial-noupd
++ local containername=testkvm-xenial-noupd
++ lxc exec testkvm-xenial-noupd -- dpkg-query --show qemu-kvm
+ qemu='qemu-kvm	1:2.5+dfsg-5ubuntu10.16'
+ echo 'qemu-kvm	1:2.5+dfsg-5ubuntu10.16'
qemu-kvm	1:2.5+dfsg-5ubuntu10.16
+ lxc exec testkvm-xenial-noupd -- uvt-kvm destroy kvmguest-xenial-postcopy
+ for release in '${SPAWNRELEASES}'
+ for direction in '${MIGRATIONPEERS}'
+ lxc exec testkvm-xenial-from -- uvt-kvm destroy kvmguest-xenial-postcopy
uvt-kvm: error: domain 'kvmguest-xenial-postcopy' not found.
+ true
+ lxc exec testkvm-xenial-from -- virsh destroy kvmguest-xenial-postcopy
error: failed to get domain 'kvmguest-xenial-postcopy'
error: Domain not found: no domain with matching name 'kvmguest-xenial-postcopy'

+ true
+ lxc exec testkvm-xenial-from -- virsh undefine kvmguest-xenial-postcopy
error: failed to get domain 'kvmguest-xenial-postcopy'
error: Domain not found: no domain with matching name 'kvmguest-xenial-postcopy'

+ true
+ lxc exec testkvm-xenial-from -- virsh vol-delete --pool uvtool kvmguest-xenial-postcopy.qcow
error: failed to get vol 'kvmguest-xenial-postcopy.qcow'
error: Storage volume not found: no storage vol with matching path 'kvmguest-xenial-postcopy.qcow'

+ true
+ lxc exec testkvm-xenial-from -- virsh vol-delete --pool uvtool kvmguest-xenial-postcopy-ds.qcow
error: failed to get vol 'kvmguest-xenial-postcopy-ds.qcow'
error: Storage volume not found: no storage vol with matching path 'kvmguest-xenial-postcopy-ds.qcow'

+ true
+ for direction in '${MIGRATIONPEERS}'
+ lxc exec testkvm-xenial-to -- uvt-kvm destroy kvmguest-xenial-postcopy
uvt-kvm: error: domain 'kvmguest-xenial-postcopy' not found.
+ true
+ lxc exec testkvm-xenial-to -- virsh destroy kvmguest-xenial-postcopy
error: failed to get domain 'kvmguest-xenial-postcopy'
error: Domain not found: no domain with matching name 'kvmguest-xenial-postcopy'

+ true
+ lxc exec testkvm-xenial-to -- virsh undefine kvmguest-xenial-postcopy
error: failed to get domain 'kvmguest-xenial-postcopy'
error: Domain not found: no domain with matching name 'kvmguest-xenial-postcopy'

+ true
+ lxc exec testkvm-xenial-to -- virsh vol-delete --pool uvtool kvmguest-xenial-postcopy.qcow
error: failed to get vol 'kvmguest-xenial-postcopy.qcow'
error: Storage volume not found: no storage vol with matching path 'kvmguest-xenial-postcopy.qcow'

+ true
+ lxc exec testkvm-xenial-to -- virsh vol-delete --pool uvtool kvmguest-xenial-postcopy-ds.qcow
error: failed to get vol 'kvmguest-xenial-postcopy-ds.qcow'
error: Storage volume not found: no storage vol with matching path 'kvmguest-xenial-postcopy-ds.qcow'

+ true
+ for direction in '${MIGRATIONPEERS}'
+ lxc exec testkvm-xenial-noupd -- uvt-kvm destroy kvmguest-xenial-postcopy
uvt-kvm: error: domain 'kvmguest-xenial-postcopy' not found.
+ true
+ lxc exec testkvm-xenial-noupd -- virsh destroy kvmguest-xenial-postcopy
error: failed to get domain 'kvmguest-xenial-postcopy'
error: Domain not found: no domain with matching name 'kvmguest-xenial-postcopy'

+ true
+ lxc exec testkvm-xenial-noupd -- virsh undefine kvmguest-xenial-postcopy
error: failed to get domain 'kvmguest-xenial-postcopy'
error: Domain not found: no domain with matching name 'kvmguest-xenial-postcopy'

+ true
+ lxc exec testkvm-xenial-noupd -- virsh vol-delete --pool uvtool kvmguest-xenial-postcopy.qcow
error: failed to get vol 'kvmguest-xenial-postcopy.qcow'
error: Storage volume not found: no storage vol with matching path 'kvmguest-xenial-postcopy.qcow'

+ true
+ lxc exec testkvm-xenial-noupd -- virsh vol-delete --pool uvtool kvmguest-xenial-postcopy-ds.qcow
error: failed to get vol 'kvmguest-xenial-postcopy-ds.qcow'
error: Storage volume not found: no storage vol with matching path 'kvmguest-xenial-postcopy-ds.qcow'

+ true
+ for direction in '${MIGRATIONPEERS}'
+ lxc exec testkvm-xenial-tononshared -- uvt-kvm destroy kvmguest-xenial-postcopy
uvt-kvm: error: domain 'kvmguest-xenial-postcopy' not found.
+ true
+ lxc exec testkvm-xenial-tononshared -- virsh destroy kvmguest-xenial-postcopy
error: failed to get domain 'kvmguest-xenial-postcopy'
error: Domain not found: no domain with matching name 'kvmguest-xenial-postcopy'

+ true
+ lxc exec testkvm-xenial-tononshared -- virsh undefine kvmguest-xenial-postcopy
error: failed to get domain 'kvmguest-xenial-postcopy'
error: Domain not found: no domain with matching name 'kvmguest-xenial-postcopy'

+ true
+ lxc exec testkvm-xenial-tononshared -- virsh vol-delete --pool uvtool kvmguest-xenial-postcopy.qcow
error: failed to get vol 'kvmguest-xenial-postcopy.qcow'
error: Storage volume not found: no storage vol with matching path 'kvmguest-xenial-postcopy.qcow'

+ true
+ lxc exec testkvm-xenial-tononshared -- virsh vol-delete --pool uvtool kvmguest-xenial-postcopy-ds.qcow
error: failed to get vol 'kvmguest-xenial-postcopy-ds.qcow'
error: Storage volume not found: no storage vol with matching path 'kvmguest-xenial-postcopy-ds.qcow'

+ true
+ for release in '${SPAWNRELEASES}'
+ for direction in '${MIGRATIONPEERS}'
+ lxc exec testkvm-xenial-from -- virsh pool-refresh uvtool
Pool uvtool refreshed

+ lxc exec testkvm-xenial-from -- virsh vol-list --pool uvtool
 Name                 Path                                    
------------------------------------------------------------------------------
 x-uvt-b64-Y29tLnVidW50dS5jbG91ZC5kYWlseTpzZXJ2ZXI6MTcuMDQ6czM5MHggMjAxNzExMjM= /var/lib/uvtool/libvirt/images/x-uvt-b64-Y29tLnVidW50dS5jbG91ZC5kYWlseTpzZXJ2ZXI6MTcuMDQ6czM5MHggMjAxNzExMjM=
 x-uvt-b64-Y29tLnVidW50dS5jbG91ZC5kYWlseTpzZXJ2ZXI6MTcuMTA6czM5MHggMjAxNzExMjI= /var/lib/uvtool/libvirt/images/x-uvt-b64-Y29tLnVidW50dS5jbG91ZC5kYWlseTpzZXJ2ZXI6MTcuMTA6czM5MHggMjAxNzExMjI=
 x-uvt-b64-Y29tLnVidW50dS5jbG91ZC5kYWlseTpzZXJ2ZXI6MTguMDQ6czM5MHggMjAxNzExMjcuMQ== /var/lib/uvtool/libvirt/images/x-uvt-b64-Y29tLnVidW50dS5jbG91ZC5kYWlseTpzZXJ2ZXI6MTguMDQ6czM5MHggMjAxNzExMjcuMQ==
 x-uvt-b64-Y29tLnVidW50dS5jbG91ZC5kYWlseTpzZXJ2ZXI6MTYuMDQ6czM5MHggMjAxNzExMjI= /var/lib/uvtool/libvirt/images/x-uvt-b64-Y29tLnVidW50dS5jbG91ZC5kYWlseTpzZXJ2ZXI6MTYuMDQ6czM5MHggMjAxNzExMjI=

+ lxc exec testkvm-xenial-from -- uvt-simplestreams-libvirt query
release=artful arch=s390x label=daily (20171122)
release=bionic arch=s390x label=daily (20171127.1)
release=xenial arch=s390x label=daily (20171122)
release=zesty arch=s390x label=daily (20171123)
+ for direction in '${MIGRATIONPEERS}'
+ lxc exec testkvm-xenial-to -- virsh pool-refresh uvtool
Pool uvtool refreshed

+ lxc exec testkvm-xenial-to -- virsh vol-list --pool uvtool
 Name                 Path                                    
------------------------------------------------------------------------------
 x-uvt-b64-Y29tLnVidW50dS5jbG91ZC5kYWlseTpzZXJ2ZXI6MTcuMDQ6czM5MHggMjAxNzExMjM= /var/lib/uvtool/libvirt/images/x-uvt-b64-Y29tLnVidW50dS5jbG91ZC5kYWlseTpzZXJ2ZXI6MTcuMDQ6czM5MHggMjAxNzExMjM=
 x-uvt-b64-Y29tLnVidW50dS5jbG91ZC5kYWlseTpzZXJ2ZXI6MTcuMTA6czM5MHggMjAxNzExMjI= /var/lib/uvtool/libvirt/images/x-uvt-b64-Y29tLnVidW50dS5jbG91ZC5kYWlseTpzZXJ2ZXI6MTcuMTA6czM5MHggMjAxNzExMjI=
 x-uvt-b64-Y29tLnVidW50dS5jbG91ZC5kYWlseTpzZXJ2ZXI6MTguMDQ6czM5MHggMjAxNzExMjcuMQ== /var/lib/uvtool/libvirt/images/x-uvt-b64-Y29tLnVidW50dS5jbG91ZC5kYWlseTpzZXJ2ZXI6MTguMDQ6czM5MHggMjAxNzExMjcuMQ==
 x-uvt-b64-Y29tLnVidW50dS5jbG91ZC5kYWlseTpzZXJ2ZXI6MTYuMDQ6czM5MHggMjAxNzExMjI= /var/lib/uvtool/libvirt/images/x-uvt-b64-Y29tLnVidW50dS5jbG91ZC5kYWlseTpzZXJ2ZXI6MTYuMDQ6czM5MHggMjAxNzExMjI=

+ lxc exec testkvm-xenial-to -- uvt-simplestreams-libvirt query
release=artful arch=s390x label=daily (20171122)
release=bionic arch=s390x label=daily (20171127.1)
release=xenial arch=s390x label=daily (20171122)
release=zesty arch=s390x label=daily (20171123)
+ for direction in '${MIGRATIONPEERS}'
+ lxc exec testkvm-xenial-noupd -- virsh pool-refresh uvtool
Pool uvtool refreshed

+ lxc exec testkvm-xenial-noupd -- virsh vol-list --pool uvtool
 Name                 Path                                    
------------------------------------------------------------------------------
 x-uvt-b64-Y29tLnVidW50dS5jbG91ZC5kYWlseTpzZXJ2ZXI6MTcuMDQ6czM5MHggMjAxNzExMjM= /var/lib/uvtool/libvirt/images/x-uvt-b64-Y29tLnVidW50dS5jbG91ZC5kYWlseTpzZXJ2ZXI6MTcuMDQ6czM5MHggMjAxNzExMjM=
 x-uvt-b64-Y29tLnVidW50dS5jbG91ZC5kYWlseTpzZXJ2ZXI6MTcuMTA6czM5MHggMjAxNzExMjI= /var/lib/uvtool/libvirt/images/x-uvt-b64-Y29tLnVidW50dS5jbG91ZC5kYWlseTpzZXJ2ZXI6MTcuMTA6czM5MHggMjAxNzExMjI=
 x-uvt-b64-Y29tLnVidW50dS5jbG91ZC5kYWlseTpzZXJ2ZXI6MTguMDQ6czM5MHggMjAxNzExMjcuMQ== /var/lib/uvtool/libvirt/images/x-uvt-b64-Y29tLnVidW50dS5jbG91ZC5kYWlseTpzZXJ2ZXI6MTguMDQ6czM5MHggMjAxNzExMjcuMQ==
 x-uvt-b64-Y29tLnVidW50dS5jbG91ZC5kYWlseTpzZXJ2ZXI6MTYuMDQ6czM5MHggMjAxNzExMjI= /var/lib/uvtool/libvirt/images/x-uvt-b64-Y29tLnVidW50dS5jbG91ZC5kYWlseTpzZXJ2ZXI6MTYuMDQ6czM5MHggMjAxNzExMjI=

+ lxc exec testkvm-xenial-noupd -- uvt-simplestreams-libvirt query
release=artful arch=s390x label=daily (20171122)
release=bionic arch=s390x label=daily (20171127.1)
release=xenial arch=s390x label=daily (20171122)
release=zesty arch=s390x label=daily (20171123)
+ for direction in '${MIGRATIONPEERS}'
+ lxc exec testkvm-xenial-tononshared -- virsh pool-refresh uvtool
Pool uvtool refreshed

+ lxc exec testkvm-xenial-tononshared -- virsh vol-list --pool uvtool
 Name                 Path                                    
------------------------------------------------------------------------------
 x-uvt-b64-Y29tLnVidW50dS5jbG91ZC5kYWlseTpzZXJ2ZXI6MTcuMDQ6czM5MHggMjAxNzExMjM= /var/lib/uvtool/libvirt/images/x-uvt-b64-Y29tLnVidW50dS5jbG91ZC5kYWlseTpzZXJ2ZXI6MTcuMDQ6czM5MHggMjAxNzExMjM=
 x-uvt-b64-Y29tLnVidW50dS5jbG91ZC5kYWlseTpzZXJ2ZXI6MTcuMTA6czM5MHggMjAxNzExMjI= /var/lib/uvtool/libvirt/images/x-uvt-b64-Y29tLnVidW50dS5jbG91ZC5kYWlseTpzZXJ2ZXI6MTcuMTA6czM5MHggMjAxNzExMjI=
 x-uvt-b64-Y29tLnVidW50dS5jbG91ZC5kYWlseTpzZXJ2ZXI6MTguMDQ6czM5MHggMjAxNzExMjcuMQ== /var/lib/uvtool/libvirt/images/x-uvt-b64-Y29tLnVidW50dS5jbG91ZC5kYWlseTpzZXJ2ZXI6MTguMDQ6czM5MHggMjAxNzExMjcuMQ==
 x-uvt-b64-Y29tLnVidW50dS5jbG91ZC5kYWlseTpzZXJ2ZXI6MTYuMDQ6czM5MHggMjAxNzExMjI= /var/lib/uvtool/libvirt/images/x-uvt-b64-Y29tLnVidW50dS5jbG91ZC5kYWlseTpzZXJ2ZXI6MTYuMDQ6czM5MHggMjAxNzExMjI=

+ lxc exec testkvm-xenial-tononshared -- uvt-simplestreams-libvirt query
release=artful arch=s390x label=daily (20171122)
release=bionic arch=s390x label=daily (20171127.1)
release=xenial arch=s390x label=daily (20171122)
release=zesty arch=s390x label=daily (20171123)
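The verification pass above boils down to three read-only checks per migration peer; as a sketch (the real loop iterates ${MIGRATIONPEERS} rather than a literal host list):

# Per-host pool verification, condensed from the trace above.
for host in testkvm-xenial-from testkvm-xenial-to testkvm-xenial-noupd testkvm-xenial-tononshared; do
    lxc exec "${host}" -- virsh pool-refresh uvtool
    lxc exec "${host}" -- virsh vol-list --pool uvtool
    lxc exec "${host}" -- uvt-simplestreams-libvirt query
done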
+ [[ 1 -eq 0 ]]
+ [[ 1 -eq 2 ]]
+ [[ 1 -eq 0 ]]
+ [[ 1 -eq 3 ]]
+ [[ 1 -eq 0 ]]
+ [[ 1 -eq 4 ]]
+ [[ 1 -eq 0 ]]
+ [[ 1 -eq 5 ]]
+ set -x
+ cleanlxd
+ local 'suffix=from to noupd tononshared'
+ local retries=0
+ BashBacktrace
+ [[ true != \t\r\u\e ]]
+ set +x
Backtrace:main:1857 -> cleanlxd:1454
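The Backtrace line is emitted by the script's BashBacktrace helper; its exact implementation is not visible in this trace, but a minimal version built on bash's FUNCNAME/BASH_LINENO call-stack arrays reproduces the same output format:

# Assumed implementation sketch; yields "Backtrace:main:1857 -> cleanlxd:1454"
# when main calls cleanlxd at line 1857 and cleanlxd calls BashBacktrace
# at line 1454, as in the trace above.
BashBacktrace() {
    local i out="Backtrace:"
    for (( i=${#FUNCNAME[@]}-1; i>0; i-- )); do
        out+="${FUNCNAME[$i]}:${BASH_LINENO[$((i-1))]}"
        (( i > 1 )) && out+=" -> "
    done
    echo "${out}"
}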
+ for release in '${SPAWNRELEASES}'
+ logmsg 3 'stop xenial container'
+ local lvl=3
+ local 'msg=stop xenial container'
+ local sameline=0
+ [[ 3 -ne 0 ]]
+ level[${1}]=5
+ for i in '{1..3}'
+ [[ 3 -lt 1 ]]
+ for i in '{1..3}'
+ [[ 3 -lt 2 ]]
+ for i in '{1..3}'
+ [[ 3 -lt 3 ]]
+ [[ 0 -ne 1 ]]
+ printf '\n'
+ tee -a qemu-libvirt-test.status

++ seq 2 3
+ for i in '$(seq 2 "${lvl}")'
+ printf '  '
+ for i in '$(seq 2 "${lvl}")'
+ printf '  '
+ [[ 0 -ne 1 ]]
+ printf '%d.%d.%d ' 5 2 5
+ tee -a qemu-libvirt-test.status
5.2.5 ++ date +%H:%M:%S
+ tee -a qemu-libvirt-test.status
+ printf '(%s): ' 04:51:57
(04:51:57): + printf %s 'stop xenial container'
+ tee -a qemu-libvirt-test.status
stop xenial container+ for direction in '${suffix}'
+ lxc stop --force testkvm-xenial-from
+ for direction in '${suffix}'
+ lxc stop --force testkvm-xenial-to
+ for direction in '${suffix}'
+ lxc stop --force testkvm-xenial-noupd
+ for direction in '${suffix}'
+ lxc stop --force testkvm-xenial-tononshared
+ for release in '${SPAWNRELEASES}'
+ logmsg 3 'clean xenial container'
+ local lvl=3
+ local 'msg=clean xenial container'
+ local sameline=0
+ [[ 3 -ne 0 ]]
+ level[${1}]=6
+ for i in '{1..3}'
+ [[ 3 -lt 1 ]]
+ for i in '{1..3}'
+ [[ 3 -lt 2 ]]
+ for i in '{1..3}'
+ [[ 3 -lt 3 ]]
+ [[ 0 -ne 1 ]]
+ printf '\n'
+ tee -a qemu-libvirt-test.status

++ seq 2 3
+ for i in '$(seq 2 "${lvl}")'
+ printf '  '
+ for i in '$(seq 2 "${lvl}")'
+ printf '  '
+ [[ 0 -ne 1 ]]
+ printf '%d.%d.%d ' 5 2 6
+ tee -a qemu-libvirt-test.status
5.2.6 + tee -a qemu-libvirt-test.status
++ date +%H:%M:%S
+ printf '(%s): ' 04:52:02
(04:52:02): + printf %s 'clean xenial container'
+ tee -a qemu-libvirt-test.status
clean xenial container+ for direction in '${suffix}'
+ lxc info testkvm-xenial-from
Name: testkvm-xenial-from
Remote: unix:/var/lib/lxd/unix.socket
Architecture: s390x
Created: 2017/11/29 03:22 UTC
Status: Stopped
Type: persistent
Profiles: default, kvm
+ retries=5
+ '[' 5 -ge 0 ']'
+ retries=4
+ lxc delete --force testkvm-xenial-from
+ break
+ lxc info testkvm-xenial-from
error: not found
+ lxc delete --force testkvm-xenial-from
error: not found
+ /bin/true
+ for direction in '${suffix}'
+ lxc info testkvm-xenial-to
Name: testkvm-xenial-to
Remote: unix:/var/lib/lxd/unix.socket
Architecture: s390x
Created: 2017/11/29 03:24 UTC
Status: Stopped
Type: persistent
Profiles: default, kvm
+ retries=5
+ '[' 5 -ge 0 ']'
+ retries=4
+ lxc delete --force testkvm-xenial-to
+ break
+ lxc info testkvm-xenial-to
error: not found
+ lxc delete --force testkvm-xenial-to
error: not found
+ /bin/true
+ for direction in '${suffix}'
+ lxc info testkvm-xenial-noupd
Name: testkvm-xenial-noupd
Remote: unix:/var/lib/lxd/unix.socket
Architecture: s390x
Created: 2017/11/29 03:26 UTC
Status: Stopped
Type: persistent
Profiles: default, kvm
+ retries=5
+ '[' 5 -ge 0 ']'
+ retries=4
+ lxc delete --force testkvm-xenial-noupd
+ break
+ lxc info testkvm-xenial-noupd
error: not found
+ lxc delete --force testkvm-xenial-noupd
error: not found
+ /bin/true
+ for direction in '${suffix}'
+ lxc info testkvm-xenial-tononshared
Name: testkvm-xenial-tononshared
Remote: unix:/var/lib/lxd/unix.socket
Architecture: s390x
Created: 2017/11/29 03:28 UTC
Status: Stopped
Type: persistent
Profiles: default, kvm
+ retries=5
+ '[' 5 -ge 0 ']'
+ retries=4
+ lxc delete --force testkvm-xenial-tononshared
+ break
+ lxc info testkvm-xenial-tononshared
error: not found
+ lxc delete --force testkvm-xenial-tononshared
error: not found
+ /bin/true
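Container teardown above wraps "lxc delete --force" in a bounded retry (retries starts at 5 and counts down) and then tolerates a second delete of the already-gone container; roughly:

# Sketch of the retry pattern traced above; the pause between attempts is
# an assumption, since every first delete succeeds in this run and the
# retry path is never exercised.
delete_container() {
    local name=$1 retries=5
    while [ "${retries}" -ge 0 ]; do
        retries=$((retries - 1))
        lxc delete --force "${name}" && break
        sleep 5
    done
    lxc info "${name}" || true             # expected: "error: not found"
    lxc delete --force "${name}" || /bin/true
}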
+ cleankvm
+ local release=unset
+ BashBacktrace
+ [[ true != \t\r\u\e ]]
+ set +x
Backtrace:main:1858 -> cleankvm:1443
+ for release in '$TESTEDRELEASES'
+ uvt-kvm destroy xenial-qemu-test
uvt-kvm: error: domain 'xenial-qemu-test' not found.
+ true
+ uvt-kvm destroy xenial-libvirt-test
uvt-kvm: error: domain 'xenial-libvirt-test' not found.
+ true
+ cleanping
+ local pid=unset
+ for pid in '"${pingpids[@]:-}"'
+ [[ -n '' ]]
+ pkill -TERM -P 43333
+ /bin/true
+ sleep 5
+ pkill -KILL -P 43333
+ /bin/true
+ sleep 5
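cleanping does a classic two-phase kill: SIGTERM first, a grace period, then SIGKILL for stragglers. A sketch (the use of $$ for the parent PID, which appears as 43333 in the trace, and the handling of an empty pingpids array are assumptions):

    #!/bin/bash
    # Stop background ping processes: ask nicely with TERM, then force
    # with KILL after a grace period. pkill -P matches children of a PID.
    cleanping() {
        local pid=unset
        for pid in "${pingpids[@]:-}"; do
            # Kill individually tracked ping PIDs, if any were recorded.
            [[ -n "${pid}" ]] && kill -TERM "${pid}"
        done
        pkill -TERM -P $$ || /bin/true    # presumably $$ = 43333 in the trace
        sleep 5
        pkill -KILL -P $$ || /bin/true
        sleep 5
    }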
+ set +x


Finished, overall status (RC=8):


1.0.0 (03:22:05): stage 0: prepare environment spawned releases 'xenial' tested releases 'xenial'
  1.1.0 (03:22:05): cleanup
    1.1.1 (03:22:06): stop xenial container
    1.1.2 (03:22:11): clean xenial container
  1.2.0 (03:22:19): create shared ssh key
  1.3.0 (03:22:20): custom lxd profile
  1.4.0 (03:22:20): spawn containers
  1.5.0 (03:22:20): spawn lxdkvm containers for xenial
    1.5.1 (03:22:20): create lxdkvm for xenial mode from
    1.5.2 (03:24:41): prep ssh on testkvm-xenial-from
    1.5.3 (03:24:43): prep testkvm-xenial-from libvirt for migration
    1.5.4 (03:24:52): create lxdkvm for xenial mode to
    1.5.5 (03:26:31): prep ssh on testkvm-xenial-to
    1.5.6 (03:26:32): prep testkvm-xenial-to libvirt for migration
    1.5.7 (03:26:40): create lxdkvm for xenial mode noupd
    1.5.8 (03:28:18): prep ssh on testkvm-xenial-noupd
    1.5.9 (03:28:20): prep testkvm-xenial-noupd libvirt for migration
    1.5.10 (03:28:29): create lxdkvm for xenial mode tononshared
    1.5.11 (03:30:44): prep ssh on testkvm-xenial-tononshared
    1.5.12 (03:30:45): prep testkvm-xenial-tononshared libvirt for migration
  1.6.0 (03:30:54): spread hosts info
  1.7.0 (03:30:55): initial daily image sync
  1.8.0 (03:31:36): unshare non shared container
2.0.0 (03:34:52): Version info after initial setup
  2.1.0 (03:34:52): Version at testkvm-xenial-from: - qemu: 1:2.5+dfsg-5ubuntu10.16 libvirt: 1.3.1-1ubuntu10.15
  2.2.0 (03:34:53): Bios versions at testkvm-xenial-from: - ipxe: not-installed slof: not-installed efi: not-installed
  2.3.0 (03:34:53): Version at testkvm-xenial-to: - qemu: 1:2.5+dfsg-5ubuntu10.16 libvirt: 1.3.1-1ubuntu10.15
  2.4.0 (03:34:53): Bios versions at testkvm-xenial-to: - ipxe: not-installed slof: not-installed efi: not-installed
  2.5.0 (03:34:54): Version at testkvm-xenial-noupd: - qemu: 1:2.5+dfsg-5ubuntu10.16 libvirt: 1.3.1-1ubuntu10.15
  2.6.0 (03:34:54): Bios versions at testkvm-xenial-noupd: - ipxe: not-installed slof: not-installed efi: not-installed
  2.7.0 (03:34:54): Version at testkvm-xenial-tononshared: - qemu: 1:2.5+dfsg-5ubuntu10.16 libvirt: 1.3.1-1ubuntu10.15
  2.8.0 (03:34:55): Bios versions at testkvm-xenial-tononshared: - ipxe: not-installed slof: not-installed efi: not-installed
3.0.0 (03:34:55): stage 1: in-release migrations
  3.1.0 (03:34:55): Prep xenial guest on testkvm-xenial-from
    3.1.1 (03:34:55): Remove xenial guest on testkvm-xenial-from
    3.1.2 (03:35:24): spawn guests
    3.1.3 (03:39:55): machine type check  => Pass
    3.1.4 (03:40:00): Test machine type uniqueness within xenial  => Pass
  3.2.0 (03:40:00): Test migrations within xenial - round 1/5
  3.3.0 (03:40:00): Test live migration (extra option '') of a xenial guest testkvm-xenial-from/testkvm-xenial-to
    3.3.1 (03:40:00): live migration (extra option '') testkvm-xenial-from -> testkvm-xenial-to  => Pass
    3.3.2 (03:40:08): Check if guest kvmguest-xenial-normal on testkvm-xenial-to is alive  => Pass
    3.3.3 (03:40:34): live migration back (extra option '') testkvm-xenial-to -> testkvm-xenial-from  => Pass
    3.3.4 (03:40:40): Check if guest kvmguest-xenial-normal on testkvm-xenial-from is alive  => Pass
  3.4.0 (03:40:58): Test saverestore migration of a xenial guest testkvm-xenial-from/testkvm-xenial-to
    3.4.1 (03:40:58): saverestore migration testkvm-xenial-from -> testkvm-xenial-to  => Pass
    3.4.2 (03:41:06): Check if guest kvmguest-xenial-saverestore on testkvm-xenial-to is alive  => Pass
    3.4.3 (03:41:30): saverestore migration back testkvm-xenial-to -> testkvm-xenial-from  => Pass
    3.4.4 (03:41:38): Check if guest kvmguest-xenial-saverestore on testkvm-xenial-from is alive  => Pass
  3.5.0 (03:41:56): Test postcopy live migration of a xenial guest testkvm-xenial-from/testkvm-xenial-to
    3.5.1 (03:41:56): postcopy-after-precopy live migration testkvm-xenial-from -> testkvm-xenial-to  => Skip reason=postcopy migration not tried (not supported)
  3.6.0 (03:41:57): Test migrations within xenial - round 2/5
  3.7.0 (03:41:57): Test live migration (extra option '') of a xenial guest testkvm-xenial-from/testkvm-xenial-to
    3.7.1 (03:41:57): live migration (extra option '') testkvm-xenial-from -> testkvm-xenial-to  => Pass
    3.7.2 (03:42:03): Check if guest kvmguest-xenial-normal on testkvm-xenial-to is alive  => Pass
    3.7.3 (03:42:27): live migration back (extra option '') testkvm-xenial-to -> testkvm-xenial-from  => Pass
    3.7.4 (03:42:34): Check if guest kvmguest-xenial-normal on testkvm-xenial-from is alive  => Pass
  3.8.0 (03:42:52): Test saverestore migration of a xenial guest testkvm-xenial-from/testkvm-xenial-to
    3.8.1 (03:42:52): saverestore migration testkvm-xenial-from -> testkvm-xenial-to  => Pass
    3.8.2 (03:43:00): Check if guest kvmguest-xenial-saverestore on testkvm-xenial-to is alive  => Pass
    3.8.3 (03:43:24): saverestore migration back testkvm-xenial-to -> testkvm-xenial-from  => Pass
    3.8.4 (03:43:32): Check if guest kvmguest-xenial-saverestore on testkvm-xenial-from is alive  => Pass
  3.9.0 (03:43:49): Test postcopy live migration of a xenial guest testkvm-xenial-from/testkvm-xenial-to
    3.9.1 (03:43:50): postcopy-after-precopy live migration testkvm-xenial-from -> testkvm-xenial-to  => Skip reason=postcopy migration not tried (not supported)
  3.10.0 (03:43:50): Test migrations within xenial - round 3/5
  3.11.0 (03:43:50): Test live migration (extra option '') of a xenial guest testkvm-xenial-from/testkvm-xenial-to
    3.11.1 (03:43:50): live migration (extra option '') testkvm-xenial-from -> testkvm-xenial-to  => Pass
    3.11.2 (03:43:56): Check if guest kvmguest-xenial-normal on testkvm-xenial-to is alive  => Pass
    3.11.3 (03:44:21): live migration back (extra option '') testkvm-xenial-to -> testkvm-xenial-from  => Pass
    3.11.4 (03:44:27): Check if guest kvmguest-xenial-normal on testkvm-xenial-from is alive  => Pass
  3.12.0 (03:44:45): Test saverestore migration of a xenial guest testkvm-xenial-from/testkvm-xenial-to
    3.12.1 (03:44:45): saverestore migration testkvm-xenial-from -> testkvm-xenial-to  => Pass
    3.12.2 (03:44:53): Check if guest kvmguest-xenial-saverestore on testkvm-xenial-to is alive  => Pass
    3.12.3 (03:45:17): saverestore migration back testkvm-xenial-to -> testkvm-xenial-from  => Pass
    3.12.4 (03:45:24): Check if guest kvmguest-xenial-saverestore on testkvm-xenial-from is alive  => Pass
  3.13.0 (03:45:35): Test postcopy live migration of a xenial guest testkvm-xenial-from/testkvm-xenial-to
    3.13.1 (03:45:35): postcopy-after-precopy live migration testkvm-xenial-from -> testkvm-xenial-to  => Skip reason=postcopy migration not tried (not supported)
  3.14.0 (03:45:35): Test migrations within xenial - round 4/5
  3.15.0 (03:45:35): Test live migration (extra option '') of a xenial guest testkvm-xenial-from/testkvm-xenial-to
    3.15.1 (03:45:35): live migration (extra option '') testkvm-xenial-from -> testkvm-xenial-to  => Pass
    3.15.2 (03:45:42): Check if guest kvmguest-xenial-normal on testkvm-xenial-to is alive  => Pass
    3.15.3 (03:46:06): live migration back (extra option '') testkvm-xenial-to -> testkvm-xenial-from  => Pass
    3.15.4 (03:46:12): Check if guest kvmguest-xenial-normal on testkvm-xenial-from is alive  => Pass
  3.16.0 (03:46:30): Test saverestore migration of a xenial guest testkvm-xenial-from/testkvm-xenial-to
    3.16.1 (03:46:30): saverestore migration testkvm-xenial-from -> testkvm-xenial-to  => Pass
    3.16.2 (03:46:38): Check if guest kvmguest-xenial-saverestore on testkvm-xenial-to is alive  => Pass
    3.16.3 (03:47:02): saverestore migration back testkvm-xenial-to -> testkvm-xenial-from  => Pass
    3.16.4 (03:47:10): Check if guest kvmguest-xenial-saverestore on testkvm-xenial-from is alive  => Pass
  3.17.0 (03:47:21): Test postcopy live migration of a xenial guest testkvm-xenial-from/testkvm-xenial-to
    3.17.1 (03:47:21): postcopy-after-precopy live migration testkvm-xenial-from -> testkvm-xenial-to  => Skip reason=postcopy migration not tried (not supported)
  3.18.0 (03:47:21): Test migrations within xenial - round 5/5
  3.19.0 (03:47:21): Test live migration (extra option '') of a xenial guest testkvm-xenial-from/testkvm-xenial-to
    3.19.1 (03:47:21): live migration (extra option '') testkvm-xenial-from -> testkvm-xenial-to  => Pass
    3.19.2 (03:47:27): Check if guest kvmguest-xenial-normal on testkvm-xenial-to is alive  => Pass
    3.19.3 (03:47:52): live migration back (extra option '') testkvm-xenial-to -> testkvm-xenial-from  => Pass
    3.19.4 (03:47:58): Check if guest kvmguest-xenial-normal on testkvm-xenial-from is alive  => Pass
  3.20.0 (03:48:16): Test saverestore migration of a xenial guest testkvm-xenial-from/testkvm-xenial-to
    3.20.1 (03:48:16): saverestore migration testkvm-xenial-from -> testkvm-xenial-to  => Pass
    3.20.2 (03:48:24): Check if guest kvmguest-xenial-saverestore on testkvm-xenial-to is alive  => Pass
    3.20.3 (03:48:48): saverestore migration back testkvm-xenial-to -> testkvm-xenial-from  => Pass
    3.20.4 (03:48:56): Check if guest kvmguest-xenial-saverestore on testkvm-xenial-from is alive  => Pass
  3.21.0 (03:49:14): Test postcopy live migration of a xenial guest testkvm-xenial-from/testkvm-xenial-to
    3.21.1 (03:49:14): postcopy-after-precopy live migration testkvm-xenial-from -> testkvm-xenial-to  => Skip reason=postcopy migration not tried (not supported)
  3.22.0 (03:49:15): Test repetitive live migration within xenial - round 1/10
  3.23.0 (03:49:15): Test live migration (extra option '') of a xenial guest testkvm-xenial-from/testkvm-xenial-to
    3.23.1 (03:49:15): live migration (extra option '') testkvm-xenial-from -> testkvm-xenial-to  => Pass
    3.23.2 (03:49:21): Check if guest kvmguest-xenial-normal on testkvm-xenial-to is alive  => Pass
    3.23.3 (03:49:46): live migration back (extra option '') testkvm-xenial-to -> testkvm-xenial-from  => Pass
    3.23.4 (03:49:54): Check if guest kvmguest-xenial-normal on testkvm-xenial-from is alive  => Pass
  3.24.0 (03:50:12): Test repetitive live migration within xenial - round 2/10
  3.25.0 (03:50:12): Test live migration (extra option '') of a xenial guest testkvm-xenial-from/testkvm-xenial-to
    3.25.1 (03:50:12): live migration (extra option '') testkvm-xenial-from -> testkvm-xenial-to  => Pass
    3.25.2 (03:50:21): Check if guest kvmguest-xenial-normal on testkvm-xenial-to is alive  => Pass
    3.25.3 (03:50:46): live migration back (extra option '') testkvm-xenial-to -> testkvm-xenial-from  => Pass
    3.25.4 (03:50:54): Check if guest kvmguest-xenial-normal on testkvm-xenial-from is alive  => Pass
  3.26.0 (03:51:12): Test repetitive live migration within xenial - round 3/10
  3.27.0 (03:51:12): Test live migration (extra option '') of a xenial guest testkvm-xenial-from/testkvm-xenial-to
    3.27.1 (03:51:12): live migration (extra option '') testkvm-xenial-from -> testkvm-xenial-to  => Pass
    3.27.2 (03:51:20): Check if guest kvmguest-xenial-normal on testkvm-xenial-to is alive  => Pass
    3.27.3 (03:51:46): live migration back (extra option '') testkvm-xenial-to -> testkvm-xenial-from  => Pass
    3.27.4 (03:51:53): Check if guest kvmguest-xenial-normal on testkvm-xenial-from is alive  => Pass
  3.28.0 (03:52:11): Test repetitive live migration within xenial - round 4/10
  3.29.0 (03:52:11): Test live migration (extra option '') of a xenial guest testkvm-xenial-from/testkvm-xenial-to
    3.29.1 (03:52:11): live migration (extra option '') testkvm-xenial-from -> testkvm-xenial-to  => Pass
    3.29.2 (03:52:18): Check if guest kvmguest-xenial-normal on testkvm-xenial-to is alive  => Pass
    3.29.3 (03:52:44): live migration back (extra option '') testkvm-xenial-to -> testkvm-xenial-from  => Pass
    3.29.4 (03:52:52): Check if guest kvmguest-xenial-normal on testkvm-xenial-from is alive  => Pass
  3.30.0 (03:53:10): Test repetitive live migration within xenial - round 5/10
  3.31.0 (03:53:10): Test live migration (extra option '') of a xenial guest testkvm-xenial-from/testkvm-xenial-to
    3.31.1 (03:53:10): live migration (extra option '') testkvm-xenial-from -> testkvm-xenial-to  => Pass
    3.31.2 (03:53:19): Check if guest kvmguest-xenial-normal on testkvm-xenial-to is alive  => Pass
    3.31.3 (03:53:44): live migration back (extra option '') testkvm-xenial-to -> testkvm-xenial-from  => Pass
    3.31.4 (03:53:52): Check if guest kvmguest-xenial-normal on testkvm-xenial-from is alive  => Pass
  3.32.0 (03:54:10): Test repetitive live migration within xenial - round 6/10
  3.33.0 (03:54:10): Test live migration (extra option '') of a xenial guest testkvm-xenial-from/testkvm-xenial-to
    3.33.1 (03:54:10): live migration (extra option '') testkvm-xenial-from -> testkvm-xenial-to  => Pass
    3.33.2 (03:54:19): Check if guest kvmguest-xenial-normal on testkvm-xenial-to is alive  => Pass
    3.33.3 (03:54:45): live migration back (extra option '') testkvm-xenial-to -> testkvm-xenial-from  => Pass
    3.33.4 (03:54:52): Check if guest kvmguest-xenial-normal on testkvm-xenial-from is alive  => Pass
  3.34.0 (03:55:10): Test repetitive live migration within xenial - round 7/10
  3.35.0 (03:55:10): Test live migration (extra option '') of a xenial guest testkvm-xenial-from/testkvm-xenial-to
    3.35.1 (03:55:10): live migration (extra option '') testkvm-xenial-from -> testkvm-xenial-to  => Pass
    3.35.2 (03:55:16): Check if guest kvmguest-xenial-normal on testkvm-xenial-to is alive  => Pass
    3.35.3 (03:55:41): live migration back (extra option '') testkvm-xenial-to -> testkvm-xenial-from  => Pass
    3.35.4 (03:55:47): Check if guest kvmguest-xenial-normal on testkvm-xenial-from is alive  => Pass
  3.36.0 (03:56:05): Test repetitive live migration within xenial - round 8/10
  3.37.0 (03:56:05): Test live migration (extra option '') of a xenial guest testkvm-xenial-from/testkvm-xenial-to
    3.37.1 (03:56:05): live migration (extra option '') testkvm-xenial-from -> testkvm-xenial-to  => Pass
    3.37.2 (03:56:11): Check if guest kvmguest-xenial-normal on testkvm-xenial-to is alive  => Pass
    3.37.3 (03:56:35): live migration back (extra option '') testkvm-xenial-to -> testkvm-xenial-from  => Pass
    3.37.4 (03:56:41): Check if guest kvmguest-xenial-normal on testkvm-xenial-from is alive  => Pass
  3.38.0 (03:56:59): Test repetitive live migration within xenial - round 9/10
  3.39.0 (03:56:59): Test live migration (extra option '') of a xenial guest testkvm-xenial-from/testkvm-xenial-to
    3.39.1 (03:56:59): live migration (extra option '') testkvm-xenial-from -> testkvm-xenial-to  => Pass
    3.39.2 (03:57:06): Check if guest kvmguest-xenial-normal on testkvm-xenial-to is alive  => Pass
    3.39.3 (03:57:31): live migration back (extra option '') testkvm-xenial-to -> testkvm-xenial-from  => Pass
    3.39.4 (03:57:38): Check if guest kvmguest-xenial-normal on testkvm-xenial-from is alive  => Pass
  3.40.0 (03:57:55): Test repetitive live migration within xenial - round 10/10
  3.41.0 (03:57:55): Test live migration (extra option '') of a xenial guest testkvm-xenial-from/testkvm-xenial-to
    3.41.1 (03:57:55): live migration (extra option '') testkvm-xenial-from -> testkvm-xenial-to  => Pass
    3.41.2 (03:58:02): Check if guest kvmguest-xenial-normal on testkvm-xenial-to is alive  => Pass
    3.41.3 (03:58:27): live migration back (extra option '') testkvm-xenial-to -> testkvm-xenial-from  => Pass
    3.41.4 (03:58:33): Check if guest kvmguest-xenial-normal on testkvm-xenial-from is alive  => Pass
  3.42.0 (03:58:51): Test various further migration options of a xenial guest testkvm-xenial-from/testkvm-xenial-to
  3.43.0 (03:58:51): Test live migration (extra option '--p2p') of a xenial guest testkvm-xenial-from/testkvm-xenial-to
    3.43.1 (03:58:51): live migration (extra option '--p2p') testkvm-xenial-from -> testkvm-xenial-to  => Pass
    3.43.2 (03:58:57): Check if guest kvmguest-xenial-normal on testkvm-xenial-to is alive  => Pass
    3.43.3 (03:59:21): live migration back (extra option '--p2p') testkvm-xenial-to -> testkvm-xenial-from  => Pass
    3.43.4 (03:59:28): Check if guest kvmguest-xenial-normal on testkvm-xenial-from is alive  => Pass
  3.44.0 (03:59:45): Test live migration (extra option '--p2p --tunnelled') of a xenial guest testkvm-xenial-from/testkvm-xenial-to
    3.44.1 (03:59:45): live migration (extra option '--p2p --tunnelled') testkvm-xenial-from -> testkvm-xenial-to  => Pass
    3.44.2 (03:59:54): Check if guest kvmguest-xenial-normal on testkvm-xenial-to is alive  => Pass
    3.44.3 (04:00:19): live migration back (extra option '--p2p --tunnelled') testkvm-xenial-to -> testkvm-xenial-from  => Pass
    3.44.4 (04:00:27): Check if guest kvmguest-xenial-normal on testkvm-xenial-from is alive  => Pass
  3.45.0 (04:00:45): Test live migration (extra option '--change-protection') of a xenial guest testkvm-xenial-from/testkvm-xenial-to
    3.45.1 (04:00:45): live migration (extra option '--change-protection') testkvm-xenial-from -> testkvm-xenial-to  => Pass
    3.45.2 (04:00:51): Check if guest kvmguest-xenial-normal on testkvm-xenial-to is alive  => Pass
    3.45.3 (04:01:16): live migration back (extra option '--change-protection') testkvm-xenial-to -> testkvm-xenial-from  => Pass
    3.45.4 (04:01:22): Check if guest kvmguest-xenial-normal on testkvm-xenial-from is alive  => Pass
  3.46.0 (04:01:40): Test live migration (extra option '--verbose') of a xenial guest testkvm-xenial-from/testkvm-xenial-to
    3.46.1 (04:01:40): live migration (extra option '--verbose') testkvm-xenial-from -> testkvm-xenial-to  => Pass
    3.46.2 (04:01:46): Check if guest kvmguest-xenial-normal on testkvm-xenial-to is alive  => Pass
    3.46.3 (04:02:12): live migration back (extra option '--verbose') testkvm-xenial-to -> testkvm-xenial-from  => Pass
    3.46.4 (04:02:20): Check if guest kvmguest-xenial-normal on testkvm-xenial-from is alive  => Pass
  3.47.0 (04:02:38): Test live migration (extra option '--auto-converge') of a xenial guest testkvm-xenial-from/testkvm-xenial-to
    3.47.1 (04:02:38): live migration (extra option '--auto-converge') testkvm-xenial-from -> testkvm-xenial-to  => Pass
    3.47.2 (04:02:44): Check if guest kvmguest-xenial-normal on testkvm-xenial-to is alive  => Pass
    3.47.3 (04:02:55): live migration back (extra option '--auto-converge') testkvm-xenial-to -> testkvm-xenial-from  => Pass
    3.47.4 (04:03:01): Check if guest kvmguest-xenial-normal on testkvm-xenial-from is alive  => Pass
  3.48.0 (04:03:43): Test migration options without shared storage of a xenial guest testkvm-xenial-from/testkvm-xenial-to
  3.49.0 (04:03:44): Test live migration (extra option '--copy-storage-all') of a xenial guest testkvm-xenial-from/testkvm-xenial-tononshared
    3.49.1 (04:03:44): live migration (extra option '--copy-storage-all') testkvm-xenial-from -> testkvm-xenial-tononshared  => Pass
    3.49.2 (04:04:54): Check if guest kvmguest-xenial-normal on testkvm-xenial-tononshared is alive  => Pass
    3.49.3 (04:05:19): live migration back (extra option '--copy-storage-all') testkvm-xenial-tononshared -> testkvm-xenial-from  => Pass
    3.49.4 (04:06:05): Check if guest kvmguest-xenial-normal on testkvm-xenial-from is alive  => Pass
  3.50.0 (04:06:23): Test live migration (extra option '--copy-storage-inc') of a xenial guest testkvm-xenial-from/testkvm-xenial-tononshared
    3.50.1 (04:06:23): live migration (extra option '--copy-storage-inc') testkvm-xenial-from -> testkvm-xenial-tononshared  => Pass
    3.50.2 (04:07:39): Check if guest kvmguest-xenial-normal on testkvm-xenial-tononshared is alive  => Pass
    3.50.3 (04:08:04): live migration back (extra option '--copy-storage-inc') testkvm-xenial-tononshared -> testkvm-xenial-from  => Pass
    3.50.4 (04:08:46): Check if guest kvmguest-xenial-normal on testkvm-xenial-from is alive  => Pass
  3.51.0 (04:09:17): Test migrations within xenial with background load - round 1/5
  3.52.0 (04:09:17): Test live migration (extra option '') of a xenial guest testkvm-xenial-from/testkvm-xenial-to
    3.52.1 (04:09:17): live migration (extra option '') testkvm-xenial-from -> testkvm-xenial-to  => Pass
    3.52.2 (04:09:25): Check if guest kvmguest-xenial-normal on testkvm-xenial-to is alive  => Pass
    3.52.3 (04:09:44): live migration back (extra option '') testkvm-xenial-to -> testkvm-xenial-from  => Pass
    3.52.4 (04:09:50): Check if guest kvmguest-xenial-normal on testkvm-xenial-from is alive  => Pass
  3.53.0 (04:10:08): Test saverestore migration of a xenial guest testkvm-xenial-from/testkvm-xenial-to
    3.53.1 (04:10:08): saverestore migration testkvm-xenial-from -> testkvm-xenial-to  => Pass
    3.53.2 (04:10:17): Check if guest kvmguest-xenial-saverestore on testkvm-xenial-to is alive  => Pass
    3.53.3 (04:10:41): saverestore migration back testkvm-xenial-to -> testkvm-xenial-from  => Pass
    3.53.4 (04:10:49): Check if guest kvmguest-xenial-saverestore on testkvm-xenial-from is alive  => Pass
  3.54.0 (04:11:08): Test postcopy live migration of a xenial guest testkvm-xenial-from/testkvm-xenial-to
    3.54.1 (04:11:08): postcopy-after-precopy live migration testkvm-xenial-from -> testkvm-xenial-to  => Skip reason=postcopy migration not tried (not supported)
  3.55.0 (04:11:08): Test migrations within xenial with background load - round 2/5
  3.56.0 (04:11:08): Test live migration (extra option '') of a xenial guest testkvm-xenial-from/testkvm-xenial-to
    3.56.1 (04:11:08): live migration (extra option '') testkvm-xenial-from -> testkvm-xenial-to  => Pass
    3.56.2 (04:11:15): Check if guest kvmguest-xenial-normal on testkvm-xenial-to is alive  => Pass
    3.56.3 (04:11:26): live migration back (extra option '') testkvm-xenial-to -> testkvm-xenial-from  => Pass
    3.56.4 (04:11:32): Check if guest kvmguest-xenial-normal on testkvm-xenial-from is alive  => Pass
  3.57.0 (04:11:50): Test saverestore migration of a xenial guest testkvm-xenial-from/testkvm-xenial-to
    3.57.1 (04:11:50): saverestore migration testkvm-xenial-from -> testkvm-xenial-to  => Pass
    3.57.2 (04:11:59): Check if guest kvmguest-xenial-saverestore on testkvm-xenial-to is alive  => Pass
    3.57.3 (04:12:24): saverestore migration back testkvm-xenial-to -> testkvm-xenial-from  => Pass
    3.57.4 (04:12:32): Check if guest kvmguest-xenial-saverestore on testkvm-xenial-from is alive  => Pass
  3.58.0 (04:12:50): Test postcopy live migration of a xenial guest testkvm-xenial-from/testkvm-xenial-to
    3.58.1 (04:12:50): postcopy-after-precopy live migration testkvm-xenial-from -> testkvm-xenial-to  => Skip reason=postcopy migration not tried (not supported)
  3.59.0 (04:12:51): Test migrations within xenial with background load - round 3/5
  3.60.0 (04:12:51): Test live migration (extra option '') of a xenial guest testkvm-xenial-from/testkvm-xenial-to
    3.60.1 (04:12:51): live migration (extra option '') testkvm-xenial-from -> testkvm-xenial-to  => Pass
    3.60.2 (04:12:57): Check if guest kvmguest-xenial-normal on testkvm-xenial-to is alive  => Pass
    3.60.3 (04:13:08): live migration back (extra option '') testkvm-xenial-to -> testkvm-xenial-from  => Pass
    3.60.4 (04:13:15): Check if guest kvmguest-xenial-normal on testkvm-xenial-from is alive  => Pass
  3.61.0 (04:13:33): Test saverestore migration of a xenial guest testkvm-xenial-from/testkvm-xenial-to
    3.61.1 (04:13:33): saverestore migration testkvm-xenial-from -> testkvm-xenial-to  => Pass
    3.61.2 (04:13:41): Check if guest kvmguest-xenial-saverestore on testkvm-xenial-to is alive  => Pass
    3.61.3 (04:14:05): saverestore migration back testkvm-xenial-to -> testkvm-xenial-from  => Pass
    3.61.4 (04:14:13): Check if guest kvmguest-xenial-saverestore on testkvm-xenial-from is alive  => Pass
  3.62.0 (04:14:31): Test postcopy live migration of a xenial guest testkvm-xenial-from/testkvm-xenial-to
    3.62.1 (04:14:31): postcopy-after-precopy live migration testkvm-xenial-from -> testkvm-xenial-to  => Skip reason=postcopy migration not tried (not supported)
  3.63.0 (04:14:32): Test migrations within xenial with background load - round 4/5
  3.64.0 (04:14:32): Test live migration (extra option '') of a xenial guest testkvm-xenial-from/testkvm-xenial-to
    3.64.1 (04:14:32): live migration (extra option '') testkvm-xenial-from -> testkvm-xenial-to  => Pass
    3.64.2 (04:14:39): Check if guest kvmguest-xenial-normal on testkvm-xenial-to is alive  => Pass
    3.64.3 (04:14:50): live migration back (extra option '') testkvm-xenial-to -> testkvm-xenial-from  => Pass
    3.64.4 (04:14:56): Check if guest kvmguest-xenial-normal on testkvm-xenial-from is alive  => Pass
  3.65.0 (04:15:14): Test saverestore migration of a xenial guest testkvm-xenial-from/testkvm-xenial-to
    3.65.1 (04:15:14): saverestore migration testkvm-xenial-from -> testkvm-xenial-to  => Pass
    3.65.2 (04:15:23): Check if guest kvmguest-xenial-saverestore on testkvm-xenial-to is alive  => Pass
    3.65.3 (04:15:47): saverestore migration back testkvm-xenial-to -> testkvm-xenial-from  => Pass
    3.65.4 (04:15:57): Check if guest kvmguest-xenial-saverestore on testkvm-xenial-from is alive  => Pass
  3.66.0 (04:16:15): Test postcopy live migration of a xenial guest testkvm-xenial-from/testkvm-xenial-to
    3.66.1 (04:16:16): postcopy-after-precopy live migration testkvm-xenial-from -> testkvm-xenial-to  => Skip reason=postcopy migration not tried (not supported)
  3.67.0 (04:16:16): Test migrations within xenial with background load - round 5/5
  3.68.0 (04:16:16): Test live migration (extra option '') of a xenial guest testkvm-xenial-from/testkvm-xenial-to
    3.68.1 (04:16:16): live migration (extra option '') testkvm-xenial-from -> testkvm-xenial-to  => Pass
    3.68.2 (04:16:22): Check if guest kvmguest-xenial-normal on testkvm-xenial-to is alive  => Pass
    3.68.3 (04:16:33): live migration back (extra option '') testkvm-xenial-to -> testkvm-xenial-from  => Pass
    3.68.4 (04:16:40): Check if guest kvmguest-xenial-normal on testkvm-xenial-from is alive  => Pass
  3.69.0 (04:16:58): Test saverestore migration of a xenial guest testkvm-xenial-from/testkvm-xenial-to
    3.69.1 (04:16:58): saverestore migration testkvm-xenial-from -> testkvm-xenial-to  => Pass
    3.69.2 (04:17:07): Check if guest kvmguest-xenial-saverestore on testkvm-xenial-to is alive  => Pass
    3.69.3 (04:17:31): saverestore migration back testkvm-xenial-to -> testkvm-xenial-from  => Pass
    3.69.4 (04:17:40): Check if guest kvmguest-xenial-saverestore on testkvm-xenial-from is alive  => Pass
  3.70.0 (04:17:58): Test postcopy live migration of a xenial guest testkvm-xenial-from/testkvm-xenial-to
    3.70.1 (04:17:58): postcopy-after-precopy live migration testkvm-xenial-from -> testkvm-xenial-to  => Skip reason=postcopy migration not tried (not supported)
  3.71.0 (04:17:58): Test bg-loaded repetitive live migration within xenial - round 1/10
  3.72.0 (04:17:58): Test live migration (extra option '') of a xenial guest testkvm-xenial-from/testkvm-xenial-to
    3.72.1 (04:17:58): live migration (extra option '') testkvm-xenial-from -> testkvm-xenial-to  => Pass
    3.72.2 (04:18:05): Check if guest kvmguest-xenial-normal on testkvm-xenial-to is alive  => Pass
    3.72.3 (04:18:16): live migration back (extra option '') testkvm-xenial-to -> testkvm-xenial-from  => Pass
    3.72.4 (04:18:22): Check if guest kvmguest-xenial-normal on testkvm-xenial-from is alive  => Pass
  3.73.0 (04:18:40): Test bg-loaded repetitive live migration within xenial - round 2/10
  3.74.0 (04:18:40): Test live migration (extra option '') of a xenial guest testkvm-xenial-from/testkvm-xenial-to
    3.74.1 (04:18:40): live migration (extra option '') testkvm-xenial-from -> testkvm-xenial-to  => Pass
    3.74.2 (04:18:46): Check if guest kvmguest-xenial-normal on testkvm-xenial-to is alive  => Pass
    3.74.3 (04:19:04): live migration back (extra option '') testkvm-xenial-to -> testkvm-xenial-from  => Pass
    3.74.4 (04:19:10): Check if guest kvmguest-xenial-normal on testkvm-xenial-from is alive  => Pass
  3.75.0 (04:19:28): Test bg-loaded repetitive live migration within xenial - round 3/10
  3.76.0 (04:19:28): Test live migration (extra option '') of a xenial guest testkvm-xenial-from/testkvm-xenial-to
    3.76.1 (04:19:28): live migration (extra option '') testkvm-xenial-from -> testkvm-xenial-to  => Pass
    3.76.2 (04:19:34): Check if guest kvmguest-xenial-normal on testkvm-xenial-to is alive  => Pass
    3.76.3 (04:19:52): live migration back (extra option '') testkvm-xenial-to -> testkvm-xenial-from  => Pass
    3.76.4 (04:19:58): Check if guest kvmguest-xenial-normal on testkvm-xenial-from is alive  => Pass
  3.77.0 (04:20:16): Test bg-loaded repetitive live migration within xenial - round 4/10
  3.78.0 (04:20:16): Test live migration (extra option '') of a xenial guest testkvm-xenial-from/testkvm-xenial-to
    3.78.1 (04:20:16): live migration (extra option '') testkvm-xenial-from -> testkvm-xenial-to  => Pass
    3.78.2 (04:20:23): Check if guest kvmguest-xenial-normal on testkvm-xenial-to is alive  => Pass
    3.78.3 (04:20:41): live migration back (extra option '') testkvm-xenial-to -> testkvm-xenial-from  => Pass
    3.78.4 (04:20:47): Check if guest kvmguest-xenial-normal on testkvm-xenial-from is alive  => Pass
  3.79.0 (04:21:05): Test bg-loaded repetitive live migration within xenial - round 5/10
  3.80.0 (04:21:05): Test live migration (extra option '') of a xenial guest testkvm-xenial-from/testkvm-xenial-to
    3.80.1 (04:21:05): live migration (extra option '') testkvm-xenial-from -> testkvm-xenial-to  => Pass
    3.80.2 (04:21:12): Check if guest kvmguest-xenial-normal on testkvm-xenial-to is alive  => Pass
    3.80.3 (04:21:30): live migration back (extra option '') testkvm-xenial-to -> testkvm-xenial-from  => Pass
    3.80.4 (04:21:36): Check if guest kvmguest-xenial-normal on testkvm-xenial-from is alive  => Pass
  3.81.0 (04:21:54): Test bg-loaded repetitive live migration within xenial - round 6/10
  3.82.0 (04:21:54): Test live migration (extra option '') of a xenial guest testkvm-xenial-from/testkvm-xenial-to
    3.82.1 (04:21:54): live migration (extra option '') testkvm-xenial-from -> testkvm-xenial-to  => Pass
    3.82.2 (04:22:01): Check if guest kvmguest-xenial-normal on testkvm-xenial-to is alive  => Pass
    3.82.3 (04:22:19): live migration back (extra option '') testkvm-xenial-to -> testkvm-xenial-from  => Pass
    3.82.4 (04:22:25): Check if guest kvmguest-xenial-normal on testkvm-xenial-from is alive  => Pass
  3.83.0 (04:22:43): Test bg-loaded repetitive live migration within xenial - round 7/10
  3.84.0 (04:22:43): Test live migration (extra option '') of a xenial guest testkvm-xenial-from/testkvm-xenial-to
    3.84.1 (04:22:43): live migration (extra option '') testkvm-xenial-from -> testkvm-xenial-to  => Pass
    3.84.2 (04:22:49): Check if guest kvmguest-xenial-normal on testkvm-xenial-to is alive  => Pass
    3.84.3 (04:23:07): live migration back (extra option '') testkvm-xenial-to -> testkvm-xenial-from  => Pass
    3.84.4 (04:23:14): Check if guest kvmguest-xenial-normal on testkvm-xenial-from is alive  => Pass
  3.85.0 (04:23:32): Test bg-loaded repetitive live migration within xenial - round 8/10
  3.86.0 (04:23:32): Test live migration (extra option '') of a xenial guest testkvm-xenial-from/testkvm-xenial-to
    3.86.1 (04:23:32): live migration (extra option '') testkvm-xenial-from -> testkvm-xenial-to  => Pass
    3.86.2 (04:23:37): Check if guest kvmguest-xenial-normal on testkvm-xenial-to is alive  => Pass
    3.86.3 (04:23:55): live migration back (extra option '') testkvm-xenial-to -> testkvm-xenial-from  => Pass
    3.86.4 (04:24:01): Check if guest kvmguest-xenial-normal on testkvm-xenial-from is alive  => Pass
  3.87.0 (04:24:19): Test bg-loaded repetitive live migration within xenial - round 9/10
  3.88.0 (04:24:19): Test live migration (extra option '') of a xenial guest testkvm-xenial-from/testkvm-xenial-to
    3.88.1 (04:24:19): live migration (extra option '') testkvm-xenial-from -> testkvm-xenial-to  => Pass
    3.88.2 (04:24:26): Check if guest kvmguest-xenial-normal on testkvm-xenial-to is alive  => Pass
    3.88.3 (04:24:44): live migration back (extra option '') testkvm-xenial-to -> testkvm-xenial-from  => Pass
    3.88.4 (04:24:50): Check if guest kvmguest-xenial-normal on testkvm-xenial-from is alive  => Pass
  3.89.0 (04:25:08): Test bg-loaded repetitive live migration within xenial - round 10/10
  3.90.0 (04:25:08): Test live migration (extra option '') of a xenial guest testkvm-xenial-from/testkvm-xenial-to
    3.90.1 (04:25:08): live migration (extra option '') testkvm-xenial-from -> testkvm-xenial-to  => Pass
    3.90.2 (04:25:14): Check if guest kvmguest-xenial-normal on testkvm-xenial-to is alive  => Pass
    3.90.3 (04:25:32): live migration back (extra option '') testkvm-xenial-to -> testkvm-xenial-from  => Pass
    3.90.4 (04:25:38): Check if guest kvmguest-xenial-normal on testkvm-xenial-from is alive  => Pass
  3.91.0 (04:25:56): Test restart (after migration) of a xenial guest on testkvm-xenial-from
    3.91.1 (04:25:56): Restart test on testkvm-xenial-from with guest kvmguest-xenial-normal  => Pass
    3.91.2 (04:26:14): Check if guest kvmguest-xenial-normal on testkvm-xenial-from is alive  => Pass
    3.91.3 (04:26:25): Restart test on testkvm-xenial-from with guest kvmguest-xenial-saverestore  => Pass
    3.91.4 (04:26:43): Check if guest kvmguest-xenial-saverestore on testkvm-xenial-from is alive  => Pass
    3.91.5 (04:26:54): Restart test on testkvm-xenial-from with guest kvmguest-xenial-postcopy  => Pass
    3.91.6 (04:27:13): Check if guest kvmguest-xenial-postcopy on testkvm-xenial-from is alive  => Pass
    3.91.7 (04:27:24): Remove xenial guest on testkvm-xenial-from
4.0.0 (04:27:49): stage 1b: Migrations into and (optionally) out of new upgrades
  4.1.0 (04:27:49): Prep xenial guest on testkvm-xenial-noupd
    4.1.1 (04:27:49): Remove xenial guest on testkvm-xenial-noupd
    4.1.2 (04:28:09): spawn guests
    4.1.3 (04:32:38): machine type check
  4.2.0 (04:32:41): Test live migration (extra option '') of a xenial guest testkvm-xenial-noupd/testkvm-xenial-to
    4.2.1 (04:32:41): live migration (extra option '') testkvm-xenial-noupd -> testkvm-xenial-to  => Pass
    4.2.2 (04:32:49): Check if guest kvmguest-xenial-normal on testkvm-xenial-to is alive  => Failed detail=live migration failed alive check
  4.3.0 (04:41:50): Test saverestore migration of a xenial guest testkvm-xenial-noupd/testkvm-xenial-to
    4.3.1 (04:41:50): saverestore migration testkvm-xenial-noupd -> testkvm-xenial-to  => Pass
    4.3.2 (04:41:59): Check if guest kvmguest-xenial-saverestore on testkvm-xenial-to is alive  => Pass
    4.3.3 (04:42:26): saverestore migration back testkvm-xenial-to -> testkvm-xenial-noupd  => Pass
    4.3.4 (04:42:35): Check if guest kvmguest-xenial-saverestore on testkvm-xenial-noupd is alive  => Pass
  4.4.0 (04:42:53): Test postcopy live migration of a xenial guest testkvm-xenial-noupd/testkvm-xenial-to
    4.4.1 (04:42:53): postcopy-after-precopy live migration testkvm-xenial-noupd -> testkvm-xenial-to  => Skip reason=postcopy migration not tried (not supported)
  4.5.0 (04:42:53): Test various further migration options of a xenial guest testkvm-xenial-noupd/testkvm-xenial-to
  4.6.0 (04:42:53): Test live migration (extra option '--p2p') of a xenial guest testkvm-xenial-noupd/testkvm-xenial-to
    4.6.1 (04:42:53): live migration (extra option '--p2p') testkvm-xenial-noupd -> testkvm-xenial-to  => Failed detail=migration option --p2p failed
  4.7.0 (04:42:55): Test live migration (extra option '--p2p --tunnelled') of a xenial guest testkvm-xenial-noupd/testkvm-xenial-to
    4.7.1 (04:42:55): live migration (extra option '--p2p --tunnelled') testkvm-xenial-noupd -> testkvm-xenial-to  => Failed detail=migration option --p2p --tunnelled failed
  4.8.0 (04:42:56): Test live migration (extra option '--change-protection') of a xenial guest testkvm-xenial-noupd/testkvm-xenial-to
    4.8.1 (04:42:56): live migration (extra option '--change-protection') testkvm-xenial-noupd -> testkvm-xenial-to  => Failed detail=migration option --change-protection failed
  4.9.0 (04:42:58): Test live migration (extra option '--verbose') of a xenial guest testkvm-xenial-noupd/testkvm-xenial-to
    4.9.1 (04:42:58): live migration (extra option '--verbose') testkvm-xenial-noupd -> testkvm-xenial-to  => Failed detail=migration option --verbose failed
  4.10.0 (04:43:00): Test live migration (extra option '--auto-converge') of a xenial guest testkvm-xenial-noupd/testkvm-xenial-to
    4.10.1 (04:43:00): live migration (extra option '--auto-converge') testkvm-xenial-noupd -> testkvm-xenial-to  => Failed detail=migration option --auto-converge failed
  4.11.0 (04:43:02): Test migration options without shared storage of a xenial guest testkvm-xenial-noupd/testkvm-xenial-to
  4.12.0 (04:43:11): Test live migration (extra option '--copy-storage-all') of a xenial guest testkvm-xenial-noupd/testkvm-xenial-tononshared
    4.12.1 (04:43:11): live migration (extra option '--copy-storage-all') testkvm-xenial-noupd -> testkvm-xenial-tononshared  => Failed detail=migration option --copy-storage-all failed
  4.13.0 (04:43:13): Test live migration (extra option '--copy-storage-inc') of a xenial guest testkvm-xenial-noupd/testkvm-xenial-tononshared
    4.13.1 (04:43:13): live migration (extra option '--copy-storage-inc') testkvm-xenial-noupd -> testkvm-xenial-tononshared  => Failed detail=migration option --copy-storage-inc failed
    4.13.2 (04:43:15): Remove xenial guest on testkvm-xenial-noupd
5.0.0 (04:43:40): stage 1c: Check guests surviving an upgrade
  5.1.0 (04:43:40): Prep xenial guest on testkvm-xenial-noupd
    5.1.1 (04:43:40): Remove xenial guest on testkvm-xenial-noupd
    5.1.2 (04:44:03): spawn guests
    5.1.3 (04:50:17): machine type check
  5.2.0 (04:50:21): Test upgrade to proposed/ppa a xenial guest on testkvm-xenial-noupd
    5.2.1 (04:50:21): Upgrade test on testkvm-xenial-noupd  => Pass
    5.2.2 (04:51:12): Check if guest kvmguest-xenial-saverestore on testkvm-xenial-noupd is alive  => Pass
    5.2.3 (04:51:24): Check if guest kvmguest-xenial-normal on testkvm-xenial-noupd is alive  => Pass
    5.2.4 (04:51:35): Remove xenial guest on testkvm-xenial-noupd
    5.2.5 (04:51:57): stop xenial container
    5.2.6 (04:52:02): clean xenial container
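Each "live migration (extra option ...)" entry in the summary above maps to one virsh migrate invocation with that option appended. A sketch of the presumable shape of a single leg (the URI, guest name, and option handling are assumptions; flags such as --p2p, --tunnelled, --auto-converge, and --copy-storage-all are standard virsh migrate options):

    #!/bin/bash
    # One outbound migration leg, as presumably run on the source host.
    guest="kvmguest-xenial-normal"
    EXTRA="${1:-}"                      # e.g. "--p2p --tunnelled"
    # shellcheck disable=SC2086         # EXTRA must word-split into flags
    virsh migrate --live ${EXTRA} "${guest}" \
        "qemu+ssh://testkvm-xenial-to/system"
    # The "back" leg would run the mirror command on the target host,
    # with destination qemu+ssh://testkvm-xenial-from/system.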
+ exit 8
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending e-mails to: josh.powers@canonical.com
Recording test results
ERROR: Step 'Publish JUnit test result report' failed: No test report files were found. Configuration error?
Started calculate disk usage of build
Finished Calculation of disk usage of build in 0 seconds
Started calculate disk usage of workspace
Finished Calculation of disk usage of workspace in 0 seconds
Warning: you have no plugins providing access control for builds, so falling back to legacy behavior of permitting any downstream builds to be triggered
Triggering a new build of virt-in-release-s390x-z
Finished: FAILURE