Explanation: I work for IBM and am trying to prototype netboot (PXE) for Ubuntu. The goal is to have a MAAS deployment server deploy to 4 VMs managed under Ubuntu 14.04 to test out Juju bundle #39 (openstack-base). Since my team does not have 4 physical servers, we are attempting this using VMs. Note this is a ppc64el environment.
Problem: With the netboot (PXE) mini.iso installed in a VM we encounter "Guest has not initialized the display yet". The VM does not boot, goes to a paused state, and MAAS cannot use this VM.
Questions
Regarding the netboot mini.iso from wiki.ubuntu.com/ppc64el:
Can it be used to establish a VM that boots from PXE?
Or are these images only for installation on bare metal?
I found this at https://lists.gnu.org/archive/html/qemu-discuss/2015-03/msg00027.html, which says the message can mean a "kernel that won't work on this board model" or a "kernel [that] has no graphics support". This leads me to believe that the netboot/PXE mini.iso cannot be used in a VM and needs bare metal.
Is putting the netboot ISO into a VM possible?
Is it not possible because the VM's emulated graphics card is not supported by the mini.iso?
QEMU window opens up, but I am getting this error "Guest has not initialized the display yet" I had enabled -sdl option while configuring qemu, but I am still getting that error.
This isn't an error. It is just QEMU telling you that the guest OS has not yet done what it needs to do to turn on the emulated graphics card and display output.
In this case the likely reason for this is that you've tried to run a kernel that won't work on this board model, and so it has crashed before it got anywhere. You can also see this message if the kernel has no graphics support built in and is just doing output to serial console.
- If we can use the netboot mini.iso for a VM, what are we doing wrong in the XML domain definition for the graphics?
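One way to narrow this down (a sketch, not from the original post) is to boot the netboot kernel directly under pseries emulation with `-nographic`, so any serial console output is visible instead of a blank VNC display. The kernel/initrd paths below are placeholders for files extracted from the mini.iso:

```shell
# Sketch: boot the ppc64el netboot kernel with serial console only.
# ./vmlinux and ./initrd.gz are placeholder paths for the kernel and
# initrd extracted from the mini.iso -- adjust to your setup.
qemu-system-ppc64 -M pseries -m 2048 -nographic \
    -kernel ./vmlinux -initrd ./initrd.gz
```

If the kernel is alive but serial-only, you will see boot messages on the terminal; if it crashes immediately, that distinguishes "no graphics support" from "kernel won't run on this board model".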
 
vm6.xml (the relevant parts):
<emulator>/usr/bin/qemu-system-ppc64le</emulator>
<controller type='usb' index='0'>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
</controller>
<controller type='pci' index='0' model='pci-root'/>
<controller type='virtio-serial' index='0'>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
</controller>
<interface type='bridge'>
  <mac address='00:1a:64:30:12:11'/>
  <source bridge='br3'/>
  <model type='rtl8139'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
<serial type='pty'>
  <target port='0'/>
  <address type='spapr-vio' reg='0x30000000'/>
</serial>
<console type='pty'>
  <target type='serial' port='0'/>
  <address type='spapr-vio' reg='0x30000000'/>
</console>
<input type='tablet' bus='usb'/>
<input type='keyboard' bus='usb'/>
<input type='mouse' bus='usb'/>
<graphics type='vnc' port='-1' autoport='yes' keymap='en-us'/>
Environment Info
Server: Ubuntu 14.04 trusty - ppc64el
Netboot mini.iso from wiki.ubuntu.com/ppc64el being used
Juju bundle: jujucharms.com/u/james-page/openstack-base/bundle/39/
OpenStack with Ceph storage, requires 4 machines
Using MAAS to boot VMs: askubuntu.com/questions/292061/how-to-configure-maas-to-be-able-to-boot-virtual-machines
Outcome Added Sep 22, 2015
==========================
Explanation of what was done to make things work, with an example XML (see the XML Code section). The XML gets the ppc64le VM to a running state (not paused). Once the VM reached the running state we still had to modify it in virt-manager to set up a SCSI disk for the deployed VM. The XML code below is the version finally modified in virt-manager with an 8G SCSI disk.
Notes
Need to run ppc64_cpu --smt=off on the host
xml: Needed to specify arch ppc64
xml: Used the qemu-system-ppc64 emulator
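The host-side SMT step above can be sketched as follows (ppc64_cpu ships in the powerpc-utils package on Ubuntu; this must run on the ppc64el host, not in a VM):

```shell
# Disable SMT on the ppc64el host before starting the VMs
sudo ppc64_cpu --smt=off

# Query the current SMT state to confirm the change took effect
ppc64_cpu --smt
```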
Added on 10/07/2014. I forgot to mention that the XML has to be changed to use the VNC console; see the supplied XML. The console tags need to be present (which I believe they should be by default). Also, when setting up the XML you may hit an "already used slot" issue with the bus/slot definitions; if so, adjust the bus/slot numbers as shown in the example below.
<console type='pty'>
  <target type='serial' port='0'/>
  <address type='spapr-vio' reg='0x30000000'/>
</console>
<graphics type='vnc' port='-1' autoport='yes' listen='0.0.0.0' keymap='en-us'>
  <listen type='address' address='0.0.0.0'/>
</graphics>
Note that this gets the VM into a running state.
Now, the original XML contained the mini.iso. Once running, the VM commissioned to the Ready state in MAAS. However, when deployed (Start button) the VM started up but failed deployment. From virt-manager we removed the mini.iso and created an 8G SCSI disk. The SCSI disk is needed by MAAS to hold the deployed OS (in our case Ubuntu 14.04 trusty).
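If you prefer the command line over virt-manager, a sketch of creating the 8G disk image (the path matches the `<source file=...>` in the XML Code section; adjust to your setup):

```shell
# Create the 8G qcow2 backing image for the VM's SCSI disk.
# The path matches the <source file=...> element in the domain XML.
qemu-img create -f qcow2 /var/lib/libvirt/images/vm5-1.qcow2 8G
```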
Note on virt-manager: using virt-manager makes things really easy. Our setup used a private network and we normally use VNC to connect to servers. We connect via VNC to the server that hosts all the VMs (the VM server). From there we tried to connect via VNC to the MAAS deployment server (running Ubuntu 15.04), but could not make VNC work on the 15.04 server, so we used ssh -X (X11 forwarding). Not a production-level solution, but if you are just testing this environment out, it works in a pinch.
Note that to access the Ubuntu systems properly you need to set up the SSH keys as specified in the Ubuntu documentation.
See: /maas.ubuntu.com/docs/nodes.html
Note we used root for our testing, so if you do the same for testing, the maas userid does not have to be set up; just run ssh-keygen (on the MAAS deployment server).
The target server hosting the VMs (the VM server in this doc) needs the public key, so ssh-copy-id -i ~/.ssh/id_rsa ubuntu@x.x.x.x has to be done, and you have to use the ubuntu userid. The ubuntu userid is the default userid of deployed VMs.
Once the above is done you can access the deployed VM from the VM server with something like ssh ubuntu@x.x.x.x (where x.x.x.x is the IP address of the created Ubuntu VM).
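Putting the key steps above together as one sketch (x.x.x.x is the deployed VM's IP address, as in the text; run the first two commands on the MAAS deployment server):

```shell
# Generate a key pair if one does not already exist
ssh-keygen -t rsa

# Copy the public key to the deployed VM; "ubuntu" is the
# default userid of VMs deployed by MAAS
ssh-copy-id -i ~/.ssh/id_rsa.pub ubuntu@x.x.x.x

# Log in to the deployed VM
ssh ubuntu@x.x.x.x
```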
Once this was fixed, MAAS deployed Ubuntu 14.04 to the server.
If you run virt-manager you can watch the sequence of processing during deploy. In our test case we observed in virt-manager that the boot sequence was still set to network, so we changed it to disk, stopped the VM, and restarted it; it came up with Ubuntu 14.04.
Connect to the newly created VM using ssh ubuntu@x.x.x.x (x.x.x.x is the IP address of the created Ubuntu VM). You can obtain the new IP address from the Edit Node page: go to the bottom and select Discovered Information. The IP address is listed in that area a ways down; it is probably easier to copy the text into an editor and then search for the start of the address. We used a private network, so we just searched for 192.
Use uname -a and lscpu to check that the OS is correct. The architecture should show as ppc64le:
root@ubuntur2n2:~# uname -a
Linux ubuntur2n2 3.19.0-25-generic #26-Ubuntu SMP Fri Jul 24 21:18:29 UTC 2015 ppc64le ppc64le ppc64le GNU/Linux
root@ubuntur2n2:~# lscpu
Architecture:          ppc64le
Byte Order:            Little Endian
CPU(s):                192
On-line CPU(s) list:   0,8,16,24,32,40,48,56,64,72,80,88,96,104,112,120,128,136,144,152,160,168,176,184
Off-line CPU(s) list:  1-7,9-15,17-23,25-31,33-39,41-47,49-55,57-63,65-71,73-79,81-87,89-95,97-103,105-111,113-119,121-127,129-135,137-143,145-151,153-159,161-167,169-175,177-183,185-191
Thread(s) per core:    1
Core(s) per socket:    6
Socket(s):             4
NUMA node(s):          4
XML Code
<domain type='kvm'>
  <name>vm5</name>
  <uuid>1e964a47-4a69-4b59-a5b4-637a1234f47d</uuid>
  <description>vm5 for PoC</description>
  <memory unit='KiB'>4194304</memory>
  <currentMemory unit='KiB'>4194304</currentMemory>
  <vcpu placement='static'>2</vcpu>
  <os>
    <type arch='ppc64' machine='pseries-2.2'>hvm</type>
    <bootmenu enable='yes'/>
  </os>
  <features>
    <acpi/>
    <apic/>
    <pae/>
  </features>
  <clock offset='utc'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/bin/qemu-system-ppc64</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/vm5-1.qcow2'/>
      <target dev='sda' bus='scsi'/>
      <boot order='1'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <controller type='usb' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'/>
    <controller type='ide' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <controller type='scsi' index='0'>
      <address type='spapr-vio' reg='0x2000'/>
    </controller>
    <interface type='bridge'>
      <mac address='00:1a:64:14:53:14'/>
      <source bridge='br3'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
    <serial type='pty'>
      <target port='0'/>
      <address type='spapr-vio' reg='0x30000000'/>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
      <address type='spapr-vio' reg='0x30000000'/>
    </console>
    <input type='tablet' bus='usb'/>
    <input type='keyboard' bus='usb'/>
    <input type='mouse' bus='usb'/>
    <graphics type='vnc' port='-1' autoport='yes' listen='0.0.0.0' keymap='en-us'>
      <listen type='address' address='0.0.0.0'/>
    </graphics>
    <video>
      <model type='vga' vram='16384' heads='1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </video>
    <memballoon model='virtio'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
    </memballoon>
  </devices>
</domain>
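A sketch of bringing the domain up from the XML above with virsh instead of virt-manager (assumes the XML is saved as vm5.xml; the name matches the `<name>` element):

```shell
# Register the domain definition with libvirt
virsh define vm5.xml

# Start the VM; it should reach the running state, not paused
virsh start vm5

# Find which VNC display the autoport setting picked, e.g. ":0"
virsh vncdisplay vm5
```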