
TL;DR: I'm rebuilding my VMs after an OS SSD crash. I'm looking for best-practice tips to see if I am missing anything, and I'd like to confirm whether RAW vs. QCOW2 makes a performance difference, and whether they can be set up with the same command or need different commands. I am not great with Linux, so it takes me quite a bit of reading to decipher recommendations. Thank you in advance!

Hello all, I am still a very green Ubuntu Server user, even after using it for a few years in a set-it-and-forget-it fashion. My server crashed when the SSD holding the OS failed, and I had never bothered to back it up. I have the system up and running again, and I am now at the point of setting the virtual machines back up. Previously I was on 14.04 LTS, but I am now on 18.04 LTS. The command below is basically what I used to spin up VMs, and it worked quite well. I am looking to see if there is anything I am missing as far as best practices go.

I DO need to add console access, because the SSD failure started with a VM that didn't come up after a reboot, and that is when things spiraled out of control. The VM would "start" and be pingable, but refused SSH connections, so it never FULLY started. I still need to learn how to set up console access and will be working on that this week, but I am wondering if there is anything else I am overlooking here.
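For reference, a minimal sketch of libvirt console access; the guest name Chaos comes from the command below, while ttyS0 as the serial device is an assumption about a default-configured Ubuntu guest:

```shell
# On the host: attach to the guest's serial console (exit with Ctrl+]).
virsh console Chaos

# Inside the guest (16.04+ with systemd): make sure a getty listens on
# the serial port so the console shows a login prompt even when SSH is down.
sudo systemctl enable --now serial-getty@ttyS0.service
```

For kernel boot messages to appear on that console as well, the guest kernel usually also needs `console=ttyS0` on its command line.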

sudo ubuntu-vm-builder kvm xenial \
 --dest /mnt/Chaos.raw \
 --hostname Chaos \
 --arch amd64 \
 --mem 4096 \
 --cpus 4 \
 --user admin \
 --pass password \
 --bridge br0 \
 --ip 172.16.5.21 \
 --mask 255.255.255.0 \
 --net 172.16.5.0 \
 --bcast 172.16.5.255 \
 --gw 172.16.5.1 \
 --dns 172.16.5.2 \
 --components main,universe \
 --addpkg acpid \
 --addpkg openssh-server \
 --addpkg nfs-common \
 --addpkg linux-image-generic \
 --addpkg postfix \
 --addpkg mailutils \
 --addpkg libsasl2-2 \
 --addpkg ca-certificates \
 --addpkg libsasl2-modules \
 --addpkg htop \
 --rootsize=100000 \
 --libvirt qemu:///system

It was suggested to me on Reddit that using RAW instead of QCOW2 would make the VM faster and give better performance, and I wanted feedback on that. I tried a different method of creating the virtual machine, shown below, and it WORKED, but I can't for the LIFE of me figure out how to USE it. I have no idea how to connect to it, and I also don't know how to set the network info at setup time; I tried a few ways based on the MAN PAGE, but I kept getting errors.

virt-install \
--connect qemu:///system \
--name Chaos \
--memory 4096 \
--vcpus 4,cpuset=1-4 \
--disk path=/mnt/Chaos/Chaos.raw,size=100,bus=virtio,format=raw,cache=none \
--os-variant ubuntu16.04 \
--location http://us.archive.ubuntu.com/ubuntu/dists/xenial/main/installer-amd64/ \
--network bridge=virbr0,model=virtio \
--virt-type kvm \
--hvm \
Ziggidy

1 Answer


You have combined a few questions, so let me try to answer them one by one. With your latter command the guest uses the default libvirt network and gets its address via DHCP there. I assume you set up a user during the install. The easiest way to find out how to connect is virsh domifaddr, like:

$ virsh domifaddr xenial-kvm
 Name       MAC address          Protocol     Address
-------------------------------------------------------------------------------
 vnet0      52:54:00:fe:2c:1f    ipv4         192.168.122.232/24
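From there you can SSH in with the user created during the install; the guest name and address here are just the example values from the output above:

```shell
# Look up the guest's DHCP lease, then connect with the install-time user.
virsh domifaddr xenial-kvm
ssh admin@192.168.122.232
```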

Note: I'd personally always prefer the much sleeker uvtool-libvirt (no installer run; it uses cloud images) - see this info if you are interested.
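A minimal uvtool session looks roughly like this; the guest name xenial-uvt is made up, the commands come from the uvtool package:

```shell
# One-time: sync the Xenial cloud image locally.
uvt-simplestreams-libvirt sync release=xenial arch=amd64

# Create a guest from that image (seconds, no installer) and SSH in.
uvt-kvm create xenial-uvt release=xenial
uvt-kvm ssh xenial-uvt
```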


Then there is the good old raw vs. qcow2 discussion. I have worked on KVM performance for some years - are there differences? Yes. But the answer isn't that easy: you trade quite a few qcow2 features (sparse allocation, snapshots, ...) for that speed.
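To illustrate both sides: the two formats are created with the same tool, only the -f argument differs, while internal snapshots are a qcow2-only feature (the paths here are just examples):

```shell
# Create a 100G image in either format; the qcow2 file starts small
# and grows on demand (sparse allocation).
qemu-img create -f raw   /mnt/Chaos/Chaos.raw   100G
qemu-img create -f qcow2 /mnt/Chaos/Chaos.qcow2 100G

# Internal snapshots only work on qcow2:
qemu-img snapshot -c before-upgrade /mnt/Chaos/Chaos.qcow2
qemu-img snapshot -l /mnt/Chaos/Chaos.qcow2
```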

And if you are really concerned about performance, then a raw file isn't what you want either - at least free up a partition, or better a full device, and pass that device to the guest (type='block' device='disk' with driver type='raw' is different from a .raw file with type='file'). That skips much more of the host storage stack and lets the guest detect the device characteristics, which usually ends up being much faster.
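In virt-install terms that is just a different --disk path; /dev/sdb below is a placeholder for whatever spare device or partition you dedicate to the guest, the other options mirror your original command:

```shell
# Pass a whole block device instead of an image file; libvirt then emits
# type='block' device='disk' XML instead of a type='file' disk.
virt-install \
  --connect qemu:///system \
  --name Chaos \
  --memory 4096 \
  --vcpus 4 \
  --disk path=/dev/sdb,bus=virtio,format=raw,cache=none \
  --os-variant ubuntu16.04 \
  --location http://us.archive.ubuntu.com/ubuntu/dists/xenial/main/installer-amd64/ \
  --network bridge=virbr0,model=virtio \
  --virt-type kvm \
  --hvm
```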

You can take that thought further depending on your setup. IMHO one of the best solutions for somewhat normal setups (there is always some >10k$ enterprise alternative, let's ignore that) to optimize speed at the moment is an extra PCIe NVMe controller which you PCI-passthrough to the guest - but that requires hardware which supports it.
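Such a passthrough can be sketched like this; the PCI address 3b:00.0 is made up, and the host needs IOMMU support (VT-d/AMD-Vi) enabled in firmware and kernel:

```shell
# Find the NVMe controller's PCI address on the host.
lspci -nn | grep -i nvme

# At guest-creation time, hand the whole controller to the guest by
# adding --hostdev with that address to the virt-install command, e.g.:
#   virt-install --hostdev 3b:00.0 (plus the rest of your options)
```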

So the question IMHO is never "raw file vs qcow2 file"; it is "qcow2 for features, or some pass-through for speed". Raw files sit somewhere in between and are rarely the right pick for either side of that trade-off.

Christian Ehrhardt