QEMU Disk Management

When I first got started with QEMU, I simply created .qcow images, assuming the format would be chosen based on the file extension. Either I was wrong or QEMU changed how it interprets the file; in any case, my disks ended up as raw images. So, in order to migrate my virtual machines from raw images to the qcow2 format without a complete rebuild, I had to do the following.

First I made a backup of my currently running virtual machine disk.

cp /VMs/dns-server/dns-server.qcow /VMs/dns-server/dns-server.qcow.bak

Then I got down to the conversion to the qcow2 format.

qemu-img convert -f raw -O qcow2 /VMs/dns-server/dns-server.qcow /VMs/dns-server/dns-server.qcow2

I used the info subcommand of qemu-img to verify the format of my new disk.

qemu-img info /VMs/dns-server/dns-server.qcow2
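The output will look roughly like this (the sizes shown here are just placeholders; yours will reflect your actual disk):

image: /VMs/dns-server/dns-server.qcow2
file format: qcow2
virtual size: 20G (21474836480 bytes)
disk size: 4.0G
cluster_size: 65536

The important line is "file format: qcow2", confirming the conversion worked.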

Then I created a snapshot of the virtual machine disk, so I can fall back to it if and when I goof something up. Note that the snapshot name is just a tag stored inside the qcow2 file, not a separate file on disk.

qemu-img snapshot -c 30SEP15 /VMs/dns-server/dns-server.qcow2
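If I ever need to fall back, qemu-img can list and apply internal snapshots as well. A quick sketch, using the snapshot name from above:

qemu-img snapshot -l /VMs/dns-server/dns-server.qcow2
qemu-img snapshot -a 30SEP15 /VMs/dns-server/dns-server.qcow2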

Then I was able to simply stop my virtual machine, modify my startup script to look for the new .qcow2 hard disk, and restart the virtual machine!
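For reference, here is a minimal sketch of what the disk-related part of such a startup script might look like; the binary name, memory size, and drive options are assumptions and will differ from your own script:

qemu-system-x86_64 \
    -name dns-server \
    -m 1024 \
    -drive file=/VMs/dns-server/dns-server.qcow2,format=qcow2,if=virtio

The important change is pointing -drive at the new .qcow2 file and declaring format=qcow2 explicitly.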


X11 over Secure Shell

When I first started using Slackware Linux, I struggled to find an answer to why my “ssh -X node” wouldn’t work. I was sent on a wild goose chase for about a year before I finally found the solution to my problem.

First, modify the server-side secure shell configuration found in /etc/ssh/sshd_config. There are a couple of options you will need to change in order to allow X11 over SSH.

In the following example, my client and server are two different nodes. So if you want to enable X11 over SSH on both sides, make sure you make client-side and server-side changes for each node.

#X11Forwarding no
#X11DisplayOffset 10

You’ll want to change those default settings to the following, making sure to remove the # comment markers as well:

X11Forwarding yes
X11DisplayOffset 10 

Next, you’ll want to configure your client-side settings in /etc/ssh/ssh_config.

#ForwardX11 no

Change this to the following (and again, don’t forget to remove the comment). Setting ForwardX11Trusted as well treats forwarded X clients as trusted, which is the same as using ssh -Y and avoids some X authentication problems:

ForwardX11 yes
ForwardX11Trusted yes
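If you would rather not touch the system-wide client configuration, the same options can also go in a per-user ~/.ssh/config; here is a sketch, with "node" standing in for whatever host name you actually connect to:

Host node
    ForwardX11 yes
    ForwardX11Trusted yes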

Finally, restart the server side ssh daemon using the following command:

# /etc/rc.d/rc.sshd restart
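Once the daemon has restarted, a quick way to confirm forwarding works is to check that DISPLAY is set on the remote side and launch a simple X client (xclock here is just an example, and the exact DISPLAY value may differ):

$ ssh -X node
$ echo $DISPLAY
localhost:10.0
$ xclock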


I would recommend making a script to automate these changes, because sometimes the slackpkg manager will install a new configuration file for either the client-side or server-side service. Pay attention to those messages when using slackpkg to ensure you don’t overwrite configuration files you’ve already modified!
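Here is a rough sketch of what such a script could look like. The sed patterns assume GNU sed and the stock commented-out lines shown above, so double-check them against your actual configuration files before trusting the script:

#!/bin/sh
# Re-apply X11 forwarding settings after slackpkg replaces the SSH config files.
sed -i -E 's/^#?[[:space:]]*X11Forwarding .*/X11Forwarding yes/' /etc/ssh/sshd_config
sed -i -E 's/^#?[[:space:]]*X11DisplayOffset .*/X11DisplayOffset 10/' /etc/ssh/sshd_config
sed -i -E 's/^#?[[:space:]]*ForwardX11 .*/ForwardX11 yes/' /etc/ssh/ssh_config
sed -i -E 's/^#?[[:space:]]*ForwardX11Trusted .*/ForwardX11Trusted yes/' /etc/ssh/ssh_config
/etc/rc.d/rc.sshd restart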

Disk Management!

It turns out that you can add and remove SCSI (and therefore SAS/SATA) disk drives without forcing a system reboot. To force a rescan of the SCSI controller, you first need to find the SCSI controller hosts.

First, we’ll cover re-scanning the SCSI controller after you’ve added a new disk.

$ ls /sys/class/scsi_host/

The output will vary depending on your system, but it should look something like this:

host0  host1

You can now issue the following command(s) to force a rescan of the SCSI bus controller:

echo "- - -" > /sys/class/scsi_host/host0/scan
echo "- - -" > /sys/class/scsi_host/host1/scan

Repeat the echo command until you’ve hit all of your hosts.
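If you have several host entries, a small loop saves some typing (a sketch; run it as root):

for host in /sys/class/scsi_host/host*; do
    echo "- - -" > "$host/scan"
done

Afterwards, lsblk or the tail end of dmesg should show the newly detected disk.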

Now we’ll cover how to gracefully remove a disk from your system.

First, ensure that you have unmounted the drive. In my example I will be removing /dev/sdb from the system; modify the commands to match the specific device you are removing.

# umount /dev/sdb1
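If the disk has more than one partition, unmount each of them. lsblk is a quick way to see which partitions are still mounted (assuming the /dev/sdb device from this example):

# lsblk /dev/sdb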

Once your drive is unmounted, you can issue the following command to signal the Linux kernel to remove the device.

# echo 1 > /sys/block/sdb/device/delete

You can verify the removal by inspecting the last few lines of the dmesg output to make sure the kernel detached the disk.
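For example, after the delete you might see lines like the following near the end of the dmesg output (the SCSI address and exact wording depend on your kernel and driver):

# dmesg | tail -n 3
sd 1:0:0:0: [sdb] Synchronizing SCSI cache
sd 1:0:0:0: [sdb] Stopping disk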