I decided to try migrating one of my real Linux machines into a VM. It's a machine that doesn't do much (an ssh/imap server): an old P166 with 64MB of RAM and a 4GB hard drive that's almost full.
The process I ended up using was to create a new VM under VMware with 64MB of RAM and a 4GB disk, the same as the real machine.
After that, I used the LTSP install I also have on the VMware server to boot the VM, configuring LTSP to boot straight to a shell prompt instead of trying to start X (edit lts.conf and change the SCREEN_01 line to "SCREEN_01 = shell").
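For reference, the relevant part of lts.conf ends up looking something like this (the file lives under the LTSP client root, e.g. /opt/ltsp/i386/etc/lts.conf on some installs; that path and the SERVER value are examples, not my actual config):

```
[Default]
    SERVER     = 192.168.0.1    # example value
    SCREEN_01  = shell          # boot to a root shell instead of starting X
```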
I then had a VM that booted a Linux kernel and gave me a root shell. I tried to start sshd so I could rsync the real machine into the VM, but that failed because the VM had no host keys.
I ran ssh-keygen on the LTSP/VMware machine to generate host keys under the filesystem that's exported to the LTSP client, and then I was able to start sshd.
I then copied the root user's public key from the real machine into the root user's authorized_keys file under the LTSP export, so rsync could ssh to the LTSP client/VM.
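The key shuffling can be sketched like this; a temp directory stands in for the LTSP export here, since the real location varies by install:

```shell
# Stand-in for the LTSP client's exported root filesystem
# (on a real server this is something like /opt/ltsp/i386 -- an assumption).
LTSP_ROOT=$(mktemp -d)
mkdir -p "$LTSP_ROOT/etc/ssh" "$LTSP_ROOT/root/.ssh"

# Generate host keys for the VM under the exported filesystem,
# so sshd can start on the LTSP client
ssh-keygen -q -t rsa -N '' -f "$LTSP_ROOT/etc/ssh/ssh_host_rsa_key"

# Authorize the real machine's root key so rsync-over-ssh can get in
# (real_machine_id_rsa.pub is a placeholder for the key copied off the
# real machine; here a freshly generated key stands in for it)
ssh-keygen -q -t rsa -N '' -f "$LTSP_ROOT/real_machine_id_rsa"
cat "$LTSP_ROOT/real_machine_id_rsa.pub" >> "$LTSP_ROOT/root/.ssh/authorized_keys"
chmod 700 "$LTSP_ROOT/root/.ssh"
chmod 600 "$LTSP_ROOT/root/.ssh/authorized_keys"
```

The permissions matter: sshd refuses keys in an authorized_keys file that is group- or world-writable.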
Once I had this working, I used the VM to partition the virtual disk, create a filesystem on it, and mount it.
This had a bit of a trap in it: after I partitioned the disk, the device nodes for the new partitions didn't appear. I had to go into /dev and manually "mknod sda1 b 8 1" and "mknod sda2 b 8 2" (found these here).
All of the above is probably down to using LTSP instead of Knoppix or similar, but I don't have a physical CD-ROM drive connected, so I would have had to use a different machine to make an ISO and then work out how to get VMware to present that ISO as a CD-ROM drive.
Once I'd made the nodes, I could "mke2fs /dev/sda1" and "mount /dev/sda1 /mnt".
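Put together, the disk preparation on the VM looked roughly like this (run as root on the LTSP-booted VM; the fdisk step is interactive, and sda2 being swap is my guess at the obvious layout):

```shell
fdisk /dev/sda                # partition the virtual disk (interactive)

# No udev here, so create the partition device nodes by hand;
# sda is block major 8, and the partition number is the minor
mknod /dev/sda1 b 8 1
mknod /dev/sda2 b 8 2

mke2fs /dev/sda1              # filesystem for /
mkswap /dev/sda2              # assumption: sda2 is swap
mount /dev/sda1 /mnt          # mount it ready for the rsync
```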
I was then able to rsync the real machine onto the VM's disk:
sudo rsync -av --exclude=/proc --exclude=/sys --exclude=/dev / root@[VM ip]:/mnt
(this was run on the real machine).
After all the files were copied, I set about "fixing" the grub install so the VM could boot off its virtual disk.
This was a bit tricky. First I chrooted into the mounted virtual disk and went to run "grub-install /dev/sda", but realised that wouldn't work because /dev under the chroot was empty.
I exited the chroot, bind-mounted /dev under the mount point with "mount -o bind /dev /mnt/dev", and chrooted again.
I ran "grub-install /dev/sda", but that told me "/dev/sda does not have any corresponding BIOS drive."
I had to "grub-install --recheck /dev/sda", which told me:
"Probing devices to guess BIOS drives. This may take a long time.
/dev/hda1: Not found or not a block device."
Hmm, where was it getting /dev/hda1 from?
I edited /boot/grub/menu.lst and changed all the hda1 references to sda1, since the machine now had a SCSI disk rather than a real IDE disk, but that didn't help.
Eventually I discovered it was because I'd rsync'd /etc/mtab across from the real machine, so grub thought /dev/hda1 was mounted. I edited /etc/mtab to read /dev/sda1 instead of /dev/hda1, and then edited /etc/fstab so / would mount properly at boot.
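All three fixes boil down to the same substitution, so sed can do them in one hit; here's the idea demonstrated on a sample mtab line (on the VM itself you'd run sed -i against /etc/mtab, /etc/fstab and /boot/grub/menu.lst inside the chroot):

```shell
# A line like the stale one rsync brought across from the real machine
echo '/dev/hda1 / ext2 rw 0 0' > /tmp/mtab.sample

# hda -> sda, the same edit for mtab, fstab and menu.lst
sed -i 's|/dev/hda|/dev/sda|g' /tmp/mtab.sample

cat /tmp/mtab.sample   # now: /dev/sda1 / ext2 rw 0 0
```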
I ran "grub-install /dev/sda", and got:
Searching for GRUB installation directory ... found: /boot/grub
Installation finished. No error reported.
This is the contents of the device map /boot/grub/device.map.
Check if this is correct or not. If any of the lines is incorrect,
fix it and re-run the script `grub-install'.
(fd0) /dev/fd0
(hd0) /dev/sda
Ok, looking good. I exited the chroot and rebooted the VM.
It booted up properly, first go. The only issue was that I had no eth0, even though the kernel said it had detected the VMware PCnet ethernet device as eth0.
"ifconfig -a" showed I had an eth1 (but no eth0). I think this is because I'd built the kernel myself with the Intel e100 NIC driver compiled in, so it had claimed eth0 even though that card doesn't exist under the VM.
I reconfigured /etc/network/interfaces to use eth1 instead, restarted networking, and it seemed to be ok.
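The interfaces change is just renaming the stanza; the result was along these lines (the addresses here are examples, not the machine's actual config):

```
# /etc/network/interfaces -- the old eth0 stanza renamed to eth1
auto eth1
iface eth1 inet static
    address 192.168.0.10
    netmask 255.255.255.0
    gateway 192.168.0.1
```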
The whole process wasn't too painful; it just took a while waiting for the rsync, and there were a couple of head-scratchers with the hda1/grub issue.