Installing VMWare ESXi
Over the past few days, we have received and installed new server hardware, installed VMWare ESXi, and transferred our test file sharing system build to run in the ESXi environment.
VMWare ESXi is a "bare metal" virtualization host; that is, it installs directly onto the server hardware rather than onto an existing operating system (in contrast to hosted products like VMWare Server).
Installing and using ESXi turned out to be very easy - much easier than, say, setting up VMWare Server under Linux. In hindsight, I think this is because ESXi is a dedicated environment, with essentially no choices to be made, and hence far fewer system components to be configured. This is fortunate, as the VMWare documentation is pretty impenetrable. The biggest hurdle to overcome was the fact that the ESXi system management console software (vSphere) has to be installed on a Windows client (XP Service Pack 2 or later), which was slightly awkward for us, since we are mostly a Linux and Mac shop these days.
Installing ADMIRAL file sharing for the Silk group
Transferring the virtual machine images from our KVM test environment to ESXi went pretty smoothly. A small change to the script used to run the virtual machine image builder (vmbuilder) allows VMWare disk images to be generated directly. Copying these files to the ESXi system is a slightly fiddly two-stage process, taking about 20 minutes in total, as the disk image file is quite large at over 600MB.
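For illustration, here is a minimal sketch of the kind of change involved. vmbuilder takes the target hypervisor as its first argument, so switching the build target from kvm to vmw6 makes it emit VMware-format (.vmdk) disk images; the host names, paths and options shown below are assumptions for the sketch, not our actual build script:

    # Build the image for VMware rather than KVM (illustrative
    # options; our real script differs in detail).
    sudo vmbuilder vmw6 ubuntu --suite karmic \
        --dest /tmp/admiral-vm --hostname admiral

    # Stage 1: copy the generated .vmdk to a host that can reach
    # the ESXi box (host names here are hypothetical).
    scp /tmp/admiral-vm/*.vmdk staging-host:/var/tmp/

    # Stage 2: push from there into the ESXi datastore, e.g. via
    # scp with SSH access enabled on the ESXi host, or via the
    # vSphere client's datastore browser.
    scp /var/tmp/*.vmdk root@esxi-host:/vmfs/volumes/datastore1/admiral/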
Getting the system running under ESXi required some rethinking of our original approach. While the pre-built image would boot directly in a newly created VM, the networking would not work in our chosen VM configuration until VMWare tools had been installed. This in turn requires the Linux kernel development tools (Ubuntu kernel-package and linux-headers) to be installed in the original system image, and these are largely responsible for the large size of the disk image file. With this taken care of, the VMWare tools installation runs very smoothly, and the system can be rebooted with functional networking. (NOTE: ESXi does not support NAT, only bridged networking, so an IP address must be allocated for the new server before networking can be activated.)
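For reference, a sketch of the sequence inside the guest, assuming the VMWare tools ISO has been attached from the vSphere client ("Install VMware Tools"); the package names are the standard Ubuntu ones mentioned above, but exact installer prompts vary by version:

    # Inside the guest: kernel build tooling the installer needs.
    sudo apt-get install kernel-package linux-headers-$(uname -r)

    # Mount the VMware Tools ISO attached by the vSphere client.
    sudo mount /dev/cdrom /mnt
    tar -xzf /mnt/VMwareTools-*.tar.gz -C /tmp
    cd /tmp/vmware-tools-distrib

    # Run the installer; it compiles kernel modules (including the
    # network driver) against the installed headers.
    sudo ./vmware-install.pl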
The need to install VMWare tools to get networking capability means that our original plan of performing much of the system configuration automatically on first boot has been dropped, in favour of manually initiated scripts that handle post-boot configuration. The scripts for configuring certificates, LDAP, Samba and automatic backups require a fair degree of user interaction, but are otherwise quite straightforward.
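The scripts themselves are not reproduced here, but the general shape is simple; a hypothetical fragment (the script and variable names below are invented for illustration, not our actual code) might look like:

    #!/bin/bash
    # postboot-config.sh - hypothetical post-boot configuration
    # driver; the real ADMIRAL scripts and prompts differ in detail.
    set -e

    read -p "Server host name: " HOSTNAME
    read -p "Static IP address (ESXi uses bridged networking): " IPADDR

    # Each step is a separate, manually initiated script so that a
    # failed step can be re-run on its own.
    ./configure-network.sh "$IPADDR"
    ./configure-certificates.sh "$HOSTNAME"
    ./configure-ldap.sh "$HOSTNAME"
    ./configure-samba.sh "$HOSTNAME"
    ./configure-backup.sh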
After all this, our test suite (which checks file access via Samba, file access via WebDAV, and direct HTTP access) ran straight away against the new server. For this, having the pyunit-based test suite was a real boon.
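For anyone replicating the checks by hand, rough command-line equivalents of what the pyunit suite automates might look like this (the host name, share name and credentials are hypothetical):

    # Samba: list the share as a test user.
    smbclient //admiral-server/silk -U testuser -c 'ls'

    # WebDAV: upload a file with an HTTP PUT.
    curl -u testuser --upload-file test.dat \
        http://admiral-server/webdav/silk/test.dat

    # Direct HTTP: read the file back.
    curl -u testuser http://admiral-server/silk/test.dat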
Currently, we are using Ubuntu 9.10 ("Karmic") for our ADMIRAL server platform. We do intend to update to version 10.04 ("Lucid") when it becomes available, as that version has been designated for long-term support. The LDAP configuration in 9.10 does seem to be something of a work-in-progress, so we fully expect to revisit this work when we come to upgrade the base system. We have also dropped Kerberos from our platform for the time being, because of difficulties getting it to work with a range of common clients. LDAP seems to be a reasonable compromise, as it allows all ADMIRAL facilities to be authenticated and authorized from a common source.
More information about the setup is at http://imageweb.zoo.ox.ac.uk/wiki/index.php/ADMIRAL_VMWare_ESXi_notes.