Set up a simple XEN Virtualization Cluster – Part 1 – Hardware

XEN Server is a very cost-effective way to virtualize your machines in SMB environments, where you have to cope with a very limited budget and, in most cases, with limited networking performance.
The community edition of XEN Server is free and offers all the functionality you need for SMB use if you do not require the High Availability services (moving VMs between servers without shutting them down).

In this series we will setup a simple imaginary virtualization cluster from scratch.

I won’t go into explaining why virtualization of servers is a good idea. Just read my previous post about upgrading Alfresco 3.4 to 4.2 and imagine going through the whole procedure more than 20 times from scratch instead of just rolling back a snapshot. If having an environment that is recoverable within minutes is not reason enough for you to start virtualizing, I would like to refer you to the marvelous publication “Managing Small Virtual Environments” by Scott D. Lowe from virtualizationadmin.

The goals we want to achieve are the following:

  • Provide redundancy so that you can e.g. replace server parts without taking your business applications offline
  • Have an up-to-date backup of everything at any point in time (OFF SITE) for disaster recovery

Ready? Let’s roll!


CPU:

To maintain the cluster without service interruption we will need at least two servers in our little setup. To join two standalone servers into a server pool (or cluster), Citrix requires the two machines to have identical CPUs. So if you have two leftover machines whose CPU models differ too much, welcome to no-support-or-any-guarantees-ville.
Actually, Citrix’s requirement is only a must if you want to use the live migration and High Availability (HA) features, because it is obvious that you cannot move a running VM from one type of CPU to a completely different one without shutting it down first.

So you CAN join two servers with different CPUs into a XEN pool if you force the procedure via a command-line option.
In most cases you will have to provide a “CPU mask” that hides the features of the more capable CPU so that it matches the other pool members.
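As a sketch (the master address and credentials below are placeholders, not real values), the forced join is issued on the host that should join the pool. The script only builds and prints the command so you can review it before running it on a real XEN Server:

```shell
# Hypothetical pool master address and credentials -- replace with your own.
MASTER=192.168.0.10

# Build the forced join command (to be run on the host joining the pool).
# force=true bypasses the CPU homogeneity check.
CMD="xe pool-join master-address=$MASTER master-username=root master-password=secret force=true"
echo "$CMD"
```

On older XEN Server releases a CPU feature mask could additionally be applied with `xe host-set-cpu-features` before joining; check the documentation of your exact version.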

How can we find out if our available hardware is suitable for clustered operation?

  1. The CPUs must support 64-bit operation, as there is no 32-bit x86 version of XEN Server.
    Check the official Citrix hardware compatibility list to see if your CPUs are supported by XEN Server.
  2. Check the heterogeneous CPU pooling matrix to see if your CPUs can be pooled together.

If your CPUs cannot be clustered according to the pooling matrix, there is still a good chance that you can force the joining procedure and everything will be fine as long as you do not attempt to use live migration (HA) of VMs between the servers. Worth a try.
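You can check the first point on any Linux box before committing to a purchase. This little sketch reads the CPU flags from /proc/cpuinfo ("lm" = 64-bit long mode, which XEN Server requires; "vmx"/"svm" = Intel/AMD hardware virtualization, needed for Windows guests):

```shell
# Inspect the local CPU flags: "lm" means 64-bit capable,
# "vmx" (Intel) or "svm" (AMD) means hardware virtualization support.
FLAGS=" $(grep -m1 '^flags' /proc/cpuinfo | cut -d: -f2) "

B64=no;  case "$FLAGS" in *" lm "*)  B64=yes ;; esac
HVM=no;  case "$FLAGS" in *" vmx "*|*" svm "*) HVM=yes ;; esac

echo "64-bit capable:    $B64"
echo "HW virtualization: $HVM"
```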

RAM:

Get as much as you can. RAM is even more important than CPU in virtualization environments. Find out how many VMs you want to host. Double the RAM requirements of the VMs, because we want one server to be able to host all of them if we have to shut down the other one. Add 2 GB for the XEN Server OS. That is the RAM size you will need per XEN Server machine. The easiest way to provide sufficient RAM is to buy as much as you can afford and the server can carry.
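The sizing rule above as a quick back-of-the-envelope calculation (the VM inventory below is just an example):

```shell
# Example VM RAM assignments in GB -- replace with your own inventory.
VM_RAM="4 4 8 2"   # four hypothetical VMs

TOTAL=0
for GB in $VM_RAM; do
    TOTAL=$((TOTAL + GB))
done

# One server must be able to carry ALL VMs while the other is down,
# plus ~2 GB for the XEN Server OS itself.
PER_HOST=$((TOTAL + 2))
echo "RAM needed per XEN Server host: ${PER_HOST} GB"
```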

Networking:

Most servers have at least two built-in Gigabit ports that you can use to provide one management interface and one dedicated networking interface for your VMs. Gigabit speed should be enough for the VMs to perform pretty well, but you will have to install extra interfaces to connect storage to your server pool.
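A quick way to take inventory of the ports a server actually has, and the link speed Linux negotiated on them, is to read sysfs. Interfaces that are down or virtual (like "lo") report no speed:

```shell
# List the network ports Linux sees and their link speed in Mbit/s.
N=0
for DEV in /sys/class/net/*; do
    SPEED=$(cat "$DEV/speed" 2>/dev/null) || SPEED="?"
    echo "$(basename "$DEV"): ${SPEED:-?} Mbit/s"
    N=$((N + 1))
done
```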

Storage:

Before you start virtualizing your machines you will first have to think about how to set up your storage in the most flexible way.

Here are some points to consider:

  • Capacity limits of internal storage
  • Extendability
  • How fast can you recover from backup if everything burns down
  • Performance

Let’s assume your servers have only a small number of hard-drive bays and you want to use internal storage.

  • You will have to shut down everything to extend capacity if you need more space.
  • You will have to tinker together a backup procedure that does not affect server operation.
  • In the improbable case that your whole office burns down, you will have to get a similar system very fast so you can rebuild your business from backup before you lose too much cash.

In conclusion: you do not want to store your VM data on the internal storage of your servers. Leave it for the OS.

A better way to go is to have a cheap NAS attached via iSCSI.

  • Most NAS devices have the ability to backup iSCSI images automatically without affecting the running systems.
  • You can add iSCSI disks to your server on the fly and extend existing disks.
  • Disaster recovery will be much faster because you can restore your iSCSI drive backup to almost any piece of equipment you can get your hands on. (A simple PC with Linux and sufficient storage is enough. ;)
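A sketch of what attaching such a LUN looks like with the xe CLI (the target address and IQN are placeholder values; in a real setup you would discover them first with `xe sr-probe`). As before, the command is only printed so you can review it:

```shell
# Sketch of attaching an iSCSI LUN as a shared XEN storage repository (SR).
# Target address and IQN below are placeholders -- replace with your own.
TARGET=192.168.10.20                      # hypothetical NAS address
IQN=iqn.2013-01.com.example:storage.lun0  # hypothetical target IQN

CMD="xe sr-create name-label=NAS-iSCSI shared=true type=lvmoiscsi"
CMD="$CMD device-config:target=$TARGET device-config:targetIQN=$IQN"
echo "$CMD"
```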

Storage connection:

To pick up the networking section of this article again: you need to establish the fastest possible connection between your servers and your NAS.

If you can get by with modest I/O performance in your VMs because you have only a small number of users, then just add another 1Gb Ethernet interface to each server and connect it to the storage unit.

The “best” way to roll is 10Gb Ethernet over optical fibre. As we will not buy insanely expensive optical switches, you will need a single-port 10Gb interface in each server and one dual-port 10Gb interface in your NAS to connect everything with Direct Attach optical cables. Most NAS units are nothing more than servers with a large number of hard-drive bays and a custom high-performance RAID controller. This gives you the possibility to install additional networking interfaces by just plugging them into the PCI(e) slots of the NAS, just like you would upgrade any other server.

If you are interested in learning more about 10Gb Ethernet over fibre (especially range and cable selection) I recommend spending five minutes reading “10 Things to Know Before Deploying 10 Gigabit Ethernet” (5 pages, PDF alert) by Netgear.

So at this point we have:

  • Two servers with clusterable CPUs and a lot of RAM, each with two 1Gb and one 10Gb networking interfaces.
  • One NAS unit with at least one 1Gb and two 10Gb interfaces.

Storage capacity:

Next, let’s think about hard drives.
You will need X TB of hard-drive space for your VMs. Assuming our backup method will be snapshotting, we will need the same amount of space on the NAS to store our snapshots. (This will be our recovery base if you screw up a single VM and want it recovered in a matter of minutes. The archival backup will live outside the NAS.)

The NAS is going to run a RAID (I prefer RAID 6), so you can use any online RAID calculator to determine how many hard disks you will have to buy (e.g. for 4TB usable on RAID 6 you will need four 2TB disks).
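The RAID 6 rule of thumb (usable capacity = (disks − 2) × disk size) is easy to turn around to get the number of disks to buy:

```shell
# RAID 6 keeps two disks' worth of parity, so usable = (disks - 2) * size.
# Solve for the disk count given the capacity we need.
NEED_TB=4    # usable space we want
DISK_TB=2    # size of each disk

DISKS=$((NEED_TB / DISK_TB + 2))
USABLE=$(((DISKS - 2) * DISK_TB))
echo "Buy $DISKS disks of ${DISK_TB}TB -> ${USABLE}TB usable in RAID 6"
```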

Well, we are done. :) That is everything you need for a two-server cluster with shared iSCSI storage over 10Gb Ethernet.

In part 2 of this series we will install the XEN Servers and attach the storage
(not as straightforward as one may think, because we have a quite special mixed topology with switched networking and directly attached storage). Stay tuned. ;)

Our Setup will look something like this at this point:

[Diagram: two-server cluster with shared storage]

