Working in an environment with 100 or so Virtual Machines running across a multitude of ESX Servers, it becomes exponentially harder to maintain them as users begin to want to copy, backup, duplicate & migrate Virtual Machines between different ESXi Servers within our organisation. It was time to implement some additional, external, centrally managed storage to store and maintain our expanding collection of virtual machines.
ESXi, well at least the free version 4.1, has two distinct network abilities for external storage – iSCSI & NFS. We are going to look at which one does what and when to use one and not the other. We are also going to do this on the cheap – for free where possible.
Keep in mind our specific purpose is to store VMWare Virtual Machines and to migrate and back them up between ESXi Hosts – a task which is very common in our organisation. There are other reasons to use these two protocols which are beyond what we are doing here.
Important side note: We are a Microsoft world here. Apart from ESXi, which runs VMware's own proprietary OS, we don't run any operating systems except Windows XP, 7, Server 2003 & 2008 organisation-wide, so all the iSCSI/NFS implementations in this article run on Windows. We will explore the software products I have used as we move through, as well as some other potential products on offer.
Microsoft iSCSI Software Target 3.3
Price: Free (OS specific though)
Download: here
The Microsoft iSCSI Software Target is free and is an optional server component for Windows Server 2008 R2.
I have used iSCSI, and the Microsoft Target software, many times before on other projects where cost was to be kept to a minimum. While it's a fantastic technology, it has some pros\cons which become apparent when you start using it on a daily basis with VMWare ESXi.
- Outstanding performance over TCP\IP – if you have the ability to assign an entire gigabit network to it (and throw in some load balancing if you have multiple NICs on the Target and Initiator) you can have many virtual machines using this storage simultaneously.
- iSCSI is a block-level protocol – meaning it's best used in a one-to-one fashion. After all, the external target is treated like a real physically connected disk and you don't want multiple endpoints writing to the same storage at the same time unbrokered – this can be very disastrous.
While there are some tricky multiple-initiator features for iSCSI, used in the wrong circumstances they can irrevocably damage your iSCSI Store and all the data inside (in our case all our VM's!! eeek!) so I recommend against them.
- External Storage over iSCSI is great but it's not 'shared' storage. You can have multiple targets available to endpoints, however only one endpoint can use a given target at a time. There is no sharing of the information stored in these targets between multiple endpoints.
Microsoft’s Target implementation requires you create a VHD (Microsoft’s Virtual Hard Disk format – anyone versed in Virtual PC or Hyper-V will know what these are) which will be the backing store for your iSCSI target (i.e. where you will be reading\writing your files over iSCSI).
However you have to preallocate the size of the VHD at creation time! This is a problem as, at some point sooner rather than later, you will run out of allocated space and then have to go through a lengthy process of shutting down all the iSCSI-connected Virtual Machines in ESXi and resizing the VHD to accommodate more virtual machines. This is extremely messy and time consuming while people sit around with all their virtual machines down as you try and resize some ‘virtual space’.
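On the storage server side, the resize itself can be scripted with diskpart (or done through the iSCSI Target MMC snap-in). This is only a sketch – the path and new size below are hypothetical, and the target must be offline with its VMs shut down first:

```shell
rem resize-peter1.txt - run on the storage server with: diskpart /s resize-peter1.txt
rem (hypothetical VHD path; grows the disk from 1500GB to 1800GB -
rem  'maximum' is specified in MB, so 1800 x 1024 = 1843200)
select vdisk file="D:\iSCSI\Peter1.vhd"
expand vdisk maximum=1843200
```

After the VHD is grown and the target is back online you still have to extend the datastore from the ESXi side before powering the VMs back on – which is exactly why this gets old quickly.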
Rinse and Repeat this process over and over as you or your users fill this storage. When you are creating 50 or 100GB Virtual Machines at a time you can understand why this becomes frustrating very quickly. Most people at this point would say ‘well how about just allocating all the storage space at once?’ – this is not wise either, as, for instance:
Total iSCSI Storage Disk Space: 2 Terabytes
Two ESXi Servers: Peter1 & Paul1
Storage Server Matthew1 – iSCSI Target space allocations:
Peter1: 1500GB
Paul1: 500GB
Problem 1 – All the disk space is preallocated. If you fill Paul1’s target completely you cannot shrink Peter1’s and recover the space back into Paul1.
Problem 2 – What if you want to add an additional target for a new ESXi Server? You can’t – you have no more disk space to give on the iSCSI Storage Server.
You can’t rob from Peter to pay Paul..
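The arithmetic of the example above makes the corner you get painted into obvious – once the space is carved up, nothing is left for a new target without the resize dance:

```shell
# Matthew1's disk is carved up in advance; once allocated,
# space can't move between targets without resizing VHDs.
total_gb=2000          # ~2TB usable on Matthew1
peter1_gb=1500         # preallocated to Peter1's target
paul1_gb=500           # preallocated to Paul1's target

free_gb=$((total_gb - peter1_gb - paul1_gb))
echo "unallocated space left for a new target: ${free_gb}GB"
# prints: unallocated space left for a new target: 0GB
```

Trivial numbers, but it is exactly this zero that forces the shutdown-and-resize routine every time requirements change.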
Conclusions:
iSCSI is a great technology and, implemented properly and for the right reasons, it’s a fantastic way to store data over your network.
For VMWare, though, in 1-to-1 configurations (one iSCSI Target to one ESXi Virtual Server) it is a fantastic idea: you get all the speed benefits along with being able to have external storage on a different server in your network. Microsoft’s implementation is painful though – with the constant resizing required for the VHDs backing different targets, maintenance and upkeep get very tiring very quickly.
On a side note: the main reasons that make iSCSI so powerful are that it opens you up to high availability and load balancing for your storage, but these specific needs require more advanced iSCSI Target Software (UNIX implementations, StarWind etc) which normally carries a higher price tag – the MS product supports neither in its current state.
So let’s see what VMWare’s other supported external network storage – NFS – can do for us.
haneWIN.net NFS Server for Windows
Price: EUR 29.00 / License (e.g per NFS Server)
Download: here
NFS has been around for nearly 25 years and is the UNIX\LINUX protocol for Network File Systems.
While I’ve tried a few of the other implementations of NFS for Windows (including Microsoft’s own Windows Services for UNIX implementation) I found the price point of haneWIN NFS Server, along with its excellent throughput (on par with UNIX\LINUX based implementations), to be the real deciding factors for why I chose it for our network.
Configuring haneWIN NFS Server was a snap: in NFS terms, you create your exports (which are really just exposed filesystem directories on the NFS Server). Once these are configured, NFS clients can connect to and read\write from your NFS Server.
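As a rough sketch of the two halves involved (directory names and hostnames here are made up, and the exports line is from memory – check haneWIN’s own documentation for the exact syntax):

```shell
# haneWIN 'exports' file on the storage server (Matthew1) -
# one line per exported directory, with an alias for clients to mount:
#
#   D:\VMStore -name:vmstore
#
# Then, from the ESXi 4.1 Tech Support console (this can also be done
# in the vSphere Client GUI), attach the export as an NFS datastore:
esxcfg-nas -a -o matthew1.example.local -s /vmstore VMStore-NFS

# List the configured NFS datastores to confirm it mounted:
esxcfg-nas -l
```

Once mounted, the datastore shows up alongside local storage in the vSphere Client and every ESXi host pointed at the same export sees the same files.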
Pros\Cons of NFS:
- Storage all clients can use at once. Much like Windows file sharing, any client can access the storage at any time.
It’s true ‘shared’ storage – lay down all your virtual machines in the NFS Storage and connect the storage to all ESXi servers. All VM’s within the NFS Server Storage are available to all ESXi servers.
- There is no proprietary disk container like iSCSI’s. All the directories are visible on the NFS Storage Server. This opens up numerous possibilities for cloning or backing up Virtual Machines from Windows, as you can see all the virtual machines within Windows Explorer.
- Migration & Resource Balancing of VM’s across multiple ESXi servers. With this shared storage visible to all your ESXi servers you can easily stop a virtual machine on one ESXi server and then go right ahead and start it on another. NFS locking also prevents starting a Virtual Machine while it’s running within another ESXi host.
- Storage expands and contracts as you add and remove virtual machines. You are only limited by the amount of physical storage in the NFS Server. No resizing or storage maintenance.
- Because you can see all the files in the NFS Server’s filesystem you can back up full VM’s or only parts of VM’s where needed using Windows Tools, Backup Software etc.
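The stop-on-one-host, start-on-another migration described above can be done entirely from the ESXi Tech Support consoles. A hedged sketch – the VM ids, datastore label and paths below are hypothetical, and the same steps are available through the vSphere Client:

```shell
# On the source ESXi host: find the VM's id, power it off, unregister it
vim-cmd vmsvc/getallvms                 # note the Vmid of the VM to move
vim-cmd vmsvc/power.off 42              # 42 = hypothetical Vmid
vim-cmd vmsvc/unregister 42

# On the destination ESXi host: register the same .vmx straight off
# the shared NFS datastore and power it on
vim-cmd solo/registervm /vmfs/volumes/VMStore-NFS/MyVM/MyVM.vmx
vim-cmd vmsvc/power.on 43               # 43 = Vmid reported by registervm
```

No copying of disk files takes place at any point – both hosts are simply looking at the same directory on the NFS server, which is the whole trick.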
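Because the VM files are ordinary directories on the NFS server’s local disk, a plain robocopy job on the storage server itself covers the backup case – a sketch with made-up paths (/MIR mirrors the tree, and excluding *.vswp skips the swap files ESXi recreates anyway):

```shell
rem Backup job run on the storage server - no ESXi involvement needed.
rem Run with the VM shut down (or snapshotted) for a consistent copy.
robocopy D:\VMStore E:\Backups\VMStore /MIR /XF *.vswp /R:1 /W:1 /LOG:E:\Backups\vmstore.log
```

Swap this for your normal backup software if you have it – the point is that any Windows tool that can read a directory can now see your VM’s.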
Conclusions:
NFS’s true power is that it’s a shared network filesystem protocol – it’s specifically designed for multiple users and its UNIX heritage means it’s built for performance. The multithreaded haneWIN implementation of NFS is excellent, and with a segregated gigabit network with load balancing all the way through to the NFS Storage Server you get fantastic performance – equal to the block-level iSCSI protocol. In most cases performance is actually better, due to how VMWare manages virtual machine reads & writes: the workload is IO intensive rather than bandwidth hungry.
Final Conclusions
If you need high availability, clustering or 1-to-1 storage then iSCSI is an obvious choice. Hardware iSCSI is fantastic if you have hardware that supports it, but the 1-to-1 model is a problem for our VMWare implementation.
If your aim is to have an easy way to manage large numbers of virtual machines, where you can easily move, migrate, backup and clone VM’s with all your normal Windows tools, then NFS is an excellent way to expand the current storage of your ESXi VM Servers. It is easy to set up and manage, and you are only limited by the size of your storage server – not by proprietary disk formats as used in StarWind and the Microsoft iSCSI Software Target.
Your mileage may vary, but at least you now have some free products to try out before moving to commercial implementations of iSCSI and NFS such as AIX offerings and NetApp appliances. Hopefully this will give you a better grasp of both protocols so you can make informed judgements as to which one to use for future projects.
Happy iSCSI and NFS’ing!
— TheNinja