I thought I would share some of my weekend entertainment. This weekend I grabbed a copy of EMC ScaleIO to have a play with, as it looks like a seriously awesome piece of software that was previously only available to the big boys in enterprise and corporate IT. EMC ScaleIO is a software-only solution that uses your existing hardware (physical servers, hosts, etc.) to turn DAS (internal hard drives or storage) into shared block storage!
At its core, this is a very powerful Software-Defined Storage platform. With ScaleIO there is no single point of failure. It provides data protection and resiliency through two-copy mesh mirroring of randomly sliced and distributed “data chunks” across multiple storage devices and servers. If a server or storage outage occurs, ScaleIO automatically rebuilds the failed blocks and rebalances the data to self-heal the cluster. Storage and server outages are tolerated and handled automatically without disrupting overall system operation. Data rebuilds can be monitored and throttled by administrators according to business hours or other factors.
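To illustrate the general idea (a toy sketch of two-copy mesh mirroring, not ScaleIO's actual placement algorithm): slice the data into chunks, place two copies of each chunk on two different nodes, and when a node fails, re-create the lost copies from the surviving ones.

```python
import random

def place_chunks(num_chunks, nodes):
    """Place two copies of each chunk on two distinct nodes (toy mesh mirroring)."""
    return {chunk: random.sample(nodes, 2) for chunk in range(num_chunks)}

def rebuild(placement, failed, nodes):
    """Re-mirror any chunk that lost a copy onto a surviving node."""
    survivors = [n for n in nodes if n != failed]
    for chunk, copies in placement.items():
        if failed in copies:
            remaining = [n for n in copies if n != failed]
            # pick a new home that doesn't already hold this chunk
            new_home = random.choice([n for n in survivors if n not in remaining])
            placement[chunk] = remaining + [new_home]
    return placement

nodes = ["sds1", "sds2", "sds3", "sds4"]
placement = place_chunks(1000, nodes)
placement = rebuild(placement, "sds3", nodes)
# after the rebuild, every chunk still has two copies, none on the failed node
assert all(len(set(c)) == 2 and "sds3" not in c for c in placement.values())
```

Because every node holds copies of chunks mirrored from every other node, a rebuild pulls data from all survivors in parallel rather than from a single mirror partner, which is why it can be fast.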
This thing has some pretty cool features, including multi-tenant capabilities, QoS, snapshots, and thin provisioning, on top of the protection delivered by two-copy mesh mirroring.
So I built 4 servers on my desktop PC (using VMware Workstation).
Figure 1 – 4 x Virtual Machines – Windows 2012 R2
These were all Windows Server 2012 R2 with:
- 2 vCPU
- 2 GB RAM
- 60 GB O/S Disk
- 120 GB Data Disk
I made a few tweaks in Windows to allow remote installs to work in a workgroup, and made sure the machines were all configured consistently. (The data disks were initialised with a simple partition, but not formatted.)
I then followed the installation guide and setup to get it working.
The first thing I did was copy a few chunks of data across to the shared disk.
This is MB/s, not Mb/s, so… 138.9 MB/s is over 1,100 Mb/s. (Not too shabby for my desktop!)
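The megabytes-to-megabits conversion is just a factor of eight:

```python
def mbytes_to_mbits(mb_per_s: float) -> float:
    """Convert throughput in MB/s (megabytes) to Mb/s (megabits)."""
    return mb_per_s * 8

print(mbytes_to_mbits(138.9))  # 1111.2 Mb/s
```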
Figure 2 – Dashboard showing an initial file copy speed
This is the backend screen showing the bandwidth coming from each disk.
Figure 3 – File copy view at the backend
What does this ring of colours and stuff mean?
Figure 4 – Legend showing what things mean
Then, because I like to break things, I randomly shut down an SDS node and watched it rebuild the data. Because the data is protected across the nodes, losing a node will not cause data loss, and when the node comes back online or is rebuilt, ScaleIO will figure it out and rebuild or rebalance accordingly.
Figure 5 – Dashboard View of Rebuild Process
Figure 6 – Backend view of Rebuild Process
Figure 7 – Backend view of a rebalance as opposed to a rebuild
My poor little desktop is getting a bit smashed with these tests, but consider this: I'm getting these numbers on my home desktop, so what would a cluster of 5 x DL360s running this on bare metal with SSDs and SAS disks get?
Then I started to think about how many IOPS this thing could actually push on my desktop hardware.
I used this good old guide to set up IOMeter for a VDI-style workload.
Figure 8 – One variant of the settings I used with IOMeter
Got IOPS?
I ran this against 4 workers and was able to get about 16,000 IOPS, which isn't too bad.
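For a rough sense of scale, IOPS translate to bandwidth via the block size. Assuming the 4 KB blocks commonly used for VDI-style IOMeter profiles (the exact block size from the settings I used isn't shown here), it works out like this:

```python
def iops_to_mb_per_s(iops: int, block_size_bytes: int = 4096) -> float:
    """Convert an IOPS figure to throughput in MB/s (1 MB = 10**6 bytes)."""
    return iops * block_size_bytes / 1e6

print(iops_to_mb_per_s(16000))  # 65.536 MB/s at 4 KiB blocks
```

Small random blocks are why an array can be saturated on IOPS while its raw bandwidth figure looks modest compared with a sequential file copy.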
Figure 9 – Dashboard view of IOPS for a VDI type workload
This is a screenshot of the backend IOPS distribution between the nodes, plus the aggregate.
Figure 10 – Backend IOPS showing for the VDI type workload
EMC calls this a disruptive technology, and I think they are very right. This type of thing could allow schools and businesses to use commodity hardware to create extremely high-performance storage arrays that are pretty easy to manage.
I also think there are opportunities out there for things like backup-as-a-service and offsite storage. This environment would allow a cluster of servers to provide cheap storage on lots of slow disks with N+1 or N+2 capabilities.
And here is the zinger… anyone can download it for an unlimited time, with unlimited capacity, for free… really… no kidding, free… what the hell. (And yes, you can purchase a production licence with support if you want to.)
Anyway, that’s me for the weekend.