
Dell EMC PowerEdge R340 Review

Introduction

The Dell EMC PowerEdge R340 is one of Dell’s compact and powerful 1RU entry-level servers, targeted at small offices and remote offices, and it really hits the mark. Normally you spend your time working out which trade-offs you are willing to accept and how to make things work; while there are trade-offs with the R340, they are not show-stoppers, and you have a range of options and flexibility to choose from.

Tech Specs

With a single Intel Xeon E-2100 series CPU, expandability through two PCIe Gen 3.0 slots and up to 64GB of RAM, this little server packs a lot in. It can be powered by either a single or dual 350W or 550W hot-plug redundant power supplies, and it supports either four 3.5″ or eight 2.5″ drive bays, giving you a range of flexible storage options to tackle different workloads and a wide range of uses.

If you need a highly capable and compact server, this one might just tick all of your boxes. With 64GB of DDR4 RAM and an Intel Xeon E-2186G (3.8GHz, 12MB cache, 6C/12T) driving one of these, you get a lot of bang in a very small box, not to mention up to four 14TB 3.5″ drives providing a huge amount of capacity, or eight 3.84TB SSDs providing raw speed and capacity.
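
As a rough, hypothetical sizing sketch (the RAID level and overheads below are my assumptions, not Dell figures), the raw numbers work out like this:

# Hypothetical capacity sketch for the two drive-bay options (raw, before filesystem overhead).
nlsas_raw_tb = 4 * 14.0                  # four 14TB 3.5" drives
ssd_raw_tb = 8 * 3.84                    # eight 3.84TB 2.5" SSDs
nlsas_raid5_tb = nlsas_raw_tb - 14.0     # assumed RAID 5: one drive's worth of parity
ssd_raid5_tb = ssd_raw_tb - 3.84
print(f'3.5" option: {nlsas_raw_tb:.1f}TB raw, ~{nlsas_raid5_tb:.1f}TB usable in RAID 5')
print(f'2.5" option: {ssd_raw_tb:.2f}TB raw, ~{ssd_raid5_tb:.2f}TB usable in RAID 5')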

This CPU, RAM and storage capacity is backed up by two expansion slots that can take both high-speed networking (e.g. dual 10Gb) and Fibre Channel cards, allowing for high-speed SAN connectivity and expanding the storage capabilities beyond what can be catered for internally.

Processor

  • One processor from the following:
    • Intel® Xeon® E-2100 product family
    • Intel® Pentium®
    • Intel® Core™ i3
    • Intel® Celeron®

Bezel

  • Optional security bezel

Chassis

  • Form Factor: Rack (1U)

Dimensions:

  • Chassis width: 434.00mm (17.08″)
  • Chassis depth: 595.63mm (23.45″) (3.5″ HDD config)
  • Chassis depth: 544.836mm (21.45″) (2.5″ HDD config)
    • Note: These dimensions do not include the bezel or redundant PSU
  • Chassis weight: 13.6 kg (29.98 lb)

Memory*

  • DIMM Speed: Up to 2666MT/s DDR4 DIMMs
  • Memory Type: UDIMMs
  • Memory Module Slots: 4
  • Maximum RAM: Up to 64GB

Hard Drive

  • Up to 8 x 2.5″ hot-plug SAS, SATA or SSD drives
  • Up to 4 x 3.5″ hot-plug SAS, SATA or SSD drives

Management

Embedded / At-the-Server

  • iDRAC9 with Lifecycle Controller 
  • iDRAC Direct
  • iDRAC RESTful API with Redfish

Consoles

  • OpenManage Enterprise
  • OpenManage Essentials
  • OpenManage Power Center

Mobility

  • OpenManage Mobile

Tools

  • iDRAC Service Module
  • OpenManage Server Administrator
  • Dell EMC Repository Manager
  • Dell EMC System Update
  • Dell EMC Server Update Utility
  • Dell EMC Update Catalogs
  • Dell EMC RACADM CLI
  • IPMI Tool

OpenManage Integrations

  • Microsoft® System Center
  • VMware® vCenter™
  • BMC Truesight (available from BMC)
  • Red Hat Ansible

OpenManage Connections

  • Nagios Core & Nagios XI
  • Micro Focus Operations Manager i (OMi)
  • IBM Tivoli Netcool/OMNIbus

Network Controller

  • 2 x 1GbE LOM Network Interface Controller (NIC) ports

Power Supplies

  • Single or dual 350W or 550W hot-plug redundant power supplies

Storage Controllers

  • Internal controllers: PERC H730P, H330, HBA330 (non-RAID)
  • Software RAID: PERC S140 
  • External HBAs: 12Gbps SAS HBA (non-RAID)
  • Boot Optimized Storage Subsystem: 2x M.2 240GB (RAID 1 or No RAID), 1x M.2 240GB (No RAID only)

I/O Expansion Slots

2 x PCIe Gen 3.0 slots

  • One x8 slot, low-profile, half-length, with x4 bandwidth
  • One x16 slot, low-profile/full-height, half-length, with x8 bandwidth

Ports

Front:

  • One USB 2.0
  • One micro USB 2.0 port for iDRAC Direct

Rear:

  • Two USB 3.0
  • One VGA
  • One serial connector

Internal port:

  • One USB 3.0

Supported Operating Systems

  • Microsoft Windows Server® with Hyper-V
  • Red Hat® Enterprise Linux
  • SUSE® Linux Enterprise Server
  • VMware® ESXi
  • Citrix® XenServer®
  • Ubuntu Server

Rack Support

ReadyRails™ sliding rails with optional cable management arm for 4-post racks (optional adapter brackets required for threaded hole racks).

Remote Management

For a small box it doesn’t slack off when it comes to remote management: the iDRAC9 controller provides enterprise-grade remote management and security capabilities. It can be ordered with a dedicated NIC, or it can utilise one of the onboard NICs for connectivity.
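
As a quick illustration of the Redfish side of iDRAC9, the sketch below pulls some basic inventory and health details over the iDRAC RESTful API; the address and credentials are placeholders, and the exact resource path should be checked against the Redfish API guide for your iDRAC firmware.

import requests
IDRAC = "https://192.168.0.120"        # placeholder iDRAC address
AUTH = ("root", "calvin")              # placeholder credentials - change these!
# Query the system resource exposed by the Redfish service on the iDRAC.
resp = requests.get(IDRAC + "/redfish/v1/Systems/System.Embedded.1", auth=AUTH, verify=False)
resp.raise_for_status()
system = resp.json()
print("Model:", system.get("Model"))
print("Power state:", system.get("PowerState"))
print("Health:", system.get("Status", {}).get("Health"))
print("Memory (GiB):", system.get("MemorySummary", {}).get("TotalSystemMemoryGiB"))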

Closing thoughts

For a lot of customers, this box could be considered laughably small, with a single socket, only 64GB of RAM and limited internal storage; however, those customers are looking at the wrong server and they know it. For customers looking at small office, remote office, remote campus or departmental servers, or even physically isolated or segregated workloads, this little powerhouse is great. I could easily see these being deployed out to dozens of remote offices to run an office-in-a-box style solution, possibly backed by an IDPA DP4400 or a small Data Domain for backups.

Depending on your requirements, it could be almost impossible to go past the PowerEdge R340 server; I would highly recommend it for a wide range of use cases!

Links

https://www.dell.com/en-au/work/shop/povw/poweredge-r340

PowerEdge R340 Spec Sheet

PowerEdge R340 Technical Guide

Dell EMC PowerEdge 14th Generation 1-socket servers

PowerEdge Rack Servers Quick Reference Guide

PowerEdge Server Solutions Brochure


Aruba 2930M Series Switch

I’ve recently been spending some time with the Aruba 2930M series switches and found them worth a quick write-up. With today’s converged networking carrying more traffic from a wider range of sources, networking infrastructure is more critical than ever! The 2930Ms have been designed with IoT and the explosion of wireless in mind.

The companies I’m dealing with are adding endpoints to their networks at an astounding rate of knots: 802.11ac (and ax) access points, network sensors, security cameras, lights, door controllers, not to mention phones, tablets, laptops, desktops, servers, firewalls and other systems and technology.

With all of this new and old technology, cost and capabilities drive what gets adopted and how quickly. The core importance of reliability, manageability, security and scalability is often overlooked, but is key to the success of any project or piece of work that will be networked! The Aruba 2930M ticks all of these boxes, and overlaid with APIs and software-defined networking at its heart, these are very capable switches that give a huge bang for your buck.

One of the key features that anyone looking at these switches needs to look into is the management capabilities. You can manage your network either through the on-premises Aruba AirWave product or in the cloud with Aruba Central; both are worth a look for their capabilities and features in their own right.

Link – Aruba 2930M Switch Series

  • Simple deployment, provisioning and management with advanced security and network management tools like Aruba ClearPass Policy Manager and Aruba AirWave, plus cloud-based Aruba Central with Zero Touch Provisioning.
  • Enhanced security with Tunnel Node, so you can use the mobility controller as a unified policy enforcement point for traffic from both wired and wireless clients. I wrote a blog about how it simplifies policy management and ensures consistent access and permissions.
  • Plenty of PoE+ that keeps running, with dual redundant, hot-swappable power supplies delivering up to 1440W to power IoT devices, 802.11ac APs and cameras (see the quick power-budget sketch after this list).
  • Pay-as-you-grow modular wire-speed 10GbE and 40GbE uplinks for scalable capacity back to a larger aggregation switch.
  • Enormous stacking capability with up to 10 chassis, so you can quickly grow your network when new devices show up.
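
As a rough back-of-the-envelope check of what that PoE budget buys you (the per-device power draws and counts below are my assumptions, not Aruba figures):

poe_budget_w = 1440                       # maximum PoE+ budget quoted above
devices = {                               # assumed device counts and average draw (watts)
    "802.11ac AP": (20, 30),
    "IP camera": (12, 13),
    "IoT sensor": (10, 5),
}
total_w = sum(count * watts for count, watts in devices.values())
print(f"Estimated draw: {total_w}W of {poe_budget_w}W ({poe_budget_w - total_w}W headroom)")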

Many of my clients are adding new 802.11ac access points, in a lot of cases dozens and in some cases hundreds, as well as high-resolution IP security cameras, power sensors, motion sensors, humidity, temperature, water-quality and air-quality sensors and other IoT devices; the growth is astounding. These devices all need connectivity, and in a lot of cases they need power as well, as they are frequently powered through PoE from the switch or connect wirelessly via an AP that is itself connected to the switch. The core and edge switching that clients adopt has been critical to their success with, and adoption of, these new technologies, and the Aruba 2930 switches have been an extremely capable and reliable addition to their networking portfolios.

The Aruba 2930M Switch Series are Layer 3 switches that support Tunnel Node, robust QoS, static, RIP and Access OSPF routing, PIM, VRRP and IPv6. One of the things I love about these Aruba 2930Ms is that there is no software licensing required for these features!

Software Defined Networking (SDN) is one of the hottest topics in networking today. The value that SDN brings to larger networks is undisputed, both in features and capabilities and in the value driven through faster and more agile change with less time and effort from engineers and network administrators. The 2930M has the powerful Aruba ProVision ASIC, which supports the latest SDN apps with programmability for today’s, tomorrow’s and many future applications. Customers wanting to adopt digital workplaces, and who are seeing IoT and the big push towards pure mobility (i.e. 100% wireless user devices), are perfect fits for this switch!
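
To give a feel for that programmability, here is a minimal sketch against the ArubaOS-Switch REST API; the address, credentials and endpoint paths are assumptions based on the published API guide, so verify them against the REST API reference for your firmware before relying on this.

import requests
SWITCH = "https://10.0.0.2"                              # placeholder management address
CREDS = {"userName": "manager", "password": "secret"}    # placeholders - change these!
session = requests.Session()
session.verify = False                                   # lab only: self-signed certificate
# Log in; the session cookie is reused for subsequent calls (assumed v1 endpoints).
session.post(SWITCH + "/rest/v1/login-sessions", json=CREDS).raise_for_status()
# Read-only example: list configured VLANs (response key assumed from the API guide).
vlans = session.get(SWITCH + "/rest/v1/vlans").json()
for vlan in vlans.get("vlan_element", []):
    print(vlan.get("vlan_id"), vlan.get("name"))
# Log out cleanly so we don't exhaust the switch's session table.
session.delete(SWITCH + "/rest/v1/login-sessions")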

The last thing to touch on is that while these are fixed-port switches (i.e. 24 and 48 port), they also have modular capabilities (the M in 2930M), and there are some great options for using modules for connectivity.

https://www.arubanetworks.com/assets/ds/DS_2930MSwitchSeries.pdf

Problems? Well, yes, I did have one issue that caught me by surprise and was more than a little disappointing. The hot-swap PSU was very difficult to install and seat into the switch; this ended up being due to the fan sitting slightly high and hitting the switch chassis before the PSU could be inserted fully. I’ve come to expect more from HPE/Aruba build quality, and this one had me second-guessing whether I had the right PSU until I realised what the problem was.

In conclusion, these are some great switches with a lot of enterprise capabilities; they can deliver fantastic connectivity at the edge or in smaller data centre environments where you haven’t upgraded to the Aruba 5400 series modular switches.

On value for money, the Aruba 2930Ms have a very good cost-per-port ratio, and the included features and manageability are great.


Everki Titan Backpack Review



Travel Friendly Laptop Backpack

Intro

To begin with, I love my laptop bags. I’m always trying to find the perfect bag for my requirements and desires; the problem is that my requirements change on a regular basis and are highly dependent on the ‘persona’ and role that I’m playing at the time.

I can go from living off a Microsoft Surface Pro for a couple of weeks to lugging around a high-end workstation laptop and running half a dozen VMs in a demo. My personal laptop is a Dell XPS 15 9560, which, while not a large laptop, isn’t a small one either.

Devices that I’ll carry with me on a regular basis include:

  • Dell XPS 15 9560
  • Lenovo T460s
  • Microsoft Surface Pro 3
  • Dell Latitude 7480

I will often also carry the Lenovo T460s when I’ve got one of the other devices as well, adding to my load.

While looking for a laptop bag to carry some more kit, one of my key decision factors was air travel; I will be taking this bag on flights and need to be able to carry it on with me.

The Titan has lots of large compartments, but that’s not what makes it so innovative. Everki designed intuitive spaces for smart organization, buckles and straps positioned for on-the-fly adjustments; and pockets, pouches and slots for fingertip access. The extra large laptop compartment is roomy enough for the most behemoth 18.4-inch laptop. In the main storage area, you’ll find large, divided spaces which allow you to keep chargers and power supplies separate from fragile files and documents.

The Bag

The way this bag has been put together has really impressed me; the construction is solid and really well done. They have not skimped on the materials, the zipper tags or the tough stitching! This is one damn well-made bag.

I’m not sure about you, however when my backpack is fully loaded it gets heavy, very heavy at times, so I appreciate the engineered 5-point balance strap system that ergonomically distributes the weight and reduces muscle strain (I recently tweaked my back, so believe me when I say this made a big difference).

I normally carry a bit, but the below is my minimum load; this is the whittled-down loadout after I tweaked my back. There are normally a number of other pouches, cables and connectors, as well as a portable HDD, etc.

With straps adjustable at the shoulders, 2 quick-slide straps at the bottom and one across the chest; you have total control over weight distribution. This allows you balanced comfort, even when your backpack is full.

I also found one of the side pockets very handy for holding a bottle of drink. I’ve shown a 600ml bottle of Coke in there; this bag is so huge that it makes it look like a 300ml baby bottle!

This might not seem like much, but having a drink handy and in reach is really appreciated when you’re on the go.

Below are a number of pictures I’ve taken of the bag. One of the things I don’t think is shown well is the size of the laptop compartment; this bag is a monster that swallows my laptops without even twitching!

Yet another of the handy features (which is becoming more common), is the Accessories Pouch. No one wants their power brick to get in the way or become a tangled mess. That’s why Everki designed the Accessories Pouch to separate your mouse, chargers, power supplies and cables from your more sensitive items.

There is also a good amount of padding around the laptop compartment. I’m always worried about the corners of my laptop and the somewhat flimsy padding that a lot of bags have; no concerns with that here.

Conclusions

I’ve been really happy with this bag so far. It’s amazingly well made and it’s big enough to hold everything I need with space to spare. It has all the pockets that I regularly use and it can take a beating.

I spent my own money on this bag and I’m very happy with the investment!

I’m giving the Titan a score of 9 out of 10


NetApp AltaVault Review

I have recently been working with a number of clients who are struggling with uncontained data growth and retention requirements. This is something that everyone across the industry has talked about and known about for a long time, so it’s no surprise that more and more people need solutions, and quickly. The old ways of doing things are not working anymore; they are costing more, consuming more storage infrastructure and draining employee hours, and to make matters worse, it’s senior staff whose time is being consumed on an ongoing basis.

There are a number of different solutions that organisations have been leveraging to address this problem; the oldest tried and true solution is backup and archive to tape: large tape libraries with multiple tape drives writing out huge amounts of data to tape again and again. The problem with this is that it is very cold data with very little value to the business in today’s fast-moving world, where immediate responses are required, not just desired. This makes the time and investment that everyone has been making in putting data on tape a huge waste; tape has only ever really solved a very small number of issues for businesses.

What I’m seeing is a significant change in how backups and archiving are being addressed in many organisations. People are starting to be much smarter about what they are backing up and how long they are keeping that data. The difference between backup data and archival data is being discussed, we are hearing questions around the value of data more frequently, and we are also being involved in in-depth and robust discussions around data retention, data destruction and data sovereignty.

One of the biggest red flags that I see regularly is when questions are answered with “Because that’s how we’ve always done it” or “That’s how it was when I got here“. These answers are coming back from both business stakeholders and from the technology staff who are managing and supporting backups.

The two most successful solutions that I’ve worked with are the EMC Data Domains and the NetApp AltaVaults; these are Purpose-Built Backup Appliances (PBBAs).

So why have PBBAs been so successful and what makes them stand out?

  • Plug and Play deployment – there is no need to change your backup application
  • High performance – DataDomain in particular has excellent ingest rates
  • High data reduction rates – due to the use of variable segment size de-duplication
  • Proven scalability and reliability – solutions are widely deployed at many enterprise organisations
  • Supported and validated – known good designs and support from the vendor

NetApp’s AltaVault tackles a range of the issues faced by organisations in today’s market:

  • Too slow. Users expect instant recovery and minimal data loss, but legacy backup and recovery strategies can’t keep pace. As a result, many organisations fail to meet backup and recovery windows. (windows which are quickly shrinking)
  • Too expensive. As storage grows, organisations struggle with the rising cost of protecting that data on premises. Bandwidth costs and constraints are also more of an issue with larger datasets, as are other factors such as power availability, carbon footprint and physical capacity.
  • Too risky. Many organisations still rely on tape, which increases risk exposure because of the potential for lost media in transport, increased downtime and data loss, and limited testing ability.
  • Too complex. With an ever-increasing number of critical applications to protect, along with complex backup architectures, multiple backup apps, and error-prone legacy technologies, backup is incredibly complex and difficult to manage.

What has and is currently changing?

The leading trend providing relief to these problems comes in two parts: staging backups onsite in a Purpose-Built Backup Appliance (PBBA), then replicating the data to a cloud storage provider or to tape for archival storage.

NetApp AltaVault cloud-integrated storage enables customers to securely back up data to any cloud at up to 90% lower cost compared with on-premises solutions. AltaVault gives customers the power to tap into cloud economics while preserving investments in existing backup infrastructure and meeting backup and recovery SLAs.

AltaVault physical appliances

The AltaVault physical appliances are among the industry’s most scalable cloud-integrated storage appliances, with capacities ranging from 32TB up to 384TB of usable local cache. Enterprises often deploy AltaVault physical appliances in their data centres to protect large volumes of data. These datasets typically require the highest levels of performance and scalability available. AltaVault physical appliances are built on a scalable and efficient hardware platform that is optimised to reduce data footprints and rapidly stream data to the cloud.

AltaVault Virtual Appliances

AltaVault virtual appliances are an ideal solution for medium-sized businesses that want to get started with cloud backup. They’re also ideal for enterprises that want to protect branch offices and remote offices with the same level of protection they enjoy in the data centre. AltaVault virtual appliances provide the flexibility of deploying onto heterogeneous hardware while still providing all of the features and functionality of hardware-based appliances. AltaVault virtual appliances can be deployed onto Kernel-based Virtual Machine (KVM), Microsoft Hyper-V, and VMware vSphere, enabling customers to choose the hardware that works best for them.

AltaVault Cloud-Based Appliances

For organisations without a secondary disaster recovery location or for companies looking for extra protection with a low-cost tertiary site, cloud-based AltaVault appliances on Amazon Web Services (AWS) and Microsoft Azure are the key to enabling cloud-based recovery. Using on-premises AltaVault physical or virtual appliances, data is seamlessly and securely backed up to the cloud. If the primary site is unavailable, you can quickly spin up a cloud-based AltaVault appliance on AWS or Azure and recover data in the cloud. Pay only for what you use when you use it with usage-based pay-as-you-go pricing. If you already have production workloads running in the public cloud, you know that protecting those workloads in the cloud is just as critical as if they were running on premises. Cloud-based AltaVault appliances offer an efficient and secure approach to backing up cloud-based workloads. Using your existing backup software, AltaVault cloud-based appliances deduplicate, encrypt, and rapidly migrate data to long-term, low-cost cloud storage.

Cold storage and archives

AltaVault can be configured in cold storage mode, which is optimised for infrequently accessed data and archives. In this mode the AltaVault appliance uses the local cache for metadata only, increasing the scalability of the solution by more than 5 times. This results in minimal management and operational costs for cold data, while still allowing fast and simple access to the data when needed.

Disaster recovery

Perhaps your organisation has no secondary disaster recovery location or you are looking for extra protection with a low-cost tertiary site. With cloud-based NetApp AltaVault appliances on Amazon Web Services (AWS) or on Microsoft Azure, you can quickly recover data after a disaster. When you use on-premises AltaVault physical or virtual appliances, data is seamlessly and securely backed up to the cloud. If the primary site is unavailable, you can quickly spin up a cloud-based AltaVault appliance and recover data in the cloud. And with usage-based pay-as-you-go pricing, you pay only for what you use.

Key Features

Unparalleled efficiency

• Industry-leading data reduction. AltaVault uses inline deduplication and compression, resulting in up to 30:1 data reduction ratios. This means that you store less data in the cloud and you can get it there more quickly.

• Network and cloud optimisation. Built-in WAN optimisation and deduplication reduce the amount of data that is transported to cloud and increase transfer speed by up to 4 times. AltaVault intelligently throttles data, which saves you money (and time). Quality of service (QoS) confirms that data moves to and from cloud storage at the speed that your business requires.

• Faster data restoration. AltaVault improves recoverability because 95% of restores occur from local cache. With intelligent prefetching, AltaVault restores data from the cloud within minutes. With AltaVault, you can restore data up to 32 times more quickly than with tape.

Enterprise scale on an open platform

• Flexible deployment and scale. Choose the offering that is right for you: physical, virtual, or cloud-based in the AWS or the Azure clouds. AltaVault appliances can start as small as 2TB and scale up to 57PB of protected data in the cloud, making it one of the most scalable cloud-integrated storage products on the market.

• Compatibility with existing backup software. Do you love your backup software? So do we. AltaVault is compatible with all leading backup and archival software solutions, including NetApp SnapMirror®, Commvault, EMC, IBM, Veeam, and Veritas. Don’t waste time and effort on ripping and replacing your backup software and retraining IT staff.

• Cloud choice and agility. Pick a cloud, any cloud, and AltaVault likely supports it. AltaVault supports 95% of all leading cloud storage providers and platforms on the market today, including AWS, Azure, Google, IBM, and NetApp StorageGRID® Webscale. Cloud providers rise and fall, and you never want your data stuck with a cloud provider. AltaVault cloud agility allows you to free your data.
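
To put the reduction and bandwidth claims above into perspective, here is a rough worked example; the dataset size, achieved reduction ratio and link speed are illustrative assumptions, not measured AltaVault figures.

protected_tb = 100        # assumed logical backup data (TB)
reduction_ratio = 10      # assumed achieved reduction (the datasheet quotes "up to 30:1")
link_mbps = 500           # assumed WAN bandwidth available for replication (Mb/s)
stored_tb = protected_tb / reduction_ratio
seed_days = (stored_tb * 8 * 1000 * 1000) / link_mbps / 86400   # TB -> megabits, then seconds -> days
print(f"Stored in cloud: ~{stored_tb:.0f}TB; initial seed over {link_mbps}Mb/s: ~{seed_days:.1f} days")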

Ironclad security and compliance

• Is encrypted and compliant. Data is encrypted and secure at all times—in flight and at rest—using AES 256-bit encryption, with FIPS 140-2 level 1–validated encryption and industry-standard Transport Layer Security encryption. You can manage encryption keys locally with the Key Management Interoperability Protocol without having to leave your data center.

• Controls your data. Confirm that only qualified employees can access data in AltaVault, with role-based access controls and integration with the Terminal Access Controller Access Control System and Remote Authentication Dial-In User Service. Use the management access control list to secure the appliance by exposing only the necessary protocols, ports, and networks.

• Shrinks recovery time and data loss. Reduce recovery time objectives by retrieving data more quickly in a disaster, using locally cached backups on the AltaVault appliance. Reduce recovery point objectives by immediately getting data off site, so that recovery points are as recent as your latest backup rather than the last time that tapes were shipped off site.

• Prepares for disasters. Recover on-premises workloads in the cloud during disaster recovery tests or declarations with the AltaVault cloud-based appliances on AWS and Azure.

Easy to deploy and manage

• Simplify deployment. Get up and running with AltaVault in less than 30 minutes. In three simple steps, you can start sending data to the cloud.

• Manage smarter. Reduce tape management overhead and manual tasks that are error-prone and time-consuming, allowing you to allocate valuable employee resources to other projects. Remote monitoring and management capabilities from a GUI-driven management dashboard give administrators complete control from across the data center or across the world.

• Get to a cloud-connected ready state with services. Quickly meet the technical and business requirements for your new AltaVault solution with installation and implementation services from NetApp and our certified services partners. Using best practices our services team configures your new solution into your existing environment and brings your device to a cloud-connected ready state. Get the outcomes you expect while lowering your deployment risks and costs with our proven implementation methodology.

AltaVault Lowers Adoption Barriers for Cloud Storage

Many organisations are excited by the prospect of leveraging cloud economics but soon find themselves daunted by barriers that are associated with the public cloud. The AltaVault cloud-integrated storage appliance was designed to help customers vault over these barriers by:

• Maintaining security and compliance in a multitenant environment.

• Easily managing large volumes of data, even with limited bandwidth.

• Enabling simple deployment for a faster cloud on-ramp.

• Giving employees rapid access to data.


Architectural Decision – Cloud Migration

I wanted to work through a solution that I proposed recently that involved taking a client to the cloud.

 

Scenario / Customer Requirements

My client is overdue for a major infrastructure refresh. They have had significant growth over the last 18 months, and their internal ICT systems have been put under unexpected pressure as a result. They have had to go through unplanned and unstructured expansion with a ‘just get it done’ mindset, and the internal ICT team has been stretched and in firefighting mode for over 12 months.

 

During the initial client engagement meetings, the business identified that:

  • There is insufficient manpower to make significant change in their business
  • There have been several hardware failures of equipment with little or no redundancy and a number of single points of failure
  • A finance audit of the leasing schedule raised issues with the life of equipment and its lease period
  • The ICT Team has asked for a high capital expenditure for new equipment which has been declined

 

Technical Environment

  • ESXi 5.0 environment
  • Various storage platforms in production use including a 2.8 year old SAN, 3 direct attached storage shelves and several NAS units.
  • 4 x Dell servers (high end) – 2.8 years old with 3 years maintenance. Currently consuming on average approx. 40% CPU and 92% RAM
  • 4 x HP Servers, between 3 and 6 years old. Currently running file and infrastructure services
  • 4 x Workstations (HP Z Series workstations), running test and development workload in the server room
  • HP Networking equipment
  • Backup Tape library has failed and not covered by warranty
  • Limited to 1 Gig networking through the office (some servers have 10gb cards)
  • Building has fibre termination from 3 providers
  • 50 Meg Internet (fibre to customer rack)
  • Server room power is at capacity

 

Assumptions

While the business has seen significant growth, the ICT team has stayed the same size, with staff coming and going but no real increase in FTE count. Their systems have grown slightly; however, interviews and a review of systems don’t reflect large growth in CPU or RAM. The main capacity increases were in storage and are aligned with known project workloads.

 

Constraints

The business can’t afford unexpected capital expenditure in this or the next budget cycle; the business is focused on growth and expansion, and significant financial resources have been tied up in securing that growth.

 

Compelling Reasons and Business Drivers

  • Business can’t afford to purchase new servers, licensing, backup infrastructure and storage
  • Outages have affected business productivity
  • Existing team is under unacceptable levels of stress and pressure
  • Business is more interested in known operational costs than large one off capital costs.

 

Architectural Decisions 

  • Migrate capital costs to operational cost.
  • Move environment to cloud service provider
  • Decommission Server infrastructure
  • Decommission Storage infrastructure
  • Upgrade existing Internet connection

 

Justifications

  • Meetings with client did not reflect any appetite for a refresh of existing kit
  • Aversion to capital costs at this point
  • Desire by ICT team to work smarter not harder

Impacts

  • The ICT team will not survive for another 2 years; staff turnover would be approx. 100%
  • Stress is affecting both the personal and professional lives of all ICT staff
  • Outages and downtime are costing the business measurable and significant amounts of money
  • ICT is unable to support the required business expansion that is scheduled over the next 3 years

 

 

Alternatives

  • Leasing or other financial cost off setting solutions
  • Outsourcing infrastructure and management of ICT services to service provider

 

Solution

  • We provided them with a pathway to the cloud utilising an operational cost model and a hybrid cloud solution, which allowed them to migrate their critical production workloads into the cloud. This solved a significant number of their issues and freed up onsite resources and capacity to provide a higher level of service to the business.
  • The cloud networking solution provided a platform that will allow the business to embrace a number of cloud solutions (PaaS and SaaS) that have never been viable previously.

 

Next Steps

The next steps for this client revolve around the migration of their test and development workloads from onsite, ageing HP workstations to the public cloud, where they can take advantage of the elasticity and flexibility of the environment.

Thoughts and comments welcome

 


My EMC ScaleIO Adventures

I thought I would share some of my weekend entertainment. This weekend I grabbed a copy of EMC ScaleIO to have a play with, as it looks like a seriously awesome piece of software that was previously only available to the big boys in enterprise or corporate IT. EMC ScaleIO is a software-only solution that uses your existing hardware (i.e. existing physical servers, hosts, etc.) to turn existing DAS (internal hard drives or storage) into shared block storage!

This is at its core a Software Defined Storage Platform that is very powerful. With ScaleIO, there is no single point of failure. It provides data protection and resiliency through two-copy meshed mirroring of randomly sliced and distributed “data chunks” across multiple storage devices and servers. If a server or storage outage occurs, ScaleIO automatically rebuilds the failed blocks and rebalances the data to self-heal the cluster. Storage and server outages are tolerated and handled automatically without disrupting overall system operation. Data rebuilds can be monitored and throttled by administrators according to business hours or other factors.
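
As a conceptual illustration only (this is not ScaleIO’s actual placement algorithm), the sketch below shows the basic idea of slicing a volume into chunks and keeping two copies of each chunk on two different nodes, so losing any single node still leaves one copy of everything:

import random
nodes = ["sds-1", "sds-2", "sds-3", "sds-4"]        # hypothetical SDS nodes
chunks = [f"chunk-{i:03d}" for i in range(12)]      # a volume sliced into chunks
# Place two copies of every chunk on two *different* nodes (the mesh-mirroring idea).
placement = {c: random.sample(nodes, 2) for c in chunks}
def survives(failed_node):
    # True if every chunk still has at least one copy on a surviving node.
    return all(any(n != failed_node for n in copies) for copies in placement.values())
for node in nodes:
    print(f"Lose {node}: all data still available = {survives(node)}")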

This thing has some pretty cool features including multi-tenant capabilities, QoS, snapshots, and thin provisioning as well as protection delivered by two copy mesh mirroring.

So I built 4 servers on my desktop PC (using VMware Workstation).


Figure 1- 4 x Virtual Machines – Windows 2012 R2

These were all Windows 2012 R2 with

  • 2 vCPU
  • 2 GB RAM
  • 60 GB O/S Disk
  • 120 GB Data Disk

I made a few tweaks around Windows to allow remote installs to work in a workgroup and made sure they were all configured consistently. (The data disk on each was initialised and had a simple partition, but was not formatted.)

I then followed the installation guide and setup to get it working.

The first thing I did was copy a few chunks of data across to the shared disk.

This is MB/s not Mb/s, so… 138.9 MB/s = over 1,100 Mb/s (not too shabby for my desktop!)
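
For anyone wanting to sanity-check the conversion, it’s just the factor of eight bits per byte:

throughput_MBps = 138.9                      # megabytes per second, as reported by the copy
throughput_Mbps = throughput_MBps * 8        # megabits per second
print(f"{throughput_MBps} MB/s = {throughput_Mbps:.1f} Mb/s")   # ~1111.2 Mb/s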


 

Figure 2- Dashboard showing an initial file copy speed

 This is the backend screen showing the bandwidth coming from each disk


Figure 3 – Filecopy view at the backend

What does this ring of colours and stuff mean?

 


Figure 4 – Legend showing what things mean

Then, because I like to break things, I randomly shut down an SDS node and watched it rebuild the data. Because the data is protected across the nodes, the loss of any one node will not cause data loss, and when the node comes back online or is rebuilt, ScaleIO will figure it out and rebuild or rebalance.


 

Figure 5 – Dashboard View of Rebuild Process


Figure 6 – Backend view of Rebuild Process

 


Figure 7 – Backend view of a rebalance as opposed to a rebuild

My poor little desktop is getting a little smashed with these tests, but consider this: I’m getting these numbers on my home desktop, so what would a cluster of 5 x DL360s running this on bare metal with SSDs and SAS disks get?


 

Then I started to think about how many IOPS this thing could actually do based on my desktop hardware.

I used this good old guide to setting up IOMeter based on a VDI workload.

http://community.atlantiscomputing.com/blog/Atlantis/August-2013/How-to-use-Iometer-to-Simulate-a-Desktop-Workload.aspx

 


Figure 8 – One variant of settings that I used with IO Meter

Got IOPS ?

I ran this against 4 workers and was able to get about 16,000 IOPS, which isn’t too bad.
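
As a rough cross-check, IOPS and throughput are linked by the I/O size of the workload; assuming the profile above was driving mostly 4KB I/Os (my assumption), the equivalent throughput works out like this:

iops = 16000
io_size_kb = 4                                   # assumed I/O size for the VDI-style profile
throughput_MBps = iops * io_size_kb / 1024
print(f"{iops} IOPS at {io_size_kb}KB ≈ {throughput_MBps:.1f} MB/s")   # ~62.5 MB/s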

 


Figure 9 – Dashboard view of  IOPS for a VDI type workload

This is a screen shot of the backend IOPS distribution between the nodes and the aggregate


Figure 10 – Backend IOPS showing for the VDI type workload

EMC calls this a disruptive technology, and I think they are very right; this type of thing could allow schools and businesses to use commodity hardware to create extremely high-performance storage arrays that are pretty easy to manage.

I also think there are opportunities out there for things like backup as a service and offsite storage; this environment would allow a cluster of servers to provide cheap storage on lots of slow disks with N+1 or N+2 capabilities.

And here is the zinger… anyone can download it for an unlimited time, and get unlimited capacity, for free… really… no kidding free… what the hell. (and yes, you can purchase a production licence with support if you wanted to)

Anyway, that’s me for the weekend.


Benefits of SharePoint 2016 for schools

OK, so SharePoint 2016 is still a while away; in fact it’s currently slated for release sometime mid next year (Q2 2016). However, there have been a lot of recent discussions around its features and capabilities, as well as heated discussions about on-premises vs cloud. Most schools who decide to be early adopters of SharePoint 2016 will start the planning and project early in 2016 (Feb/March) or even later this year. While not a project to be tackled lightly, the benefits are huge and well worth the effort.

SharePoint 2016 will have a great deal of focus on content management, team collaboration, user experiences across devices, and how the cloud can be blended into existing on-premises environments. Just as we’ve seen, heard about and experienced hybrid Exchange environments, many people will start working in hybrid SharePoint 2016 environments to ease their migration path into the cloud.

SharePoint Server 2016 will deliver enhancements and new capabilities in three major areas:

  • Improved user experiences
  • Cloud-inspired infrastructure
  • Compliance and reporting

Portals

Over the years, Schools using SharePoint have built incredibly rich, dynamic portals. Now Microsoft are expanding the portfolio and delivering new ‘ready-to-go’ Portals that historically would have taken customers weeks, if not months, to build. NextGen Portals, by design, are intelligent, collaborative, mobile and ready to go. Office 365 Video, which was delivered in 2014, is the first of the NextGen Portals. Microsoft will also be adding new portals focused on knowledge management and people.

Files

In addition to the general purpose document libraries in SharePoint, OneDrive provides users with one place to easily store, sync and share personal files across devices—with security, reliability and manageability squarely in place for IT. OneDrive ultimately removes the need for personal file servers and local hard drives while enabling easier sharing and collaboration across the organization. In February 2015, Microsoft rolled out a new update and offering for iOS and Mac, which has really worked to integrate and build on the direction of Any Device, Anywhere, Anytime that so many schools have embraced.

Team sites

Within Office 365, schools have the opportunity to expand the concept of team sites. In doing so, we can bring together team content traditionally kept in SharePoint, along with the broader set of information across Office 365 including email, instant messaging, tasks, contacts, personal files, social feeds and more. This holistic team experience across Office 365 offers simplified permissions for the user and holistic management, governance and extensibility models, making it a win for IT and for educators. With Office 365 being so strongly adopted by schools, a lot of the work to enable some of these features is already done.

Business Intelligence (BI)

Nearly every school I work with is talking about big data, analytics, dashboards or student visibility; this is a major strategic direction for schools. Now with Power BI, Microsoft has moved forward the vision of creating ‘ready to go’ solutions that schools and IT can get up and running in minutes. Built on top of Office 365, Power BI provides an integrated analytical platform in the cloud that connects to your important information from where it lives, via the Excel interface users know.

What I’m personally looking forward to seeing in SharePoint 2016:

Document Discussions

This has to be one of the most exciting things coming with SharePoint 2016! Getting a group of people in the same room to work on a document collaboratively is great, but when you can’t, and you are all working on it separately, it is hard to get that collaborative experience. Having multiple people edit a document is great, and a big step, but this closes that gap and allows people to discuss and talk about the look, feel, reasons or desired outcomes of a document in a way that just wasn’t there before. I believe this will be one of those features of SharePoint 2016 that is heavily used and provides great value in the day-to-day lives of anyone who works with other people on a document.

Compliance and reporting

In schools today, Data Loss Prevention (DLP) is non-negotiable, and overexposure of information can do significant damage to a school or college’s brand, not to mention the legal and compliance implications when related to both financial and student information. SharePoint Server 2016 will provide a broad array of features and capabilities designed to make certain that sensitive information remains protected, with investments in DLP, new scenarios to enable data encryption, and compliance tools that span on-premises servers and Office 365, while providing a balance between enabling user self-service and ensuring content usage adheres to school policy.

Durable Links

Microsoft has really dropped the ball in not implementing this feature in much earlier versions of SharePoint! This is a feature that I’ve seen and used in practically every other enterprise document management system, and it removes so many of the issues surrounding document links: limited URL length, naming conventions, file relocation breaking saved links, etc. I never liked that you would get an “alternate URL” when you activated Doc IDs; this is now the default behaviour.

Microsoft is evolving SharePoint in new ways, both within Office 365 and through traditional server releases, which is a great relief for a lot of people who are either not ready to move to the cloud or just don’t want to take that step. Microsoft has had a busy year with many new innovations released, and there’s a lot more than what is highlighted above.

I’m personally very excited about the vision for Office 365 and SharePoint and can’t wait to see the technical previews and what schools do with it in 2016. If you are thinking about SharePoint 2016 for your school, let me know your thoughts below.


Cloud Backup as a Service (BaaS)

As my job is to design tailored and strategic technology solutions for school environments, I naturally talk to schools every day. When it comes to Backups, I keep hearing ‘I just want it to work’, or ‘I don’t want to worry about it’, and more frequently now ‘I just want someone else to deal with it’. I hear your pain, that’s why Backup as a Service exists!

These days it seems like everyone is getting onto the Backup as a Service (or BaaS) bandwagon! With the ongoing costs of managing, repairing, maintaining, replacing, supporting and troubleshooting LTO tapes, tape libraries, offsite storage providers, disk arrays, backup software, licensing, agents, time, warranties, manpower, power, rack space and air-conditioning… it’s all just getting a bit much, and the idea of a simple Backup as a Service solution is very compelling.

While BaaS may not be the right solution for everyone, things have certainly come a long way and are moving forward very quickly with the improvements in technology, bandwidth and cloud services. Historically, one of the biggest roadblocks to adopting a cloud backup solution was the connectivity between the school and the cloud provider. Over recent years schools have seen demand for online services go through the roof and as a result their internet connections have had to grow and scale to provide connectivity to the students in the classrooms. After school hours however, the links are frequently underutilised and have a great deal of bandwidth available.

As could be expected, many schools I speak with still have a lot of questions around how BaaS works, so I’ll endeavour to answer some here.

Backup Problems VS Cloud Backup Problems

I frequently get asked questions about cloud backup and what to do when something goes wrong or breaks. Most of the time the answer is that we do the same as if it was an on-premises backup.

I normally treat the cloud as another site: it exists somewhere, it is connected via a link of some type and it has certain capabilities and capacities. All of these things are very similar to what a remote site may look like.

How do I know if my backups are successful?

You should always have regular reports on failed backups (and successful ones as well), however the only real way to know is to test them!

On site or in the cloud, it is always good practice to test your backups and I strongly recommend that you do so on a regular basis to confirm and enforce good practices around your backups. Data is very valuable and in today’s environment the value of data is only growing.

Who should I use for Backup as a Service?

This is a simple one, you should use the company that best suits your needs and understands your business, provides you with value added services, can assure you that your data is secure, and can help you understand the how, where, when and why of their cloud solution.

What will happen if I hit my maximum storage allocation?

Generally this will cause problems, just as it would if your Windows guest machine ran out of storage; however, this is not a uniquely cloud, IaaS or BaaS problem and shouldn’t be considered as one. If you are close to, or have run out of, space in your cloud environment, you can easily fix this by logging onto the admin console or contacting your cloud provider and requesting additional space to be added to your account. I’ve done this a number of times and it is generally provisioned either instantaneously or within a matter of minutes. This scalability is one of the key benefits of cloud computing.

Where are my backups?

This is always a great question, and here at Computelec we do tours of the datacenters where our cloud is located to show off both the facilities and the local capabilities. Where your data is stored can be critically important to some customers due to a number of different reasons such as legal, data sovereignty and risk mitigation.

I find that a lot of cloud backup services do not address recoverability timelines, and the impact of these times is not fully considered. When you need to recover a critical virtual machine from a cloud backup you need to restore it, and if that machine is hundreds of gigabytes and you are restoring over the internet, this can take days or even weeks.
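
A quick worked example makes the point; the VM size, link speed and efficiency factor below are illustrative assumptions only:

vm_gb = 500               # assumed size of the VM to restore (GB)
link_mbps = 50            # assumed internet link available for the restore (Mb/s)
efficiency = 0.7          # assumed real-world throughput after protocol and contention overheads
hours = (vm_gb * 8 * 1000) / (link_mbps * efficiency) / 3600
print(f"Restoring {vm_gb}GB over a {link_mbps}Mb/s link: ~{hours:.0f} hours")   # ~32 hours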

To address the above problem, I would recommend a provider that can recover your backed up virtual machine into their cloud IaaS environment allowing you very quick access to the data, or one who can facilitate backups being copied to removable storage that can be securely transported to your site in the event of needing a large amount of data quickly.

How frequently should I be backing up?

Frequency of backups can be a long and interesting discussion, and comes back to what the business’s Recovery Point Objectives (RPOs) and Recovery Time Objectives (RTOs) are. I’m a strong believer that ICT should not dictate these times; they should be part of a greater discussion, with ICT providing guidance and support to the decision-making process.

What is Value Add?

Value add is the differentiation that a lot of cloud providers are striving for. Computelec, for instance, is purely focused on education; that is a value we provide above and beyond the norm. Our understanding of, and close alignment with, education gives us insight into schools and means our solutions are suited to today’s educational providers.

Summary

I hope this article has shed some light on Backup as a Service and answered a few of the questions you may have been pondering. Again, BaaS may not be something that fits your school’s ICT plan, but for schools who find backup to be a real pain point in their operations, I implore you to investigate further.

If you have any questions that I haven’t addressed above, please don’t hesitate to comment or email me, I’d love to continue the conversation.


Part 2 – EMC VPLEX Experiences

Hi

Thanks for coming back, or reading on from my previous post on EMC’s VPLEX kit and my experiences.

The first thing I’ll cover in this post is the most common commands that I’ve been using and what they are for. This is as much for my own ease of reference as it is to touch on them for anyone else reading. 🙂

To begin with, there are two main places where you use the CLI: the cluster’s Management Server shell and the VPLEXcli.

To log on to the cluster management interface, you open an SSH connection using your preferred SSH client with:

  • IP address of the Cluster Management Ethernet interface
  • Port 22
  • SSH Protocol 2
  • Scrollback lines to 20000

 

When you get to the logon prompt you need to log on as the appropriate account; during the implementation phase this is probably going to be the service account. As always, please be smart about security: get this password changed early and don’t leave the default passwords on the default accounts.

When you are logged in you will come to the following interface:

service@ManagementServer:~>

 

The second place you perform most of your CLI work is the VPLEXcli; you need to log on to the Management Server first and then enter the VPLEXcli using the command vplexcli (wow, who would have guessed it would be that easy!).

service@ManagementServer:~> vplexcli

 

Several messages are displayed, and a username prompt appears:

Trying 127.0.0.1...

Connected to localhost.

Escape character is '^]'.

Enter User Name:

 

Again, you will need to authenticate with the appropriate username and password, probably still the service account you used to log on to the Management Server. When logged on successfully you’ll see the following:

creating logfile:/var/log/VPlex/cli/session.log_service_localhost_T28921_20101020175912

 

VPlexcli:/>

 

I found it highly useful to run up two sessions of PuTTY and log on with both: one that stays logged onto just the cluster Management Server and the other logged into the VPLEXcli. This allows you to quickly flick back and forth.
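
If you would rather script these checks than keep PuTTY windows open, something along the lines of the sketch below can work; it is only a rough illustration using paramiko with placeholder addresses and credentials, and the prompts and timings it assumes are based on the behaviour described in this post, so test it carefully against your own environment.

import time
import paramiko
client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("10.10.10.10", username="service", password="change-me")   # placeholders
shell = client.invoke_shell()
def run(cmd, wait=5):
    # Send a command to the interactive shell and return whatever output has arrived.
    shell.send(cmd + "\n")
    time.sleep(wait)
    return shell.recv(65535).decode(errors="ignore")
run("vplexcli", wait=10)     # drop from the Management Server shell into the VPLEXcli
run("service")               # answer the VPLEXcli username prompt (assumed account)
run("change-me")             # answer the password prompt
print(run("version -a"))
print(run("ll /engines/**/directors"))
print(run("storage-volume summary"))
client.close()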

 

Commands

 

 

You can confirm that the Product Version of what’s running matches the required version in the VPLEX release notes and your expectations.

version -a

 

 

Verify the VPLEX directors

 

From the VPlexcli prompt, type the following command:

ll /engines/**/directors

Verify that the output lists all directors in the cluster, and that all directors show the following:

  • Commissioned status: true
  • Operational status: ok
  • Communication status: ok

Output example in a dual-engine cluster:


 
 

 

Verify storage volume availability

From the VPlexcli prompt, type the following commands to rediscover the back-end storage:

cd /clusters/cluster-1/storage-elements/storage-arrays/EMC-*

array re-discover <array_name>

Type the following command to verify availability of the provisioned storage:

storage-volume summary

 
 


 
 

Resume-at-loser

This is probably one of the most important commands to know for when you’ve had an outage of some type and you need to get your data re-synced.

 

During an inter-cluster link failure, you or your client can allow I/O to resume at one of the two clusters: the “winning” cluster.

I/O remains suspended on the “losing” cluster. When the inter-cluster link heals, the winning and losing clusters re-connect, and the losing cluster discovers that the winning cluster has resumed I/O without it. Unless explicitly configured otherwise (using the auto-resume-at-loser property), I/O remains suspended on the losing cluster. This prevents applications at the losing cluster from experiencing a spontaneous data change. The delay allows the administrator to shut down applications and get into a clean state. After stopping the applications, the administrator can use this command to resynchronise the data image on the losing cluster with the data image on the winning cluster and resume servicing I/O operations. The administrator may then safely restart the applications at the losing cluster.

 

Without the ‘--force’ option, this command asks for confirmation to proceed, since its accidental use while applications are still running at the losing cluster could cause applications to misbehave.

 

 

cd /clusters/cluster-n/consistency-groups/group-name
resume-at-loser

 

 

One of the important things to check is the Rx and Tx power of your FC SFP modules; the following command takes you to the context where you can bring this up and look for discrepancies or anything out of the ordinary.

cd /engines/engine-1-1/directors/director-1-1-A/hardware/sfps/

 

 

 

Next up is some information around VPLEX and storage, and then VPLEX and VMware vSphere.