Nimble san visio

This is a blog post that has been long overdue. I have blogged about Nimble Storage a couple of times while at VMworld, and Devin Hamilton (Director of Storage Architecture and Nimble’s first ever customer-facing engineer) was also a guest on one of the HandsonVirtualization podcasts we recorded in the past.

I sat down with a good friend of mine, Nick Dyer, around six months ago. At the time Nick had been with Nimble for only a few months, having previously been at Xsigo and Dell EqualLogic. We discussed who Nimble are and what makes them different from everyone else in the marketplace, and Nick gave me a tour of the product’s features and functionality. Very recently Nimble announced Nimble OS 2.0, which this walkthrough is based on – big thanks to Nick for helping me update it from 1.x to 2.x.

The home screen gives you a good overview of what is happening within your storage array. On the left we can see a breakdown of storage usage including snapshots, and below this the space saving achieved by the in-line compression technology for both primary and snapshot data. In the middle we have a breakdown of throughput in MB/sec and IOPS, split into reads and writes. Finally, there is a breakdown of events over the last 24 hours.

Prior to Nimble OS 2.0, the architecture was a frame / scale-up design: you start with a head unit that contains two controllers, twelve high-capacity spinning disks and a number of high-performance SSDs. You can then increase capacity by attaching up to a further three shelves of high-capacity drives using the SAS connectors on the controllers, or scale performance by upgrading the controllers or swapping in larger SSDs.

What is different about Nimble is that the architecture is not based on drive spindles to deliver performance, as with traditional storage arrays; instead it uses multiple Intel Xeon processors to drive IOPS from the array.

Nimble have now released version 2.0 of their software, meaning that scale-out is available as a third scaling method for any Nimble array, which then forms part of a Nimble Storage array “Group”. Today Nimble supports up to 4 arrays in a group, each array supporting 3 shelves of additional disk. The theoretical maximums are thus ~280,000 IOPS and 508TB of usable storage in a scale-out cluster!
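As a quick sanity check of those quoted maximums, the arithmetic below assumes the per-array figures are simply the cluster maximums divided by four (~70,000 IOPS and ~127TB usable per fully shelved array) – an inference from the numbers above, not official datasheet values.

```python
# Back-of-the-envelope check of the scale-out maximums quoted above.
# The per-array figures are assumptions inferred by dividing the quoted
# cluster maximums by four; they are not Nimble datasheet values.
ARRAYS_PER_GROUP = 4        # Nimble OS 2.0 supports up to 4 arrays per group
IOPS_PER_ARRAY = 70_000     # assumed: 280,000 / 4
USABLE_TB_PER_ARRAY = 127   # assumed: 508 / 4 (head unit plus 3 shelves)

group_iops = ARRAYS_PER_GROUP * IOPS_PER_ARRAY
group_usable_tb = ARRAYS_PER_GROUP * USABLE_TB_PER_ARRAY
print(f"~{group_iops:,} IOPS and {group_usable_tb}TB usable per group")
# -> ~280,000 IOPS and 508TB usable per group
```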

We can see in the screenshot below, taken on a different system, that there are a number of shelves configured and that active and hot-standby controllers are in place.

Nimble use an architecture called CASL (Cache Accelerated Storage Architecture). This is made up of a number of components: SSDs are utilised as a random read cache for hot blocks in use within the system, while random writes are coalesced through NVRAM, compressed, and written sequentially to the RAID 6 near-line SAS spinning disks, resulting in write operations that Nimble claim can be up to 100x faster than traditional disk alone (a minimal sketch of this write path follows below).

Check out the following page for more information on CASL –
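To make that write path concrete, here is a toy model in Python of a coalesce-compress-append loop: the buffer stands in for NVRAM and the append-only file for the RAID 6 NL-SAS layer. The class name and flush threshold are invented for illustration; this is not Nimble’s implementation.

```python
import io
import zlib

# Toy model of the CASL-style write path described above: random writes
# are coalesced in a buffer (standing in for NVRAM), compressed inline,
# and flushed as one sequential append (standing in for a full-stripe
# write to the RAID 6 NL-SAS layer). Illustrative only, not Nimble's code.
FLUSH_THRESHOLD = 64 * 1024  # bytes to coalesce before flushing (arbitrary)

class CoalescingWriteBuffer:
    def __init__(self, backing):
        self.backing = backing   # append-only file object, written sequentially
        self.pending = []        # incoming "random" writes, in arrival order
        self.pending_bytes = 0

    def write(self, block: bytes) -> None:
        self.pending.append(block)
        self.pending_bytes += len(block)
        if self.pending_bytes >= FLUSH_THRESHOLD:
            self.flush()

    def flush(self) -> None:
        if not self.pending:
            return
        segment = zlib.compress(b"".join(self.pending))  # inline compression
        self.backing.write(segment)                      # one sequential write
        self.pending.clear()
        self.pending_bytes = 0

# Usage: many small random writes become a few large sequential appends.
disk = io.BytesIO()
buf = CoalescingWriteBuffer(disk)
for i in range(1000):
    buf.write(f"block-{i:04d}".encode() * 100)
buf.flush()
print(f"wrote {disk.getbuffer().nbytes} compressed bytes sequentially")
```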

The compression within the Nimble storage array happens inline with no performance loss, and can offer between a 30 and 75 percent saving depending on the workload.
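To put that range in perspective, the effective logical capacity grows as 1 / (1 - saving); the short sketch below works this through for an assumed 24TB of usable space, a figure invented purely for illustration.

```python
# Effective logical capacity implied by the 30-75% inline compression
# saving quoted above. A saving of s means data occupies (1 - s) of its
# original size, so usable capacity stretches by 1 / (1 - s).
USABLE_TB = 24  # assumed usable capacity, invented for illustration

for saving in (0.30, 0.50, 0.75):
    effective_tb = USABLE_TB / (1 - saving)
    print(f"{saving:.0%} saving -> ~{effective_tb:.0f}TB of logical data fits")
# -> 30% saving ~34TB, 50% saving 48TB, 75% saving 96TB
```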

One of the nice features in the GUI is that when you hover over a port on the array screen, it highlights the corresponding physical port on the array and displays its IP address and status on screen.

When configuring the array with your ESXi or Windows servers, you will use the target IP address shown below to configure your storage connectivity.

The network configuration on the array is easily managed. Nimble now has a dedicated “Networking” tab in the administration menu, where settings for the Group or for individual arrays can be changed. From here we can also configure a new technology Nimble call “Virtual Target IP addresses”, as well as create “Network Zones” to stop multipath configurations traversing and saturating Inter-Switch Links.

Any individual port can be configured to be on the management network, the data network, or both. It is now also possible to create multiple routes on the array, to allow for replication traffic for example.

It’s now also possible to save your network changes as a “Draft”, and to revert your network settings back to your previously applied configuration – very handy in case something went wrong! Both of these topics deserve an individual blog post of their own; a simple sketch of the draft-and-revert idea follows below.
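The draft-and-revert workflow amounts to staging changes and keeping the last applied configuration around for rollback. The sketch below illustrates that general pattern with an invented NetworkConfig class and made-up IP addresses; it makes no claim about how Nimble OS implements the feature.

```python
from copy import deepcopy

# Illustrative draft/commit/revert pattern for configuration changes,
# mirroring the workflow described above. NetworkConfig and its fields
# are invented for this sketch; this is not the Nimble OS implementation.
class NetworkConfig:
    def __init__(self, settings: dict):
        self.applied = settings   # the currently active configuration
        self.previous = None      # the last applied configuration, for revert
        self.draft = None         # staged changes, not yet active

    def save_draft(self, changes: dict) -> None:
        # Stage changes on top of the active config without applying them.
        self.draft = {**self.applied, **changes}

    def commit(self) -> None:
        # Apply the draft, remembering the old config so we can revert.
        if self.draft is None:
            raise ValueError("no draft to commit")
        self.previous = deepcopy(self.applied)
        self.applied, self.draft = self.draft, None

    def revert(self) -> None:
        # Roll back to the previously applied configuration.
        if self.previous is None:
            raise ValueError("nothing to revert to")
        self.applied, self.previous = self.previous, None

# Usage: stage a change, apply it, then roll it back.
cfg = NetworkConfig({"mgmt_ip": "10.0.0.10", "data_ip": "10.0.1.10"})
cfg.save_draft({"data_ip": "10.0.1.20"})
cfg.commit()
cfg.revert()
print(cfg.applied)  # back to the original settings
```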