
Maximizing IT Velocity for Data Explosion

The biggest challenge facing IT today is the explosion of data, and in particular the explosion of unstructured data. This will require a tremendous amount of storage: while we manage terabytes and petabytes today, we need to prepare to manage exabytes in the near future. Much has been written about the capacity and efficiency needed to contain this data explosion, but we also need to plan for the velocity requirements of these large data stores.

There are at least three dimensions to this velocity requirement. First is the velocity required to store data. Today the storage medium of choice is the hard disk, due to its low cost per capacity and its random access characteristics. Second is the velocity required to access this data. Here we see a widening performance gap between the compute servers that process data and the disks that store it, along with the performance challenges of unstructured data, which carries the overhead of a file system and many access protocols. Third is the velocity that will be required to provision the infrastructure for this data explosion. With the rapid growth of data we cannot afford to take months to acquire host servers, storage systems, and networking switches, integrate them with an operating system, and certify them with an application. To maximize IT we need to maximize velocity. Here are some ways to do this:

Maximizing Recording Velocity

Hard disks have been the mainstay of data storage for over 50 years, thanks to their random access capability and to bit densities that have doubled about every two years, driving the price of storage down about 30% per year. Doubling the bit density also increased the data transfer rate with every turn of the technology. While there have been mechanical limits on spindle speeds and access-arm movement, users have been able to compensate by short stroking the disk arms and wide striping across a large number of disks to parallelize access. Demand for storage performance from non-mainframe systems was also relatively low.
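As a quick sanity check on that figure, the arithmetic below (illustrative, not from the article) shows how a density doubling every two years translates into roughly a 30% annual decline in cost per bit:

```python
# Illustrative arithmetic: if bit density doubles every two years,
# the cost per bit halves over that same period.
halving_period_years = 2.0
annual_cost_factor = 0.5 ** (1 / halving_period_years)  # ~0.707 per year
annual_decline = 1 - annual_cost_factor                 # ~0.293

print(f"Cost per bit falls about {annual_decline:.0%} per year")  # ~29%
```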

All that has changed. Disk technology has slowed, and bit densities are no longer doubling every two years. To continue the price erosion for capacity, other techniques, such as slowing the rotation speed to pack more bits per track or adding more tracks per platter, are being implemented. However, these further decrease the recording and access speeds of disks. Short stroking 1 and 2 TB disks wastes too much capacity, and wide striping now requires hundreds of disks to service faster multicore processors that are scaling up with many virtual machines, as the sizing sketch below suggests.
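The per-disk and per-VM numbers here are assumptions chosen for illustration (roughly what a 7,200 RPM drive and a densely virtualized host might see), not figures from the article:

```python
# A rough sizing sketch with assumed (hypothetical) numbers: how many
# wide-striped disks it takes to satisfy a random-I/O workload.
import math

disk_random_iops = 150   # assumption: random IOPS from one 7,200 RPM drive
vms_per_host = 50        # assumption: VM density on a multicore server
iops_per_vm = 400        # assumption: average I/O demand per VM

required_iops = vms_per_host * iops_per_vm
disks_needed = math.ceil(required_iops / disk_random_iops)

print(f"{required_iops} IOPS needs ~{disks_needed} striped disks")
# -> 20000 IOPS needs ~134 striped disks: hence "hundreds of disks"
```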

Flash technology can be an answer here, if the implementation can solve the durability, performance, and scalability deficiencies of SSDs. The Hitachi Accelerated Flash module addresses these deficiencies.

Maximizing Access Velocity

With the increasing amounts of data, accessing it quickly is a growing challenge. HDS answers this challenge with a global cache architecture in which multiple processor cores and data movers automatically load balance I/O requests, providing the best block access performance among comparably configured storage systems. We provide Dynamic Tiering to ensure that the most active data is stored on the highest-performance tier of storage, and storage virtualization extends tiering to external storage. The microcode is optimized for the higher performance demands of flash, particularly random reads. Our unified platform also increases access velocity for unstructured data through a hardware-assisted NAS engine integrated with the block controller, delivering high performance for unified file and block access that is optimized for flash.
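The sketch below illustrates the general idea behind heat-based tiering: promote busy pages to flash, demote idle ones to disk. It is a generic illustration, not Hitachi Dynamic Tiering's actual algorithm; the page granularity, monitoring window, and threshold are all assumptions:

```python
# A minimal heat-based tiering sketch (generic illustration only).
from collections import Counter

PROMOTE_THRESHOLD = 100   # assumption: I/Os per monitoring window

def plan_moves(io_counts: Counter, current_tier: dict) -> dict:
    """Decide which pages move between a 'flash' and a 'disk' tier."""
    moves = {}
    for page, count in io_counts.items():
        hot = count >= PROMOTE_THRESHOLD
        if hot and current_tier.get(page) != "flash":
            moves[page] = "flash"     # promote active pages to flash
        elif not hot and current_tier.get(page) == "flash":
            moves[page] = "disk"      # demote cold pages to disk
    return moves

# Example: page 7 got hot, page 3 went cold.
io_counts = Counter({7: 250, 3: 4})
print(plan_moves(io_counts, {3: "flash", 7: "disk"}))
# -> {7: 'flash', 3: 'disk'}
```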

Maximizing Provisioning Velocity

Instead of a DIY (do-it-yourself) approach to provisioning infrastructure, Hitachi Data Systems provides the Unified Compute Platform, which includes Hitachi or Cisco blade servers, Brocade or Cisco switches, and Hitachi storage platforms. These converged solutions are pre-configured and pre-certified for applications like VDI, Exchange, Oracle, SAP HANA, or SQL. They are application specific and can provide the infrastructure to spin up an application in a matter of days rather than months. There is also a converged solution for virtualization environments, where an orchestration layer manages the entire infrastructure through a hypervisor's management interface, like VMware vCenter. With this configuration, virtual infrastructures can be spun up in a matter of minutes, as the sketch below suggests.
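To make the contrast concrete, here is the kind of single-call provisioning an orchestration layer enables. The endpoint, payload, and template name are hypothetical, invented purely for illustration; a real deployment would go through the vendor's orchestration interface:

```python
# Hypothetical sketch: deploying a pre-validated infrastructure template
# with one API call, instead of months of manual integration.
import requests

def provision(template: str, instances: int) -> str:
    resp = requests.post(
        "https://orchestrator.example.com/api/deployments",  # hypothetical
        json={"template": template, "instances": instances},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["deployment_id"]

# provision("vdi-gold-image", instances=100)  # minutes, not months
```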

Maximizing IT with Flash, Unified Storage, and Converged Platforms

Hitachi Data Systems brings all three technologies together: flash, high-performance unified file and block access, and converged solutions that combine these technologies, pre-configured and certified for rapid deployment of application infrastructures. Hitachi enables users to achieve the velocity needed to store, access, and provision IT infrastructure quickly, efficiently, and economically.


