
Five Trends for 2019


Happy New Year and welcome to 2019, a year full of possibilities.

 

New Years 2019.png

 

2018 was a year of maturity for Digital Transformation, and most companies are now committed to transforming their businesses. They have laid out their strategies and are allocating resources to this transformation. Public cloud, agile methodologies and DevOps, RESTful APIs, containers, analytics, and machine learning are being adopted. Against this backdrop, there are five trends for 2019 that I would like to call out.

 

Trend 1. Companies Will Shift from Data Generating to Data Powered Organizations

 

A 2017 Harvard Business Review article on data strategy noted, “Cross-industry studies show that on average, less than half of an organization’s structured data is actively used in making decisions—and less than 1% of its unstructured data is analyzed or used at all.” Deployments of large data hubs have only resulted in more data silos that are not easily understood, related, or shared. In order to utilize the wealth of data that they already have, companies will be looking for solutions that give comprehensive access to data from many sources. Data curation will be a focus: understanding the meaning of the data as well as the technologies that are applied to it, so that data engineers can move and transform the essential data that data consumers need to power the organization. More focus will be placed on the operational aspects of data rather than the fundamentals of capturing, storing, and protecting it. Metadata will be key, and companies will look to object-based storage systems to create a data fabric as a foundation for building large-scale, flow-based data systems.

 

Trend 2: AI and Machine Learning Unleash the Power of Data to Drive Business Decisions

AI and machine learning technologies can glean insights from unstructured data, connect the dots between disparate data points, and recognize and correlate patterns in data, such as facial recognition. AI and machine learning are becoming widely adopted in home appliances, automobiles, plant automation, and smart cities. From a business perspective, however, AI and machine learning have been more difficult to implement: data sources are often disparate and fragmented, and much of the information generated by businesses has little or no formal structure. While there is a wealth of knowledge that can be gleaned from business data to increase revenue, respond to emerging trends, improve operational efficiency, and optimize marketing for competitive advantage, the requirement for manual data cleansing prior to analysis is a major roadblock. A 2016 Forbes article published a survey of data scientists which showed that most of their time, about 80%, is spent massaging rather than mining or modeling data.

Data Scientist work.png

In addition to the tasks noted above, one needs to understand that data scientists do not work in isolation. They must team with engineers and analysts to train, tune, test, and deploy predictive models. Building an AI or machine learning model is not a one-time effort: model accuracy degrades over time, and monitoring and switching models can be quite cumbersome. Organizations will be looking for orchestration capabilities, such as Hitachi Vantara's Pentaho data integration and machine learning orchestration tools, to streamline the machine learning workflow and enable smooth team collaboration.
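To make the monitoring point concrete, here is a minimal sketch, not tied to Pentaho, of how a deployed model's accuracy might be checked against its deployment baseline so that a degraded model can be flagged for retraining. The baseline and tolerance values are illustrative assumptions.

```python
# Minimal drift-check sketch (illustrative values, not a Pentaho workflow).
from sklearn.metrics import accuracy_score

BASELINE_ACCURACY = 0.92   # accuracy measured at deployment time (assumed)
TOLERANCE = 0.05           # acceptable drop before retraining is triggered (assumed)

def needs_retraining(model, recent_features, recent_labels) -> bool:
    """Compare accuracy on fresh labeled data against the deployment baseline."""
    current = accuracy_score(recent_labels, model.predict(recent_features))
    print(f"current accuracy: {current:.3f} (baseline {BASELINE_ACCURACY:.3f})")
    return current < BASELINE_ACCURACY - TOLERANCE

# In an orchestrated workflow, this check would run on a schedule, and a True
# result would kick off the train/tune/test/deploy loop described above.
```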

 

Trend 3: Increasing Data Requirements Will Push Companies to The Edge with Data

Enterprise boundaries are extending to the edge – where both data and users reside, and where multiple clouds converge. While the majority of IoT products, services, and platforms are supported by cloud-computing platforms, increasing data volumes, low-latency demands, and QoS requirements are driving the need for mobile cloud computing, where more of the data processing is done at the edge. Public clouds will provide the connection between edge and core data centers, creating the need for a hybrid cloud approach based on open REST or S3 application integration. Edge computing will be less of a trend and more of a necessity as companies seek to cut costs and reduce network usage. The edge will require a hardened infrastructure, since it resides in the “wild,” outside the protection of cloud and data center walls.

 

Trend 4: Data Centers Become Automated

 

The role of the data center has changed from being an infrastructure provider to being a provider of the right service at the right time and the right price. Workloads are becoming increasingly distributed, with applications running in public and private clouds as well as in traditional enterprise data centers. Applications are becoming more modular, leveraging containers and microservices as well as virtualization and bare metal. As more data is generated, there will be a corresponding growth in demand for storage space efficiency. Enterprises need to make the most of information technology: to engage with customers in real time, maximize return on IT investments, and improve operational efficiency. Accomplishing this requires a deep understanding of what is happening in their data centers to predict and get ahead of trends, as well as the ability to automate action so staff are free to focus on strategic endeavors. A data center is like an IoT microcosm: every device and software package has a sensor or log and is ripe for the application of artificial intelligence (AI), machine learning, and automation, enabling people to focus on the business and not on infrastructure.

 

As a provider of data center analytics and automation management tools, Hitachi Vantara realizes that a data center is made up of many different vendor products that interact with each other. Therefore, automation must be based on a shared, open API architecture that allows us to simplify the transmission of data across our suite of management tools and third-party tools. Everything we have must be API-based so that we can draw information in from other sources to create a more intelligent solution, and also pass information out when we are not the master in the environment, making other things smarter. In addition, our goal is to enrich our library of third-party device API information so that we can capture analytics from a broad range of devices and interact with them. We are taking a very vendor-neutral approach because we recognize that there is a much broader opportunity to deliver better solutions if we integrate with more vendors and partners.

 

Trend 5: Corporate Data Responsibility Becomes a Priority

 

The implementation of GDPR in 2018 focused attention on data privacy and required companies to make major investments in compliance. International companies that are GDPR compliant now have a data protection officer (DPO) in an enterprise security leadership role. Data protection officers are responsible for overseeing the data protection strategy and its implementation to ensure compliance with GDPR requirements.

 

The explosion of new technologies and business models is creating new challenges as companies shift from being data-generating to data-powered organizations. Big data systems and analytics are becoming a center of gravity as businesses realize the power of data to increase business growth and better understand their customers and markets. This has been fueled by advances in technologies to gather data, integrate data sources, and search and analyze data to derive business value. The most powerful companies in the world are those that understand how to use the power of data. Relative newcomers like Amazon, Baidu, Facebook, and Google have achieved their prominence through the power of data. However, with great power comes great responsibility.

 

IT must provide the tools and processes to understand their data and ensure that it is used responsibly. In my previous blog post I described how Hitachi Vantara approaches corporate data responsibility in the development of our products for storage, encryption, content management, AI, and video analytics.

 

These trends represent my own thoughts and should not be considered representative of Hitachi or Hitachi Vantara.

 

Please tune in to a webinar on Thursday, January 17, at 9:00 a.m. Pacific time, where I will discuss these five trends with Shawn Rosemarin, SVP and CTO, Global Field and Industry Solutions. I am delighted to have Shawn join me to share his unique perspectives on these trends.


Hitachi Vantara’s API Strategy for The Smart Data Center


In this post I will take a deeper dive into one of the key enablers of Digital Transformation, the REST API. I will cover our strategy for utilizing it in our products and provide some examples of how it enables the Smart Data Center.

 

An application programming interface (API) is software that allows applications to talk to each other. APIs have been an essential part of software development since the earliest days of programming. Today, modern, web-based, open APIs connect more than code. APIs are a key driver for digital transformation, where everything and everyone is connected. APIs support interoperability and design modularity, and they help improve the way systems and solutions exchange information, invoke business logic, and execute transactions.

 

API Picture.png

Hitachi’s developers are reimagining core systems as microservices, building APIs using modern RESTful architectures, and taking advantage of robust, off-the-shelf API management platforms. REST stands for “representational state transfer.” APIs built according to REST architectural principles are stateless, which means that neither the client nor the server needs to remember any previous state to satisfy a request. Stateless components can be freely redeployed if something fails, and they can scale to accommodate load changes. REST enables plain-text exchanges of data assets. It also makes it possible to inherit security policies from an underlying transport mechanism. REST APIs provide a simplified approach that delivers better performance and faster paths to develop, deploy, and organize. RESTful APIs are available in our Hitachi Content Platform, Pentaho analytics, Hitachi Unified Compute Platform (converged, hyperconverged, and rack-scale), REAN Cloud, and Lumada, our IoT platform.

 

A REST API is built directly into our VSP storage controllers. We increased the memory and CPU in the controller specifically to support the REST API running natively in the controller. This gives us the opportunity not only to connect with other vendors' management stacks, but also to apply analytics and machine learning and to automate the deployment of resources through REST APIs. Here are some examples of how this API strategy brings operational benefits to the Smart Data Center.
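As a rough illustration of what "a REST API in the controller" means in practice, the sketch below uses Python's requests library to query a storage endpoint. The base URL, resource path, credentials, and response fields are placeholders for illustration, not the documented VSP API; note that each request carries its own authentication, which is the stateless property described above.

```python
# Hypothetical sketch: querying a storage controller's REST API with "requests".
# The endpoint path, port, and payload fields below are illustrative placeholders.
import requests

BASE = "https://vsp.example.com/ConfigurationManager/v1/objects"  # placeholder URL

# Each call is stateless: authentication and all context travel with the request,
# so any controller or management instance can satisfy it.
resp = requests.get(
    f"{BASE}/pools",                   # hypothetical resource collection
    auth=("svc_account", "secret"),    # placeholder credentials
    timeout=30,
)
resp.raise_for_status()

for pool in resp.json().get("data", []):
    print(pool.get("poolId"), pool.get("usedCapacityRate"))
```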

 

Infrastructure Analytics

Hitachi Vantara has developed an analytics tool, Hitachi Infrastructure Analytics Advisor (HIAA), that provides predictive analytics by mining telemetry data from servers, storage appliances, networking systems, and virtual machines to optimize performance, troubleshoot issues, and forecast when a business may need to buy new storage systems. There are 77 performance metrics that we can provide via REST API over IP connections. Based on an analysis of these metrics, the analytics tool can determine the right actions to take, then launch an automation tool to invoke the appropriate services to execute those actions.
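The pattern described here, pull metrics over REST, compare against thresholds, then hand off to automation, can be sketched as follows. The endpoint, metric names, and threshold values are assumptions for illustration and do not reflect the actual HIAA API.

```python
# Hypothetical monitoring sketch: poll metrics over REST and flag breaches.
import requests

ANALYTICS_BASE = "https://hiaa.example.com/api/v1"   # placeholder
THRESHOLDS = {"readLatencyMs": 5.0, "cpuBusyPct": 85.0, "cacheWritePendingPct": 70.0}

def collect_and_evaluate(resource_id: str) -> list[str]:
    """Return the names of metrics that breach their (illustrative) thresholds."""
    resp = requests.get(f"{ANALYTICS_BASE}/resources/{resource_id}/metrics", timeout=30)
    resp.raise_for_status()
    metrics = resp.json()   # e.g. {"readLatencyMs": 7.2, "cpuBusyPct": 40.1, ...}
    return [name for name, limit in THRESHOLDS.items() if metrics.get(name, 0.0) > limit]

breaches = collect_and_evaluate("storage-system-01")
if breaches:
    print("Hand off to automation:", breaches)   # e.g. trigger a provisioning template
```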

HIAA.png

Automation

The automation tool, Hitachi Automation Director (HAD), contains a catalog of templates that can automatically orchestrate the delivery and management of IT resources. The analytics tool communicates with the automation tool through a REST API to select a template, fill in the parameters, and request deployment of resources, which is done automatically. During execution, the automation tool may need to communicate with third-party switches, virtual machines, containers, or public clouds through their APIs. When you consider all the tedious steps required to request and deploy storage, networking, hypervisor, and application services for hundreds or even thousands of users, you can see how automation can reduce days of work down to minutes.
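A hedged sketch of that analytics-to-automation handoff is shown below: select a catalog template, fill in its parameters, and submit it over REST. The URL, template name, and parameter fields are hypothetical and not the Automation Director API.

```python
# Hypothetical sketch of submitting an automation template over REST.
import requests

AUTOMATION_BASE = "https://automation.example.com/api/v1"   # placeholder

payload = {
    "templateName": "ProvisionOracleVolumes",   # hypothetical catalog entry
    "parameters": {
        "hostGroup": "ora-rac-prod",            # placeholder values
        "capacityGB": 2048,
        "storageTier": "flash",
    },
}

resp = requests.post(f"{AUTOMATION_BASE}/tasks", json=payload,
                     auth=("svc_account", "secret"), timeout=60)
resp.raise_for_status()
print("Submitted task:", resp.json().get("taskId"))
```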

HAD.png

Hitachi Automation Director has a catalog of canned application templates, which we are continuing to expand. This internal “app store” of packages includes Hitachi storage orchestration, provisioning, Flash Module Compression (FMC) optimization, creation of a Virtual Storage Machine (VSM) that spans two physical storage systems for active/active availability, replication (2DC, 3DC, GAD), SAN zoning (Brocade BNA, Cisco DCNM), Oracle DB expansion, VMware datastore lifecycle management, and plugins/utilities: CMREST (Configuration Management REST API), JavaScript, OS, VM, OpenStack, AWS, and more.

 

Policy-Based Copy Management

Since most data is backed up and copied, a copy data management platform is available to simplify creating copies and managing policy-based workflows that support business functions with controlled copies of data. Hitachi Vantara provides Hitachi Data Instance Director, which the automation tool can invoke through REST APIs to deploy the copy workload and to set up and enforce data protection SLA policies.

HDID.png

IT Service Management

Hitachi Automation Director’s REST API is open and available for working with third-party resources. Enhancements to the software include integration with IT service management (ITSM) tools, including the ServiceNow platform, for better resource tracking and improved REST API integration with third-party resources. Hitachi Automation Director creates workflows in ServiceNow; approval can be driven by an administrator or by Automation Director. Automation Director then executes the changes and updates the ticket.
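For readers curious what the ServiceNow side of such a flow can look like, the sketch below uses ServiceNow's Table API to open a change request and later update it. The instance name, credentials, and field values are placeholders; the actual tables and states used by an Automation Director integration may differ.

```python
# Hedged sketch of the ticketing flow using ServiceNow's Table API
# (POST /api/now/table/<table>). Instance, credentials, and field values are placeholders.
import requests

SNOW = "https://dev00000.service-now.com"     # placeholder instance
auth = ("automation_user", "secret")           # placeholder credentials

ticket = {
    "short_description": "Expand datastore capacity via automation workflow",
    "description": "Requested by analytics after a capacity threshold breach.",
}

resp = requests.post(f"{SNOW}/api/now/table/change_request", json=ticket,
                     auth=auth, headers={"Accept": "application/json"}, timeout=30)
resp.raise_for_status()
sys_id = resp.json()["result"]["sys_id"]

# After execution, the same record can be updated to reflect completion.
requests.patch(f"{SNOW}/api/now/table/change_request/{sys_id}",
               json={"close_notes": "Changes applied automatically."},  # placeholder fields
               auth=auth, headers={"Accept": "application/json"}, timeout=30)
```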

Service Now.png

Third Party and Home Grown Services

Hitachi Automation Director encourages working with third-party services by providing a design studio and a developer community site. Service Builder is the design studio where users have the flexibility to create their own service templates to fit their own environment, operational policies, and workflows. It also provides the capability to leverage third-party or homegrown tools.

 

Service Builder.png

Hitachi Vantara has launched the Hitachi Automation Director (HAD) Developer Community site, which is available to external users. There, Hitachi Vantara shares sample service templates (more than 30 content packs), prototypes, guidance on how to use Hitachi Automation Director, Q&A, and more, and we will be collaborating with customers and partners to develop additional content.

 

Call Home Monitoring

Other uses of REST APIs include our call-home monitoring system, Hi-Track, which has been re-coded to use our native REST APIs to collect information about storage systems and report it back to our support teams. Hi-Track provides 24/7 monitoring for early alerting and insight to help your storage system run at peak efficiency. Only authorized Hitachi Vantara support specialists may establish a connection with your site, and only by using the Hitachi Vantara internal network. Secure access with encryption and authentication keeps error and configuration information tightly controlled, and your production data can never be accessed.

 

Container Plug-In

We have a Hitachi Storage Plug-in for Containers that integrates our storage with Docker, and thereby with Kubernetes and Docker Swarm. This plug-in is built on the same REST API that is available to customers for their own integrations. The plug-in retains the state of the storage as containers are spun up and down; without it, the storage for a container would disappear when the container goes away.
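From the Kubernetes side, consuming such a plug-in typically looks like an ordinary PersistentVolumeClaim against a StorageClass backed by the plug-in, so the volume outlives any single container. The sketch below uses the Kubernetes Python client; the StorageClass name is a placeholder, not necessarily the name used by the Hitachi plug-in.

```python
# Hypothetical sketch: claiming persistent storage from a plug-in-backed StorageClass.
from kubernetes import client, config

config.load_kube_config()            # or load_incluster_config() inside a pod
v1 = client.CoreV1Api()

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="app-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="hitachi-vsp-block",   # placeholder StorageClass name
        resources=client.V1ResourceRequirements(requests={"storage": "50Gi"}),
    ),
)

v1.create_namespaced_persistent_volume_claim(namespace="default", body=pvc)
print("PVC created; pods that mount it keep their data across container restarts.")
```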

 

VSP Configurator

The VSP storage configuration tool, Hitachi Storage Advisor, can be accessed through software on an external virtual or physical server via the REST API.

 

Summary

The use of REST APIs is key to integrating infrastructure, software, and analytics to create an intelligent data center. Here is a summary of the primary benefits of our API strategy for an intelligent data center:

 

A REST API built directly into our VSP controllers provides connections with other vendors' management stacks and enables the application of analytics and machine learning for automated deployment of resources.

 

An analytics tool, Hitachi Infrastructure Analytics Advisor, can provide predictive analytics by mining telemetry data from servers, storage appliances, networking systems and virtual machines to optimize performance, troubleshoot issues and forecast when a business may need to buy new storage systems.

 

An automation tool, Hitachi Automation Director, provides a catalog of templates that can automatically orchestrate the delivery and management of IT resources.

 

A copy data management platform, Hitachi Data Instance Director, can be invoked by the automation tool to simplify creating copies and managing policy-based workflows that support business functions with controlled copies of data.

 

Hitachi Automation Director’s REST API is open and available for working with third-party resources such as IT service management (ITSM) tools, including the ServiceNow platform, for better resource tracking and improved integration with third-party resources.

 

Hitachi Automation Director encourages working with third-party services by providing a design studio and a developer community site where users have the flexibility to create their own service templates to fit their own environment, operational policies, and workflows.

 

Other uses include call-home monitoring, container plug-ins, and VSP configuration management from external systems. The list of plug-ins, utilities, and extensions will grow as the digital data center ecosystem grows.

 

Customer Benefits

Reduce workloads from days to minutes

Reduce errors resulting from tedious manual work

Reduce the need for skilled IT staff

Optimize use of IT resources

Increase speed of resolution to customer requests

Effective Outage Management with quicker return to service

Customize to fit their specific environment

Improve forecasting of future resource requirements

 

Nathan Moffit, Hitachi Vantara senior director of infrastructure, sums up our API strategy as follows:

“Hitachi’s management strategy is based around the idea of a shared and open API architecture that allows us to simplify transmission of data across our suite of management tools & 3rd-party tools. Everything we have is API-based so that we can draw information in from other sources to create a more intelligent solution, but we can also pass information out if we’re not the master in the environment, so we can make other things smarter. In addition, our goal is to enrich our ‘library’ of 3rd-party device API information so that we can capture analytics from a broad range of devices & interact with them. We are taking a very vendor-neutral approach as we recognize that there is a much broader opportunity to deliver better solutions if we integrate with more vendors and partners.”

2018: A Year in Review for Storage Systems.


Storage Innovation.png

2018 was a very busy year for Hitachi Vantara. September marked the one-year anniversary of Hitachi Vantara, which was formed by the integration of three Hitachi companies: Hitachi Data Systems, an IT infrastructure systems and services company; Hitachi Pentaho, a data integration and analytics company; and the Hitachi Insight Group, developer of Lumada, Hitachi’s commercially available IoT platform. The new company unifies the operations of these three companies into a single integrated business to capitalize on Hitachi’s social innovation capability in both operational technologies (OT) and information technologies (IT).

 

When the formation of Hitachi Vantara was announced, it was clear that combining Hitachi’s broad expertise in OT with its proven IT product innovations and solutions would give customers a powerful, collaborative partner, unlike any other company, to address the burgeoning IoT market.

 

Lacking similar capabilities, some of our competitors began implying that we would no longer be focused on the innovative data infrastructure, storage, and compute solutions that were the hallmark of Hitachi Data Systems. In fact, the closer collaboration among the data engineers and data analysts from Pentaho, the data scientists from the Insight Group, and the proven software and hardware engineering talent of Hitachi Data Systems has given Hitachi Vantara even more talent and resources to drive innovation in storage systems.

 

During 2018 we proved our detractors wrong in many ways with the introduction of a new family of all-flash and hybrid VSP storage systems that scale from a low-end 2U form factor to the largest enterprise storage frame with the same high-end enterprise capabilities, including global active/active availability, non-disruptive migration, multi-data-center replication, and a 100% data availability guarantee. A common Storage Virtualization Operating System, SVOS RF, enables the “democratization” of storage services: a midrange user now has access to the same, super-powerful features as the biggest banks, and a large company can deploy those same features in remote, edge offices as it does in the core, using a lower-cost form factor.

 

A Common Architecture from Midrange to Mainframes

Analysts like Gartner acknowledge that sharing a common architecture and management tools from the smallest VSP G200 to the flagship VSP G1500 provides users with an easy-to-follow upgrade path that leverages their investments in training, policies, and procedures. It also leverages Hitachi's investments in infrastructure monitoring and management tools, as well as ecosystem certifications and plug-ins. In 2018, competitive storage vendors followed suit by announcing their intent to consolidate three to five disparate storage systems just to have a common storage system for the midrange. Since many of these storage systems were acquisitions, this consolidation effort will be a two-to-five-year journey, with open questions about migration between the systems. Hitachi Vantara is ahead of the pack, delivering a common storage platform from small midrange to large enterprise and mainframe systems without any limitations in capabilities.

 

Open REST API

Another area of storage innovation was the introduction of AI and automation tools for the smart data center, enabled by an open REST API. A REST API is built directly into our VSP storage controllers. We increased the memory and CPU in the controller specifically to support the REST API running natively in the controller. This gives us the opportunity not only to connect with other vendors' management stacks, but also to apply analytics and machine learning and to automate the deployment of resources through REST APIs.

 

Analyze Infrastructure Metrics from Servers to Storage

Hitachi Vantara has developed an analytics tool, Hitachi Infrastructure Analytics Advisor (HIAA), that provides predictive analytics by mining telemetry data from servers, storage appliances, networking systems, and virtual machines to optimize performance, troubleshoot issues, and forecast when a business may need to buy new storage systems. Based on an analysis of metrics from host servers to storage resources, the analytics tool can determine the right actions to take, then launch an automation tool to invoke the appropriate services to execute those actions.

 

Automate the Delivery and Management of IT Resources

The automation tool, Hitachi Automation Director (HAD), contains a catalog of templates that can automatically orchestrate the delivery and management of IT resources. The analytics tool communicates with the automation tool through a REST API to select a template, fill in the parameters, and request deployment of resources, which is done automatically. The APIs are open, and Hitachi Vantara provides a design studio and developer community site for customers and third parties to design their own templates to fit their own environment, operational policies, and workflows. Our customers love the ability to integrate Automation Director with their ServiceNow tickets for speedier resolution of client requests.

 

Enhance Flash Performance, Capacity, and Efficiency

The common Storage Virtualization Operating System, SVOS, has been greatly enhanced from previous versions and renamed SVOS RF, where RF stands for Resilient Flash. SVOS RF's enhanced flash-aware I/O stack includes patented express I/O algorithms and new direct command transfer (DCT) functionality to streamline I/O. Combined, these features lower latency by up to 25% and increase IOPS per CPU core by up to 71%, accelerating even the most demanding workloads. Quality of service (QoS) ensures workloads have predictable performance for better user experiences. User-selectable, adaptive data reduction with deduplication and compression can reduce capacity requirements by 5:1 or more, depending on the data set. A Total Efficiency Guarantee from Hitachi Vantara can help you deliver more storage from all-flash Hitachi Virtual Storage Platform F series (VSP F series) arrays and save up to 7:1 in data efficiency.

 

Summary

If there was any doubt that Hitachi Vantara would continue to be a storage systems leader, it should have been put to rest in 2018, and 2019 will provide even more proof points. Hitachi Vantara will continue to drive data center modernization with high-performance, integrated, cross-platform storage systems, AI-powered analytics, IT automation software, and the best in flash efficiency. While these storage announcements were made in May 2018, they did not make the cut-off dates for Gartner's 2018 Magic Quadrant or Critical Capabilities reports for Solid State Arrays and General-Purpose Disk Arrays, so look for their evaluation in Gartner's 2019 reports. Even without an evaluation of these new capabilities, Hitachi Vantara did well in the Gartner 2018 reports and other industry recognition reports.

 

Industry Recognition

The Lights and Shadow of Digitalization


Mr. Toshiaki Higashihara, president and CEO of Hitachi, Ltd., opened our annual NEXT event last September speaking on Data and Innovation. He talked about this in terms of Lights and Shadow, the rewards and risks of the digital age. “The light gives us opportunity,” he said. “And we should not discount the shadow of cyber-attacks, and security and data breaches.” He continued, “Hitachi is aligned with the greater demands we face today. Our mission is clear: to contribute to society through the development of superior, original technology and products.”

 

ligtht and shadow slide b.png

 

Mr. Higashihara’s analogy of light and shadow touches on an increasing concern for all of us. The deadline for GDPR (General Data Protection Regulation) implementation in May of last year, along with a growing number of cyber hacks, fake news, governance, and risk management issues, has drawn more focus onto corporate responsibility. The financial impact is already being felt: a French data protection watchdog has fined Google $57 million under GDPR. I included this concern in my Top 5 Trends for 2019 blog post and discussed it with Shawn Rosemarin, our SVP and CTO of Global Field and Industry, in our Top 5 Trends webinar.

 

Shadow is a good analogy for risk. A shadow has two components: a blockage of light and a surface to project on. The surface is still there, but our perception of it is altered. Last Sunday, in North America, we had a “Blood Moon,” where the earth cast its shadow onto the surface of the moon, which then appeared to us as a “Blood Moon.”

Blood moon.png

 

Shadows have many shades of darkness, and their outlines are often blurry. The object may be as clear-cut as a regulation, which leads to rules for compliance. However, simply being compliant may not keep us out of the shadows, especially if we are dealing with new technologies and new business models that have no precedent.

 

Compounding this is the globalization of business, where corporations operate in many different political and cultural environments. Corporations have also become more diversified and dependent on third parties as part of their supply chain, manufacturing, distribution, sales, and services. Contractors are a vital part of a corporation’s workforce as new skill sets are required. An acquisition’s corporate practices and security measures must be vetted to ensure that the corporation is not acquiring new sources of risk along with the acquisition. All of these factors can help a corporation be successful, but they also introduce risk to the corporation’s security and to the responsible use of its assets.

 

Corporate responsibility is also not just about satisfying regulators and stockholders; corporations are also being judged by social media. What we say and do, as well as what we don’t say or do, is subject to review by a connected society with a tremendous amount of power at its fingertips. Our success as a company will also depend on how our conduct is perceived by the online society.

 

So how does a company meet the evolving challenges of corporate responsibility? At the risk of over-simplification, I believe that corporate responsibility must be based on some key elements:

 

Sustaining Corporate Principles. Since 1910, Hitachi’s corporate philosophy has been based on Harmony, Sincerity, and Pioneering Spirit.

 

Corporate Leadership. The commitment to corporate responsibility starts at the top, as demonstrated by Mr. Higashihara and Brian Householder, our Hitachi Vantara CEO.

 

Corporate Culture. Shared values and beliefs that are rooted in the corporation’s goals, strategies, organization, and approach to employees, customers, partners, investors, and the greater community. At Hitachi Vantara we work to a double bottom line: to deliver outcomes that benefit both business and society.

 

Clear Guidelines and Education. Last October was the deadline for Hitachi Vantara employees to complete certification on data privacy, cyber security, ethics, and avoiding harassment. This was not a check-mark exercise: it required 100% participation, which is almost impossible to enforce for more than 6,000 globally dispersed employees with different work and personal schedules. But 100% was accomplished through the direct involvement of the corporate executive committee.

 

Technology. The use of technology not only to secure and protect our data and systems but also to understand the data that we acquire, so that we can treat it responsibly and monitor the workflows that guide its use.

 

Mr. Higashihara looks beyond the concerns that I mention here; his view of the shadow extends to the digital divide and concerns about the singularity.

 

The digital divide is a social issue referring to the gap in access to information between those who have access to the internet and those who have limited or no access.

 

The singularity is the concern that AI, or a superintelligence, will continue to upgrade itself and advance technology at such a rate that humans become obsolete. The physicist Stephen Hawking warned that the emergence of artificial intelligence could be the “worst event in the history of our civilization.” Can you teach a machine ethics, morality, and integrity?

 

Mr. Higashihara closed his presentation at NEXT 2018, by reaffirming our commitment to Social Innovation.

 

Commiutment t SI.png

 

Plan to attend our NEXT 2019 event, October 8-10, 2019, at the MGM Grand, Las Vegas, to hear from our leaders and experts and from our customers and partners on how they are delivering new value to business and society with responsible innovation.

Solving The Von Neumann Bottleneck With FPGAs


Lately I have been focused on the operational aspects of data and how to prepare data for business outcomes, and less on the infrastructure that supports that data. As in all things, there needs to be a balance, so I am reviewing some of the innovations that we have made in our infrastructure portfolio that contribute to operational excellence. Today I will cover the advances that we have made in the area of hybrid-core architecture and its application to network attached storage (NAS). This hybrid-core architecture is a unique approach which we believe will position us for the future, not only for NAS but for the future of compute in general.

 

FPGA Candy.png

The Need for New Compute Architectures

The growth in performance of non-volatile memory technologies such as storage-class memories, and the growing demand for intensive compute for graphics processing, analytics and machine learning, cryptocurrencies, and edge processing, are starting to exceed the capabilities of CPU processors. CPUs are based on the Von Neumann architecture, where the processor and memory sit on opposite sides of a slow bus. If you want to compute something, you have to move inputs across the bus to the processor, and then store the outputs back to memory when the computation completes. Your throughput is limited by the speed of the memory bus. While processor speeds have increased significantly, memory improvements have mostly been in density rather than transfer rates. As a result, an increasing amount of processor time is spent idling, waiting for data to be fetched from memory. This is referred to as the Von Neumann bottleneck.
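A back-of-the-envelope calculation makes the bottleneck concrete. With assumed, not measured, numbers for compute rate and memory bandwidth, a streaming workload that does one operation per 8-byte value leaves the processor mostly idle:

```python
# Illustrative arithmetic only; the rates below are assumptions, not benchmarks.
PEAK_FLOPS = 100e9        # assumed compute capability, operations/second
MEM_BANDWIDTH = 25e9      # assumed memory bandwidth, bytes/second

bytes_per_op = 8.0                                       # stream of 8-byte values, one op each
ops_possible_per_sec = MEM_BANDWIDTH / bytes_per_op      # ~3.1e9 ops/s deliverable by memory
utilization = ops_possible_per_sec / PEAK_FLOPS          # fraction of the CPU actually busy

print(f"memory-fed rate: {ops_possible_per_sec:.2e} ops/s")
print(f"CPU utilization: {utilization:.1%}")             # ~3%; the rest is idle, waiting on data
```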

 

Field Programmable Gate Arrays

Hitachi has been working with combinations of different compute architectures to overcome this bottleneck for some time. One such architecture is the parallel state machine FPGA (field-programmable gate array). Hitachi has invested thousands of engineering hours in FPGA research and development, producing over 90 patents. A CPU is an instruction-stream processor: it runs through the instructions in software to access data from memory and move, modify, or delete it in order to accomplish some task. FPGAs, by contrast, follow a reconfigurable-systems paradigm built around the idea of a data-stream processor. Instead of fetching and processing instructions to operate on data, the data-stream processor operates on data directly by means of a multidimensional network of configurable logic blocks (CLBs) connected via programmable interconnects. Each logic block computes a partial result as a function of the data received from its upstream neighbors, stores the result within itself, and passes it downstream. In a data-stream-based system, execution of a program is determined not by instructions but by the transportation of data from one cell to another: as soon as a unit of data arrives at a cell, it is processed.
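The data-stream idea can be illustrated with a toy pipeline in ordinary Python, where each "block" transforms whatever arrives from upstream and passes it on, so execution is driven by data movement rather than a fetched instruction stream. This is a conceptual sketch only, not FPGA tooling.

```python
# Toy dataflow sketch: each stage acts like a logic block wired to its neighbors.
def scale(stream, factor):
    for value in stream:
        yield value * factor          # block 1: computes on arrival, passes downstream

def offset(stream, delta):
    for value in stream:
        yield value + delta           # block 2: consumes block 1's output as it flows

def threshold(stream, limit):
    for value in stream:
        yield value > limit           # block 3: final decision per datum

samples = range(10)                                     # the incoming data stream
pipeline = threshold(offset(scale(samples, 3), 1), 10)  # blocks wired upstream to downstream
print(list(pipeline))                                   # results emerge as data flows through
```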

 

Today’s FPGAs are high-performance hardware components with their own memory, input/output buffers, and clock distribution, all embedded within the chip. In their core design and functionality, FPGAs are similar to ASICs (application-specific integrated circuits) in that they are programmed to perform specific tasks at high speeds. With advances in design, today’s FPGAs can scale to handle millions of tasks per clock cycle without sacrificing speed or reliability. This makes them ideally suited for lower-level protocol handling, data movement, and object handling. Unlike ASICs, which cannot be upgraded after leaving the factory, an FPGA is an integrated circuit that can be reprogrammed at will, giving it the flexibility to perform new or updated tasks, support new protocols, or resolve issues. It can be upgraded easily with a new firmware image, in the same way as switches or routers today.

 

Hitachi HNAS Incorporates FPGAs

At the heart of Hitachi's high-performance NAS (HNAS) is a hybrid-core architecture of FPGAs and multi-core Intel processors. HNAS has over 1 million logic blocks inside its primary FPGAs, giving it a peak processing capacity of about 125 trillion tasks per second, an order of magnitude more than the fastest general-purpose CPU. Because each of the logic blocks performs well-defined, repeatable tasks, performance is also very predictable. HNAS was introduced in 2011, and as new generations of FPGAs have increased the density of logic blocks, I/O channels, and clock speeds, increasingly more powerful servers have been introduced.

 

FPGAs are not always a better choice than multi-core CPUs. CPUs are the best technology for advanced functions such as higher-level protocol processing and exception handling, functions that are not easily broken down into well-defined tasks. This makes them extremely flexible as a programming platform, but it comes at a tradeoff in reliable and predictable performance: as more processes compete for a share of the I/O channel into the CPU, performance is impacted.

 

HNAS Hybrid-Core Architecture

Hitachi has taken a hybrid-core approach, combining a multi-core Intel processor with FPGAs to address the requirements of a high-performance NAS system. One of the key advantages of using a hybrid-core architecture is the ability to optimize and separate data movement and management processes that would normally contend for system resources. The HNAS hybrid-core architecture allows for the widest applicability across changing workloads, data sets, and access patterns. Some of its attributes include:

  • High degree of parallelism
    Parallelism is key to performance. While CPU-based systems can provide some degree of parallelism, such implementations require synchronization that limits scalability.
  • Off-loading
    Off-loading allows the core file system to independently process metadata and move data while the multi-core processor module is dedicated to data management. This provides another degree of parallelism.
  • Pipelining
    Pipelining is achieved when multiple instructions are simultaneously overlapped in execution. For a NAS system it means multiple file requests overlapping in execution.

Pipeline.png

Another advantage of the hybrid-core architecture is the ability to target functions to the most appropriate processing element for each task, and this aspect of the architecture takes full advantage of the innovations in multi-core processing. High-speed data movement is a highly repeatable task that is best executed in FPGAs, but higher-level functions such as protocol session handling, packet decoding, and error and exception handling need a flexible processor to handle these computations quickly and efficiently. The unique hybrid-core architecture integrates these two processing elements seamlessly within the operating and file system structure, using dedicated cores within the CPU to work directly with the FPGA layers of the architecture. The remaining cores within the CPU are dedicated to system management processes, maintaining the separation between data movement and management. The hybrid-core approach has enabled new programmable functions to be introduced and integrated with new innovations in virtualization, object storage, and clouds throughout the life of the HNAS product.

 

For us, it’s not just about a powerful hardware platform or the versatile Silicon file system; it’s about a unified system design that forms the foundation of the Hitachi storage solution. The HNAS 4000 integrally links its hardware and software together in design, features and performance to deliver a robust storage platform as the foundation of the Hitachi Family storage systems. On top of this foundation, the HNAS 4000 layers intelligent virtualization, data protection and data management features to deliver a flexible, scalable storage system.

 

The Basic Architecture of HNAS

The basic architecture of HNAS consists of a Management Board (MMB) and a File System Board (MFB).

HNAS Architecture.png

File System Board (MFB)

The File System Board (MFB) is the core of the hardware-accelerated file system. It is responsible for core file system functionality such as object management, free space management, and directory tree management, as well as Ethernet and Fibre Channel handling. It consists of four FPGAs connected by Low Voltage Differential Signaling (LVDS) over dedicated point-to-point Fastpath connections to guarantee very high throughput for data reads and writes. Each FPGA has dedicated memory for processing and buffers, which eliminates memory contention between the FPGAs, unlike a shared memory pool in a CPU architecture.

  • The Network Interface FPGA is responsible for all Ethernet-based I/O functions
  • The Data Movement FPGA is responsible for all data and control traffic routing throughout the node, interfacing with all major processing elements within the node, including the MMB, as well as connecting to companion nodes within an HNAS cluster
  • The Disk Interface FPGA (DI) is responsible for connectivity to the backend storage system and for controlling how data is stored and spread across those physical devices
  • The Hitachi NAS Silicon File System FPGA (WFS) is responsible for the object-based file system structure, metadata management, and executing advanced features such as data management and data protection. It is the hardware file system in HNAS. By moving all fundamental file system tasks into the WFS FPGA, HNAS delivers high and predictable performance
  • The MFB coordinates with the MMB via a dedicated PCIe 2.0 8-lane bus path (500 MB/s per lane in each direction, for simultaneous send and receive)

Management Board (MMB)

The Management Board provides out-of-band data management and system management functions for the HNAS 4000. Depending on the HNAS model, the platform uses 4- to 8-core processors. Leveraging the flexibility of multi-core processing, the MMB serves a dual purpose. In support of the FPGAs on the File System Board, the MMB provides high-level data management and hosts the operating system within two or more dedicated CPU cores in a software stack known as BALI. The remaining cores of the CPU are set aside for Linux-based system management, monitoring processes, and application-level APIs. The MMB is responsible for:

  • System Management
  • Security and Authentication
  • NFS, CIFS, iSCSI, NDMP
  • OSI Layer 5, 6 & 7 Protocols

 

A Growing Industry Trend

The market for FPGAs has been heating up. Several years ago, Intel acquired Altera, one of the largest FPGA companies, for $16.7 billion. Intel, the world's largest chip company, has identified FPGAs as a mature and growing market and is embedding FPGAs into its chipsets. Today Intel offers a full range of SoC (system on chip) FPGA products spanning high-end, midrange, and low-end applications.

 

Microsoft announced that it has deployed FPGAs in more than half its servers. The chips have been put to use in a variety of first-party Microsoft services, and they're now starting to accelerate networking on the company's Azure cloud platform. Microsoft's deployment of the programmable hardware is important as the previously reliable increase in CPU speeds continues to slow down. FPGAs can provide an additional speed boost in processing power for the particular tasks that they've been configured to work on, cutting down on the time it takes to do things like manage the flow of network traffic or translate text.

 

Amazon now offers the EC2 F1 instance, which uses FPGAs to enable delivery of custom hardware accelerations. F1 instances are advertised as easy to program and come with everything you need to develop, simulate, debug, and compile hardware acceleration code, including an FPGA Developer AMI. (An Amazon Machine Image is a special type of virtual appliance used to create a virtual machine within the Amazon Elastic Compute Cloud; it serves as the basic unit of deployment for services delivered using EC2 and supports hardware-level development in the cloud.) Using F1 instances to deploy hardware accelerations can be useful in many applications that solve complex science, engineering, and business problems requiring high bandwidth, enhanced networking, and very high compute capabilities.
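For readers who want to experiment, requesting an F1 instance looks like any other EC2 launch; the sketch below uses boto3. The AMI ID and key pair name are placeholders (the FPGA Developer AMI ID varies by region), and available instance sizes should be checked against current AWS documentation.

```python
# Hedged sketch of launching an FPGA-backed EC2 instance with boto3.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder: FPGA Developer AMI for your region
    InstanceType="f1.2xlarge",         # smallest F1 size at the time of writing
    MinCount=1,
    MaxCount=1,
    KeyName="my-keypair",              # placeholder key pair
)
print("Launched:", response["Instances"][0]["InstanceId"])
```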

 

FPGA Developments in Hitachi Vantara

Hitachi Vantara, with its long experience with FPGAs and extensive IP portfolio, is continuing several active and innovative FPGA development tracks along lines similar to those explored and implemented by Microsoft and Amazon.

 

Hitachi provides the VSP G400/G600/G800 with embedded FPGAs that tier data to our HCP object store or to Amazon AWS and Microsoft Azure cloud services. With this Data Migration to Cloud (DMT2C) feature, customers can significantly reduce CAPEX by tiering “cold” files from their primary Tier 1 VSP Hitachi flash storage to lower-cost HCP or public cloud services. Neil Salamack’s blog post, Cloud Connected Flash – A Modern Recipe for Data Tiering, Cloud Mobility, and Analytics, explains the benefits that this provides.

 

Hitachi has demonstrated a functional prototype running with HNAS and VSP to capture finance data and report on events such as currency market movements. Hitachi has also demonstrated the acceleration of Pentaho functions with FPGAs and presented FPGA-enabled Pentaho BA as a research topic at the Flash Memory Summit. Pentaho engineers have demonstrated 10 to 100 times faster analytics with much less space, far fewer resources, and at a fraction of the cost to deploy. FPGAs are very well suited for AI/ML implementations and excel in deep learning, where training iterative models may take hours or even days while consuming large amounts of electrical power.

 

Hitachi researchers are working on a software-defined FPGA accelerator that uses a common FPGA platform on which we can develop algorithms that are much more transferable across workloads. The benefit will be the acceleration of insights for many analytic opportunities and many different application types, and bringing things to market faster. In this way we hope to crunch those massive data repositories, deliver faster business outcomes, and solve social innovation problems. It also means that as data gravity pulls more compute to the edge, we can vastly accelerate what we can do in edge devices with less physical hardware, because of the massive compute and focused resources that we can apply with FPGAs.

 

Hitachi has led the way and will continue to be a leader in providing FPGA-based innovative solutions.

Chúc mừng năm mới (Happy New Year)


This morning I saw four wild pigs in my front yard. My dogs were going crazy, but they opted to stay inside. Wild pigs are not a common sight in my suburban neighborhood. I took this as an optimistic sign since this is the end of the dog year and the beginning of the year of the pig.

 

pig.png

 

According to Chinese astrology, 2019 is a great year for good fortune and a good year to invest! 2019 is going to be full of joy, a year of friendship and love for all the zodiac signs; an auspicious year because the Pig attracts success in all the spheres of life.

 

Wishing you all the best for the year of the pig and enjoy a year of friendship and love.

Continuous Business Operations For Oracle RAC


From our friends at Merriam-Webster, the definition for “continuous” is:

adjective

con·tin·u·ous | \kən-ˈtin-yü-əs\

  1. Marked by uninterrupted extension in space, time, or sequence: "The batteries provide enough power for up to five hours of continuous use."
synonyms: continual, uninterrupted, unbroken, constant, ceaseless, incessant, steady, sustained, solid, continuing, ongoing, unceasing, without a break, permanent, nonstop, round-the-clock, always-on, persistent, unremitting, relentless, unrelenting, unabating, unrelieved, without respite, endless, unending, never-ending, perpetual, without end, everlasting, eternal, interminable

 


Understanding the definition of “continuous” is key, because in this blog we will discuss “continuous business operations” as it relates to those of you who must have continuous, uninterrupted access to data. You are running business applications that require strict zero RTO (recovery time objective) and zero RPO (recovery point objective) service levels, because these applications are mission critical and users *must* be able to access critical database information despite data center failures due to catastrophic or natural causes. What types of businesses and services demand such levels of uptime? Think emergency call center services and medical and urgent care, and the impact an outage can have on the customers of those services – you, me, and our families. These are “life-critical” operations where the consequence of downtime could be death.

 

Huyhn post.png


When operations are running on a database, two things can interrupt continuous operations: loss of a server or loss of the database. Oracle Real Application Clusters (RAC) provides customers with a clustered server environment where the database is shared across a pool of servers, which means that if any server in the pool fails, the database continues to run on the surviving servers. Oracle RAC not only enables customers to continue processing database workloads in the event of a server failure, it also helps to further reduce the cost of downtime by reducing the amount of time databases are taken offline for planned maintenance operations.

 

But what if the database storage fails? That will take some time to recover from, unless the database is running on a virtual storage array that is backed by two physical storage arrays. With a virtual storage array, the database can continue to run even when one storage array fails or is taken down for maintenance. This is the capability provided by the global-active device (GAD) feature of our VSP storage arrays. The combination of Oracle RAC and Hitachi GAD provides true continuous business operations.
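On the client side, the sketch below shows an Oracle connect descriptor that lists both RAC nodes and enables basic connection failover, so sessions can continue on a surviving node. The host names and service name are placeholders; GAD itself is configured on the storage arrays and is transparent to this connect string.

```python
# Hedged sketch: connecting to an Oracle RAC service with failover across nodes.
import cx_Oracle

dsn = """(DESCRIPTION=
            (ADDRESS_LIST=
              (ADDRESS=(PROTOCOL=TCP)(HOST=rac-node1.example.com)(PORT=1521))
              (ADDRESS=(PROTOCOL=TCP)(HOST=rac-node2.example.com)(PORT=1521)))
            (CONNECT_DATA=
              (SERVICE_NAME=ORCL_PROD)
              (FAILOVER_MODE=(TYPE=SELECT)(METHOD=BASIC))))"""

conn = cx_Oracle.connect(user="app_user", password="secret", dsn=dsn)  # placeholder credentials
with conn.cursor() as cur:
    cur.execute("SELECT sysdate FROM dual")
    print(cur.fetchone())
```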

 

Configuring, implementing, and managing such a system, with servers, switches, storage, Oracle RAC, and GAD, takes expertise, time, and effort. Hitachi Vantara simplifies this with a converged system specifically designed and validated for Oracle Real Application Clusters (RAC) databases and VSP GAD storage arrays. Our “Hitachi Solution for Databases – Continuous Business Operations” has three core components that give your business uninterrupted access to Oracle databases:

 

Huyhn slide.png

  1. Hitachi Unified Compute Platform, Converged Infrastructure (UCP CI) – the core infrastructure that Oracle RAC and associated software run on
    • Hitachi Advanced Server DS220/DS120
    • Hitachi Virtual Storage Platform F and G series
    • Brocade G620 Fibre Channel switches
    • Cisco Nexus GbE switches
  2. Hitachi Global-Active Device (GAD) for dual-geographic-site synchronous replication, with Oracle RAC configured for extended distances
  3. Hitachi Data Instance Director – used for orchestration and simplified setup and management of Hitachi global-active devices


The “Hitachi Solution for Databases – Continuous Business Operations” topology diagram follows:

Huyhn diagram.png

Hitachi Data Instance Director (HDID) provides automatic setup and management of global-active device, avoiding hours of tedious manual work. It also handles the swap operation when a failure at one of the sites is detected. HDID can also interface with the Oracle instances to orchestrate non-disruptive, application-consistent snapshots and clones, in either or both sites, to enable point-in-time operational recovery following a database corruption or malware event. Note that in the diagram above, the “quorum site” is a dedicated GAD cluster and management device shared between both sites to provide traffic management and ensure data consistency.

 

Find out more about how Hitachi Vantara can help your company achieve continuous business operations by joining our upcoming webinar on February 20th. Experts from Broadcom and ESG, and my colleague Tony Huynh, who helped me with this post, will share their insights with you:

 

Register for our February 20th webinar here: https://www.brighttalk.com/webcast/12821/348002?mkt_tok=eyJpIjoiTnpNek9XWTBPVE14WTJNeCIsInQiOiI5bk0yVmViMjFITDdWNk12NlcybkpwQ0dMOXRHQTNEOHVtUHFzUk1GK3VteTROOFVEXC81eFZ2MFwvTWxqUHhvbjVTOXpjdTZYXC9zNkNERVVFZjJCNml4UT09In0%3D

Additional technical resources:

  1. Full reference architecture details
  2. ESG Lab Validation report
  3. What is Global-Active Device? – Video

Accelerating Cloud Adoption with Hitachi Vantara


Cloud adoption.png

Surveys by Gartner, IDG, and RightScale in 2018 leave no doubt that cloud adoption is mainstream. Public cloud adoption led the way, increasing to 92% in the RightScale survey. The same survey showed that 81% of respondents have a multi-cloud strategy, leveraging almost five clouds on average, and 51% have a hybrid cloud strategy, combining public and private clouds.

 

The number of respondents adopting private cloud is 75%. While respondents overall ran 40% of their workloads in public cloud and 39% in private cloud, enterprise customers ran fewer workloads in the public cloud, 32%, and more in private clouds, 45%, which reflects their concern with the security and safety of critical workloads. Private cloud adoption also increased, led by VMware vSphere with 50% adoption, followed by OpenStack with 24%.

 

Hybrid and Multi-cloud Benefits

Hybrid and multi-cloud strategies offer flexibility, scalability, and agility by providing the freedom to choose where and how to deploy workloads, without the complexity and delays of acquiring and deploying infrastructure and operations resources. Applications can burst out to a public cloud during peak periods. Hybrid and multi-cloud also provide flexible data management, governance, compliance, availability, and durability. They eliminate upfront capital costs and avoid the risk of infrastructure vendor lock-in. Another aspect of agility is self-service resources, which enable a DevOps culture to run dev/test workloads in the cloud. Another major benefit of public clouds is the ability to geographically distribute apps and services, especially as more applications gravitate to the edge.

 

Hybrid and Multi-cloud Challenges

A cloud is a computing model where IT services are provisioned and managed over the Internet, in the case of public clouds, or over private IT infrastructure, in the case of private clouds. Selecting an application and merely moving it to a cloud provider is typically a poor decision: the application needs to be designed and built to take advantage of a cloud environment, or it is likely to become more problematic. Public cloud providers typically develop highly specialized tools for monitoring, orchestration, cost management, security, and more to suit the capabilities of their services. However, these tools may not map over to other clouds. Hybrid and multi-cloud strategies mix multiple clouds, which increases operational and data management complexity. Operational policies and methods differ, and aggregating data across multiple cloud boundaries makes governance, analytics, and business intelligence difficult. When you have petabytes of data in one cloud, how long will it take you to switch to another cloud?
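That last question is worth putting numbers on. With assumed figures, 1 PB over a dedicated 10 Gbps link at ideal sustained throughput, the transfer alone takes more than a week:

```python
# Rough arithmetic with assumed numbers; real migrations also face retries,
# throttling, and egress fees.
data_bytes = 1e15                    # 1 PB
link_bits_per_sec = 10e9             # 10 Gbps
seconds = (data_bytes * 8) / link_bits_per_sec
print(f"{seconds / 86400:.1f} days") # about 9.3 days at ideal throughput
```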

 

While the major cloud companies have security measures in place that probably exceed what most private companies can provide, they present a very visible target, and we must assume that nothing is foolproof. Security remains a key concern for critical applications, especially when it comes to public clouds.

 

Cloud Changes in 2019

While lift-and-shift application migrations to clouds will continue in 2019, more applications will be modernized to take advantage of the new capabilities of containers, serverless, FPGAs, and other forms of computing. Competition between the leading cloud providers will increase, resulting in more services for infrastructure as a service, integrations and open source, analytics, compliance, and hybrid cloud capabilities. With Microsoft running GitHub, open source is becoming the model for developing new technologies, and cloud vendors will become more open to developer communities. Hybrid clouds become a battleground, with AWS Outposts delivering an on-premises server rack that brings the AWS cloud into the data center, and IBM acquiring Red Hat to increase its relevance in the data center.

 

All these changes will require new skill sets in migrating, modernizing, and managing new cloud deployments. Cloud providers realize that customers need help migrating and implementing cloud solutions, so they have carefully qualified services partners that customers can trust with support or managed services.

 

Hitachi Vantara Hybrid and Multi-cloud capabilities

Hitachi Vantara provides support for private, hybrid, public and multi-cloud deployments. There are three major areas of support:

  1. Cloud gateway for block, file and object storage with HNAS and HCP.
    HNAS provides transparent data migration for block and file data to private and public clouds and integrates with the HCP object store.

    HCP, Hitachi Content Platform, lets users securely move data to, from, and among multiple cloud services, and better manage use cases including data governance, IoT, and big data. Read IDC's 2018 MarketScape report on HCP to see how HCP addresses security, multi-cloud, and performance. Data can be moved from one cloud to another without the additional cost of reading from one cloud in order to write to another. Since HCP always creates two copies, a new copy can be created on a new cloud repository while the old repository is crypto-shredded.


HCP also provides out-of-the-box integration with the Pentaho data integration and analytics platform, enabling the use of Pentaho to ingest, integrate, cleanse and prepare data stored in HCP-based data lake environments for analytics and visualization. While there is tight integration between Pentaho and HCP, it can also be used to support an abstracted data architecture enabled by the separation of compute and storage. Specifically, while the data might reside in HCP, Pentaho's adaptive execution functionality enables users to choose their preferred execution engine (such as the Apache Spark in-memory processing engine, or Pentaho's open source Kettle ETL engine) at runtime. This functionality can be used to execute data processing in multiple cloud environments. The vehicle history information provider CARFAX is doing just that: deploying HCP to combine structured and unstructured data in a single environment, using Pentaho to cleanse and prepare it, and then sending it to AWS or Microsoft Azure for processing, as appropriate for a given application. Read the 451 Research report on this capability.
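For teams that want to script directly against an HCP-backed data lake, HCP namespaces can also be reached through an S3-compatible API. The snippet below is a minimal sketch assuming such an endpoint; the endpoint URL, namespace (bucket) name, object keys and credentials are placeholders for illustration, not product defaults:

```python
# Minimal sketch: listing and retrieving objects from an S3-compatible HCP namespace.
# The endpoint, namespace (bucket) name and credentials below are hypothetical placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://namespace.tenant.hcp.example.com",  # hypothetical HCP endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# List a handful of objects in a hypothetical "datalake" namespace.
for obj in s3.list_objects_v2(Bucket="datalake", MaxKeys=10).get("Contents", []):
    print(obj["Key"], obj["Size"])

# Download one object for local preparation before handing it to an analytics engine.
s3.download_file("datalake", "sensors/2019/03/readings.csv", "readings.csv")
```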

 

HCP.png

2. On-Premises Cloud Deployment with Hitachi Enterprise Cloud (HEC).

The Hitachi Enterprise Cloud portfolio of on-premises enterprise cloud managed services is pre-engineered and fully managed to achieve faster time to value and deliver guaranteed business outcomes and service levels for mission-critical applications and workloads, whether they run on a traditional IT infrastructure, a DevOps architecture, a microservices architecture, or some combination. Hitachi Enterprise Cloud integrates implementation, deployment services, and cloud-optimized software and infrastructure to deliver rapid business value. The first solution of its kind, it also offers the ability to add container capabilities to support both traditional virtualized environments and born-on-the-web applications.

 

HEC.png

 

3. Accredited and Certified REAN Cloud services.
REAN Cloud has expertise working with the hyperscale public clouds. It is a Premier Consulting Partner in the Amazon Web Services (AWS) Partner Network (APN) and a Microsoft Azure Silver Partner. REAN Cloud offers managed services and solutions for hyperscale-integrated IaaS and PaaS providers and is one of the few systems integrators capable of supporting the entire cloud services life cycle. Backed by extensive security DNA and deep compliance IP and expertise, REAN Cloud specializes in helping enterprise customers that operate in highly regulated environments – Financial Services, Healthcare/Life Sciences, Education and the Public Sector – accelerate their cloud investments while extracting maximum value from use of the cloud itself.

 

REAN.png

Last year REAN Cloud acquired 47Lining to provide deep capabilities in cloud-based analytics and machine learning that expand Hitachi Vantara's ability to maximize data-driven value for vertical IoT solutions. This April, 47Lining announced its Amazon Web Services (AWS) Industrial Time Series Data Connector Quick Start. The Connector Quick Start allows companies to quickly and easily synchronize their industrial time series data to AWS so they can perform advanced predictive and historic analytics using the full breadth of AWS big data and partner services.

 

 

Summary

Hitachi Vantara recognizes the value of private, hybrid and public clouds and provides the tools and services to enable our customers to choose the right combination of cloud solutions for their specific requirements. Cloud is much more than just using a network of remote servers hosted on the internet to store, manage, and process data. Cloud is really about methodology, automation, financial models, software development approaches, and more.

 

We've known that for years as we have provided storage, converged, hyperconverged, and other infrastructure solutions to our customers deploying cloud on-premises for private clouds. Private clouds are not simply existing data centers running virtualized, legacy workloads. They require highly modernized digital application and service environments running on true cloud platforms like Hitachi Enterprise Cloud.

 

With the introduction of Hitachi Enterprise Cloud (HEC) as a service offering a few years back, and the recent Smart Data Center initiative, we are building out the components to support private cloud and connectivity to public clouds for hybrid cloud deployments. Hybrid clouds must bond private and public clouds together through fundamental technology that enables the transfer of data and applications.

 

With the introduction of REAN, we have a public cloud portfolio to complement our existing services. This allows us to step away from the limiting descriptions of “private”, “public”, and “hybrid”. Many, if not most, of our customers expect to manage an incredibly diversified environment based on intellectual property, latency, security, and integration needs. The main point is that we offer customers a range of capability that only a very few can. The big three public cloud providers (AWS, Azure, and Google Cloud) don't provide the infrastructure for private cloud. Most infrastructure companies do not have solutions for the public cloud and make the argument for private clouds to protect their product portfolios. The global SIs, like Accenture, don't provide the hardware and software that we do. As customers look for a partner that can help with the massive diversity of multiple cloud environments, we are the only partner that has the breadth of capabilities they are seeking.


Cisco and Hitachi Adaptive Solutions for Converged Infrastructure


Ten years ago, in 2009, Cisco introduced the Unified Computing System (UCS), a data center server product line composed of computing hardware, virtualization support, switching fabric, storage, and management software. This came within a year of Oracle's Database Machine, the first converged infrastructure system with a single SKU, which emphasized integration, engineering for performance, and pre-configured systems. Other infrastructure vendors quickly followed suit with their own versions of converged infrastructure systems. The market was ripe for these solutions due to the growing demand for capacity, the complexity of configuring systems out of multi-vendor piece parts, the introduction of virtualization systems, and the competition from public clouds. The main benefits were simplified management and operations, validated solutions, and reduced costs to deploy and maintain.

 

Evolution of Converged Infrastructure

Past generations of converged infrastructure were more about packaging than innovation. There were plenty of “solutions,” from rigid application stacks to more flexible reference architectures. Most vendors had more than one offering, and many systems were limited in scaling, especially on the storage side when a vendor had multiple storage offerings acquired over time. Today, converged infrastructures must adapt to and leverage new technologies such as artificial intelligence (AI) for smart operational analytics, heuristics-based automation, and the Internet of Things (IoT) in order to meet SLAs with data-driven insights that provide the foundation for enhanced systems of record and innovation.

 

Introducing Cisco and Hitachi Adaptive Solutions for Converged Infrastructure

Hitachi and Cisco have been partnering for over 14 years with thousands of mutual customers around the world, setting new standards for storage networking and providing our customers with choice and breakthrough levels of operational efficiency. This partnership has delivered high innovation in state-of-the-art storage and networking systems for virtualized converged solutions and private cloud infrastructures. Now it is time to take our partnership to the next level with a converged infrastructure built on best-of-breed components and management software that can adapt to current and future technologies and business requirements.

 

Infrastructure Components

Adaptive Solutions for Converged Infrastructure combines Cisco UCS Integrated Infrastructure with Hitachi Virtual Storage Platform to help enterprise businesses meet the challenges of today and position themselves for the future. Leveraging decades of industry expertise and superior technology, Cisco UCS B-Series Blades with Hitachi VSP storage integration offer a resilient, agile, flexible foundation for today's businesses. In addition, the Cisco and Hitachi partnership extends beyond a single solution, enabling businesses to benefit from an ambitious roadmap of evolving technologies such as advanced analytics, IoT, cloud, and edge capabilities. With Cisco and Hitachi, organizations can confidently take the next step in their modernization journey and prepare to take advantage of new business opportunities enabled by innovative technology. A key differentiator for this converged infrastructure is the VSP storage system, which provides a common storage management platform from small midrange to full-scale enterprise requirements, virtualization of external storage, AI and machine learning to reduce costs and increase operational efficiency, and a 100% data availability guarantee.

CVD Infra.png

Management Software

Hitachi and Cisco separately provide specialized management tools to simplify daily maintenance and provisioning tasks, which allow IT departments to manage fast-growing workloads without adding new staff and the associated opex. For VM administrators, VMware vCenter with rich Hitachi and Cisco plug-ins provides a native interface for infrastructure monitoring and management: Cisco UCS Manager for compute, the Cisco Prime DCNM (Data Center Network Manager) plug-in for the network, and Hitachi UCP Advisor for storage.

 

Cisco separately offers UCS Manager for system administrators and Data Center Network Manager (DCNM) for network administrators with advanced capabilities. DCNM is a network management platform for all NX-OS enabled IP and storage networking deployments.

 

Today, customers deploy Hitachi's UCP Advisor to enable rapid deployment of production environments using policy-based automated configuration, templates, workflow validations and simplified operations. Hitachi's UCP Advisor together with Cisco's software further reduces staff learning curves and barriers to adoption, thanks to integration with the ubiquitous VMware vCenter software.

CVD Management.png

Data Protection and Copy Data Management

Data protection and the efficient use of capacity are common requirements of workloads across the data center. Data protection is complicated: each application and data source has its own unique method for making backup copies. Storage-based replication is a well-understood method of providing HA and DR for many mission-critical applications, and it helps to reduce the need to implement unique, solution-specific methodologies for different workloads. The Hitachi VSP storage family offers snapshot, cloning and replication capabilities to support these requirements. While this storage functionality is a good first step, a common tool to manage this functionality in the context of the application adds even more value. Hitachi Data Instance Director (HDID) is well integrated into the Hitachi Vantara product portfolio and VMware vCenter, and allows customers to benefit from a common tool to manage backup and recovery for everything from general-purpose to virtualized to mission-critical applications. The functionality of HDID goes well beyond backup and recovery: it creates and repurposes copies for other uses like DevOps, addresses requirements like data governance, and provides visibility into abandoned, orphan, and rogue copies.

HDID.png

 

Delivered Through the Channel

Converged infrastructures give companies an integrated solution that simplifies infrastructure management, helps them consolidate their IT departments, and reduces the resources devoted to repetitive tasks. However, deployment in today's complex environment still requires a lot of architecture, testing, and tuning in order to fully utilize the innovation in today's storage, network, and management systems. As the IT workforce transitions from infrastructure specialists to generalists who are more focused on business outcomes, more services will be required for deployments.

 

Consequently, Adaptive Solutions for Converged Infrastructure will be delivered through the channel to leverage the deployment skills of our channel partners who have expertise in both Hitachi and Cisco systems. Hitachi Vantara and Cisco will identify and enable integration partners who will create and preconfigure a converged infrastructure that meets our customers' requirements.

 

Efficient, Scalable, Cooperative Support

Cisco and Hitachi provide a cooperative support model where the customer can contact either vendor for support. The support team performs initial triage and directs the customer to the other vendor's support organization if needed. Where appropriate, both support teams remain engaged throughout the process. The benefits are clear ownership of support issues, improved cross-vendor collaboration, and faster issue resolution.

 

Summary

Now is the time for the next evolution of converged infrastructure. Organizations need an agile solution, free from operational inefficiencies, to deliver continuous data availability, meet SLAs, prioritize innovation, and adapt to changing business requirements. But modernizing your data center can be a daunting task, and it’s vital to select trusted technology partners with proven expertise. With the right partner, companies can build for the future by enhancing systems of record, supporting systems of innovation, and growing their business. Cisco and Hitachi are the right partners.

 

Additional Resources

Hitachi Vantara and Connected Mainframe at SHARE 2019


SHARE 2019 is next week, from March 10 – 15 in Phoenix, Arizona, and should be very interesting this year because of all the enhancements that have become available in the z14 mainframe since July 2017. The Motley Fool reported that IBM is enjoying its strongest mainframe cycle in years, noting that strong sales of mainframe systems have helped IBM report three consecutive quarters of revenue growth. The z14 is capable of processing five times as many transactions per day, has triple the memory, and boasts input/output performance three times that of the z13. The z14 includes proprietary on-chip ASIC hardware dedicated to cryptographic algorithms, allowing it to encrypt all data without negatively affecting performance. This is a key feature for supporting GDPR.

 

Because of the mainframe's proven ability to securely handle billions of transactions a day, this 50-year-old technology is still used by 80% of the world's largest airlines, financial institutions, and retailers. In addition to improvements in speed and security, mainframes now include technologies such as containers, APIs, Java, Linux, microservices, and SOAs, which bring the reliability and security of mainframes to web and mobile apps. A recent IDC study, “The Business Value of the Connected Mainframe for Digital Transformation,” based on interviews with enterprises that run significant mainframe operations, showed that the mainframe plays an increasingly central role in digital transformation. IDC reported that the modernization and integration of the mainframe into an organization's connected ecosystem, internally and externally – which is the definition of the "connected mainframe" – leads to innovations that drive revenue growth and improve operational efficiency.

 

The connected mainframe connects to the outside world through the network via TCP/IP or through the mainframe's proprietary FICON channel, an I/O channel technology specifically designed for the mainframe. While some analytics run natively on the mainframe, data has to be moved off the mainframe in order to be combined with non-mainframe data for analytics. File Transfer Protocol (FTP) over TCP/IP is the method most often used to feed mainframe data into an enterprise-wide analytics pipeline. However, FTP is not secure, which introduces risk, and it is slower and more expensive than FICON because of the processor cycles required to drive TCP/IP. A more secure, faster and less expensive way to transfer mainframe data is to use the FICON channel, which offloads the transmission work from the general-purpose processors to the channel processors.

MDI.png

Hitachi Vantara and Luminex have partnered on a Mainframe Data Integration (MDI) platform that leverages the Luminex mainframe channel I/O interface to securely share and transfer data between mainframes and distributed systems environments using the FICON channel. Because FICON channels are specifically designed and optimized for moving data off the mainframe, MDI provides a faster, more secure, cheaper (less CPU) and easier (native) platform for connecting mainframe data than TCP/IP. A financial customer was able to reduce its fraud detection investigation time from 50 days to 5 days, and instead of spending 90% of their time on data collection and only 10% on analyzing the data, the team can now spend 80% of their time on investigations, improving the quality of results.

 

John Pilger from Hitachi Vantara and Colleen Gordon of Luminex will present a Lunch and Learn session “Hitachi Vantara and Luminex Unlocking Mainframe Data Value” on Thursday, March 14 from 12:30 pm to 1:30 pm in room 101A at the Phoenix Convention center, session number 24791. They will cover the MDI product and show that it is more than just a high speed data transfer tool but also a co-processor that can extend processing and interface capabilities. They will also provide use cases and customer success stories to quantify the business value. Please plan on attending SHARE Phoenix 2019 and this session in particular to learn how the “connected mainframe” can drive digital transformation with MDI. You can visit our engineers at the Hitachi/Luminex booth 312 at the convention center for more information on MDI and other connected mainframe solutions. There will be an in-booth hosted specialty beer bar on Tuesday.

Share 2019.png

Hitachi’s Contribution to Smart Cities in Turkey


Istanbul.png

Last week I was in Istanbul to participate in the World Cities Congress Istanbul 2019, which was sponsored by the Istanbul Metropolitan Municipality. I participated in two panels: one on Smart 4 Technologies and another on Smart Energy. Istanbul is a municipality of over 15 million people in a country of 80 million, making it a megacity, which is defined as a city of more than 10 million people. As a megacity, Istanbul has turned to smart technologies to answer the challenges of urbanization, delivering city services more efficiently and increasing the quality and accessibility of services such as transportation, energy, healthcare, and social services. Smart cities also offer great opportunities for transformation and optimization of the urban economy with production systems based on information and technology. Hitachi is engaged with Istanbul to deliver smart city solutions. Here are some of the smart city projects that Hitachi is supporting in Turkey.

 

Hitachi Establishes Strategic Regional Center for Healthcare Business in Istanbul

Healthcare.png

Istanbul is building a large airport with the capacity to serve 150 million passengers per year. One of the drivers for the new airport is the medical tourism business that is attracting people in the region with state-of-the-art medical services. Istanbul's strategic location and intellectual resources were the reasons for Hitachi Healthcare to establish a Regional Center for Healthcare in Istanbul in 2017. Hitachi will provide a healthcare business platform to strengthen operation services for hospital imaging and diagnostic systems, surgical treatment solutions and radiation therapy systems in Central Asia, the Middle East, and Africa as well as Turkey. Hitachi will collaborate with companies in Turkey to deliver state-of-the-art healthcare solutions that will contribute to the development of healthcare and improve the quality of life in the region.

 

Istanbul Buyuksehir Belediyesi, Traffic Management & Violation Detection System

Traffic Management.png

Istanbul's traffic management and violation detection system uses thousands of endpoint devices with detectors, cameras and other sensors, and fines are issued according to the records they capture. The requirement was to centralize and archive the data collected from these endpoint devices while complying with regulatory requirements. Hitachi Vantara provided HCP object storage to replace traditional storage, providing ease of access, search and indexing, support for privacy regulations, and hybrid cloud capability.

 

City of Istanbul Governorship: Safe, Smart Campus

Istanbul Governorship.png

The challenge was to secure the governorship campus and integrate multiple existing video and IoT systems. The solution was to implement Hitachi Vantara's Smart Spaces solution with Video Intelligence to bring disparate systems into a single view. This enables real-time alerts and analysis from facial recognition, traffic and parking monitoring, queue detection, people counting and license plate recognition. Smart cameras and edge devices provide situational awareness and end-to-end smart security.

 

Türkiye Petrol Rafinerileri (Tupras) | Opening the Data Lake to Data Science

Turas.png

Tupras operates four oil refineries processing crude from global markets. Time-consuming manual steps made it difficult to make quick decisions in refinery operations, and data silos inhibited collaboration between management, engineers and IT, resulting in short-sighted and/or incorrect decisions. They used a time series database (TSDB), which is optimized for measuring change over time, a workload very different from other data workloads. Open-source monitoring solutions such as OpenTSDB lack analytics and smart alerting, and they can be a challenge to scale for capacity and to maintain for reliability and performance. Tupras had no unified, scalable way to discover the content sliced across OpenTSDB, no framework for data science, and no way to operationalize data science. The solution was to use Pentaho, which connects natively to OpenTSDB, eliminating the need for third-party vendors and leveraging distributed compute. Pentaho unlocked MapR to enable data science and enabled business users to operationalize data through self-service, improving lead time from 2 days to less than 10 minutes. It also equipped the data science team, which included IT, data engineers and business analysts, with tools for data preparation and for operationalizing algorithms. They now have a unified data architecture with one set of tools to unlock relevant sources of data.

 

One Hitachi

Hitachi Healthcare and Hitachi Vantara are two of the five Hitachi companies in Turkey working as one Hitachi. The others are Hitachi Kenki, the construction machinery company, Hitachi Rail, and Hitachi Europe. Hitachi Kenki has the largest presence due to the many construction projects in Turkey. Hitachi Vantara is best known for enterprise storage, which we sell primarily to financial services, telcos, and energy companies that have high availability requirements, but it is also gaining recognition for big data and data analytics. Hitachi Europe has introduced finger vein biometrics for ATMs. Here is just a sampling of the projects which Hitachi supports in Turkey.

One Hitachi.png

 

Hitachi delivers value-based outcomes with IoT solutions that bring devices, people, infrastructure and processes together. To learn more about Hitachi’s Smart City Solutions, click here

The internet is turning inside out – Time for Time Series DB?


Data Hole.png

The internet is normally accessed like a pyramid, where a URL may be accessed by hundreds, thousands, even millions of users. Now, with the Internet of Things, we have a multitude of “things” sending millions of records to the Internet. In a sense, the internet is being turned inside out, with millions more data points being ingested than being served up, thanks to the sensors that enable IoT.

 

An IoT device like an autonomous vehicle may have hundreds of sensors generating thousands of gigabytes of data. It is estimated that a single autonomous car will collect over 4,000 GB of data per day! The reason for this large amount of data is that IoT devices are concerned with change. In order to track change, the data must be collected as a time series, where new data is always added and never updated. This allows us to measure change and analyze how something has changed in the past, how it is changing in the present, and how it may change in the future. By focusing on change, we can understand how a system, process, or behavior changes over time and automate the response to future changes.

 

The downside is that time series data is generated very rapidly, often faster than can normally be absorbed by transactional or NoSQL databases. This has spawned a rapidly developing market for time series databases (TSDBs). TSDBs are fine-tuned for time series data, and this tuning results in efficiencies such as higher ingest rates, faster queries at scale, and better data compression. TSDBs also include functions and operations common to time series data analysis, such as data retention policies, continuous queries, and flexible time aggregations, which result in an improved user experience with time series data. You know that time series databases are mainstream when AWS gets in the game: AWS has announced Amazon Timestream, a fast, scalable, fully managed time series database service for IoT and operational applications that makes it easy to store and analyze trillions of events per day at one-tenth the cost of relational databases. The following chart from DB-Engines, November 2018, shows the growing acceptance of TSDBs compared to other types of databases.

TSDB graph.png
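To make the ingest model concrete, the sketch below pushes a few sensor readings into a time series database over HTTP. It assumes an OpenTSDB-style REST endpoint; the host, metric names and tags are placeholders, and other TSDBs expose similar write APIs:

```python
# Minimal sketch: pushing sensor readings into a time series database.
# Assumes an OpenTSDB-style HTTP API; the host, metric and tag names are placeholders.
import time
import requests

TSDB_URL = "http://tsdb.example.com:4242/api/put"  # hypothetical endpoint

datapoints = [
    {
        "metric": "refinery.pump.temperature",
        "timestamp": int(time.time()),
        "value": 87.4,
        "tags": {"site": "plant1", "pump": "p-204"},
    },
    {
        "metric": "refinery.pump.vibration",
        "timestamp": int(time.time()),
        "value": 0.012,
        "tags": {"site": "plant1", "pump": "p-204"},
    },
]

# Time series writes are append-only: new points are added, existing points are not updated.
response = requests.post(TSDB_URL, json=datapoints, timeout=5)
response.raise_for_status()
```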

 

If an autonomous automobile can generate 4,000 GB of data per day, imagine what a more complex system like an oil refinery produces. We recently worked with a large oil refinery in Europe which had thousands of sensors installed on equipment including heat exchange networks, power plants, pipelines, and many other systems, collectively generating millions of data points every second. Their operators, process engineers, IT staff, and data scientists were collecting the data manually from these systems, as well as from Oracle, SQL Server, and SAP, and used tools such as Excel to derive their insights. Data silos inhibited collaboration between management, scientists, engineers and IT, resulting in short-sighted and/or incorrect decisions. This was neither efficient, reusable, nor scalable.

Oil Refinery.png


A TSDB, OpenTSDB, was used to collect all the sensor data into a data lake. Pentaho Data Integration was used to connect to OpenTSDB, eliminating the need for third-party vendors and leveraging distributed compute. OpenTSDB has extensive, REST-based, open APIs, which gave our Pentaho engineers huge flexibility to retrieve data extremely fast and parse it within Pentaho. The analytics used ranged from simple correlation and visualization to machine learning for predicting values. That said, Pentaho's value proposition was more on the data integration side: the data acquisition, extraction, blending and preparation that consumes 80% of a data scientist's time, compared with mining and modeling. Pentaho also enabled the process engineers, IT, and data scientists to work as a team and gave business users self-service consumption of operational data. This not only led to better decisions, but also reduced the lead time from 2 days to less than 10 minutes.
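For readers curious about the retrieval side of such an integration, the sketch below queries an OpenTSDB-style REST API for a downsampled, aggregated series. The endpoint, metric and tag values are illustrative placeholders, not details of the customer deployment described above:

```python
# Minimal sketch: querying aggregated time series data from an OpenTSDB-style REST API.
# The endpoint, metric and tags are hypothetical placeholders.
import requests

TSDB_URL = "http://tsdb.example.com:4242/api/query"  # hypothetical endpoint

query = {
    "start": "24h-ago",  # look back over the last day
    "queries": [
        {
            "aggregator": "avg",
            "metric": "refinery.pump.temperature",
            "downsample": "5m-avg",               # reduce raw points to 5-minute averages
            "tags": {"site": "plant1"},
        }
    ],
}

response = requests.post(TSDB_URL, json=query, timeout=30)
response.raise_for_status()

for series in response.json():
    print(series["metric"], series["tags"], len(series["dps"]), "points")
```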

 

Time series data are simply measurements or events that are tracked, monitored, downsampled, and aggregated over time. This could be server metrics, application performance monitoring, network data, sensor data, events, clicks, trades in a market, and many other types of analytics data. Time series data can be analyzed to understand the underlying structure and function that produce the observations, and a mathematical model can be developed to explain the data in such a way that prediction, monitoring, or control can occur. As the internet turns inside out with time series databases, Hitachi Vantara's Pentaho will be there to scale with the explosion of data and provide integration, analysis, and visualization for greater insights into current and new time series applications.

 


Tastes Like Chicken


In 2050 the population of the world is expected to be 9 billion, versus 7 billion today. The challenge will be to feed 2 billion more people with less arable land, less water and fewer farmers. One of the increasing demands will be for protein as people demand richer foods.

 

If you are over 50 years old, you may remember a satirical American comic strip, written and drawn by Al Capp, that appeared in many newspapers in the United States, Canada and Europe, featuring a fictional clan of hillbillies in the impoverished mountain village of Dogpatch, USA. In one episode of the series, the young hero Li'l Abner discovers the Shmoo in a hidden valley and introduces them to Dogpatch and the rest of the world. The Shmoo was a lovable creature that laid eggs, gave milk, loved to be eaten and tasted like any meat desired: chicken when fried, steak when broiled, pork when roasted, and catfish when baked. They multiplied like rabbits but required no feed or water, only air to breathe. The perfect solution to world hunger.

Shmoo.png

 

Today we have something that is close to the Shmoo: the modern broiler chicken. In 1957 the average chicken weighed about 1 kg, or 2.2 lbs. Today a commercially grown broiler chicken weighs 9.3 lbs. after 8 weeks. It takes only 2.5 lbs. of feed and 468 gallons of water to produce one lb. of chicken meat, which is much more efficient than producing a lb. of pork or beef, with much less waste, less space and lower CO2 emissions.

 

Chicken.png

It appears that chicken will be the meat for the masses.

 

IoT will help to increase agricultural efficiency, reduce spoilage, and increase the freshness and nutritional content of healthy foods. The problem will not be the production of food, but how to build the infrastructure to provide equal access to that food for all 9 billion people. According to a BofA Merrill Lynch Global Investment Strategy report, populous countries like Nigeria, Pakistan and Kenya spend 47% to 57% of their household expenditure on food, compared to 7% in the US and the UK.

Food Costs.png

To finish the story of the Shmoo: the Shmoo became so popular that people no longer needed to go to the stores to buy food. This caused a series of images reminiscent of the Wall Street Crash of 1929, and the Captains of Industry banded together to exterminate the Shmoo. Two of the Shmoos managed to escape back to their hidden valley in the mountains. Wikipedia described the Shmoo sequence as “massively popular, both as a commentary on the state of society and a classic allegory of greed and corruption tarnishing all that is good and innocent in the world. In their very few subsequent appearances in Li'l Abner, Shmoos are also identified by the U.S. military as a major economic threat to national security.”

 

Mr. Higashihara, our Hitachi CEO, always reminds us of the Light and Shadow of Digital Transformation. With every advancement in digital transformation we must be mindful of the possible shadows which may negate our vision for social innovation.

DataOps and Hitachi Vantara


According to the Harvard Business Review, "Cross-industry studies show that on average, less than half of an organization’s structured data is actively used in making decisions—and less than 1% of its unstructured data is analyzed or used at all. More than 70% of employees have access to data they should not, and 80% of analysts’ time is spent simply discovering and preparing data. Data breaches are common, rogue data sets propagate in silos, and companies’ data technology often isn’t up to the demands put on it." That was in a report back in 2017. What has changed since then?

 

Few Data Management Frameworks are Business Focused

Data management has been around since the beginning of IT, and a lot of technology has been focused on big data deployments, governance, best practices, tools, and so on. However, the large data hubs of the last 25 years (data warehouses, master data management, data lakes, Hadoop, Salesforce and ERP) have resulted in more data silos that are not easily understood, related, or shared. Few if any data management frameworks are business focused: frameworks that not only promote efficient use of data and allocation of resources, but also curate the data to capture its meaning and the technologies applied to it, so that data engineers can move and transform the essential data that data consumers need.

 

Introducing DataOps

Today more customers are focusing on the operational aspects of data rather than on the fundamentals of capturing, storing and protecting data. Following the success of DevOps (a set of practices that automates the processes between software development and IT teams so that they can build, test, and release software faster and more reliably), companies are now focusing on DataOps. DataOps is best described by Andy Palmer, who coined the term in 2015: “The framework of tools and culture that allow data engineering organizations to deliver rapid, comprehensive and curated data to their users … [it] is the intersection of data engineering, data integration, data quality and data security. Fundamentally, DataOps is an umbrella term that attempts to unify all the roles and responsibilities in the data engineering domain by applying collaborative techniques to a team. Its mission is to deliver data by aligning the burden of testing together with various integration and deployment tasks.”

 

At Hitachi Vantara we have been applying our technologies to DataOps in four areas: Hitachi Content Platform, Pentaho, Enterprise IT Infrastructure, and REAN Cloud.

 

  • HCP: Object storage for unstructured data through our Hitachi Content Platform and Hitachi Content Intelligence software. Object storage with rich metadata, content intelligence, data integration, and analytics orchestration tools enables business executives to identify data sources, data quality issues, types of analysis and the new work practices needed to use those insights.

HCP DataOps.png

 

  • Pentaho: Pentaho streamlines the entire machine learning workflow and enables teams of data scientists, engineers and analysts to train, tune, test and deploy predictive models.

Pentaho DataOps.png

  • IT Infrastructure: Secure enterprise IT infrastructure that extends from edge to core to cloud, based on REST APIs for easy integration with third-party vendors. This gives us the opportunity not only to connect with other vendors' management stacks, like ServiceNow, but also to apply analytics and machine learning and to automate the deployment of resources through REST APIs (a minimal automation sketch follows below).

 

IT Data Ops.png
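As an illustration of what REST-driven infrastructure automation can look like, the sketch below provisions a volume through a hypothetical management endpoint and polls the resulting task. The URL, payload fields and token are invented for illustration and do not represent any specific Hitachi API:

```python
# Minimal sketch of REST-based infrastructure automation.
# The endpoint, payload fields and token below are hypothetical, not a specific product API.
import time
import requests

BASE_URL = "https://infra-mgmt.example.com/api/v1"   # hypothetical management endpoint
HEADERS = {"Authorization": "Bearer EXAMPLE_TOKEN"}

# Ask the management layer to provision a 500 GB volume for a dev/test workload.
task = requests.post(
    f"{BASE_URL}/volumes",
    headers=HEADERS,
    json={"name": "devtest-vol-01", "capacity_gb": 500, "pool": "gold"},
    timeout=10,
).json()

# Poll the asynchronous task until the volume is ready, then hand off to the next automation step.
while task.get("status") not in ("succeeded", "failed"):
    time.sleep(5)
    task = requests.get(f"{BASE_URL}/tasks/{task['id']}", headers=HEADERS, timeout=10).json()

print("Provisioning result:", task.get("status"))
```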

 

  • REAN Cloud: A cloud-agnostic managed services platform for DataOps in the cloud, with highly differentiated offerings to migrate applications to the cloud and modernize them to leverage cloud services for data warehouse modernization, predictive agile analytics, and real-time IoT. REAN Cloud also provides ongoing managed services.

REAN Data Ops.png

Summary

  • Big Data systems are becoming a center of gravity in terms of storage, access and operations.
  • Businesses are looking to DataOps to speed up the process of turning data into business outcomes.
  • DataOps is needed to understand the meaning of the data as well as the technologies that are applied to the data so that data engineers can move, automate and transform the essential data that data consumers need.
  • Hitachi Vantara provides DataOps tools and platforms through:
    • Hitachi Content Platform
    • Pentaho data integration and analytics orchestration
    • Infrastructure analytics and automation
    • REAN Cloud migration, modernization, and managed services

 

Blak Hole Pic.png

Grad Student Katie Bouman uses DataOps to capture first picture of a black hole.

International Women of Hitachi Vantara


Last year around this same time, I posted a blog about the women of Hitachi Vantara, and featured four of the women that I worked with on a regular basis here in Santa Clara. This year, I thought I would like to introduce three other women who I have known and worked with internationally. While these three women represent different countries and cultures, they all share the same attributes of the four that I profiled last year. They all know how to lead, innovate, and succeed.

 

Merete.png

Merete Soby has been very successful as the Country Manager for Hitachi Vantara in Denmark for the past 11 years. When she joined what was then Hitachi Data Systems Denmark, we were a solid storage company with 15-20% market share. Within 5 years, under her leadership, HDS Denmark grew market share to 45-50% and became the number one storage vendor in the Danish market. Over the years, the Hitachi Vantara team in Denmark has won many big-name accounts and created a strong winning culture in the company. The journey continued with new solutions, expanding beyond storage to converged solutions, solutions for unstructured data such as HNAS and object storage, analytics solutions, and REAN cloud services.

 

When I visit Denmark and talk to people in the industry, they always have great things to say about our team in Denmark. The first thing they comment on is the team’s commitment to customer support and engagement. A lot of credit is given to Merete who is described as an engaging, passionate, involved executor who empowers people to become better at what they do. When one of her team members that I worked with fell ill, Merete sought me out at a busy conference to assure me of that team member’s recovery, showing her awareness and concern for people and relationships.

 

I asked Merete if she ever felt limited in her career because she was a woman. She replied that she did not feel limited. “I believe that due to my relative young age, first as sales manager (26 years old) and later on as country manager in HDS (32 years old) I felt I needed to be a bit better and more prepared in every aspect of my business, but not directly because of my gender.”

 

Merete is a mom to three kids: 11-year-old twins and an 8-year-old boy. Her children have made her very focused on having the right work-life balance, which she feels has increased her performance at work. She says that she does not mentor her children directly: “I show them how to behave and act in life by my own behavior. I show them to prioritize family and our values, by living them myself.” I believe that same philosophy extends to her leadership at work.

 

Basak.png

Basak Candan joined the Hitachi Vantara team in Turkey two and a half years ago as Office Manager and Marketing Coordinator. Last September she was promoted to Field Marketing Manager for Turkey and the Middle East. She has the awesome responsibility of driving end-to-end field marketing planning and execution in Turkey and the Middle East, working very closely with the sales teams to win new business and grow revenue. She is also taking the lead in the Emerging Marketing team to support Brand Leadership Programs, ensuring that we build consistent and relevant messages across Emerging EMEA markets for our entire portfolio.

 

I recently worked with Basak when I was invited to participate in the World Cities Congress in Istanbul. She helped me prepare for my panel discussions at the conference, arranged for me to visit customers in Istanbul and Ankara, and helped me understand the marketing environment in Turkey. On the day before I was to fly to Ankara, she noticed that I did not have a top coat and expressed concern for my well-being. That evening, much to my surprise, the hotel concierge delivered a top coat to my room, on loan for my trip to Ankara. I was very touched by Basak's concern, creativity, and attention to detail.

 

Basak told me that if anyone had asked her as a child what she wanted to be when she grew up, it probably would not have been anything to do with technology. Before joining Hitachi, she held various sales and marketing roles at a number of large luxury hotel chains for more than 7 years. Her formal education was in hospitality and marketing, but she has been able to transfer those skills into a technology career.

 

Thanks to some strong, positive, influential women in her life who steered her in that direction, the real transformation started for her when she began working at Hitachi Vantara. She said working with Hitachi Vantara on storage, cloud, IoT, and big data analytics was like discovering a new planet. With the help of her Hitachi manager, she applied to Boğaziçi University, which is among the top 200 universities in the world, and was accepted into a Digital Marketing and Communication Program. There she worked on a project analyzing JetBlue Airways' marketing campaigns and how the airline could digitally transform its marketing. Her project was judged by a jury and won a special prize. That gave her the encouragement to grow and show her strength in technical marketing. Basak typifies the self-starter: someone who is capable of recognizing and seizing new opportunities. Self-starters immerse themselves in new endeavors and remain passionate about pursuing their vocation and honing their skills.

 

Ros.png

When I need help understanding tough technical questions about three-data-center disaster recovery or the latest mainframe features for Geographically Dispersed Parallel Sysplex™, I call on the expert, Ros Schulman, who is always up on the latest technologies and business processes for disaster recovery.

 

Ros has been with Hitachi Data Systems and Hitachi Vantara for over 20 years. In the last 9 years she has filled director-level roles in product management, technical sales support, business development, and technical marketing, with extensive skill sets around data protection (replication and backup), business continuity and resiliency. Her experience in analyzing customer requirements, technologies and industry trends has helped to maximize revenue growth in these areas. She is always in demand to speak at customer events and industry forums.

 

Ros was born in London and went to school there. She started her career as a computer operator at the age of 18 in local government and later became an MVS systems programmer at a time when very few women were in that field. She later moved to the United States and continued her technical career, working on both the vendor and customer sides. When I joined Hitachi Data Systems, Ros was already recognized as the technical expert in operating systems and disaster recovery. She is passionate about our storage and systems technology and is generous in sharing her experience and insights with others. She is not shy: I have seen this petite lady go toe-to-toe with several heavyweight MVS systems programmers, debating the benefits of different systems.

 

When I asked her what her advice would be for women considering a technical career, she said, “It’s something you have to be passionate about. I still believe it’s much harder to move ahead, so you have to be willing to love what you do. I would also recommend that you take some business classes, as in today’s digital age, you need a lot more than just technical skills. My motivation is learning and growing, this industry fascinates me, when I started, we used disk drives that were 20MB in size and MF had less than 2GB memory and look where we are today. I do not know of another career where things have changed so radically and continue to change and have now been embraced in every facet of our lives.”

 

It is one thing to have the knowledge and skills to be technical. However, it requires passion and enthusiasm to excel in a technical area; and be recognized as the go-to expert. Ros Schulman is my go-to expert.

 

Hitachi Vantara recognizes the value of diversity. The Women of Hitachi play an important role in defining our culture and contributing to our success as a technology company. Women are well represented in our sales and marketing organizations, as well as in product management and technical support roles. Our CIO, CFO, and Chief Human Resource Officer are women. Women account for more than 25% of our IT team – just over the industry average – according to CIO Renée McKaskle.

 

A recent Wall Street Journal blog reports that:

 

“She (Renee McKaskle) credits the Hitachi Inc. subsidiary’s “double” bottom-line goal, saying “a healthy bottom line is important but doing what is right for society is important, too.”

To that end, she said, the company supports several global and local diversity initiatives, including women’s summits and mentoring programs.

“These programs have been critical to forging the diversity we have in place today, with positive indicators that this will continue to increase,” she added.”

 

One of the things I enjoy most about my job is the ability to work with a wide variety of people, to see them in action, celebrate their successes, and hear their stories. I hope you enjoyed hearing about these women who have inspired me and will perhaps inspire you as well.


Cisco and Hitachi Vantara: The Power of Two


Power of two.png

It’s been just over two months since our strategic partner Cisco and Hitachi Vantara announced the further strengthening of our 15+ year relationship with the launch of our jointly developed Cisco and Hitachi Adaptive Solutions for Converged Infrastructure.

I thought it would be good to provide you with some recent updates as well as answer some questions that have come up from our customers and partners.

Why this solution now?

Well, with all sincerity, it's all about you. At Hitachi Vantara, we realize that to best serve our customers and enable them to meet their IT, business and data management objectives, we must embrace and recognize complementary technologies.

In this case, we chose to partner with Cisco for its industry-leading technologies, combined with our customer-proven Hitachi Virtual Storage Platform (VSP) all-flash and hybrid arrays and AI operations software. The result is a comprehensive converged solution for truly demanding virtualized workloads and enterprise applications.

It’s this “Power of Two” philosophy that also encompasses a company-wide and executive commitment in the partnership to benefit our customers, for the long term.

This is especially critical in the dynamic business environment that our customers face today, from compliance demands to resource limitations and constraints.

According to Enterprise Strategy Group research, “38% of organizations have a problematic shortage of existing skills in IT architecture/planning”

In a previous blog I wrote about our Continuous Business Operations capability that enables customers to achieve strict zero RTO/RPO requirements.

We've extended this capability to Cisco and Hitachi Adaptive Solutions for Converged Infrastructure, specifically for VMware vSphere virtualized environments.

We've made disaster recovery orchestration much easier for customers via Hitachi Data Instance Director, which removes the complexity and simplifies global-active device deployment to a series of clicks (versus complex CLI and scripting).

Cisco Hitachi Adaptive .png

Cisco and Hitachi Adaptive Solutions for Converged Infrastructure: Meet in the Channel for Flexibility

Cisco and Hitachi Vantara have a select group of channel partners that can customize this solution to specific customer requirements, thereby enabling you to choose the validated Cisco networking, Cisco servers, and Hitachi Virtual Storage Platform configurations that best fit your needs, all with the assurance of a fully qualified and supported solution by Hitachi and Cisco.

I invite you to join my colleague Tony Huynh, our solutions marketing manager for Hitachi Vantara, as he teams up with the Enterprise Strategy Group for an upcoming webinar on May 15, 2019, from 9 AM to 10 AM PST, where we will discuss this and other items of interest.

Webinar Registration:

https://www.brighttalk.com/webcast/12821/357216?utm_source=Webinar&utm_medium=Email&utm_campaign=Sales

ESG Analyst Report:

Read this ESG white paper to learn how Cisco and Hitachi Adaptive Solutions for Converged Infrastructure can help organizations achieve digital transformation milestones on a reliable, secure infrastructure that ensures access to their data.

https://www.hitachivantara.com/en-us/pdf/analyst-content/cisco-hitachi-adaptive-solutions-for-converged-infrastructure-esg-whitepaper.pdf

More information on Cisco and Hitachi Adaptive Solutions for Converged Infrastructure:

https://www.hitachivantara.com/en-us/products/converged-systems/cisco-hitachi-adaptive-solutions-for-converged-infrastructure.html

Forget the Rules, Listen to the Data


 

Rule-based fraud detection software is being replaced or augmented by machine-learning algorithms that do a better job of recognizing fraud patterns that can be correlated across several data sources. DataOps is required to engineer and prepare the data so that the machine learning algorithms can be efficient and effective.

 

Fraud detection software has traditionally been based on rules-based models. A 2016 CyberSource report claimed that over 90% of online fraud detection platforms use transaction rules to flag suspicious transactions, which are then directed to a human for review. We've all received that phone call from our credit card company asking if we made a purchase in some foreign city.

 

This traditional approach of using rules or logic statements to query transactions is still used by many banks and payment gateways today, and the bad guys are having a field day. In the past 10 years the incidence of fraud has escalated thanks to new technologies, like mobile, that banks have adopted to better serve their customers. These new technologies open up new risks such as phishing, identity theft, card skimming, viruses and Trojans, spyware and adware, social engineering, website cloning, cyber stalking and vishing (if you have a mobile phone, you've likely had to contend with the increasing number and sophistication of vishing scams). Criminal gangs use malware and phishing emails as a means to compromise customers' security and personal details to commit fraud. Fraudsters can easily game a rules-based system. Rules-based systems are also prone to false positives, which can drive away good customers, and they become unwieldy as more exceptions and changes are added, eventually overwhelmed by today's sheer volume and variety of new data sources.

 

For this reason, many financial institutions are converting their fraud detection systems to machine learning and advanced analytics and letting the data detect fraudulent activity. Today’s analytic tools, running on modern compute and storage systems, can analyze huge volumes of data in real time, integrate and visualize an intricate network of unstructured and structured data, generate meaningful insights, and provide real-time fraud detection.

 

However, in the rush to do this, many of these systems have been poorly architected to address the total analytics pipeline. This is where DataOps comes into play. A big data analytics pipeline, from data ingestion to embedded analytics, consists of three steps (a minimal code sketch follows the list):

 

  1. Data Engineering: The first step is flexible data on-boarding that accelerates time to value. This requires a product that can ETL (extract, transform, load) the data from the acquisition application, which may be a transactional database or a stream of sensor data, and load it in a format that an analytics platform can process. Regulated data also needs to show lineage, a history of where the data came from and what has been done with it, which typically requires another product for data governance.
  2. Data Preparation: Data integration that is intuitive and powerful. Data typically goes through transforms to put it into an appropriate format; this is colloquially called data wrangling, and the wrangling usually requires yet another set of products.
  3. Analytics: Integrated analytics to drive business insights. This requires analytic products that may be specific to the data scientist or analyst, depending on their preference for analytic models and programming languages.
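
The sketch below walks through these three steps in miniature with Python, pandas and scikit-learn. The file name, column names and choice of model are assumptions made purely for illustration; they are not a description of any particular product stack.

# Minimal three-step pipeline sketch. "transactions.csv", the column names and
# the model choice are illustrative assumptions only.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# 1. Data engineering: ingest raw transactions exported by the acquisition system.
raw = pd.read_csv("transactions.csv")

# 2. Data preparation ("wrangling"): clean and reshape into a model-ready form.
prepared = (
    raw.dropna(subset=["amount", "merchant_category"])
       .assign(is_foreign=lambda df: df["country"] != df["home_country"])
)
features = pd.get_dummies(prepared[["amount", "merchant_category", "is_foreign"]])
labels = prepared["is_fraud"]

# 3. Analytics: train and evaluate a predictive model on the prepared data.
X_train, X_test, y_train, y_test = train_test_split(features, labels, test_size=0.2)
model = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)
print("holdout accuracy:", model.score(X_test, y_test))

Even in this toy form, each step tends to be owned by a different tool and team in practice, which is exactly the brittleness described next.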

 

A data pipeline architected around so many separate products will be costly, hard to manage and very brittle as data moves from product to product.

 

Hitachi Vantara’s Pentaho Business Analytics can address DataOps for the entire Big Data Analytics pipeline with one flexible orchestration platform that can integrate different products and enable teams of data scientists, engineers, and analysts to train, tune, test and deploy predictive models.

 

Pentaho is open-source based and has a library of PDI (Pentaho Data Integration) connectors that can ingest structured and unstructured data, including MQTT (Message Queue Telemetry Transport) data flows from sensors. A variety of data sources, processing engines, and targets are supported, including Spark, Cloudera, Hortonworks, MapR, Cassandra, Greenplum, Microsoft and Google Cloud. A data science pack allows you to operationalize models trained in Python, Scala, R, Spark, and Weka, and deep learning is supported through a TensorFlow step. And since it is open, it can interface with products like Tableau if the user prefers them. Pentaho provides an intuitive drag-and-drop interface to simplify the creation of analytic data pipelines. For a complete list of the PDI connectors, data sources and targets, languages, and analytics, see the Pentaho Data Sheet.
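
As a rough illustration of what operationalizing a Python-trained model can involve upstream, the sketch below trains and serializes a simple classifier. How the serialized artifact is then referenced from a PDI step is product specific and not shown here; the synthetic dataset and output path are assumptions for illustration.

# Train a simple model in Python and serialize it so a downstream orchestration
# step can pick it up. The synthetic dataset and output path are illustrative.
import pickle
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=5000, n_features=10, random_state=42)
model = LogisticRegression(max_iter=1000).fit(X, y)

with open("fraud_model.pkl", "wb") as f:
    pickle.dump(model, f)  # artifact handed off to the orchestration layer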

 

Pentaho enables the DataOps team to streamline the data engineering, data preparation and analytics process and enables more citizen data scientists, a role Gartner defines in “Citizen Data Science Augments Data Discovery and Simplifies Data Science” as a person who creates or generates models that use advanced diagnostic analytics or predictive and prescriptive capabilities, but whose primary job function is outside the field of statistics and analytics. Pentaho’s approach to DataOps has made it easier for non-specialists to create robust analytics data pipelines, and it enables analytic and BI tools to extend their reach by making both data and analytics more accessible. Citizen data scientists are “power users” who can perform both simple and moderately sophisticated analytical tasks that would previously have required more expertise. They do not replace data science experts, as they lack that specific, advanced expertise, but they bring their own expertise around the business problems and innovations that matter.

 

In fraud detection the data and scenarios are changing faster than a rules-based system can keep track of, leading to a rise in false positive and false negative rates that is making these systems no longer useful. Machine learning can solve this problem because it is probabilistic and uses statistical models rather than deterministic rules. The machine learning models need to be trained using historic data, and the creation of rules is replaced by the engineering of features, which are input variables related to trends in the historic data. In a world where data sources, compute platforms, and use cases are changing rapidly, unexpected changes in data structure and semantics (known as data drift) require a DataOps platform like Pentaho Machine Learning Orchestration to ensure the efficiency and effectiveness of machine learning.
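
The sketch below illustrates the shift from hand-written rules to engineered features, plus a crude drift check of the kind that would trigger retraining. The column names, rolling window and drift threshold are assumptions for illustration; an orchestration platform would schedule and monitor steps like these rather than replace them.

# Feature engineering in place of rules, plus a crude data-drift check.
# Column names, window size and threshold are illustrative assumptions.
import pandas as pd

def engineer_features(txns: pd.DataFrame) -> pd.DataFrame:
    """Derive behavioural features from historic transactions, per card."""
    txns = txns.sort_values("timestamp")
    amounts = txns.groupby("card_id")["amount"]
    return txns.assign(
        rolling_mean_amount=amounts.transform(lambda s: s.rolling(10, min_periods=1).mean()),
        amount_zscore=amounts.transform(lambda s: (s - s.mean()) / (s.std() + 1e-9)),
    )

def drifted(train_col: pd.Series, live_col: pd.Series, threshold: float = 3.0) -> bool:
    """Flag drift when the live mean moves several training standard deviations away."""
    shift = abs(live_col.mean() - train_col.mean()) / (train_col.std() + 1e-9)
    return shift > threshold

history = pd.DataFrame({
    "card_id":   [1, 1, 1, 2, 2, 2],
    "timestamp": pd.date_range("2019-01-01", periods=6, freq="D"),
    "amount":    [20.0, 25.0, 22.0, 400.0, 380.0, 950.0],
})
featured = engineer_features(history)
# A True result here would prompt retraining and redeployment, not new rules.
print(drifted(featured["amount"][:3], featured["amount"][3:]))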

 

You can visit our website for a hands-on demo of building a data pipeline with Pentaho and see how easy Pentaho makes it to “listen to the data.”

AI and Solomon's Code


There once was a king of Israel named Solomon, who was the wisest man on earth. One day he was asked to rule between two women, both claiming to be the mother of a child. The arguments on both sides were equally compelling. How would he decide this case? He ordered that the child be cut in half so that each woman would have an equal portion. One mother agreed, while the other pleaded that the baby be spared and given to the care of the other woman. In this way King Solomon determined who was the real mother.

 

If we had submitted this to an AI machine would the decision have been any different?

 

Solomon’s Code, written by Olaf Groth and Mark Nitzberg and published in November of last year, is fairly up to date on recent happenings in the AI world. It is “a thought provoking examination of Artificial Intelligence and how it will reshape human values, trust, and power around the World.” I highly recommend reading this book to understand the potential impact AI will have on our lives, for good or bad.

 

The book begins with the story of Ava, who is living with AI in the not-too-distant future. AI has calculated her probability of developing cancer, like her mother, and has prescribed a course of treatment tied to sensors in her refrigerator, toilet, and ActiSensor mattress. Her wearable personal assistant senses her moods. The insurance company and her doctor put together a complete treatment plan that considers everything from her emotional well-being to her work activities and even the friends she associates with. Her personal assistant makes decisions for her as to where she goes to eat, what music she listens to, and whom she calls for support.

 

As we cede more of our daily decisions to AI, what are we really giving up? Do AI systems have biases? If AI models are developed by data scientists whose personality, interests and values may be different than an agricultural worker or a factory worker, how will that influence the AI results? What data is being used to train the AI model? Does it make a difference if the data is from China or the United Kingdom?

 

The story of Solomon is a cautionary tale. He built a magnificent kingdom, but the kingdom imploded due to his own sins and was followed by an era of violence and social unrest. “The gift of wisdom was squandered, and society paid the price.”

 

The introduction to the book ends with this statement:

 

“Humanity’s innate undaunted desire to explore, develop, and advance will continue to spawn transformative new applications of artificial intelligence. The genie is out of the bottle, despite the unknown risks and rewards that might come of it. If we endeavor to build a machine that facilitates our higher development, rather than the other way around, we must maintain a focus on the subtle ways AI will transform values, trust, and power. And to do that, we must understand what AI can tell us about humanity itself, with all its rich global diversity, its critical challenges, and its remarkable potential.”

                                                                                                                                                           

This book was of particular interest to me because Hitachi’s core strategy is built around Social Innovation, where we operate our business to create three value propositions: improving customers’ social value, environmental value, and economic value. In order to do this we must be focused on understanding the transformative power of technologies like AI, for good or bad.

                                  

Losing Revenue in a Growing Market


The latest International Data Corporation (IDC) Worldwide Quarterly Enterprise Storage Systems Tracker was published on March 4, 2019. It showed that vendor revenue in the worldwide enterprise storage systems market is still increasing, up 7.4% year over year to $14.5 billion during the fourth quarter of 2018 (4Q18). Total capacity shipments were up 1.7% year over year to 92.5 exabytes during the quarter. The total all-flash array (AFA) market generated just over $2.73 billion in revenue during the quarter, up 37.6% year over year, and the hybrid flash array (HFA) market was worth slightly more than $3.06 billion in revenue, up 13.4% from 4Q17.

 

Revenue generated by the group of original design manufacturers (ODMs) selling directly to hyperscale datacenters (public cloud) declined 1.5% year over year in 4Q18 to $2.7 billion due to significant existing capacity. The report noted the increasing trend toward hybrid clouds as enterprise customers place a higher priority on ensuring that storage systems support both a hybrid cloud model and increasingly data-thirsty on-premises compute platforms. OEM vendors selling dedicated storage arrays are addressing demand from businesses investing in both on-premises and public cloud infrastructure. The move to hybrid storage means that enterprises are starting to look at their total storage environment and at the operational aspects of their data to maximize their business outcomes.

 

As a result, the revenue misses reported this week by Pure and NetApp were not surprising.

 

On Wednesday, May 22, 2019, Pure Storage announced disappointing Q1 results and reduced its fiscal year guidance. The stock tumbled more than 20% in after-hours and early next-day trading following the release of the report. Pure Storage is simply that: purely storage, and its prospects are directly tied to the storage market because that is the only thing it sells. It is even more restricted in that it is an all-flash play, which accounted for less than 19% of the $14.5 billion enterprise storage market in 4Q 2018. As companies start to look at their total data environments, pure-play companies such as Pure will not be as relevant to customers in the future.

 

After the market close on May 22, 2019, NetApp announced disappointing Q4 and full fiscal year 2019 results, missing consensus revenue and earnings-per-share estimates and providing lower-than-expected guidance for both revenue and EPS for the upcoming quarter. NetApp blamed its revenue performance on a variety of issues (sub-optimal sales resource allocation, a declining OEM business, decreased ELA renewals) as well as currency and macroeconomic headwinds and extended purchase decisions and sales cycles. While NetApp has a broader portfolio than Pure, it is still primarily a midrange storage play with a lot of legacy storage in the market.

 

Customers expect more than a place to store their data. While a faster flash storage array can shave milliseconds off an I/O response time, it doesn’t help your bottom line if the right data is not in the right place at the right time. The fact that enterprises are extending their purchase decisions, thinking twice about purpose-built OEM solutions, and evaluating hybrid storage solutions indicates that they realize their problem is not storing data but unlocking the information that exists in the data they already have. This takes DataOps.

 

DataOps is needed to understand the meaning of data as well as the technologies that are applied to the data so that data engineers can move, automate and transform the essential data that data consumers need. Hitachi Vantara offers a proven, end-to-end, DataOps methodology that lets businesses deliver better quality, superior management of data and reduced cycle time for analytics. At Hitachi Vantara we empower our customers to realize their DataOps advantage through a unique combination of industry expertise and integrated systems.

Data Challenges are Killing AI Projects


Today’s Wall Street Journal (May 28, 2019) reports that data challenges are halting AI projects. It quotes IBM executive Arvind Krishna as saying, “Data-related challenges are a top reason IBM clients have halted or canceled artificial-intelligence projects.” He added, “About 80% of the work with an AI project is collecting and preparing data. Some companies aren’t prepared for the cost and work associated with that going in.”

 

This is not a criticism of IBM’s AI tools; our AI tools would have the same problems if the data were not collected and curated properly. This is supported by a report this month from Forrester Research Inc., which found that data quality is among the biggest AI project challenges. The report said that companies pursuing such projects generally lack an expert understanding of what data is needed for machine-learning models and struggle with preparing data in a way that benefits those systems.

 

At Hitachi Vantara, we appreciate the importance of preparing data for analytics, and we include it in our DataOps initiatives. DataOps is a framework of tools and collaborative techniques that enables data engineering organizations to deliver rapid, comprehensive and curated data to their users. It sits at the intersection of data engineering, data integration, data governance and data security, and it attempts to unify all the roles and responsibilities in the data engineering domain by applying collaborative techniques to a team that includes the data scientists and the business analysts. We have a number of tools, such as Pentaho Data Integration (PDI) and the Hitachi Content Platform (HCP), but we also include other best-of-breed tools to fit different analytic and reporting requirements.
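
As a minimal illustration of the kind of data-quality gate such a pipeline runs before any model training, here is a short Python sketch; the expected schema and thresholds are assumptions that a DataOps team would define per dataset.

# Minimal data-quality gate run before model training. The expected schema and
# thresholds below are illustrative assumptions.
import pandas as pd

EXPECTED_COLUMNS = {"customer_id", "timestamp", "amount", "label"}
MAX_NULL_FRACTION = 0.05

def quality_report(df: pd.DataFrame) -> dict:
    missing_cols = EXPECTED_COLUMNS - set(df.columns)
    null_fraction = float(df.isna().mean().max()) if len(df) else 1.0
    duplicate_rows = int(df.duplicated().sum())
    return {
        "missing_columns": sorted(missing_cols),
        "worst_null_fraction": null_fraction,
        "duplicate_rows": duplicate_rows,
        "passed": not missing_cols and null_fraction <= MAX_NULL_FRACTION,
    }

sample = pd.DataFrame({
    "customer_id": [1, 2],
    "timestamp": ["2019-05-01", "2019-05-02"],
    "amount": [10.0, None],
    "label": [0, 1],
})
print(quality_report(sample))  # fails: 50% nulls in "amount" exceeds the 5% cap

A failed gate stops the project early, before the roughly 80% of effort that Krishna describes is sunk into modeling on top of bad data.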

 

It’s time to press your DataOps advantage.
