Availability is a major consideration for any storage, public or private. Who doesn’t like the idea of having six 9’s of uptime for their storage? Of course, having great availability isn’t the only measure of storage quality. We also value performance, low latency and consistency. But making storage behave in a predictable manner isn’t always easy.

In recent years, storage vendors have been keen to tell us about their ability to build insight from data collected in the field. This insight translates into actions that minimise downtime and otherwise keep our storage arrays ticking along nicely. Nimble Storage (now an HPE brand/company) has been collecting metrics since the first arrays shipped on 15 April 2010. That’s seven years of customer data from hardware in the field.

Nimble claims 4,000 individual data points are retrieved every 5 minutes from each system, covering both the hardware and the file system. The granularity is per second, with some 30-70 million pieces of data collected from an array each day. Automated resolution is an impressive 86%, with 54% of issues attributed to problems outside of the array itself. External issues include configuration and integration problems with storage networking, hosts and applications.

Some of the information InfoSight produces is relatively straightforward, such as capacity planning data. However, the real benefit comes when the collective set of data from all customers is analysed to highlight problems other customers might not yet have experienced. This collective knowledge is a step up from the previous best practice, where storage arrays all had the ability to call home. Array vendors implemented some predictive diagnosis, but the rate of data collection was never as detailed or as frequent as it is today.

The latest step forward for InfoSight is to use machine learning and artificial intelligence to identify and resolve issues that wouldn’t otherwise be obvious to a human being. HPE claims it is the first to achieve this. However, I do remember the AI message already being promoted by Pure Storage at Accelerate earlier this year (see this post). Kaminario also claims its Clarity analytics tool has “application-level intelligence” and leverages “machine learning” (link) – this from a press release in September 2016. So it looks like machine learning is already widespread.

Now the devil could be in the detail here. Chris Mellor highlights just this point in his article on the new InfoSight news: exactly what the intelligence is, how the actions are taken and what the results are, aren’t entirely clear. There are no actual numbers to show how the new AIRE InfoSight will improve things, for example by bettering the six “nines” and making it seven.

Lest we think it’s just airy-fairy stuff, HPE has corralled a quote from Justin Giardina, CTO of iland Secure Cloud, who said: “The new recommendation engine is phenomenal as it’s making proactive decisions, showing us how we can improve our environment.” That’s all very well, but until we see real numbers testifying to its effectiveness, we can’t assume AIRE InfoSight is anything other than promising tech.
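For context, the headline figures quoted in this post are easy to sanity-check. The short Python sketch below works out what six (and a hypothetical seventh) “nines” of uptime mean in seconds of downtime per year, and what the stated collection rates imply per day. The variable names are mine, and the only inputs are the numbers quoted above; this is a back-of-the-envelope check, not anything from HPE or Nimble.

```python
# Sanity-check the availability and telemetry figures quoted above.

SECONDS_PER_YEAR = 365.25 * 24 * 3600  # ~31.56 million seconds

def annual_downtime(nines: int) -> float:
    """Seconds of permitted downtime per year for a given number of nines."""
    availability = 1 - 10 ** -nines     # e.g. 6 nines -> 0.999999
    return SECONDS_PER_YEAR * (1 - availability)

print(f"six nines  : {annual_downtime(6):.1f} s/year of downtime")
print(f"seven nines: {annual_downtime(7):.2f} s/year of downtime")

# Stated collection rate: 4,000 data points every 5 minutes per system.
points_per_day = 4000 * (24 * 60 // 5)   # 288 five-minute intervals per day
print(f"{points_per_day:,} points/day at 5-minute resolution")

# The 30-70 million/day figure implies per-second streams as well:
metrics_per_second = 30_000_000 / 86_400
print(f"~{metrics_per_second:.0f} metrics sampled every second reaches 30M/day")
```

Six nines works out at roughly 31.6 seconds of downtime a year, and seven nines at about 3.2 seconds. Note the two collection figures don’t describe the same stream: 4,000 points every 5 minutes is only ~1.15 million points a day, so if both numbers are right, the per-second counters must account for the bulk of the 30-70 million.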