Following closely on the heels of the DS8900 launch last month, IBM announced some major storage products (both hardware and software) today at the TechU conference in Prague. My analysis here is based not only on briefing material from Eric Herzog and his team, but also on my visit to IBM's Think UK conference last week, where ecology was very much to the fore.
Today’s announcements cover a number of areas. Let’s look at each of them in turn.
AI, Big Data analytics and machine learning
Note: I borrowed the figure above from IBM; ML/DL stands for Machine Learning/Deep Learning
AI and Machine Learning applications are still in their infancy. Governments and other supercomputing users have been investing for some time, but deployment remains a time-consuming, complex job requiring scarce expertise. IBM's Spectrum Scale is central to its approach and experienced strong revenue growth last year. Today IBM made a number of announcements lowering the barriers to entry for many more of its enterprise customers. The major ones are:
- Elastic Storage System 3000 is a new, simpler-to-implement 2U array with IBM Spectrum Scale software embedded; otherwise the storage subsystem is almost identical to its FS9100, Storwize V7000 and V5100 subsystems. It is based internally on NVMe drives and connects to the usual external fabrics, such as NVMe over Ethernet and InfiniBand. Customers can start at 25TB in a single 2U array and grow a single Elastic Storage System 3000 to 370TB. A single system offers 40GB/second of throughput, which scales linearly with additional systems to deliver TBs/second (see the sizing sketch after this list). A containerised version of Spectrum Scale on Red Hat Enterprise Linux 8 is embedded on the array, making it easy to install and use: getting it up and running takes as little as 3 hours – a significant improvement over the pre-existing Elastic Storage Server. Incidentally, another difference is the new system's use of Intel rather than Power processors. It works with any server running Linux, including NVIDIA, Dell and Cisco servers, as well as IBM Power, Z and LinuxONE.
- Spectrum Discover has been expanded to allow its huge metadata catalogues to cover backup environments for the first time, following its earlier extension to NetApp and Dell EMC Isilon arrays, IBM's scale-out file and object storage offerings (both SDS and array-based) and S3 clouds. Additionally, users can now find and install third-party extensions through its Command Line Interface (CLI) and Docker Hub, while developers of those extensions can take advantage of IBM's fully published APIs.
- Spectrum Scale V5 now supports erasure coding when building out the parallel filesystem on storage-rich servers running RHEL, across its protocols and interfaces, making it more resilient to failures through distributed data and automated recovery (a simple parity sketch also follows this list). The Elastic Storage Server has also been enhanced.
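To put the Elastic Storage System 3000 building blocks in perspective, here is my own back-of-envelope sizing sketch – not an IBM tool – using only the per-array figures quoted above and IBM's linear-scaling claim:

```python
# Back-of-envelope sizing for Elastic Storage System 3000 clusters,
# using the per-array figures quoted above (370TB maximum capacity,
# 40GB/s throughput) and IBM's claim of linear scaling.

PER_ARRAY_MAX_TB = 370   # maximum capacity of one 2U array, TB
PER_ARRAY_GBPS = 40      # throughput of one array, GB/s

def cluster_profile(num_arrays: int) -> tuple[float, float]:
    """Return (max capacity in PB, aggregate throughput in TB/s)."""
    capacity_pb = num_arrays * PER_ARRAY_MAX_TB / 1000
    throughput_tbps = num_arrays * PER_ARRAY_GBPS / 1000
    return capacity_pb, throughput_tbps

for n in (1, 8, 32):
    cap, thr = cluster_profile(n)
    print(f"{n:>3} arrays: up to {cap:.2f}PB, ~{thr:.2f}TB/s")
```

At 32 arrays the arithmetic already lands in the "TBs/second" range IBM describes.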
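On the erasure-coding point: the principle is to stripe data across servers together with computed parity, so that a lost piece can be rebuilt from the survivors rather than from a full replica. Spectrum Scale's actual codes are far more sophisticated, but a minimal single-parity (XOR) sketch illustrates the idea:

```python
# Minimal illustration of the erasure-coding principle: data blocks
# spread across nodes plus one XOR parity block let any single lost
# block be rebuilt. (Spectrum Scale's real codes are far more
# sophisticated; this is illustration only.)

def make_parity(blocks: list[bytes]) -> bytes:
    """XOR all data blocks together to produce one parity block."""
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            parity[i] ^= b
    return bytes(parity)

def rebuild(surviving: list[bytes], parity: bytes) -> bytes:
    """Recover a single missing block from the survivors plus parity."""
    return make_parity(surviving + [parity])

# Three equal-sized data blocks spread across three 'nodes'.
data = [b"node-one", b"node-two", b"node-3!!"]
parity = make_parity(data)   # stored on a fourth node

lost = data.pop(1)           # simulate losing one node
assert rebuild(data, parity) == lost
print("recovered:", rebuild(data, parity))
```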
IBM extends data protection in Linux, VMware and tape environments
Data protection is a must for every style of computing and every size of customer and – as always – IBM is at the forefront of enhancing its storage software and systems to keep up. Today's announcements include:
- Enhancements to Spectrum Protect Plus, the software its customers use for long-term data protection of backup environments. It can now protect Red Hat OpenShift environments through Kubernetes integration, with CSI snapshot support, pre-defined policies for SLA support and the ability to store container data in repositories outside Kubernetes itself, increasing data resiliency (a snapshot sketch follows this list). It also now supports AWS-hosted VMware virtual machines, databases and applications for the first time, adding to the existing support for those hosted on IBM Cloud. These enhancements help enterprises' and service providers' moves to hybrid multi-cloud infrastructures.
- The TS7770 Virtual Tape Library (VTL). Despite the prejudice against the tape drive market shown by all but a handful of vendors, IBM continues to invest in new products, citing a number of advantages: tape needs no electricity when not in use and can be used to create an air gap against those mounting malware and ransomware attacks. IBM's new machine is based on its Power9 processors, which reduce processing times by up to 12%, and it boasts availability of more than 99.999%. It adds FICON and Object Store support and more host adaptors than before, and extends its data retention options significantly. When used for Object Store offloads from the new DS8900, it can reduce host processing times for disaster recovery by up to 50%. It also offers encryption at rest and in flight over Ethernet.
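For context on the Kubernetes side of the Spectrum Protect Plus item: a CSI snapshot is requested by creating a VolumeSnapshot object against the persistent volume claim. The sketch below is my own, using the standard CSI snapshot schema and the official kubernetes Python client rather than IBM's tooling; the names are hypothetical and the API version (v1 versus v1beta1) depends on your cluster:

```python
# Hedged sketch: requesting a CSI snapshot of a PVC, the mechanism
# that container-aware backup tools such as Spectrum Protect Plus
# build on. Uses the official 'kubernetes' Python client; the
# snapshot class, PVC name and namespace are hypothetical.
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() inside a pod

snapshot = {
    "apiVersion": "snapshot.storage.k8s.io/v1",  # v1beta1 on older clusters
    "kind": "VolumeSnapshot",
    "metadata": {"name": "app-db-snap", "namespace": "prod"},
    "spec": {
        "volumeSnapshotClassName": "csi-snapclass",  # hypothetical class
        "source": {"persistentVolumeClaimName": "app-db-pvc"},
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="snapshot.storage.k8s.io",
    version="v1",
    namespace="prod",
    plural="volumesnapshots",
    body=snapshot,
)
```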
Digital risk is growing alongside the greater use of computing through new applications and architectures, so it's good to see IBM's storage division continue to enhance its data protection offerings in step.
IBM storage buyers can now choose between Utility and Subscription payments
For two years IBM has offered its customers utility pricing which, like a car lease, allows them to pay for the capacity they use on a monthly basis over a three-year term. Around 90% of IBM's storage systems are covered by this 'pay as you grow' policy. It has now decided to add a subscription service which, although 20-25% more expensive than the utility offering, runs for a maximum of four years and gives customers the ability to walk away at any time after the first year. The new offering is aimed at those working on time-limited projects with no expectation of data growth, such as cloud service and managed service providers.
Initially IBM is offering dual Storwize V7000, FS9100 or Elastic Storage Server configurations. This scheme is more like the way customers buy cloud services than a car lease – similar in some ways to HPE's GreenLake offering. Capacity monitoring for both schemes is handled through its Storage Insights program. Neither requires customers to add capacity over the term of the contract, although only the utility offering allows capacity to be added mid-term (a back-of-envelope cost comparison follows).
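To see where the subscription's early exit pays off despite its premium, here is a toy comparison under my own assumed numbers; IBM quotes only the 20-25% premium, so the rate and capacities below are invented for illustration:

```python
# Toy comparison of the two payment models described above. All the
# numbers except the 20-25% premium are invented for illustration.
UTILITY_RATE = 100.0         # notional cost per TB per month (assumed)
SUBSCRIPTION_PREMIUM = 0.22  # within the 20-25% range quoted above

def utility_cost(tb: float, months: int) -> float:
    """Pay-as-you-grow billing over the full three-year term."""
    return tb * UTILITY_RATE * months

def subscription_cost(tb: float, months: int) -> float:
    """Flat capacity at a premium, exiting any time after month 12."""
    return tb * UTILITY_RATE * (1 + SUBSCRIPTION_PREMIUM) * months

# A time-limited project: 100TB needed for 18 months. If we assume the
# utility contract keeps billing that capacity for its 36-month term,
# the early-exit subscription comes out cheaper despite the premium.
print(f"utility, 36 months:        {utility_cost(100.0, 36):>10,.0f}")
print(f"subscription, exit at 18:  {subscription_cost(100.0, 18):>10,.0f}")
```

For steadily growing capacity held over the full term, the cheaper utility rate wins instead, which matches the customer profiles IBM describes for each scheme.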
IBM supports innovation as an evolutionary process
Enterprise computing is getting ever more complex – not least because every generation of system remains around for a long time. Many new companies have been able to leapfrog the incumbents by designing new systems which don't need to fit into older infrastructures and architectures. I've been around long enough to see many of these startups eclipsed by even newer ones over time. As the oldest computer company still in existence, IBM has managed to stay the course by helping its customers innovate through pioneering new types of computing (Open Source, Quantum, AI/ML, blockchain, etc.), while constantly adjusting its offerings to support all of its customers' journeys. In the case of its storage division, the shift towards a software-defined approach, the focus on hybrid multi-cloud strategies, the fast adoption of new base technologies (flash memory, NVMe drives, etc.) and its insistence that data be secure and protected will make it ever more important to those who believe enterprise computing is an evolutionary – rather than revolutionary – process.
©ITCandor Limited – unauthorised copying of this content is illegal and will be rigorously defended by us through court action