IBM continues to add features and functions to its storage offerings, both software and systems, to address the latest trends in enterprise computing… and it is doing its best to make them simpler to understand and implement. Today’s announcement covers a lot of ground, with a strong emphasis on supporting container application deployments.
IBM’s support for Red Hat started well before it acquired the company, but today’s announcement shows it integrating the two portfolios as fully as it can, while covering a lot of other ground as well.
As always, I have the advantage of having been pre-briefed by Eric Herzog (IBM’s CMO and VP of Worldwide Storage Channels) and his team, which allows me to synchronise my analysis with the announcements.
The Spectrum software enhancements
There are enhancements to four software offerings (see my Figure above), making them more relevant and easier to deploy for those facing the difficult issues of managing, backing up and restoring data in container environments, and of deploying solutions (including AI) spread across on-premises and public cloud infrastructure. In particular:
- Spectrum Protect Plus (secure data backup for applications in public clouds, virtual machines and containers): its server can now be run in a container, and it has been added to IBM’s Cloud Pak for Multicloud Management (MCM), making it much easier to deploy. It can also back up Red Hat OpenShift/Kubernetes clusters, and IBM has announced a beta programme for running Spectrum Protect Plus on Azure.
- Spectrum Protect (long-term data protection/backup) can now be used to create an air-gapped tape copy of backup repositories. Google Cloud has been added to IBM Cloud, AWS and Azure as a supported public cloud backup repository. A new S3 interface enables applications with built-in backup capabilities to write directly to a Spectrum Protect repository (the first sketch after this list illustrates the idea).
- Spectrum Scale (file-level storage for unstructured data) will provide container-native storage access for the first time. For those working on AI and ML workloads, among others, being able to provision storage quickly from within containers makes things significantly less complicated (the second sketch after this list shows what that can look like). Transparent parallel access to the same data will also allow organizations to cut costs by eliminating many of the data silos currently in place.
- Spectrum Virtualize (a storage hypervisor for block-level storage) v8.4 brings a number of enhancements for users of its FlashSystem 7200 and 9200/R arrays, including across-the-board support for the use of NVMe drives in VMware 7 environments, ‘redirect-on-write’ snapshots in data reduction pools, a 10-15% performance improvement through the use of distributed RAID 1, and an increase in the number of supported Storage Class Memory drives from 3 to 12. IBM has also added a data replication management module for those who use Red Hat Ansible (its automation and orchestration solution) alongside IBM FlashSystem.
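To make the Spectrum Protect point above concrete, here is a minimal sketch of what an application with its own backup logic might do against an S3-compatible repository endpoint. The endpoint URL, bucket name and credentials are illustrative assumptions, not IBM-documented values; the actual object-access configuration comes from the Spectrum Protect administrator.

```python
# Minimal sketch: an application with built-in backup logic writing its backup
# artefact to an S3-compatible repository endpoint. All names below are
# hypothetical placeholders, not values documented by IBM.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://spectrum-protect.example.com:9000",  # assumed repository endpoint
    aws_access_key_id="BACKUP_APP_KEY",                        # placeholder credentials
    aws_secret_access_key="BACKUP_APP_SECRET",
)

# Push a locally generated backup file into a repository bucket.
s3.upload_file("db-backup.dump", "app-backups", "db/db-backup.dump")

# List what the repository currently holds for this application.
for obj in s3.list_objects_v2(Bucket="app-backups", Prefix="db/").get("Contents", []):
    print(obj["Key"], obj["Size"])
```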
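Similarly, for the Spectrum Scale item, the sketch below shows the kind of container-native provisioning workflow this enables, using the standard Kubernetes Python client to request a shared volume. The StorageClass and namespace names are assumptions for illustration; in practice they depend on how the cluster’s Spectrum Scale storage driver has been configured.

```python
# Minimal sketch: requesting container-native storage by creating a
# PersistentVolumeClaim with the standard Kubernetes Python client.
# The StorageClass and namespace names are hypothetical.
from kubernetes import client, config

config.load_kube_config()  # use config.load_incluster_config() inside a pod
core = client.CoreV1Api()

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="training-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteMany"],               # shared access for parallel workers
        storage_class_name="spectrum-scale-fileset",  # assumed StorageClass name
        resources=client.V1ResourceRequirements(requests={"storage": "500Gi"}),
    ),
)

core.create_namespaced_persistent_volume_claim(namespace="ml-training", body=pvc)
```

Pods then mount the claim like any other Kubernetes volume, which is what makes a parallel file system look ‘container native’ to an AI or ML workload.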
These are interesting adjustments, demonstrating that IBM not only endorses the latest technologies and approaches from others, but also aims to make them easier for its customers to adopt. The advantages are not exclusive to IBM Storage hardware customers either, since Spectrum Virtualize also supports its competitors’ arrays through virtualization.
The systems enhancements
IBM has made two systems announcements today as well. In particular:
- Its Cloud Object Storage solution now supports the open-source s3fs file interface for containerized workloads, which is useful for those using Red Hat OpenShift (a short sketch follows this list).
- LTO 9 is a new tape drive specification announced by the LTO Consortium in September. Today IBM announced plans for new tape drives that will be cheaper than the LTO 8 drives currently in use, with greater capacity, faster transfer rates and faster data retrieval. The new drives provide AES 256-bit encryption to prevent the data recorded on them from being accessed if a cartridge is mislaid or stolen, and they are fully backwards-compatible with LTO 8 drives. The new LTO 9 drives will also be available in external tape subsystems, tape autoloaders and tape libraries.
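To illustrate the s3fs point from the first bullet above: a minimal sketch, assuming a Cloud Object Storage bucket has already been mounted into a container with the open-source s3fs FUSE client at a hypothetical mount point. The application then uses ordinary file operations while the data actually lands in object storage.

```python
# Minimal sketch: a containerized workload using plain POSIX file operations on a
# bucket mounted with the open-source s3fs client. The mount point is assumed.
from pathlib import Path

MOUNT_POINT = Path("/mnt/cos")  # hypothetical s3fs mount point for the bucket

# Write a result file; s3fs turns this into an object write behind the scenes.
report = MOUNT_POINT / "reports" / "daily-summary.csv"
report.parent.mkdir(parents=True, exist_ok=True)
report.write_text("date,records\n2020-10-27,12345\n")

# Read it back with the same file semantics the application already uses.
print(report.read_text())
```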
As it is embedded as the storage management layer in its arrays, I could of course have included Spectrum Virtualize’s new features among the ‘system enhancements’ as well. IBM is one of a small number of suppliers of tape drives (the LTO Consortium also includes HPE and Quantum), but tape’s use is growing in importance for providing immutable, disaster-resilient copies of data: if a future earthquake destroys the servers and storage arrays in a data center, its contents could be rebuilt using data held on tape.
In conclusion
IBM is succeeding in addressing new and future trends in these announcements. Even if only 10-15% of enterprise applications are currently based on containers, the proportion will undoubtedly grow significantly (and steeply) in future, as virtual machine-based ones did nearly 20 years ago (virtual machines account for 45% of application deployments today if you include all servers, including those used by SMEs). IBM consistently backs the winning horse, as it did with VMware and KVM, and as it did with its early insistence on addressing hybrid multi-cloud environments in its storage strategy. Its challenge has always been to make a commercial success of its early moves.
The importance of its container-based approach has been increased by IBM’s acquisition of Red Hat (whose OpenShift, Ansible and other offerings make it the most important vendor in this area). Its Storage Division’s commercial success over the coming years will depend not only on creating new technologies and techniques, but also on making the offerings it produces easy for enterprise customers to understand and deploy. It’s doing very well for now.