Reflecting on a Decade of Storage

Cloud Backup

One of the biggest disruptions in storage over the past decade has been the rapid pace at which public clouds have taken over the secondary storage market by delivering extremely price-competitive storage for those workloads. Companies like Google can now deliver archive storage that matches the cost of tape, long considered the benchmark for the cheapest place to keep your data.

Storage solutions like those made popular by Data Domain were long considered the benchmark for cost-effective storage of large amounts of data, thanks to their deduplication technology. About five years ago, scale-out secondary storage started taking off as a replacement for Data Domain. Now even those solution providers recognize that they cannot stop customers from adopting public clouds, so they offer customers the option to offload data to public clouds directly from their backup targets.
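At its core, the deduplication technique these appliances rely on splits incoming data into chunks, fingerprints each chunk with a cryptographic hash, and stores only the chunks it has not seen before. A minimal sketch of the idea, using fixed-size chunking for simplicity (production appliances typically use variable-size, content-defined chunking):

```python
import hashlib

CHUNK_SIZE = 4096  # fixed-size chunks; real appliances use variable-size chunking


class DedupeStore:
    """Toy dedupe store: each unique chunk is kept once, keyed by its SHA-256 digest."""

    def __init__(self):
        self.chunks = {}  # digest -> chunk bytes (stored once per unique chunk)
        self.files = {}   # filename -> ordered list of chunk digests

    def write(self, name, data):
        digests = []
        for i in range(0, len(data), CHUNK_SIZE):
            chunk = data[i:i + CHUNK_SIZE]
            digest = hashlib.sha256(chunk).hexdigest()
            self.chunks.setdefault(digest, chunk)  # store only unseen chunks
            digests.append(digest)
        self.files[name] = digests

    def read(self, name):
        # Reassemble the file from its recorded chunk digests.
        return b"".join(self.chunks[d] for d in self.files[name])
```

Writing two identical backups to this store consumes the space of one, which is why dedupe targets became the default landing zone for repetitive backup streams.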

Customers now have a choice when they want to use public clouds. They can front-end their storage with scale-out secondary storage on-premises, or use intelligent software that dynamically chooses what to keep locally and what to keep in the cloud to minimize cost. In other words, why pay for a virtualization layer if your software can do this at no extra cost?
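The tiering decision such software makes can be as simple as a recency policy: recently accessed backups stay on local storage for fast restores, while colder data moves to cheaper cloud storage. A sketch under assumed parameters (the per-GB prices and the 30-day hot window below are illustrative, not any vendor's actual figures):

```python
from datetime import datetime, timedelta

# Illustrative per-GB monthly rates; real cloud pricing varies by provider and tier.
LOCAL_RATE = 0.10
CLOUD_RATE = 0.01
HOT_WINDOW = timedelta(days=30)  # assumed recency threshold for keeping data local


def choose_tier(last_access, now):
    """Keep recently accessed data local for fast restore; send the rest to the cloud."""
    return "local" if now - last_access <= HOT_WINDOW else "cloud"


def monthly_cost(objects, now):
    """objects: list of (size_gb, last_access) tuples. Returns blended monthly cost."""
    total = 0.0
    for size_gb, last_access in objects:
        rate = LOCAL_RATE if choose_tier(last_access, now) == "local" else CLOUD_RATE
        total += size_gb * rate
    return total
```

With rates like these, pushing a rarely restored 100 GB backup set to the cloud cuts its carrying cost by an order of magnitude, which is the economic argument the paragraph above is making.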

This secondary storage trend carries forward even as customers migrate their production workloads to public clouds. We tell customers to think hard about whether they want to run a software version of their on-premises dedupe appliance in the cloud or use native cloud storage. While software dedupe appliances may optimize storage consumption, they use a significant amount of compute and memory, which in many cases completely outweighs any storage savings.

The second trend that has emerged is true backup as a service. Many have imagined this for more than a decade. For a long time, to satisfy the hype, vendors took their classic on-premises software infrastructure, virtualized it, provided images for customers to run in the cloud, and called it “as a service.” While that made for good “marketecture,” it was not a true SaaS offering. Fast forward to late 2018, when we started to see true SaaS solutions emerge in the secondary storage space.

If you are wondering how to determine whether an offering is true SaaS, here are the important items you should check:

  1. It should be something you can turn on and off in an instant.
  2. It should require no sizing exercise, because it should be able to scale from one VM to hundreds of thousands.
  3. It should leverage the services of the cloud you choose.
  4. It should be something you don’t have to upgrade, maintain or monitor.
  5. It should not require training.

While the past decade has seen a number of significant advances, we should expect to see even more. There has never been a better time to be in the industry.

(Editor’s Note: A variation of this blog appeared on Storage Newsletter, “2010-2020 Storage Decade.”)
