Written by
Peter Shand
Chief Technology Officer, Americas

The amount of content and data being created by people, businesses, and organizations continues to increase exponentially.

Industry experts suggest that roughly 17 zettabytes of data are created every year, and that by 2025 the figure will be close to 150 zettabytes. To put this into perspective, one zettabyte is equal to one trillion gigabytes.

This trend is evident in our personal lives, as we continue to store pictures, videos, receipts, legal documents, invoices, etc., in digital formats, just as much as we keep them locked away in drawers and safety deposit boxes.

Businesses also need to store a great deal of digital content. Aside from the structured data essential to their mission-critical applications, the amount of unstructured data being produced is many orders of magnitude greater than it was even five years ago. There are many reasons for creating and storing this content, including:

  • Business Processes: Much of the content that is created and stored is part of regular operational processes and of human resources and finance department documentation. Some businesses reuse templates to accelerate delivery of services, document interactions with customers and partners, and log events generated by applications and IoT devices.
  • Compliance: Many industries have strict regulations governing how long content must be retained, including, but not limited to, financial services, government, and healthcare.
  • Competitive Edge: Many businesses use data as a tool to provide services. Some gain a competitive advantage by mining unstructured text, image, or audio/video data.

In the past, network-attached storage (NAS) was the solution to the problem of storing all this unstructured data, and for a period of time it did an admirable job. However, as the number of files increased, it became more difficult to manage the underlying filesystems and to maintain the availability and performance required by applications that create and use unstructured data. It became increasingly difficult to build massive RAID volumes that performed well and preserved data durability when disk members failed. On top of that, there was the challenge of keeping the cost per gigabyte low relative to high-performance primary storage arrays.

The public cloud provided a new way to store unstructured content while maintaining high durability and availability at a low cost per gigabyte. It also added effortless application and programmatic integration. Amazon S3 brought object storage into the limelight, and Azure Blob and Google Cloud Storage continued that trend. It was only natural that this successful and industry-changing technology would be applied to similar on-premises challenges. This means that businesses can now have their own version of S3 or Blob storage in their data center.

One of the main differences between file storage, block storage, and object storage is that object storage is not directly accessed by the operating system. It is not seen as a local or remote filesystem. Instead, interaction occurs at the application level via an API. Block storage and file storage are designed to be consumed by an operating system, while object storage is designed to be consumed by your application. Below are a few of the key characteristics of object storage that differentiate it from other storage technologies:

  • Object storage uses a flat structure, storing objects in containers, rather than a nested tree structure.
  • Object metadata lives with the object itself rather than being stored separately, so a single API call can return the object along with its associated metadata.
  • Many object storage systems use erasure coding to survive failed drives, bit rot, and node failures, and checksums to verify object integrity, which leads to very high levels of availability and durability.
  • Object storage platforms are designed to run on commodity hardware; even with the overhead of storing redundant data, the price per gigabyte can be very attractive compared to enterprise file or block options.
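To make the application-level interaction model concrete, here is a minimal in-memory sketch of how an application talks to an object store: a flat namespace of keys inside a bucket, with metadata travelling alongside each object. The class and method names are purely illustrative, not any vendor's actual API; real systems expose equivalent operations over HTTP (for example, S3-style PutObject/GetObject calls).

```python
class ToyObjectStore:
    """Illustrative flat-namespace object store (not a real SDK)."""

    def __init__(self):
        self.buckets = {}  # bucket name -> {key: (data, metadata)}

    def create_bucket(self, bucket):
        self.buckets.setdefault(bucket, {})

    def put_object(self, bucket, key, data, metadata=None):
        # Keys like "videos/2024/launch.mp4" look hierarchical, but the
        # namespace is flat: the slashes are just characters in the key.
        self.buckets[bucket][key] = (data, metadata or {})

    def get_object(self, bucket, key):
        # A single call returns the object together with its metadata.
        data, metadata = self.buckets[bucket][key]
        return {"Body": data, "Metadata": metadata}


store = ToyObjectStore()
store.create_bucket("media")
store.put_object("media", "videos/2024/launch.mp4", b"...bytes...",
                 metadata={"department": "marketing", "retention": "7y"})
obj = store.get_object("media", "videos/2024/launch.mp4")
print(obj["Metadata"]["retention"])  # metadata arrives with the object
```

Note that the application never mounts a filesystem or sees a block device; everything happens through put/get calls, which is what makes programmatic integration so natural.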
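The erasure coding idea mentioned above can be sketched with a toy single-parity scheme: split an object into k data blocks, store one XOR parity block, and rebuild any single lost block from the survivors. Production object stores use stronger Reed-Solomon-style k+m schemes that tolerate multiple simultaneous failures, but the recovery principle is the same.

```python
def xor_blocks(blocks):
    # Byte-wise XOR of equal-length blocks.
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

def encode(data, k):
    # Pad so the data splits evenly into k equal blocks, then append
    # one parity block (the XOR of all data blocks).
    size = -(-len(data) // k)  # ceiling division
    data = data.ljust(size * k, b"\0")
    blocks = [data[i * size:(i + 1) * size] for i in range(k)]
    return blocks + [xor_blocks(blocks)]

def reconstruct(blocks, lost_index):
    # The XOR of all surviving blocks (data + parity) equals the lost one.
    survivors = [b for i, b in enumerate(blocks) if i != lost_index]
    return xor_blocks(survivors)

blocks = encode(b"object payload bytes", k=4)
lost = blocks[2]                         # pretend the drive holding block 2 died
assert reconstruct(blocks, 2) == lost    # rebuilt from the survivors
```

Because the blocks are spread across independent drives or nodes, a failure costs a reconstruction rather than the object, which is where the very high durability figures come from.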

Server hardware vendors such as Cisco, with Unified Computing System (UCS), and Hewlett Packard Enterprise, with ProLiant, have partnerships with purpose-built object storage software providers such as Scality, SwiftStack, and Caringo that offer a validated design approach. Additionally, Red Hat, SUSE, and the OpenStack project have proven implementations with a multiplicity of hardware vendors that are also viable alternatives for object storage deployments. Innovators such as StorReduce have bolt-on solutions that deduplicate data being ingested into object stores and allow tiering to AWS S3, Azure Blob, and GCP for customers with hybrid needs.

If you are an IT decision maker and you can answer yes to more than three of the following questions, then object storage might be a technology to consider:

  • Do you have large volumes of media files or other forms of rich content?
  • Is it sprawled across multiple NAS or File Server platforms?
  • Do you have file deletion triage exercises, where files are deleted because filesystems are running out of capacity, but you are not sure of the value of the data being deleted?
  • Are you using expensive primary storage arrays to store infrequently accessed files?
  • Are you constantly having capacity issues with respect to backup and archiving repositories?
  • Would you like to programmatically access or perform operations on this content?
  • Are you incurring increasing costs monthly from public cloud providers?
  • Do you want the benefits of public cloud storage but have data sovereignty or security concerns?
  • Are you looking for upwards of 100 terabytes or multi-petabyte data stores?

Unified Technologies has the computing, storage, network, and security technology, as well as the expertise to meet your business needs at a low cost per gigabyte and with predictable and reliable results.