Accelerating Data Compression in the Datacenter


By: Matt Bromage | Director of Product Management

Introduction

Computing is becoming increasingly disaggregated: compute components are placed exactly where they are needed and managed as disparate resources through software-defined infrastructure frameworks. At the same time, data is being generated at more endpoints and is increasingly distributed. Until recently, dedicated hardware accelerators were applied mainly to compute-intensive tasks such as AI inferencing and other HPC applications. A newer class of accelerators targets network- and storage-related tasks, freeing the host CPU to focus on application work. Compression is one such task where hardware acceleration delivers major performance and TCO benefits. This blog post reviews several approaches to compression and compares their tradeoffs. Finally, we will look at the compression capabilities of the Pliops Extreme Data Processor (XDP) and see how its novel approach addresses the major drawbacks of other solutions.

The Value of Compression

Applied broadly, compression increases bit density in the datacenter, which means less power, less cooling, and fewer CAPEX purchases, resulting in an overall greener infrastructure footprint. It can also let developers work with higher-fidelity data sets. Used thoughtfully, compression is a major performance multiplier as well, since less data moves to and from storage devices and across network interfaces.

How Compressible is my Data?

Most relational data is highly compressible; the main exceptions are data that has already been compressed or encrypted, and even then optimizations can handle this type of data efficiently. MySQL, one of the most popular databases globally, reports that typical user data stored in MySQL/InnoDB can be compressed by 50% or more (https://dev.mysql.com/doc/refman/5.7/en/innodb-compression-internals.html).

There are further compression opportunities beyond the user data itself. For example, B+ tree databases store empty leaf nodes as blocks of zeros that compress trivially, recovering approximately 33% of usable storage capacity.
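As a rough illustration of both points, here is a minimal Python sketch using the standard zlib library. The 16 KB page size and the sample payloads are assumptions chosen for illustration, not measurements from a real database:

```python
import zlib

PAGE_SIZE = 16 * 1024  # assumed 16 KB page, as in InnoDB's default

# An empty B+ tree leaf page: all zeros, compresses almost entirely away.
zero_page = bytes(PAGE_SIZE)

# A stand-in for "typical" row data: repetitive, structured text.
row = b"id=%06d,name=user_%06d,city=San Jose,status=active;"
typical_page = b"".join(row % (i, i) for i in range(400))[:PAGE_SIZE]

for label, page in (("zero page", zero_page), ("typical rows", typical_page)):
    compressed = zlib.compress(page, 6)
    ratio = len(compressed) / len(page)
    print(f"{label}: {len(page)} B -> {len(compressed)} B ({ratio:.1%} of original)")
```

Running a similar check against a sample of your own pages is a quick way to estimate how much capacity compression could recover.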

Challenges of Compression

Software compression can, at first glance, seem like an easy way to condense your data set. There are tradeoffs to this approach, however. First, there is a performance impact. For writes, cycles are spent compressing data on an expensive CPU that could otherwise be used for higher-priority, revenue-generating activities. For reads, the data must first be decompressed before being used, which adds inherent latency. A more expensive CPU and memory selection is then required to offset this overhead; in the worst case, the backend infrastructure must be scaled out to absorb the additional burden of software compression. The associated cooling and redundancy requirements multiply the TCO of running a datacenter in this manner.
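To get a feel for that CPU cost, a rough sketch like the one below times zlib compression and decompression of a single page on one core. The synthetic page contents, compression level, and iteration count are arbitrary assumptions; real workloads should be measured with real data:

```python
import time
import zlib

PAGE_SIZE = 16 * 1024   # assumed page size
LEVEL = 6               # mid-range zlib level; higher levels cost more CPU
ITERATIONS = 2000

# Synthetic page: repetitive enough to compress, but not all zeros.
page = (b"order_id=12345;sku=ABC-987;qty=3;price=19.99;" * 400)[:PAGE_SIZE]

start = time.perf_counter()
for _ in range(ITERATIONS):
    compressed = zlib.compress(page, LEVEL)
compress_s = time.perf_counter() - start

start = time.perf_counter()
for _ in range(ITERATIONS):
    zlib.decompress(compressed)
decompress_s = time.perf_counter() - start

mb = PAGE_SIZE * ITERATIONS / 1e6
print(f"compress:   {mb / compress_s:.0f} MB/s per core")
print(f"decompress: {mb / decompress_s:.0f} MB/s per core")
```

Dividing a drive's or NIC's line rate by the per-core figures gives a quick estimate of how many cores software compression alone would consume.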

Second, there is a significant time investment in configuring, tuning, and monitoring database parameters, compression ratios, and performance metrics in order to choose the right compression settings. As a result, a database operator may choose to compress only a subset of the tables in the dataset. Even then, statically chosen compression settings can degrade performance as real-world workloads change. A typical example is compressed objects growing beyond their preset boundaries, triggering additional reads and writes to the drive to handle the overflow.

Software Compression: Approaches and Tradeoffs

There are several approaches to applying compression and managing the previously mentioned challenges, each with different tradeoffs.

Compression performed at the file level is probably the most widely used method, for example compressing files with gzip or photos with JPEG. The benefit of this approach is that the algorithm has a larger window of data in which to find longer and more complex patterns, which yields a higher compression ratio. The cost is a very coarse granularity: to pull out one piece of data from within the file, the entire file must be read from storage and decompressed. This approach works well for large, static data such as photos and videos.
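The granularity cost is easy to demonstrate with gzip itself. In this sketch (the file name and record layout are made up for illustration), reading even a single record near the end of a gzip-compressed file requires decompressing everything that precedes it:

```python
import gzip
import os

path = "records.gz"  # hypothetical file name

# Write 100,000 fixed-width records into a single gzip stream.
with gzip.open(path, "wb") as f:
    for i in range(100_000):
        f.write(b"record-%08d: some payload bytes............\n" % i)

# There is no way to jump straight to the last record in the compressed
# stream; gzip has to decompress every preceding byte first.
with gzip.open(path, "rb") as f:
    for line in f:
        pass  # the final line holds the record we actually wanted
print("last record:", line[:20], b"...")

os.remove(path)
```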

Alternatively, compression can be performed at the application level. Here, a much smaller unit, typically the page, is used as the compression granularity. The page is the unit of data transfer to and from storage devices such as SSDs; the default page size for MySQL, for example, is 16KB.

InnoDB, the default storage engine for MySQL, uses the popular zlib library to offer compression at the database level. zlib provides compression levels 1 through 9, with each subsequent level requiring more CPU resources than the last. Using InnoDB compression is not a straightforward task: the database must first be reconfigured so that each table is stored as a separate file, and the maximum size of compressed objects must be set in advance for each of these files.
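As a minimal sketch of what that configuration looks like, assuming MySQL 5.7 with InnoDB (the table and column names are invented; the statements are held in Python strings here and would normally be run from a MySQL client):

```python
# Sketch of the InnoDB settings involved; run these statements from a MySQL
# client against a MySQL 5.7+ server. Table and column names are illustrative.

statements = [
    # Each compressed table must live in its own .ibd file.
    "SET GLOBAL innodb_file_per_table = ON;",

    # ROW_FORMAT=COMPRESSED enables zlib compression for the table;
    # KEY_BLOCK_SIZE fixes the on-disk compressed page size (here 8 KB,
    # half of the default 16 KB page) and must be chosen up front.
    """CREATE TABLE orders (
        id BIGINT PRIMARY KEY,
        customer_id BIGINT,
        payload BLOB
    ) ENGINE=InnoDB ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8;""",
]

for stmt in statements:
    print(stmt, end="\n\n")
```

That KEY_BLOCK_SIZE choice, made before any real data arrives, is exactly the tuning problem discussed next.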

However, selecting the right compressed object size is complex and requires repetitive and labor-intensive testing of different compression settings to ensure performance is not impaired. A few considerations:

  • Larger compressed object sizes are likely to result in wasted space, increasing the $/TB of the underlying storage.
  • Smaller object sizes could result in an overflow condition where objects must be split across multiple pages, reducing baseline performance.
  • Smaller page sizes generally give better performance, but when compression is enabled, larger buffer pools are required in order to maintain both compressed and uncompressed pages.
  • Modifying data in a compressed page (insert, update, delete) can trigger a “compression failure” overflow event, requiring the page to be split into two new pages and each one compressed separately (the monitoring sketch after this list shows how to track these events).
  • If the data mix shifts as application usage evolves, the carefully selected compression settings become outdated and must be recalculated at great expense.
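One way to see whether those compression failures are actually happening is MySQL's INFORMATION_SCHEMA.INNODB_CMP view, which counts compression attempts versus successes per page size. A hedged sketch follows; it assumes the mysql-connector-python package and a reachable server, and the connection parameters are placeholders. The same query can simply be run from any MySQL client:

```python
# Requires the mysql-connector-python package and a running MySQL server;
# the connection parameters below are placeholders.
import mysql.connector

conn = mysql.connector.connect(host="localhost", user="monitor", password="change-me")
cur = conn.cursor()

# INNODB_CMP reports, per compressed page size, how many compression
# attempts were made and how many succeeded without overflowing the page.
cur.execute(
    "SELECT page_size, compress_ops, compress_ops_ok, compress_time "
    "FROM INFORMATION_SCHEMA.INNODB_CMP"
)
for page_size, ops, ops_ok, cpu_secs in cur:
    failures = ops - ops_ok
    fail_pct = 100.0 * failures / ops if ops else 0.0
    print(f"{page_size} B pages: {ops} ops, {failures} failures "
          f"({fail_pct:.1f}%), {cpu_secs}s CPU spent compressing")

cur.close()
conn.close()
```

A high failure percentage is exactly the overflow condition described above, and every remedy, whether a larger KEY_BLOCK_SIZE, a different level, or excluding the table, means another round of tuning.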

Even with ideal settings, performance is subject to latency spikes from the underlying SSD storage device due to internal read/write collisions and other background activities.

More generally, these problems stem from the need to pack variable-sized compressed objects into statically sized drive sectors in a constantly changing environment. Guessing the average compressed object size ahead of time and hoping you have found the sweet spot is a recipe for a massive investment of time and resources that can still end up reducing application performance and increasing wasted storage space.
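A back-of-the-envelope sketch shows why that packing is hard. The page size, slot size, and compression-ratio distribution below are invented purely for illustration: compressed pages rarely match the fixed slot they must fit into, so space is wasted when they come up short and overflow handling kicks in when they run long.

```python
import random

PAGE = 16 * 1024   # assumed uncompressed page size
SLOT = 8 * 1024    # fixed on-disk slot, e.g. KEY_BLOCK_SIZE=8
N = 100_000
random.seed(0)

wasted = 0
overflows = 0
for _ in range(N):
    # Assume compressed sizes cluster around a 50% ratio; real data varies more.
    compressed = int(random.gauss(mu=0.5, sigma=0.15) * PAGE)
    compressed = max(512, min(PAGE, compressed))
    if compressed <= SLOT:
        wasted += SLOT - compressed   # slack left inside the fixed slot
    else:
        overflows += 1                # page split / extra I/O required

print(f"average slack per page: {wasted / N:.0f} bytes")
print(f"pages overflowing the {SLOT // 1024} KB slot: {100 * overflows / N:.1f}%")
```

The slack is capacity you pay for but never use, and the overflows are exactly the page-split events described above.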

Next, we will review the Pliops approach to in-line, transparent, dynamic compression and to efficiently managing the resulting compressed objects.

The Pliops Approach

Pliops XDP is a dedicated storage accelerator for the datacenter that multiplies workload scalability and data capacity by delivering fully transparent, in-line data compression for SSD-based workloads. In light of the challenges just discussed, XDP automatically compresses all data stored to disk. This contrasts with application-based compression, which can only access and compress application data, leaving filesystem metadata uncompressed. XDP compression adapts to dynamic workloads without the hassle of configuration and tuning, and its hardware-accelerated compression engines operate at PCIe line-rate speeds, with no drop in performance or spikes in latency.

Finally, the patented Pliops indexing algorithms allow compressed objects to be tightly packed with virtually no wasted space. The compressed data is then written to SSDs sequentially, mitigating the performance penalties of wear-leveling, garbage collection, and other background operations. The resulting dramatic reduction in write amplification extends the endurance of the underlying enterprise or datacenter SSDs by 3-7x.

Conclusion

A few takeaways from this blog post:

  • Computing is becoming increasingly disaggregated, with data being generated at more endpoints.
  • The use of accelerators is increasing in the datacenter to free the host CPU to focus on more revenue-generating activities.
  • Compression (when done right) has the benefits of increased bit density, increased application performance, and a lower overall TCO for the datacenter.
  • Software compression can be time-intensive to set up and may end up reducing system performance in a dynamic, real-world environment.
  • Pliops XDP delivers in-line, transparent, hardware-based compression with patented object-indexing algorithms, freeing the host CPU and adapting to changing workloads without any upfront configuration.

Talk to a Product Expert!

Speak with a data expert to learn how Pliops XDP can help meet your business needs.