One of the exciting new features of Oracle 11g is its inline data compression facility. While it is true that data storage prices have fallen dramatically over the last decade and continue to fall rapidly, Oracle data compression offers far more appealing benefits than simply saving on disk storage cost. Because both indexes and the table data can be highly compressed, information can be fetched from the disk devices with less physical I/O, which radically improves query performance under certain conditions.
Let's take a closer look at how one would implement Oracle 11g data compression in order to achieve optimal results.
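At its simplest, 11g table compression is declared in the DDL. The following is a minimal sketch with illustrative table, column, and index names; COMPRESS FOR ALL OPERATIONS is the 11g Release 1 clause for OLTP compression (renamed COMPRESS FOR OLTP in 11g Release 2), and index key compression has been available in earlier releases as well:

    -- Create a new table with 11g OLTP (advanced) compression
    CREATE TABLE sales_history (
       sale_id      NUMBER,
       customer_id  NUMBER,
       sale_date    DATE,
       amount       NUMBER(10,2)
    )
    COMPRESS FOR ALL OPERATIONS;

    -- Enable compression on an existing table (affects newly written blocks)
    ALTER TABLE sales_history COMPRESS FOR ALL OPERATIONS;

    -- Index key compression: de-duplicate the leading key column
    CREATE INDEX sales_history_cust_idx
       ON sales_history (customer_id, sale_date)
       COMPRESS 1;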
Understanding data compression
Data compression techniques, such as Huffman coding, have been around for decades, but only now are they being put to use within mainstream information systems processing. With Oracle's approach, a decompression routine is called as soon as a data block is fetched from disk. The information stays compressed on the data blocks themselves, while the Oracle data buffers hold the fully uncompressed version of the rows. This creates a mismatch between the size of the information on the data blocks and its size within the data buffers. Upon applying Oracle data compression, far more rows fit on a data block of a given size, yet from the point of view of the SGA (system global area) nothing changes for the database management system: because the decompression routine is called at block fetch, the Oracle data buffers remain largely unchanged while the data blocks themselves carry much more data.
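One way to see the higher row density for yourself is to count how many distinct data blocks a table's rows occupy before and after compressing it. This is only a rough sketch (the table name is illustrative); DBMS_ROWID is a standard supplied package, and note that moving the table leaves its indexes unusable until they are rebuilt:

    -- How many blocks, rows, and rows per block does the table use right now?
    SELECT COUNT(DISTINCT DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID)) AS used_blocks,
           COUNT(*)                                             AS total_rows,
           ROUND(COUNT(*) /
                 COUNT(DISTINCT DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID))) AS rows_per_block
      FROM sales_history;

    -- Rebuild the table compressed, then rerun the query above and compare
    ALTER TABLE sales_history MOVE COMPRESS FOR ALL OPERATIONS;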
Tests show that 11g compression results in slower transaction throughput but generates fewer writes because of the higher row density on each data block. Overall, the benchmark shows I/O writes being reduced while CPU consumption increases, which slows SQL throughput (a simple way to measure this tradeoff on your own data is sketched after the list):
* Slower transaction throughput – As we would expect, Oracle transactions run faster without the compression/decompression processing overhead. The benchmark shows significantly slower throughput with compression enabled, roughly one third (81 transactions per second with compression versus 121 without).
* Fewer disk writes – Since compression packs more rows into each data block, the benchmark run with compression enabled required fewer disk writes.
* More CPU required – As we would expect, compression requires CPU cycles for the compress/decompress operations, and in this benchmark test we see user CPU rise from 46 to 80 with compression enabled.
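A very simple, admittedly crude, way to get a feel for this tradeoff on your own workload is to time the same batch of DML against an uncompressed and a compressed copy of a table. The sketch below uses hypothetical table names; a real test should use a representative workload and check physical writes and CPU in V$SYSSTAT or an AWR report rather than relying on elapsed time alone:

    -- Empty uncompressed and compressed copies of the same table (names are illustrative)
    CREATE TABLE sales_plain      AS SELECT * FROM sales_history WHERE 1 = 0;
    CREATE TABLE sales_compressed COMPRESS FOR ALL OPERATIONS
                                  AS SELECT * FROM sales_history WHERE 1 = 0;

    DECLARE
       t0 NUMBER;
    BEGIN
       t0 := DBMS_UTILITY.GET_TIME;   -- elapsed time in hundredths of a second
       INSERT INTO sales_plain      SELECT * FROM sales_history;
       DBMS_OUTPUT.PUT_LINE('Uncompressed load: ' || (DBMS_UTILITY.GET_TIME - t0));

       t0 := DBMS_UTILITY.GET_TIME;
       INSERT INTO sales_compressed SELECT * FROM sales_history;
       DBMS_OUTPUT.PUT_LINE('Compressed load:   ' || (DBMS_UTILITY.GET_TIME - t0));
       COMMIT;
    END;
    /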