OLTP Table Compression

This post describes why OLTP compression is not always a good idea. The example was prepared as an illustration for a customer with a DSS database in which compression is enabled at tablespace level for some of the larger tables. Rows are compressed as they are inserted into the table; however, all rows are immediately updated following insertion.

The same behaviour occurs when compression is enabled at tablespace or table level.
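For reference, tablespace-level compression is a default attribute that newly created tables inherit. A minimal sketch, assuming a hypothetical tablespace named USERS_TS:

```sql
-- USERS_TS is a hypothetical name; substitute an existing tablespace.
-- Tables created in this tablespace afterwards inherit basic compression
-- unless they specify NOCOMPRESS explicitly.
ALTER TABLESPACE users_ts DEFAULT COMPRESS;

-- A table created without an explicit compression clause picks up the default:
CREATE TABLE car45 TABLESPACE users_ts AS SELECT * FROM car;
```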

The example was developed in Oracle 11.2.0.3 on Linux. The sample data is taken from my Formula 1 database.

Uncompressed table

First, an example of an uncompressed table (CAR43):

CREATE TABLE car43 AS SELECT * FROM car;

We will gather some statistics on the new table:

BEGIN
  dbms_stats.gather_table_stats
  (
    ownname => 'GP',
    tabname => 'CAR43',
    estimate_percent => NULL
  );
END;
/

In the above PL/SQL block, ESTIMATE_PERCENT is set to NULL to force statistics to be computed. Let’s have a look at the results:

SELECT num_rows,blocks FROM dba_tables
WHERE owner = 'GP'
AND table_name = 'CAR43';

NUM_ROWS    BLOCKS
--------    ------
   18493       125

Now we will update all rows in the table:

UPDATE car43 SET driver_points = driver_points + 1;

18493 rows updated.

Gather statistics again:

BEGIN
  dbms_stats.gather_table_stats
  (
    ownname => 'GP',
    tabname => 'CAR43',
    estimate_percent => NULL
  );
END;
/

Check the results:

SELECT num_rows,blocks FROM dba_tables
WHERE owner = 'GP'
AND table_name = 'CAR43';

NUM_ROWS    BLOCKS
--------    ------
   18493       125

In this case the number of blocks (125) has not changed between insertion and update.

Compressed table

Now an example of a compressed table (CAR44):

CREATE TABLE car44 COMPRESS AS SELECT * FROM car;

We will gather some statistics on the new table:

BEGIN
  dbms_stats.gather_table_stats
  (
    ownname => 'GP',
    tabname => 'CAR44',
    estimate_percent => NULL
  );
END;
/

Let’s have a look at the results:

SELECT num_rows,blocks FROM dba_tables
WHERE owner = 'GP'
AND table_name = 'CAR44';

NUM_ROWS    BLOCKS
--------    ------
   18493        56

The table has compressed surprisingly well, reducing in size from 125 blocks to 56 blocks.
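The compression attribute itself can be confirmed from the data dictionary; COMPRESSION and COMPRESS_FOR are standard DBA_TABLES columns in 11.2:

```sql
SELECT table_name, compression, compress_for
FROM dba_tables
WHERE owner = 'GP'
AND table_name IN ('CAR43','CAR44');
```

For the tables above, CAR43 should show DISABLED and CAR44 should show ENABLED with a COMPRESS_FOR value of BASIC.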

Now we will update all rows in the table:

UPDATE car44 SET driver_points = driver_points + 1;

18493 rows updated.

Gather statistics again:

BEGIN
  dbms_stats.gather_table_stats
  (
    ownname => 'GP',
    tabname => 'CAR44',
    estimate_percent => NULL
  );
END;
/

Check the results:

SELECT num_rows,blocks FROM dba_tables
WHERE owner = 'GP'
AND table_name = 'CAR44';

NUM_ROWS    BLOCKS
--------    ------
   18493       250

In this case the update operation has increased the number of blocks in the table from 56 to 250, double the size of the original uncompressed table (125 blocks).

While compression is more efficient when the data is initially inserted, subsequent updates can inflate the blocks to a much larger size than if the rows had not been compressed in the first place.
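If a compressed table has been inflated by updates in this way, it can be rebuilt and recompressed (or the compression removed) with ALTER TABLE ... MOVE. A sketch:

```sql
-- Rebuild the table, recompressing the rows as they are reloaded
ALTER TABLE car44 MOVE COMPRESS;

-- Or strip compression entirely if the update pattern makes it unsuitable:
-- ALTER TABLE car44 MOVE NOCOMPRESS;
```

Note that MOVE leaves any indexes on the table in an UNUSABLE state, so they must be rebuilt afterwards with ALTER INDEX ... REBUILD.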

Redo Generation

We can also check the amount of redo generated by each operation using the following query:

SELECT m.value
FROM v$mystat m, v$statname s
WHERE m.statistic# = s.statistic#
AND s.name = 'redo size';

The above query was executed before and after each CREATE TABLE AS SELECT and UPDATE operation. Results were as follows:

Operation                 Uncompressed Table    Compressed Table
                          Redo Bytes            Redo Bytes
----------------------    ------------------    ----------------
CREATE TABLE AS SELECT             1,046,668             460,024
UPDATE                            10,937,528          14,282,392
Total Redo                        11,984,196          14,742,416

So the CREATE TABLE AS SELECT operation generates less than half as much redo for the compressed table as for the uncompressed table (460,024 bytes versus 1,046,668). However, the UPDATE operation generates around 30% more redo for the compressed table (14,282,392 bytes versus 10,937,528), so the compressed table generates more redo overall.
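The per-operation figures were obtained by capturing the 'redo size' statistic immediately before and after each statement and subtracting. A minimal SQL*Plus sketch of the pattern (the redo_before and redo_after substitution variables are introduced here for illustration):

```sql
-- Snapshot redo size before the operation
COLUMN value NEW_VALUE redo_before
SELECT m.value
FROM v$mystat m, v$statname s
WHERE m.statistic# = s.statistic#
AND s.name = 'redo size';

UPDATE car44 SET driver_points = driver_points + 1;

-- Snapshot redo size after the operation
COLUMN value NEW_VALUE redo_after
SELECT m.value
FROM v$mystat m, v$statname s
WHERE m.statistic# = s.statistic#
AND s.name = 'redo size';

-- Redo generated by the UPDATE alone
SELECT &redo_after - &redo_before AS redo_bytes FROM dual;
```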

Conclusion

Before enabling compression at tablespace level or at table level, take care to understand what your application will be doing with the data: if rows are updated after insertion, a compressed table can end up both larger than its uncompressed equivalent and more expensive in redo.
