Oracle ASM vs Linux EXT4 I/O Testing with SLOB

The Silly Little Oracle Benchmark (SLOB), written by Kevin Closson, is an I/O testing toolkit for the Oracle database. I recently used it to compare the I/O performance of Oracle ASM and Linux EXT4 disks.

Background

I’m using a 19c Oracle database that currently lives on an EXT4 file system. I’ve already installed the 19c Grid Infrastructure (GI) for a standalone server to leverage ASM, and the disks are labeled with Oracle’s ASM Filter Driver. Before I migrate the database to ASM, I want to confirm that my I/O throughput will be equal to or better than the current EXT4 setup. The disks are SSDs from a Nutanix hardware cluster.
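As a sanity check before any testing, the AFD-labeled disks can be confirmed as visible to ASM with a quick query against the standard ASM views (a minimal example; the exact columns shown here are my choice):

SQL> SELECT group_number, name, label, path, state
     FROM v$asm_disk
     ORDER BY group_number, name;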

SLOB Installation

The latest version of SLOB can be downloaded from GitHub. After downloading it, I transferred the archive to my Linux server and created a new directory on an unused mount point.

$ mkdir -p /u05/slob

Next, I’ll extract the files from the archive.

$ tar -xf 2021.05.12.slob_2.5.4.0.tar.gz

Navigate to the wait_kit directory and execute the make command.

$ cd wait_kit/

$ make
rm -fr *.o mywait trigger create_sem
cc     -c -o mywait.o mywait.c
cc -o mywait mywait.o
cc     -c -o trigger.o trigger.c
cc -o trigger trigger.o
cc     -c -o create_sem.o create_sem.c
cc -o create_sem create_sem.o
cp mywait trigger create_sem ../
rm -fr *.o
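The make output above shows the three wait-kit binaries being copied up to the SLOB home directory; a quick optional check that they made it:

$ ls -l ../mywait ../trigger ../create_sem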

Create Some Tablespaces

For my testing, I’ve created two tablespaces of 100 GB apiece: one on EXT4 and the other on ASM. This size was chosen because it is larger than the entire SGA, which minimizes caching of data that might skew the results.
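For reference, the SGA size the tablespaces need to exceed can be checked with a quick query (a minimal example):

SQL> SELECT ROUND(SUM(value)/1024/1024/1024, 1) AS sga_gb
     FROM v$sga;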

EXT4 Tablespace

SQL> CREATE TABLESPACE SLOB_EXT4
     DATAFILE '/u01/app/oradata/MYDB/slob_ext4_01.dbf' SIZE 25G,
              '/u01/app/oradata/MYDB/slob_ext4_02.dbf' SIZE 25G,
              '/u01/app/oradata/MYDB/slob_ext4_03.dbf' SIZE 25G,
              '/u01/app/oradata/MYDB/slob_ext4_04.dbf' SIZE 25G;

Tablespace created.

ASM Tablespace

SQL> CREATE TABLESPACE SLOB_ASM
    DATAFILE '+DATA01' SIZE 25G,
    '+DATA01' SIZE 25G,
    '+DATA01' SIZE 25G,
    '+DATA01' SIZE 25G;

Tablespace created.
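To confirm where the datafiles actually landed, the data dictionary can be queried (a simple check; the OMF file names generated under +DATA01 will differ in every environment):

SQL> SELECT tablespace_name, file_name, ROUND(bytes/1024/1024/1024) AS gb
     FROM dba_data_files
     WHERE tablespace_name IN ('SLOB_EXT4', 'SLOB_ASM')
     ORDER BY tablespace_name, file_name;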

Next, I navigate to the SLOB installation directory. I modified slob.conf to use AWR reports instead of Statspack, increased the scale to 12G, and added my SYSDBA password. With a scale of 12 GB and 8 SLOB schemas, the data will fill roughly 96 GB of each 100 GB tablespace.

I also modified ADMIN_SQLNET_SERVICE and SQLNET_SERVICE_BASE, since I was connecting to a pluggable database. The Oracle multitenant option is not really supported by SLOB, but I found that this worked for me.

SCALE=12G

DATABASE_STATISTICS_TYPE=awr   # Permitted values: [statspack|awr]

ADMIN_SQLNET_SERVICE="MYPDB"
SQLNET_SERVICE_BASE="MYPDB"

SYSDBA_PASSWD="<my system password>"
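Both service settings resolve through SQL*Net, so the PDB service needs to be resolvable from the server. A hypothetical tnsnames.ora entry (host, port, and service name are placeholders, not the actual values I used):

# hypothetical entry - adjust host, port, and service name for your environment
MYPDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost01)(PORT = 1521))
    (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = mypdb.example.com))
  )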

Next, I execute the SLOB setup script, passing the name of the EXT4 tablespace and 8 for the number of SLOB schemas I want.

$ ./setup.sh SLOB_EXT4 8

We can confirm the users have been set up by executing the following query in SQL*Plus.

SQL> select username, default_tablespace
     from dba_users
     where username like 'USER%'
     order by username;

USERNAME   DEFAULT_TABLESPACE
---------- ------------------------------
USER0      SLOB_EXT4
USER1      SLOB_EXT4
USER2      SLOB_EXT4
USER3      SLOB_EXT4
USER4      SLOB_EXT4
USER5      SLOB_EXT4
USER6      SLOB_EXT4
USER7      SLOB_EXT4
USER8      SLOB_EXT4

9 rows selected.

Additionally, we can confirm that SLOB has created some objects in our database by issuing the following command:

SQL> select owner, object_name, object_type
     from dba_objects
     where owner like 'USER%'
     order by owner;

OWNER        OBJECT_NAME    OBJECT_TYPE
------------ -------------- -----------------------
USER1        CF2            TABLE
USER1        CF1            TABLE
USER1        I_CF1          INDEX
USER1        SLOBUPDATE     PROCEDURE
USER2        CF1            TABLE
USER2        I_CF1          INDEX
USER2        CF2            TABLE
USER3        CF2            TABLE
USER3        CF1            TABLE
USER3        I_CF1          INDEX
USER4        CF2            TABLE
USER4        CF1            TABLE
USER4        I_CF1          INDEX
USER5        CF2            TABLE
USER5        CF1            TABLE
USER5        I_CF1          INDEX
USER6        CF2            TABLE
USER6        CF1            TABLE
USER6        I_CF1          INDEX
USER7        CF1            TABLE
USER7        CF2            TABLE
USER7        I_CF1          INDEX
USER8        CF1            TABLE
USER8        CF2            TABLE
USER8        I_CF1          INDEX

25 rows selected.

Begin Testing!

Let’s start by executing the first load test. I repeat the test three times for each tablespace type (EXT4 and ASM); the overall flow is sketched after the command below. The command itself is simple:

$ ./runit.sh -s 8 -t 1
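A rough sketch of the overall sequence, assuming the SLOB schemas are rebuilt in the ASM tablespace by re-running setup.sh between the two groups of runs:

$ # three runs against the schemas created in SLOB_EXT4
$ ./runit.sh -s 8 -t 1        # repeated three times

$ # rebuild the SLOB schemas in the ASM tablespace, then repeat
$ ./setup.sh SLOB_ASM 8
$ ./runit.sh -s 8 -t 1        # repeated three times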

Note: The AWR reports did not get generated properly because this is a pluggable database. What I discovered is that SLOB attempts to pull the reports from the CDB AWR snapshots rather than from the PDB where the snapshots were actually taken. I ended up generating the reports manually at the end.
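For the manual reports, the standard awrrpt.sql script can be run while connected to the PDB (a sketch; snapshot IDs and report names will vary):

SQL> -- connected to the PDB (e.g. via the MYPDB service) as SYSDBA
SQL> @?/rdbms/admin/awrrpt.sql
SQL> -- when prompted, choose the PDB-level AWR data and the snapshot range for each run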

Test Results

I reviewed the AWR reports, specifically the Instance Activity Stats section. The ASM tablespace performed slightly better than the EXT4 tablespace. Below are some of the more pertinent figures pulled from the AWR reports.

Metric                               EXT4             ASM
------------------------------------ ---------------- ----------------
physical read IO requests            6,306,078        9,368,634
physical read bytes                  51,676,971,008   80,276,373,504
physical read total IO requests      6,311,037        9,373,578
physical read total bytes            51,755,966,464   80,353,419,264
physical reads                       6,308,224        9,799,362
physical reads cache                 6,308,219        9,340,260
physical reads cache prefetch        5,333,255        7,874,013
physical write IO requests           59               20
physical write bytes                 483,328          163,840
physical write total IO requests     59               20
physical write total bytes           483,328          163,840
physical writes                      59               20
physical writes direct               59               20
db file sequential read (Avg Wait)   1.69ms           1.38ms
db file parallel read (Avg Wait)     4.92ms           2.63ms
Wait Class – User I/O (Avg Wait)     2.05ms           1.46ms

Disk Group Variability

In full disclosure, there are some differences between the EXT4 volume group and the ASM disk group. The EXT4 volume group is made up of 6 disks at 400 GB apiece, while the ASM disk group consists of 8 disks at 1 TB each. The number of drives, more than their size, is probably contributing to the difference in throughput. Conversely, I tested on a different EXT4 volume group comprising 12 disks, and there the throughput was consistently better than the ASM disks. I had to test with what I had; I would have preferred an equal number and size of drives in both disk groups.
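For reference, the makeup of the ASM side of that comparison can be confirmed with a quick query against the standard ASM views (a minimal example):

SQL> SELECT g.name AS disk_group,
            COUNT(*) AS disks,
            ROUND(SUM(d.total_mb)/1024) AS total_gb
     FROM v$asm_diskgroup g
     JOIN v$asm_disk d ON d.group_number = g.group_number
     GROUP BY g.name;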

Conclusion

This was my first attempt at using SLOB, and I’ll need to test with different parameters to learn more about its capabilities. Still, this test was enough to give us the confidence to move forward with the migration to ASM without worrying about a performance penalty. I’ll have a future blog post with some more SLOB testing.
