We implemented Oracle VM 3.3.2 around two years ago, and the biggest issue we had was sub-optimal iSCSI storage latency and throughput under parallel sessions and high load in our performance tests. In the end this was resolved by Linux disk partition alignment and a few iSCSI optimizations.


After the initial Oracle VM installation we started testing functionality and performance. We were going to place all our test instances and some production servers in the Oracle VM infrastructure, so performance had to be acceptable.

However, as we had a 10 GbE iSCSI network for storage, we expected better than the 250 MB/s throughput and 400 ms latency we got in the initial tests!

Initial setup

We had used iSCSI with multipathing in our Oracle Linux infrastructure for several years, so after installing the OVS servers we made slight modifications on each server based on those experiences. Obviously we tested directly from OVS before and after these changes to make sure performance didn't get worse. The changes were mainly to /etc/iscsi/iscsid.conf.
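To give an idea of what such modifications look like: the parameter names below are standard open-iscsi initiator settings, but the values are illustrative examples only, not the exact ones we used.

```
# /etc/iscsi/iscsid.conf -- illustrative example values, not our exact settings
# How many outstanding iSCSI commands a session may queue.
node.session.cmds_max = 1024
# Per-LUN queue depth.
node.session.queue_depth = 128
# Largest data segment the initiator advertises it can receive.
node.conn[0].iscsi.MaxRecvDataSegmentLength = 262144
# How long to wait before failing I/O over to another path (keep low with multipath).
node.session.timeo.replacement_timeout = 15
```

New sessions pick these defaults up at login, so existing sessions have to be logged out and back in for changes to take effect.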

One key document is the paper below that Oracle has published on tuning OVM for 10GbE networks:

ovm3-10gbe-perf-1900032.pdf

As the pdf mentions, the biggest tuning effort is to limit the number of dom0 vCPUs to the number of CPU threads available in one socket. The pdf also includes some performance testing results. We didn't see as big an impact from these changes during our testing, but we still implemented them since they were recommended.
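For reference, on OVM 3.x this limit is set on the Xen line of dom0's grub configuration. The sketch below assumes a socket with 8 threads, so adjust the count to your hardware; the memory value is likewise just a placeholder.

```
# /boot/grub/grub.conf in dom0 -- sketch only; "8" assumes one socket with 8 threads
# dom0_max_vcpus caps dom0's vCPUs, dom0_vcpus_pin pins them to physical threads.
kernel /xen.gz dom0_mem=max:2G dom0_max_vcpus=8 dom0_vcpus_pin
```

A reboot of the OVS server is needed for the Xen boot parameters to take effect.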


I then came across a document on Linux partition alignment for ZFS storage and was curious to see how it would impact guest VMs.


In short, and really simplified: if the partition isn't aligned, reading one block of data can require several I/Os, whereas with correct disk alignment you do only one I/O per block.
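To make the arithmetic concrete: a partition's byte offset is its start sector times the logical sector size, and the partition is aligned when that offset is a multiple of the physical block size. A minimal sketch (the sector numbers match the parted output further down):

```shell
# Alignment check: offset = start sector * logical sector size;
# aligned when the offset is a multiple of the physical block size.
logical=512      # logical sector size in bytes
physical=4096    # physical block size in bytes

check_alignment() {
  start_sector=$1
  offset=$(( start_sector * logical ))
  if [ $(( offset % physical )) -eq 0 ]; then
    echo "sector $start_sector: aligned"
  else
    echo "sector $start_sector: not aligned"
  fi
}

check_alignment 63     # 63 * 512 = 32256 bytes, not a multiple of 4096
check_alignment 4096   # 4096 * 512 = 2 MiB, a clean multiple of 4096
```

With a start at sector 63, every 4 KiB filesystem block straddles two physical blocks, so the storage does a read-modify-write where one I/O would have sufficed.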

First you need to check the starting sector of your partition and whether it is optimal. For this I used parted:

(parted) unit s
(parted) p

Model: Xen Virtual Block Device (xvd)
Disk /dev/xvdc: 419430400s
Sector size (logical/physical): 512B/4096B
Partition Table: gpt

Number  Start  End         Size        File system  Name     Flags
 1      63s    419428351s  419426304s  ext4         primary

(parted) align-check optimal 1
1 not aligned

From the above you can see the partition starts at sector 63, which isn't an optimal starting point. However, if we recreate the partition using 0% as the starting point, parted knows to use a starting sector of 4096 (2 MiB), which is optimal for our storage system.

(parted) rm 1
(parted) mkpart primary ext4 0% 100%
(parted) p

Model: Xen Virtual Block Device (xvd)
Disk /dev/xvdc: 419430400s
Sector size (logical/physical): 512B/4096B
Partition Table: gpt

Number  Start  End         Size        File system  Name     Flags
 1      4096s  419428351s  419424256s  ext4         primary

(parted) align-check optimal 1
1 aligned

For us, aligning the partitions on the disks (LUNs) presented to Oracle VM via iSCSI made a huge difference. Read latency dropped to 20-60 ms with 64 sessions in the performance test, and throughput went from 250 MB/s to almost 1100 MB/s.

As we use iSCSI with multipathing, there is potential to go up to 2200 MB/s, as the picture below shows. I only copied the three tests which made the most difference in this benchmark: the first was run directly from the underlying physical server and the other two from a virtual machine.
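Reaching that aggregate figure assumes both paths carry I/O simultaneously, which in device-mapper multipath terms means a multibus-style policy. The fragment below is illustrative only, not our exact configuration:

```
# /etc/multipath.conf -- illustrative fragment, not our exact configuration
defaults {
    user_friendly_names yes
    # Put all paths in one group so I/O is spread across both 10GbE paths.
    path_grouping_policy multibus
    path_selector        "round-robin 0"
}
```

With a failover-style policy only one path is active at a time, so throughput stays capped at a single path's limit.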

VM performance
Read performance test throughput and latency with 64 parallel sessions

As VM performance could still be better, we keep testing with newer Oracle VM versions and Linux distributions. So far we haven't had any actual performance issues after two years in production, so the current performance is good enough for us.

Some other helpful links which explain partition alignment a lot better:

Linux Disk Alignment Reloaded

How to align partitions for best performance using parted
