So, maybe you think there are hotspots in disk IO queuing. I often think that when looking at #Oracle 11gR2 on #AIX #IBMPower servers. The lvmstat command can be your best friend in tracking down host-side LVM hotspots. Believe me, sometimes you can get a big benefit by moving a small amount of data. Usually, though, you end up proving some known best practices - like keeping Oracle redo logs on separate logical and physical volumes from Oracle database .dbf files, and keeping ETL flat files on separate logical/physical volumes from both the redo logs and the .dbf files :)
At any rate, lvmstat is a mega-useful tool. A few references for your reading enjoyment:
http://pic.dhe.ibm.com/infocenter/aix/v6r1/topic/com.ibm.aix.prftungd/doc/prftungd/lvm_perf_mon_lvmstat.htm
http://poweritpro.com/performance/if-your-disks-are-busy-call-lvmstat
Root privileges are required for lvmstat, and unlike iostat there isn't a parameter to timestamp its output. Easily remedied with awk.
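For instance, something like this stamps each line of a short lvmstat run with the time the run was kicked off (a rough sketch - hd4 is just a stand-in logical volume, and stats collection has to be enabled for it first, which we'll get to below):
# tag each line with the launch time; one timestamp per invocation,
# which is good enough for the short runs the collection script below uses
lvmstat -l hd4 5 3 | /usr/bin/awk '{print $0, d}' d="`date '+%D %H:%M:%S'`"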
Here's some iostat info from mountainhome, a server chosen totally at random. :)
# iostat -DlRTV hdisk0
System configuration: lcpu=24 drives=6 paths=10 vdisks=2
Disks:                    xfers                              read                                write                                 queue                   time
-------------- ---------------------------- --  ----------------------------------  ----------------------------------  ------------------------------------  --------
              %tm    bps  tps  bread  bwrtn  rps   avg   min    max  time  fail  wps   avg   min    max  time  fail   avg   min    max   avg   avg   serv
              act                                 serv  serv   serv  outs             serv  serv   serv  outs        time  time   time  wqsz  sqsz  qfull
hdisk0        0.6  44.3K  6.3  11.7K  32.6K  2.5   5.3   0.1  215.6     0     0  3.8   1.8   0.2  165.3     0     0  11.8   0.0  130.5   0.0   0.0    2.8  12:10:08
My eagle-sharp eyes zero in on the serv qfull number - a rate of 2.8 per second for the monitoring period! Time for intervention! At least for me - I hate qfulls.
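The service queue behind that qfull number is bounded by the hdisk's queue_depth attribute. Worth a quick peek at what it's set to while we're here - just a sanity check, not a tuning recommendation:
# current service queue depth for hdisk0
lsattr -El hdisk0 -a queue_depth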
Let's try to be scientifical about this.
Enable the volume groups/logical volumes for stats collection.
A script I've got for that - ain't pretty but it works. (execute as root)
#enable lvmstat stats
for vg in `lsvg | /usr/bin/awk -u '{print $1 ;}'`;
do
   # enable collection for the volume group itself
   /usr/sbin/lvmstat -v $vg -e
   # ...and for each logical volume in it (skip the two lsvg -l header lines)
   for lv in `lsvg -l $vg | /usr/bin/awk -u 'NR <= 2 { next } {print $1 ;}'`;
   do
      /usr/sbin/lvmstat -l $lv -e ;
   done ;
done ;
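Quick sanity check that collection really is on: ask lvmstat for a volume group report (rootvg as the example here). If stats weren't enabled, you'd get an error instead of counters.
# per-logical-volume counters for rootvg; errors out if collection isn't enabled
/usr/sbin/lvmstat -v rootvg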
Once stats are enabled for the volume groups/logical volumes that you care about... start collecting! (execute as root)
./sasquatch_lvmstat_script 3 3 $(pwd) 4 &
#!/bin/ksh
#sasquatch_lvmstat_script
# usage: ./sasquatch_lvmstat_script [interval] [count] [logdir] [top]
# Begin Functions
lvmon_cmd(){
   # $1=lv  $2=top  $3=interval  $4=count  $5=logdir  $6=vg
   # tag each line with the launch timestamp, append to logdir/hostname_vg_lv_yyyymmdd
   /usr/sbin/lvmstat -s -l $1 -c$2 $3 $4 | /usr/bin/awk -u '{print $0, date}' date="`date '+%D %H:%M:%S'`" >> $5/$(hostname)_$6_$1_$(date +%Y%m%d)
}
# End functions
interval=${1:-2}   # seconds between samples
count=${2:-2}      # samples per logical volume
logdir=${3:-/tmp}  # where the output files land
top=${4:-32}       # busiest logical partitions to report (lvmstat -c)
for vg in `lsvg | /usr/bin/awk -u '{print $1 ;}'`;
do
   for lv in `lsvg -l $vg | /usr/bin/awk -u 'NR <= 2 { next } {print $1 ;}'`;
   do
      lvmon_cmd $lv $top $interval $count $logdir $vg &
   done ;
done ;
Let's pretend I've looked at enough of the output files to determine that lvcore is the heavy-hitter logical volume in rootvg.
The output files from the lvmstat script look kinda like this:
cat mountainhome_rootvg_lvcore_20130722
07/22/13 11:42:47
Log_part  mirror#  iocnt  Kb_read  Kb_wrtn   Kbps  07/22/13 11:42:47
      56        1     53      212        0   0.00  07/22/13 11:42:47
       1        1     52      208        0   0.00  07/22/13 11:42:47
      53        1     10       40        0   0.00  07/22/13 11:42:47
       2        1      7       28        0   0.00  07/22/13 11:42:47
..  07/22/13 11:42:47
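By the way, eyeballing every output file gets old fast. Here's a rough way to rank the files by total iocnt so the heavy hitter floats to the top (a sketch - it assumes you're sitting in the logdir and the files follow the hostname_vg_lv_date naming from the collection script):
# sum the iocnt column (field 3) in each of today's files, skipping headers and ".." lines
for f in $(hostname)_*_$(date +%Y%m%d)
do
   n=`/usr/bin/awk '$1 ~ /^[0-9]+$/ {s += $3} END {print s+0}' $f`
   echo "$n $f"
done | sort -rn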
The Log_part values are the logical partitions of the logical volume. Match them up to physical partitions and physical volumes with the output of something like 'lslv -m'.
# lslv -m lvcore | grep -e 0056 -e 0053 -e 0001 -e 0002
0001 0704 hdisk0
0002 0705 hdisk0
0053 0756 hdisk0
0056 0759 hdisk0
So... what are the lines with nothing but periods and a timestamp? Those are quiet collection intervals - lvmstat collapsed them for you. That was nice! Kinda like including -V in iostat -DlRTV: you only get what you need.
OK. So logical partitions 0056 and 0001 are the busiest in logical volume lvcore. They are both on hdisk0. And we supposedly compared the output across our logical volumes in rootvg and saw that, while hdisk0 is under qfull pressure, lvcore is the heavy hitter in the rootvg volume group. If there's a quiet hdisk in rootvg, moving one of those logical partitions - or both of them - could relieve the pressure. Or maybe you know the contents. (The fileplace command can assist by mapping individual file fragments to logical partitions of the logical volume - that's a lesson for another day.) Sometimes it's easiest to say "golly! I guess I could put such-n-such into a different filesystem on a different volume group! It wouldn't put any pressure on filesystem buffers or any pressure on those physical volume IO queues then!" That kinda stuff requires negotiation, though. And I'm not a negotiator, just a sasquatch.
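And if relocating a logical partition does turn out to be the answer, migratelp is the command that moves an individual LP. A sketch of what that could look like - hdisk1 is a hypothetical quieter disk in the same volume group, and you'd want a good backup and a calm window before doing this for real:
# move logical partition 56 of lvcore onto hdisk1 (hypothetical destination PV in the same VG)
migratelp lvcore/56 hdisk1
# confirm where the LP landed
lslv -m lvcore | grep 0056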
When you are done with lvmstat monitoring, disable the stats collection. It's a small amount of overhead, but if you aren't baselining or investigating... no need to have the stats enabled. (execute as root)
#disable lvmstat stats
for vg in `lsvg | /usr/bin/awk -u '{print $1 ;}'`;
do
   /usr/sbin/lvmstat -v $vg -d
   for lv in `lsvg -l $vg | /usr/bin/awk -u 'NR <= 2 { next } {print $1 ;}'`;
   do
      /usr/sbin/lvmstat -l $lv -d ;
   done ;
done ;