Tuesday, April 22, 2014

SQL Server: Win some, learn some... try a whole buncha

It isn't that I am opposed to query tuning... far from it!  However, I'm in a rather unusual position where changing the SQL text of any given query may require waiting for changes in two separate products from two different vendors.  Then waiting for adoption of those versions.  There are only a few things I'm good at - waiting isn't one of them.  Until then index can be added (or removed), stats and index maintenance strategies can be implemented and modified... but not too much in terms of tuning individual queries lest the customizations make future package upgrades more precarious.

So I do everything I can to tune the underlying hardware/driver/filesystem/OS system.  The idea is to deliver as much reliability and performance capacity as possible, and coax the database into leveraging those attributes as much as possible.

This is why I spend a lot of time thinking and writing about NUMA, memory management, disk IO optimization and the like.

Its also the reason I spend so much time learning about SQL Server trace flags.  Sometimes, the database can be encouraged to make better use of the system performance capacity for my workloads with a trace flag.  That is certainly the case with T8048, which removes a significant bottleneck in stealing query memory when there are multiple queries per NUMA node (or scheduler group) that are stealing lots of query memory.  There are other trace flags that have the effect of 'tuning' sets of queries, all at the same time.  For example, the enhanced join ordering available with trace flag 4101.  That one really helped me out of a jam - I saw some memory grants drop in size from ten GB or more to 1 mb with no other change than adding that trace flag.  (Tested benefits and looked for problems first with querytraceon, then promoted it to instance-wide enabled.)

So this year, here are some trace flags that I'll be evaluating with my workloads.   Not a lot of info about them.  As I test with them in SQL Server 2012 and 2014 I hope to provide some details about what I see - especially if I see no discernible difference at all.

T342
T345
T2328 http://blogs.msdn.com/b/ianjo/archive/2006/03/28/563419.aspx
T4138 http://support.microsoft.com/kb/2667211
T9082 http://support.microsoft.com/kb/942906

For some information on these and many other trace flags, you can check out this post, and the accompanying pdf.
http://sqlcrossjoin.wordpress.com/2013/10/28/a-topical-collection-of-sql-server-flags/

Here's my disclaimer: trace flags are not to be trifled with.  Test them in isolation if possible first on a nonprod system.  Measure the effects, compare to expected/desired behavior and baseline.  When possible, test in full context (full scale workload at full concurrency) in nonproduction before promoting to production.

Wednesday, April 9, 2014

srarw - sequential read after random write (ZFS, WAFL, reFS)

I like good marketing terms as much as the next guy, but what I really crave are terms and phrases that explain system and storage patterns and phenomena that I become very familiar with, and have a hard time explaining to other folks.

Several years ago, I was in exactly that position - having seen a condition that degraded database performance on a shadow paging filesystem* with HDDs and trying - in vain mostly - to explain the condition and my concern to colleagues.

*OK... a brief aside.  What on earth is a "shadow paging filesystem"?  Most familiar hard drive storage technology is "write-in-place".  Expand a database file by 1 GB, and that 1 GB of database file expansion translates through the OS filesystem, OS logical volume manager, SAN layers, etc to specific disk sectors.  Write database contents to that 1 GB file expansion, and the contents will be written to those disk sectors.  Update every single row, and the updates will take place via writes to those same disk sectors.  That is "write-in-place".
The alternative is a continual "redirect on write", or a shadow paging filesystem.  The initial contents contents get written to 1 GB worth of disk sectors A-Z.  Update every single row, and the updated contents don't get written in place, but rather get written to a new/different 1 GB of disk sectors a'-z'.
Once the new disk writes are complete, updates are made to the inode/pointer structure that stitches together the file(or LUN) presented to the host operating system.  The most common example of this type of continual redirect-on-write strategy is WAFL, used by the ONTAP operating system on NetApp storage.*

The issue was that a database object structure could be written initially completely sequentially, from the standpoint of the database, database server filesystem/LVM, AND the storage system.  However, later updates could occur to a small and scattered sample of the data within that contiguous range.  After the updates (assuming no page splits/migrated rows due to overflow of datbaase page/block boundaries), the data would be contiguous from database and database server filesystem/LVM standpoint.  But, the updated blocks would be rewritten in a new sequential location - making sequential on disk contents that were scattered from the standpoint of database and database server filesystem/LVM.

What's the harm in that?    Consider a full scan of such an object, whether in response to a user query or to fulfill an integrity check.  Before the interior, scattered updates the 1 GB range may very well be read with a minimal number of maximum size read operations at the OS and storage levels, with a minimal amount of disk head movement.  After the scattered internal updates?  The number and size of OS level read commands won't change (because I've stipulated earlier that none of the updates caused page splits/migrated rows).  However, the number and size of commands to retrieve the data from the hard drives to the storage controllers in the array would almost certainly have changed.  And the amount of disk head movement to retrieve the data could also have changed significantly.  What if 6 months of time and accumulated data had accrued between the original, completely sequential write of the data and the later scattered updates?  That could introduce significant head movement and significant wait into the data retrieval.

When I began discussing this phenomena with various engineers, the most common reply was: "yeah, but aren't you most concerned with OLTP performance, anyway?"  At that time in my life, that was completely true... however...
I also knew that a production system with true, all day, pure OLTP workload simply doesn't exist outside of very limited examples.  Integrity checks and backups are the main reason.  Show me a critical production database that operates without backups and integrity checks, and you've shown me a contender for a true all-day, every-day pure OLTP.
Otherwise, the degradation of sequential reads after scattered, internal, small updates is a REAL concern for every database when operating on a shadow paging filesystem.  That's true if the shadow paging filesystem is on the database server host (eg ZFS, of which I am a big fan), or on the storage subsystem (NetApp, or a system using ZFS).

Here's the kicker... someday I think it'll matter for SQL Server on Windows, too regardless of the underlying storage.  Although Microsoft reFS is not a good fit for SQL Server today (I'll come back and add links as to why later), I think future enhancements are likely to bring it into focus for SQL Server.

Finally, I found a name for the performance concern: SRARW.  Decoded: sequential read after random write.  And I found the name in a somewhat unlikely source: a paper written by a NetApp engineer.  In truth, I shouldn't have been all that surprised.  NetApp has a lot of brilliant people working for and with them.
Here's the paper that introduced the term SRARW to me:
Improving throughput for small disk requests with proximal I/O
Jiri Schindler, Sandip Shete, Keith A. Smith

Now... if you are running SQL Server or Oracle on NetApp, I encourage you to keep track of the pace of large sequential operations that always execute with the same number of threads.  If you see a significant slowdown in the pace, consider that SRARW and low level fragmentation may be one of the contributors.  NetApp has jobs that can be scheduled periodically to reallocate data... re-sequence data that has been made "outta order" due to small scattered interior writes.
There is also a NetApp "read reallocate" attribute that should be considered for some workloads and systems.
These items are better described at this location.
http://www.getshifting.com/wiki/reallocate

If you are using ZFS and SRARW performance degrades... unfortunately at this time your options are limited.
 

Friday, April 4, 2014

SQLServer transaction log writes - 64k aligned?



I spend a lot of time thinking about high speed ETL.  I also spend a lot of time thinking about DR solutions and backups.

Below you can read details on how I came to the following question (to which I don't yet know the answer and will update when I do): are SQL Server 60k writes 64k aligned?

*****
Aha!  I don't think I'll have to bust out procmon for this after all.  Just get a Windows striped volume, put my txlog (and only the txlog) on the striped volume, start perfmon monitoring the physical disks/LUNs in the striped volume with perfmon (writes per second, current disk queue depth, write bytes/second) and spin up a workload that pushes the logwriter to as many in-flight 60k writes as possible.

If the average write size on the physical volumes is ~60k and the current queue length is 16 or less - awesome! That would mean striping is keeping each write intact and not spitting it, and that the queue depth on each LUN is lower (so that replication readers, etc have room in the queue depth of 32 to do their stuff without pushing anything into the OS wait queue.)

But if the average write size is ~32k... that would mean that most 60k writes by the log writer are being split into smaller pieces because they are not aligned with the 64k stripes used by the Windows LVM.

I guess even if the writes aren't 64k aligned, Windows striping may still be useful for my scenarios... but would have to stripe 4 LUNs together into a striped volume in order to lower the queue length for burdened log writer activity from 32 (with a single LUN) to 15.

*****

Each SQL Server transaction log can sustain up to 32 concurrent in-flight writes, with each write up to 60k.  To get the fastest ETL, fast transaction log writes at queue length 32 are a necessity.  That means... put such a transaction log on its own Windows drive/mounted partition, since typically the HBA LUN service queue depth is 32.  Put other files on there, too, and the log writer in-flight writes might end up in the OS wait queue.  If writes wait on a full service queue of reads, they'll be ESPECIALLY slow.  There are other ways to make them especially slow - for example to serialize the inflight writes to to a synchronous SAN replication strategy.  Anyhooo...

In massive ETL its not unusual for the transaction log writer to wait on completion of 32 writes, each 60k, and not issue the next write until one of them completes.

Writes are usually to write SAN cache, and should be acked on receipt to write cache.  As such, as long as the write is in the HBA service queue (rather than in OS wait queue), front end port queue depth isn't saturated, front end CPU isn't saturated, and SAN write cache isn't saturated - writes should be doggone fast already. (The overhead of wire time - or 'wait for wire time' - for synchronous SAN replication also shouldn't be overlooked when evaluating write latency.) So what can be done to improve these writes that are already typically pretty fast?

I'm not a fan of using Windows striped volumes for SQL Server data files - there's a fixed 64k stripe size.  That will circumvent large readahead attempts by SQL Server.  But for speeding up transaction log access, striped volumes may be just the thing I need.  (Robert Davis - @sqlsoldier - pointed out that unless there is underlying data protection a Windows striped volume offers no data redundancy or protection. I'm only going down this path because underneath the Windows basic disks, whether in striped Windows volume or not, SAN storage is providing RAID10, RAID5, or RAID-DP protection.)

So... this is where the question of 64k alignment of the 60k writes comes in.

Assume 32 inflight writes at 60k each issued by SQLServer logwriter to a txlog all by itself on a Windows striped volume composed of two equally sized basic disks.  If the writes are not 64k aligned, the sames as the Windows stripes, the write activity passed down through the HBA will break down like the chart below.  Its painful to look at, I know.  Haven't figured out a less confusing way to represent it yet.  Basically, each 64k stripe on either basic disk will contain either a full 60k transaction log write and only 4k of the next transaction log write, or two partial transaction log writes.  All tolled, 32 writes gets broken down into 60 writes!  (By the way, this same idea is why its important to have Windows drives formatted so that their start aligns with expected striping.)
Basic Disk A        Basic Disk B
 1 - 60k 
 2 - 4k              2 - 56k  
 3 - 52k             3 - 8k
 4 - 12k             4 - 48k
 5 - 44k             5 - 16k
 6 - 20k             6 - 40k
 7 - 36k             7 - 24k
 8 - 28k             8 - 32k
 9 - 28k             9 - 32k
10 - 36k            10 - 24k
11 - 20k            11 - 40k
12 - 44k            12 - 16k
13 - 12k            13 - 48k
14 - 52k            14 -  8k
15 - 60k
                    16 - 60k
17 - 60k

18 - 4k             18 - 56k  
19 - 52k            19 - 8k
20 - 12k            20 - 48k
21 - 44k            21 - 16k
22 - 20k            22 - 40k
23 - 36k            23 - 24k
24 - 28k            24 - 32k
25 - 28k            25 - 32k
26 - 36k            26 - 24k
27 - 20k            27 - 40k
28 - 44k            28 - 16k
29 - 12k            29 - 48k
30 - 52k            30 -  8k
31 - 60k

                    32 - 60k


So by striping the txlog, instead of 1 LUN with 32 writes and 1920 write bytes inflight… its 2 LUNs, each with 30 writes & 960 total write bytes outstanding.  50% reduction in write bytes per volume, 6% reduction in concurrent write IOs per LUN (from 32 to 30).

On the other hand, if the writes are 64k aligned, it'd be an even split: 16 writes and 920 write bytes outstanding to each LUN, a 50% reduction in both outstanding writes and outstanding write bytes.

So unless someone knows the answer, I guess we'll be busting out procmon and tracking transaction log write offsets once we crank the workload up to consistently hit 60k writes.  If they are 64k aligned, I'll be happy - I can blog in the near future about Windows striped volumes getting me out of a few jams.  If not... it'll probably be back to the drawing board. 

Thursday, April 3, 2014

Oracle on AIX - cio filesystem for redo logs; demoted IO and vmstat

The 'w' column in the vmstat output below... has something to do with demoted IO.  When Oracle redo logs with default block size of 512 bytes are put on a JFS2 cio mounted filesystem, for example, with the default 4096 byte filesystem block size.

Jaqui Lynch referenced this vmstat option in her March 2014 presentation, page 41.
https://www.ibm.com/developerworks/community/wikis/form/anonymous/api/wiki/61ad9cf2-c6a3-4d2c-b779-61ff0266d32a/page/1cb956e8-4160-4bea-a956-e51490c2b920/attachment/52ed9996-7561-42e8-a446-09f5f1414521/media/VUG-AIXPerformanceTuning-Part2-mar2414.pdf 

But I haven't found an explanation of what the conditions are for the w column to count a thread.  Not yet, anyway.  The second variation below is used in perfpmr memdetails.sh script.

# vmstat -IW

System configuration: lcpu=12 mem=16384MB ent=1.00

   kthr       memory              page              faults              cpu
----------- ----------- ------------------------ ------------ -----------------------
 r  b  p  w   avm   fre  fi  fo  pi  po  fr   sr  in   sy  cs us sy id wa    pc    ec
 1  1  0  0 1835258 1779800   0   2   0   0   0    0  24 1991 1625  0  0 99  0  0.01   1.0


# vmstat -W -h -t -w -I 2 2

System configuration: lcpu=12 mem=16384MB ent=1.00

     kthr              memory                         page                       faults                 cpu                   hypv-page           time
--------------- --------------------- ------------------------------------ ------------------ ----------------------- ------------------------- --------
  r   b   p   w        avm        fre    fi    fo    pi    po    fr     sr    in     sy    cs us sy id wa    pc    ec   hpi  hpit   pmem   loan hr mi se
  2   0   0   0    1824787    1788933     0    15     0     0     0      0    11   2874  1698  3  3 94  0  0.10  10.0     0     0  16.00   0.00 16:34:26
  2   0   0   0    1824802    1788916     0     0     0     0     0      0     6   1793  1680  1  1 98  0  0.05   4.9     0     0  16.00   0.00 16:34:28
 

Tuesday, April 1, 2014

IBMPower AIX: perfpmr pipes "pile" to kdb

The topas utility in AIX has some information, specifically in 'topas -M' that I'd like to be able to log over time.  Namely, the amount of filesystem cache associated with each SRAD (the logical CPUs and memory within an LPAR from the same socket) and the local/near/far dispatch ratio for threads on that logical core.

Check it out on Nigel Griffiths' blog
https://www.ibm.com/developerworks/community/blogs/aixpert/entry/local_near_far_memory_part_2_virtual_machine_cpu_memory_lay_out3?lang=en


Logging topas from an unattended script is painful.  But most of the work I fret over should be monitored unattended.

Nigel mentioned in his post that he's seen information similar to 'topas -M' in perfstat API programming.  He's got a great page on that, linked below.  Very promising, but after investing a significant amount of time today I still wasn't able to get what I wanted.  I'll come back to perfstat in the future, I'm sure.  Especially because if I can write something using the perfstat API, at least it won't need root access like most of the kdb work I end up doing.
https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/Power%20Systems/page/Roll-Your-Own-Performance-Tool


So I'll take a look at what perfpmr is doing these days that might be relevant.

Saw this, wanted to make sure I jotted it down before it gets lost in the sea of my life.
 
[root@sasquatch_mtn: /root]
# echo pile | kdb
           START              END <name>
0000000000001000 00000000058A0000 start+000FD8
F00000002FF47600 F00000002FFDF9C8 __ublock+000000
000000002FF22FF4 000000002FF22FF8 environ+000000
000000002FF22FF8 000000002FF22FFC errno+000000
F1000F0A00000000 F1000F0A10000000 pvproc+000000
F1000F0A10000000 F1000F0A18000000 pvthread+000000
read vscsi_scsi_ptrs OK, ptr = 0xF1000000C0208380
(0)> pile
ADDRESS                NAME             cur_total_pages
0xF100010020990800     NLC64            0x00000000000000FC
0xF100010020990900     NLC128           0x0000000000000014
0xF100010020990A00     NLC256           0x0000000000000000
0xF10001002BBD0800     iCache           0x0000000000002080
0xF10001002BBD0900     iCache           0x0000000000002080
0xF10001002BBD0A00     iCache           0x0000000000002080
0xF10001002BBD0B00     iCache           0x0000000000002080
0xF10001002BBD0C00     iCache           0x0000000000002080
0xF10001002BBD0D00     iCache           0x0000000000002080
0xF10001002BBD0E00     iCache           0x0000000000002080
0xF10001002BBD0F00     iCache           0x0000000000002080
0xF10001002BBD0000     iCache           0x0000000000002080
0xF10001002BBD8100     iCache           0x0000000000002080
0xF10001002BBD8200     iCache           0x0000000000002080
0xF10001002BBD8300     iCache           0x0000000000002080
0xF10001002BBD8400     iCache           0x0000000000002080
0xF10001002BBD8500     iCache           0x0000000000002080
0xF10001002BBD8600     iCache           0x0000000000002080
0xF10001002BBD8700     iCache           0x0000000000002080
0xF10001002BBD8800     iCache           0x0000000000002080
0xF10001002BBD8900     iCache           0x0000000000002080
0xF10001002BBD8A00     iCache           0x0000000000002080
0xF10001002BBD8B00     iCache           0x0000000000002080
0xF10001002BBD8C00     iCache           0x0000000000002080
0xF10001002BBD8D00     iCache           0x0000000000002080
0xF10001002BBD8E00     iCache           0x0000000000002080
0xF10001002BBD8F00     iCache           0x0000000000002080
0xF10001002BBD8000     iCache           0x0000000000002080
0xF10001002BCF3100     bmIOBufPile      0x0000000000000000
0xF10001002BCF3200     bmXBufPile       0x0000000000000FEC
0xF10001002BCF3300     j2SnapBufPool    0x0000000000000000
0xF10001002BCF3500     logxFreePile     0x0000000000000004
0xF10001002BCF3600     txLockPile       0x000000000000151C
0xF10001002BCF3700     j2VCBufferPool   0x0000000000000080
0xF10001002BCF3900     j2VCBufferPool   0x0000000000000078
0xF10001002BCF3D00     j2VCBufferPool   0x000000000000007C
0xF100010020990B00     j2VCBufferPool   0x0000000000000078
0xF1000100358AD500     j2VCBufferPool   0x00000000000000EC
0xF1000100358AD600     j2VCBufferPool   0x0000000000000078
0xF1000100358AD700     j2VCBufferPool   0x00000000000000F4
0xF1000100358ADB00     j2VCBufferPool   0x00000000000000F8
0xF1000100E0D28200     j2VCBufferPool   0x0000000000000104
0xF1000100E1BC5500     vmmBufferPool    0x0000000000000800



Yeah... that one I probably won't get to until sometime next year.


# echo "mempsum -psx" | kdb
           START              END <name>
0000000000001000 00000000058A0000 start+000FD8
F00000002FF47600 F00000002FFDF9C8 __ublock+000000
000000002FF22FF4 000000002FF22FF8 environ+000000
000000002FF22FF8 000000002FF22FFC errno+000000
F1000F0A00000000 F1000F0A10000000 pvproc+000000
F1000F0A10000000 F1000F0A18000000 pvthread+000000
read vscsi_scsi_ptrs OK, ptr = 0xF1000000C0208380
(0)> mempsum -psx
MEMP VMP SRAD PSZ NB_PAGES  MEMP%   SYS% LRUPAGES   NUMFRB    NRSVD  PERM%

 000  00   0  ---    7.5GB ------  50.0%    7.5GB    3.5GB    0.0MB  11.3%
 001  00   0  ---    7.5GB ------  49.9%    7.5GB    3.5GB    0.0MB  11.3%  


# echo "lrustate 1" | kdb
           START              END <name>
0000000000001000 00000000058A0000 start+000FD8
F00000002FF47600 F00000002FFDF9C8 __ublock+000000
000000002FF22FF4 000000002FF22FF8 environ+000000
000000002FF22FF8 000000002FF22FFC errno+000000
F1000F0A00000000 F1000F0A10000000 pvproc+000000
F1000F0A10000000 F1000F0A18000000 pvthread+000000
read vscsi_scsi_ptrs OK, ptr = 0xF1000000C0208380
(0)> lrustate 1

LRU State @F1000F0009540840 for mempool 1
> LFBLRU
> not fileonly mode
> first call to vcs (lru_firstvcs)
LRU Start nfr        (lru_start)        : 0000000000000000
mempools first nfr   (lru_firstnfr)     : 0000000000000000
numfrb this mempool  (lru_numfrb)       : 0000000000000000, 0
number of steals     (lru_steals)       : 0000000000000000, 0
page goal to steal   (lru_goal)         : 0000000000000000, 0
npages scanned       (lru_nbscan)       : 0000000000000000, 0
addr, head of cur pass list   (lru_hdr) : 0000000000000000
addr, head of alt pass list  (lru_hdrx) : 0000000000000000
current lru list          (lru_curlist) : 0000000000000000
current lru list object    (lru_curobj) : 0000000000000000
pgs togo p1 cur obj        (lru_p1_pgs) : 0000000000000000, 0
pages left this chunklet (lru_chunklet) : 0000000000000000, 0
scans of start nfr (lru_scan_start_cnt) : 00000000
lru revolutions      (lru_rev)          : 00000000
fault color          (lru_fault_col)    : 00000000, 0 BLUE
nbuckets scanned     (lru_nbucket)      : 00000000
lru mode             (lru_mode)         : 00000000 DEF_MODE
request type         (lru_rq)           : 00000000 LRU_NONE
list type to use     (lru_listidx)      : 00000000 WORKING
page size to find    (lru_psx)          : 0000
MPSS fs'es to skip   (lru_mpss_skip)    : 00000000
MPSS fs'es failed    (lru_mpss_fail)    : 00000000
numperm_global     (lru_numperm_global) : 00000000
global numperm%    (lru_global_numperm) : 00000000  0.0%
perm frames (lru_global_perm_n4kframes) : 0000000000000000
lruable frames   (lru_global_n4kframes) : 0000000000000000
16m mpss type        (lru_16m_type)     : 00 LRU16_IDLE
16m mpss seqn        (lru_16m_seqn)     : 00000000

Thursday, March 20, 2014

Perfmon was collecting stats... what happened next was...

whackadoodle!

I can't think of any other word to describe it.  Let me know if you've ever seen anything like this from perfmon on a physical windows server.

Perfmon was running locally on a 4 socket, six core per socket server with no Hyper-Threading, logging to tsv file in 15 second intervals.  The results were sent to me for review.

The 24 columns below are for "\Processor(x)\% Processor Time" for 0<=x<=23.

See the zany for yourself below. 


Duplicate times.  Missing per-cpu numbers.  Suspect per-cpu CPU busy reported. And suddenly, 10 minutes later, a return to normal.  I'm thinking major fault at CPU level or maybe a significant memory error.  So I'm hoping to dig details or at least a clue from the system or event log.  But, I'm open to any ideas... because other than that I don't have any.  And I usually have lots of ideas, even if they aren't very helpful :-)

This isn't what I was expecting to see at all.  I thought I was just going to see some of the disk IO and memory management issues I normally deal with on SQL Server configs.  Maybe some spinlocks.  Nothing that would completely throw me for a loop.  Maybe I just need more coffee.

**** Updated with Exhibit 1 for some context.  SQL Server is expected to be the only significant consumer of CPU, memory, or IO resources from this physical Windows server.  Note the way that the two extraordinary timeperiods are the two deviations from SQL Server CPU consumption strongly correlating with "total server CPU consumption." ****



cpu0 cpu1 cpu2 cpu3 cpu4 cpu5 cpu6 cpu7 cpu8 cpu9 cpu10 cpu11 cpu12 cpu13 cpu14 cpu15 cpu16 cpu17 cpu18 cpu19 cpu20 cpu21 cpu22 cpu23
16:47:14 20 18 11 6 4 2 20 12 6 2 1 2 13 15 6 38 17 30 12 13 20 16 11 13
16:47:29 19 18 11 4 2 1 19 11 8 2 1 0 14 14 5 39 14 31 13 16 18 16 11 10
16:47:44 20 16 10 5 3 1 19 11 6 2 2 0 13 14 6 36 17 26 11 12 27 14 9 12
16:47:44 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18
16:47:59 21 17 9 4 2 1 22 12 5 3 1 0 13 17 5 37 15 31 17 14 23 12 6 6
16:47:59 100 46 46 46 46 46 100 46 46 46 46 46 100 100 46 46 46 46 46 46 46 46 46 46
16:48:14 22 20 11 4 2 1 7 16 8 34 21 13 12 15 6 39 19 32 13 14 26 12 8 9
16:48:14 44 100 44
16:48:29 19 19 8 6 4 1 21 14 5 1 1 0 11 14 5 39 17 29 12 14 22 13 5 7
16:48:29 46 46 0 0 0 0 0 46 0 0 0 0 0 46 0 0 46 46 46 46 0 100 0 46
16:48:44 20 19 9 5 2 1 19 12 8 4 2 1 12 14 5 40 16 29 14 10 26 13 9 11
16:48:44 44 100
16:48:59 18 17 9 6 3 1 18 15 7 3 1 0 13 14 8 35 14 22 13 12 24 17 8 8
16:48:59 44 44 100 44
16:49:14 20 18 9 6 4 2 22 13 8 3 0 2 11 12 6 37 19 27 12 13 24 14 5 9
16:49:14 46 46 46 46 46 46 46 100 100 46 46 46 46 46 46 100 46 46 46 46 46 46 46 46
16:49:29 18 16 10 5 2 1 20 12 7 2 2 0 11 14 6 38 18 29 14 12 27 15 7 10
16:49:29
16:49:44 20 15 10 5 2 1 19 9 6 2 3 0 12 14 6 39 20 25 13 13 23 14 5 9
16:49:44 0 0 0 0 0 0 0 0 0 0 0 0 46 46 0 0 0 100 0 46 100 0 0 46
16:49:59 20 14 11 7 2 1 20 15 6 1 0 0 12 14 4 40 14 29 12 14 29 17 8 9
16:49:59 44 44 44 44 44 44 100 44 100 100
16:50:14 25 17 11 5 5 3 19 15 8 3 2 1 12 15 8 37 17 28 13 14 26 16 12 11
16:50:14 46 46 100 0 0 0 100 100 0 0 0 0 0 0 0 46 46 46 100 46 0 0 0 46
16:50:29 20 18 6 6 2 1 23 13 4 2 2 0 13 14 7 38 17 28 13 13 27 13 9 9
16:50:29 44 44 44 44 44 44 44 44 44 44 44 44 44 44 44 44 44 44 44 44 44 44 44 44
16:50:44 21 15 10 3 2 2 20 13 7 2 1 0 13 13 3 38 17 36 11 12 25 16 7 12
16:50:44 0 46 0 0 0 0 0 0 0 0 0 0 0 46 0 100 0 46 46 0 46 46 0 0
16:50:59 14 20 12 10 5 1 18 17 9 4 2 1 6 12 1 52 19 45 8 13 32 25 7 10
16:50:59 44 44 100 44 44 44 44 100 44 44 44 44 44 100 44 44 44 100 44 44 100 44 44 44
16:51:14 15 22 16 12 4 2 18 16 12 6 2 2 7 14 2 52 23 46 3 11 25 32 21 17
16:51:14 44 44 100 44 44 100 44 44
16:51:29 12 24 15 10 4 1 19 17 11 3 2 0 7 14 1 52 22 47 4 11 29 26 12 13
16:51:29 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 100 46 0 0 46 46 0 0 46
16:51:44 16 21 15 13 4 2 20 17 10 3 1 0 7 14 1 52 20 47 5 13 28 30 17 14
16:51:44 44 44 100
16:51:59 16 20 14 7 5 1 20 16 10 4 2 0 6 15 2 52 22 46 5 14 27 28 13 11
16:51:59 0 0 46 0 0 0 0 0 0 0 0 0 46 0 0 0 46 46 0 0 46 46 0 46
16:52:14 15 22 13 10 4 2 19 18 11 3 1 0 7 15 2 51 23 49 3 9 31 31 16 13
16:52:14 44 44 44 44 44 44 44 44 44 44 44 44 44 44 44 100 44 44 44 44 44 44 44 100
16:52:29 14 22 14 10 4 2 19 16 11 5 3 0 6 14 2 51 23 48 3 11 24 28 17 13
16:52:29 0 0 46 0 0 0 46 0 0 0 0 0 0 0 0 100 0 0 0 0 0 0 0 0
16:52:44 15 23 13 14 8 2 21 18 10 4 2 0 5 12 1 52 20 50 5 15 27 30 15 15
16:52:59 13 21 14 10 6 2 20 16 9 4 4 2 7 12 2 49 25 50 6 13 20 29 19 12
16:52:59 100 100 100 100 100
16:53:14 14 20 16 13 7 4 18 18 11 4 1 1 5 14 1 49 22 50 4 11 24 32 24 20
16:53:14 100 100 100 100 100
16:53:29 13 22 17 11 7 2 20 17 13 6 3 2 5 13 2 51 20 50 3 11 26 30 18 13
16:53:29 100 100 100 100 100
16:53:44 14 25 14 15 9 5 20 14 7 4 1 0 5 15 2 53 24 48 5 13 25 27 18 20
16:53:44 100 100 100 100 100 100 100
16:53:59 12 23 13 13 8 4 20 18 14 6 2 0 4 14 2 51 22 50 5 12 23 31 17 13
16:53:59 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100
16:54:14 19 23 14 9 6 3 20 16 10 6 2 0 7 13 1 50 21 48 6 17 28 28 16 18
16:54:14 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100
16:54:29 14 20 16 10 5 2 19 19 11 5 1 0 6 13 1 53 23 44 4 10 25 32 15 14
16:54:29 100 100 100
16:54:44 15 22 13 9 4 1 19 17 13 7 1 1 8 15 2 51 23 48 7 12 29 30 15 15
16:54:44 100 100
16:54:59 16 23 13 7 3 1 17 16 13 11 3 0 7 14 1 51 24 47 4 10 22 32 15 19
16:54:59 100 100
16:55:14 19 22 19 12 9 8 20 17 12 8 3 1 7 15 2 54 25 45 3 11 25 34 27 17
16:55:14 100 100 100
16:55:29 12 21 16 10 3 1 17 18 15 6 2 1 7 15 2 53 22 48 3 8 24 33 20 20
16:55:29 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100
16:55:44 16 21 14 8 4 2 19 17 13 6 1 1 6 14 4 52 19 50 5 8 28 27 17 18
16:55:44 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100
16:55:59 14 19 15 12 4 1 17 17 13 6 3 1 5 15 2 52 19 48 2 9 24 34 24 21
16:55:59 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100
16:56:14 13 20 17 13 5 3 19 17 13 4 1 1 8 13 2 53 23 47 3 10 28 33 20 14
16:56:14 100 100 100
16:56:29 17 21 13 6 3 1 19 17 9 3 1 0 6 16 2 50 23 45 3 11 29 29 15 12
16:56:29 100 100 100 100 100
16:56:44 17 22 16 8 4 1 17 16 13 4 2 0 9 16 2 53 20 47 4 9 24 29 12 15
16:56:44 100 100
16:56:59 12 21 13 8 5 1 17 19 10 6 2 1 6 13 2 50 21 49 6 13 29 24 9 9
16:56:59 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100
16:57:14 15 21 15 10 4 2 18 17 10 5 2 0 5 16 2 56 19 45 6 12 27 26 13 13
16:57:14 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100
16:57:29 12 21 16 12 5 1 20 18 10 4 1 0 6 15 1 53 24 47 4 9 28 31 16 17
16:57:29 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100
16:57:44 16 22 15 9 5 2 18 19 9 5 1 1 7 16 1 52 23 44 3 10 25 31 17 19
16:57:59 13 21 16 9 5 2 21 15 9 4 1 1 5 12 2 49 24 50 4 9 29 31 18 16
1:53:14 21 25 22 3 5 4 14 3 2 4 4 7 45 3 20 18 63 16 12 10 17 10 23 10
1:53:29 20 15 4 2 2 2 7 2 2 3 0 0 22 12 14 6 45 12 14 5 18 5 2 3
1:53:44 16 34 25 19 25 19 28 20 7 21 9 4 19 30 44 15 45 25 11 15 16 29 21 8
1:53:59 15 6 2 0 0 0 2 1 0 0 0 0 30 8 9 13 39 10 12 4 16 0 1 0
1:53:59 46 46 46 46 46 46 46 46 46 46 46 46 46 46 46 100 100 46 46 46 46 46 46 46
1:54:14 17 5 1 1 2 0 6 3 0 0 0 0 10 15 27 23 67 11 14 6 14 2 2 4
1:54:14 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 100 0 0 0 100 0 0 100
1:54:29 14 6 2 1 0 1 4 1 0 0 0 0 38 15 12 5 28 8 19 4 10 2 1 1
1:54:29 44 44 44 44 44 44 44 44 44 44 44 44 44 100 44 44 44 44 44 44 44 44 44 44
1:54:44 10 6 1 8 1 0 2 1 0 1 0 0 28 5 10 6 33 3 9 4 16 1 0 0
1:54:44 0 0 0 0 0 0 0 0 0 0 0 0 46 0 0 0 0 0 0 0 0 0 0 0
1:54:59 24 3 5 2 7 0 3 0 0 0 0 0 27 17 16 14 28 2 14 9 14 7 3 0
1:54:59 44 100 100 100 44 44
1:55:14 17 3 12 3 22 0 1 0 0 0 0 0 28 24 18 2 18 1 11 3 16 2 2 4
1:55:14 0 0 0 0 0 0 46 0 0 0 0 0 100 0 0 0 0 0 0 0 0 0 0 0
1:55:29 11 4 2 3 4 1 5 1 4 4 2 0 37 15 16 6 29 4 10 12 19 9 11 4
1:55:29 44 100 100
1:55:44 9 2 1 0 0 0 4 1 1 5 0 0 31 9 11 1 34 0 9 2 12 7 1 1
1:55:44 100 46 46 46 46 46 46 46 46 46 46 46 46 46 100 46 46 46 100 46 46 46 46 46
1:55:59 15 7 6 5 6 1 17 3 5 8 0 0 12 30 15 7 61 16 6 16 22 46 10 12
1:55:59 100 100 100
1:56:14 20 12 21 8 15 13 18 6 30 23 4 2 19 32 28 22 52 27 21 18 26 20 22 9
1:56:14 46 0 0 0 0 0 46 0 0 0 0 0 0 46 0 0 100 0 0 46 100 0 0 46
1:56:29 24 16 19 17 15 8 16 15 10 12 2 2 10 34 23 18 61 25 9 19 33 10 15 7
1:56:29 0 0 46 0 0 0 0 100 0 0 0 0 0 0 0 0 100 100 0 46 0 100 0 0
1:56:44 26 5 8 5 6 0 15 29 5 5 1 0 17 18 31 6 58 7 13 11 16 6 0 3
1:56:44 100 2 2 2 2 2 2 2 2 2 2 2 2 2 100 2 100 2 2 2 100 2 2 2
1:56:59 22 12 21 5 6 9 7 0 0 4 0 0 7 13 52 10 62 18 15 13 14 4 3 4
1:56:59 46 46 46 46 46 46 46 46 46 46 46 46 46 46 46 46 100 46 46 100 46 46 46 46
1:57:14 26 5 7 1 0 0 9 1 0 4 0 0 10 14 17 14 62 10 10 11 16 5 3 8
1:57:14 100 44 100 100 44
1:57:29 38 16 13 3 10 2 14 7 6 11 1 0 8 18 16 8 65 38 16 15 16 3 1 2
1:57:29 46 46 46 100 46 46 46 46 46 46 46 46 46 46 46 46 100 46 46 100 100 46 46 46
1:57:44 30 10 9 2 9 0 9 4 0 8 0 0 31 13 6 7 42 2 9 14 16 1 1 2
1:57:44 44 44 100 100
1:57:59 23 26 9 10 10 12 28 2 9 4 3 2 32 7 18 6 46 20 13 20 17 12 7 4
1:57:59 0 0 0 0 0 0 0 0 0 0 0 0 0 0 48 0 100 0 0 0 100 0 100 48
1:58:14 27 29 16 22 20 24 33 12 20 11 7 4 26 21 26 26 60 29 32 20 32 9 23 10
1:58:14 44 44 100 44 44 44 44 100 44 100 44 44 44 100 44 44 44 44 44 44 44 44 44 44
1:58:29 27 8 10 1 3 1 29 3 1 7 1 1 2 24 17 29 55 26 17 5 15 3 7 4
1:58:29 0 0 0 0 0 0 100 0 0 0 0 0 0 0 46 0 46 46 46 0 0 0 0 0
1:58:44 25 31 19 13 17 19 23 26 15 25 15 11 27 25 16 44 51 27 21 15 35 8 14 13
1:58:44 38 38 38 38 100 100 38 38 100 38 38 38 38 38 38 38 100 38 38 38 100 38 38 38
1:58:59 16 13 16 1 5 1 42 4 5 6 0 3 24 22 18 12 52 6 45 7 16 2 4 9
1:58:59 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100
1:59:14 21 9 5 1 1 0 44 6 2 15 2 0 13 12 9 12 55 13 16 6 18 4 5 5
1:59:14 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100
1:59:29 16 4 7 1 1 0 20 3 4 5 0 6 19 7 6 16 65 1 15 5 16 6 2 2
1:59:29 100 100
1:59:44 22 7 4 1 0 0 27 4 2 4 4 1 23 10 8 3 61 5 14 5 16 2 2 4
1:59:44 100 100
1:59:59 20 14 5 1 1 0 30 1 0 2 0 0 18 10 19 6 61 5 18 2 11 1 2 1
1:59:59 100 100 100
2:00:14 28 8 16 12 11 12 29 20 16 19 3 5 24 26 16 37 56 16 13 11 33 11 16 29
2:00:14 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100
2:00:29 7 5 37 35 34 40 49 57 55 57 7 13 6 11 7 59 72 3 13 15 24 5 4 4
2:00:29 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100
2:00:44 5 7 38 44 39 44 29 66 63 62 3 4 11 3 6 54 29 2 20 11 15 3 58 8
2:00:44 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100
2:00:59 12 23 43 35 19 19 32 53 50 47 8 8 14 17 9 37 14 3 26 12 20 5 9 5
2:00:59 100 100 100 100 100 100 100
2:01:14 21 32 13 6 6 22 27 7 32 7 22 23 7 42 29 9 3 0 7 28 35 9 15 9
2:01:14 100
2:01:29 23 7 5 1 1 0 24 1 10 0 0 0 5 1 0 0 0 0 8 3 15 8 1 2
2:01:44 28 12 2 0 2 0 3 1 0 0 0 0 5 1 0 2 0 0 8 3 9 2 0 1
2:01:44 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100
2:01:59 21 6 3 0 2 0 4 0 0 0 0 0 6 1 0 0 0 0 15 4 22 8 13 2
2:01:59 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100
2:02:14 10 2 1 1 0 0 1 0 0 0 0 0 2 1 0 0 0 0 10 2 14 2 3 3
2:02:14 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100
2:02:29 6 16 18 7 14 19 12 1 19 13 0 12 3 14 17 5 12 14 7 16 24 19 51 14
2:02:29
2:02:44 7 4 0 1 0 1 2 0 0 0 0 0 4 3 1 0 0 0 16 12 16 13 24 6
2:02:44 100
2:02:59 5 7 2 1 0 0 3 1 0 1 0 0 6 2 0 3 0 0 18 7 11 2 6 6
2:02:59 100
2:03:14 6 0 0 0 0 0 1 0 0 0 0 0 4 2 0 0 0 0 22 15 16 2 13 2
2:03:14 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100
2:03:29 5 1 0 0 0 0 3 1 0 0 0 0 4 1 0 0 0 0 22 13 18 2 8 3
2:03:29 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100
2:03:44 16 11 3 1 6 0 6 1 6 0 0 6 15 2 3 5 0 0 14 27 33 3 24 14
2:03:44 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100
2:03:59 19 3 1 0 0 0 12 0 0 0 0 0 14 3 0 0 0 0 1 50 39 4 45 3
2:04:14 17 7 11 2 1 2 20 7 4 1 2 2 19 5 10 6 1 1 5 38 30 8 80 17
2:04:29 2 1 43 2 3 0 2 19 0 9 15 13 10 7 8 0 0 0 6 1 28 2 8 5
2:04:44 9 18 11 5 2 2 12 2 7 9 2 6 5 11 22 11 11 7 8 12 29 5 7 13
2:04:59 34 14 5 19 11 3 10 28 16 55 24 28 31 30 31 8 13 11 19 8 23 27 25 6