This post is just a teaser; maybe I'll be able to add more details later. An Oracle 11.2.0.3 database instance I've been watching, with a 7.5 gb SGA target, was performing over 2 million physical reads in a 6 hour period against a single table that was only ~11,000 8k blocks - about 100 mb. Yikes!
An important detail: this instance used three different database block sizes - 8k, 16k, and 32k. Oracle database allows multiple database block sizes within a single database instance. That strategy is tricky, and most folks will warn against it: you have to manage a separate database cache for each block size, with priority and churn rate in mind.
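For reference, each non-default block size needs its own explicitly sized buffer cache. A minimal sketch using the real parameter names (the sizes here are illustrative, not this instance's actual settings; which parameter serves the 8k blocks depends on the instance's default block size):

-- cache for the default block size
ALTER SYSTEM SET db_cache_size = 2G;
-- separate caches required for each non-default block size in use
ALTER SYSTEM SET db_16k_cache_size = 3584M;
ALTER SYSTEM SET db_32k_cache_size = 1G;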
It gets even trickier. Oracle 11gR2 plans are by default biased toward full table scans (FTS) and fast full index scans (FFS). And the execution engine decides at run time whether a FTS or FFS will read into PGA or SGA.
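One way to see which path the scans are taking is the wait event profile: buffered multiblock reads show up as 'db file scattered read' waits, while reads into PGA show up as 'direct path read' waits. A sketch at the instance level (session- or SQL-level filtering left out):

SELECT event, total_waits, time_waited
FROM   v$system_event
WHERE  event IN ('db file scattered read', 'direct path read');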
Consider that multiple concurrent queries each performing the exact same FFS/FTS against concurrent IO (cio) files will EACH perform physical reads for the scan into PGA. In the case I'm thinking of, the table was only ~11,000 8k blocks - about 100 mb. With an mbrc of 128, about 100 reads should be enough to get it into one query's PGA, right? Well... unless the table is fairly fragmented... within a very fragmented file. The system I observed required far more than 100 reads for each full table scan of this table.
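To gauge that kind of fragmentation, the extent map is a reasonable first look - a segment scattered across many small extents can't be swept up in tidy 128-block multiblock reads. A sketch, with hypothetical owner and table names:

SELECT COUNT(*)    AS extent_count,
       SUM(blocks) AS total_blocks,
       MIN(blocks) AS smallest_extent_blocks,
       MAX(blocks) AS largest_extent_blocks
FROM   dba_extents
WHERE  owner = 'APP'             -- hypothetical owner
AND    segment_name = 'HOT_TAB'; -- hypothetical table name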
And the system did a lot of full table scans requiring physical reads. There's a hidden parameter, _small_table_threshold, that the execution engine considers when deciding between a direct path read into PGA and a buffered read into the SGA database cache for a FTS/FFS. By default, _small_table_threshold is 2% of the database cache. Multiply that by 5 to get the effective cutoff - by default, 10% of the database cache size. Only tables larger than that can qualify for FFS/FTS direct path read (DPR) to PGA.
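Being a hidden parameter, _small_table_threshold doesn't show up in v$parameter; the usual way to check it is against the x$ structures, as SYSDBA. A sketch (the value is expressed in blocks):

SELECT n.ksppinm  AS parameter,
       v.ksppstvl AS value
FROM   x$ksppi n, x$ksppcv v
WHERE  n.indx = v.indx
AND    n.ksppinm = '_small_table_threshold';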
But the table was only ~100 mb, and the instance had a 7.5 gb SGA target. Surely a 100 mb table would fit in the database cache, and the frequency of access would keep it there - so logical IO would be high but physical IO would be low.
Well... except for the multiple database block sizes and the multiple database caches, one per block size. As I continued observing, on one call I noted that I was in personally uncharted territory... I wasn't certain, but I suspected Oracle would use ~10% of the cache for the relevant block size as the threshold for DPR=>PGA eligibility. The kicker: the 100 mb table was made of 8k blocks, and the 8k cache was only 500 mb. So the 100 mb table qualified for DPR=>PGA!
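If that suspicion is right, the rough arithmetic works out like this: a 500 mb 8k cache is 64,000 blocks; 2% of that is 1,280 blocks; times 5 is 6,400 blocks, or about 50 mb - and the ~11,000 block table sails right past it. The per-block-size cache sizes are easy to check; v$buffer_pool reports current_size in megabytes:

SELECT name, block_size, current_size
FROM   v$buffer_pool;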
We'll introduce a performance intervention over the weekend - move the table object and its single index (the pk index) from an 8k tablespace to a 16k tablespace. The 16k database cache is ~3.5 gb, so the 100 mb table will be well under the ~350 mb DPR=>PGA qualifying threshold. I expect physical reads against that table to decrease from over 2 million in that 6 hour period to a couple thousand. If that works out, it'll build momentum for standardizing this instance on a single block size (among a number of other interventions). :)
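The planned change is a standard move-and-rebuild; a sketch with hypothetical object and tablespace names (moving a table invalidates its indexes, hence the rebuild):

ALTER TABLE app.hot_tab MOVE TABLESPACE ts_16k;       -- hypothetical names
ALTER INDEX app.hot_tab_pk REBUILD TABLESPACE ts_16k; -- pk index follows the table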
Sorry I couldn't include lots of cool stuff in here, like specific numbers, charts, graphs, or a repro/demo example.
But I know that lots of systems are struggling under increased physical read rates after transitioning to Oracle 11gR2. Sometimes the struggle begins only after a specific table grows past a given size... and then it can be a real puzzle to untangle. This might be an important piece of your puzzle, so I wanted to get the word out.
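If you suspect a single table is behind your physical read rate, segment-level statistics are a good first stop - direct path reads show up there under 'physical reads direct'. A sketch with hypothetical names:

SELECT statistic_name, value
FROM   v$segment_statistics
WHERE  owner = 'APP'             -- hypothetical owner
AND    object_name = 'HOT_TAB'   -- hypothetical table name
AND    statistic_name IN ('physical reads', 'physical reads direct');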