Tuesday, April 27, 2021

SQL Server: Perfmon active parallel threads vs active requests vs DOP

This question came across my desk today, and I sent an answer I hope was helpful.  Honestly, the second part of the answer, still forthcoming, might be closer to what the question was after.  But first I'll try to prevent wrong conclusions/evaluations; later I'll give some ideas on how to evaluate in an actionable way.


~~~~

How does the perfmon active parallel threads counter relate to the DOP settings? I guess I assumed it would be something like active threads x DOP = active parallel threads but that doesn’t seem to be right.

~~~~

My lengthy but not (yet) graph-y response 😊

The short answer: there isn’t a direct, predictable relationship between active parallel threads and workload group DOP, or between active parallel threads and active requests at the workload group level (or even between the sum of all workload groups' active parallel threads in a resource pool and active memory grants in the pool – that relationship is tighter but still unpredictable*).  I often add those measures for a workload group (as a stacked graph) and put CPU % for the workload group as a line graph on the other Y axis for trending.

When a parallel plan is prepared, it may contain multiple zones or branches which can operate independently after they branch off and until they are combined. So when the plan is selected, a reservation for DOP * branches parallel workers is made (the session already has an execution context ID 0 worker, which compiles the plan and will do all the work for DOP 1 queries).  If the current outstanding reservations, plus all execution context ID 0 workers (whether coordinators for parallel queries, DOP 1 queries, or sessions still compiling a plan), *plus* the sought reservation exceed [Max Worker Threads Count] for the instance, DOP will be adjusted downward until the reservation fits.  If a specific memory grant is necessary to support the DOP and that grant is unavailable, DOP could be downgraded for memory reasons, too. (I’ve only seen that happen for CCI inserts, but I guess it could happen elsewhere.)  Worker thread pressure can result in parallel queries getting downgraded all the way to DOP 1.  (In sys.query_store_runtime_stats you can see last_dop = 1 while the linked sys.query_store_plan row has is_parallel_plan = 1.)
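As a rough back-of-envelope sketch of the downgrade arithmetic described above (this is *not* the actual engine algorithm, and the function and parameter names are made up for illustration):

```python
# Hypothetical sketch: a parallel plan reserves (DOP * branches) workers,
# and DOP gets walked down until the reservation fits under the instance's
# [Max Worker Threads Count].  The real engine's downgrade logic is not
# public; this only illustrates the arithmetic.
def downgraded_dop(requested_dop, branches, workers_in_use, max_worker_threads):
    dop = requested_dop
    while dop > 1:
        reservation = dop * branches
        if workers_in_use + reservation <= max_worker_threads:
            return dop
        dop -= 1
    return 1  # downgraded all the way to a serial plan

# A DOP 8 plan with 25 branches wants 200 workers.  With 300 workers
# already committed against a 576-thread cap, it fits at DOP 8:
print(downgraded_dop(8, 25, 300, 576))  # -> 8
# With 500 workers already committed, it gets squeezed down to DOP 3:
print(downgraded_dop(8, 25, 500, 576))  # -> 3
```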

I’ve seen  a single ***!!REDACTED!!*** query at DOP 8 reserve 200 parallel workers or more!

Now, the trick with parallel workers: no matter how many branches are in the plan and how many total parallel workers there are, they will *all* go on the same SQL Server schedulers (so all on the same vcpus), and the count of those schedulers/vcpus will be equal to the DOP. (The execution context ID 0 thread can co-locate with parallel threads for the same query, or not.  Even if all px threads for a query are within a given autosoftNUMA node or SQLOS memory node, the execution context ID 0 thread can be elsewhere.)

So DOP doesn’t directly govern how many workers a parallel plan will reserve.  But it does determine how many vcpus the workers for that query will occupy.  The parallel workers for a DOP 8 query on a 16 vcpu system cannot get the VM to more than 50% CPU busy no matter how many of them there are, because they will be on no more than 8 of the 16 vcpus.
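That ceiling is simple arithmetic; here it is as a one-liner (a hypothetical helper, just to make the relationship concrete):

```python
# Parallel workers for one query occupy at most DOP schedulers, so one
# query's CPU ceiling is DOP / vcpus -- regardless of how many parallel
# workers were reserved or are active.
def max_cpu_pct(dop, vcpus):
    return 100.0 * min(dop, vcpus) / vcpus

print(max_cpu_pct(8, 16))   # -> 50.0  (the DOP 8 on 16 vcpus case above)
print(max_cpu_pct(16, 16))  # -> 100.0
```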

OK, final trick with this: the initial parallel worker thread reservation is similar to a memory grant in that it’s based on the optimizer's initial understanding and estimates of the plan, while “active parallel worker threads” is determined by current runtime activity.

It’s possible (even likely, really) that a query which reserves 200 parallel worker threads doesn’t actually use them all.  If one branch finishes before another branch starts, those workers might be reused. So the reservation may be higher than “active parallel worker threads” in perfmon ever gets.

All of these details can be seen in sys.dm_exec_query_memory_grants. AFAIK every parallel query will have a memory grant. In that DMV you can see reserved parallel threads, active threads at the time, and the maximum number of parallel threads used by the query up to that point in time.

I’ll create another post about tuning DOP based on perfmon measures later today.

Some additional details from Pedro Lopes of Microsoft in the 2020 July 7 post linked below.

What is MaxDOP controlling? - Microsoft Tech Community

*The exceptions to the unpredictability:

- if a workload is composed solely of queries whose plans contain only batch mode operators. Then each query will have DOP active parallel workers + 1 execution context ID 0 worker (or occasionally a single active worker, for certain plan operators which may force that).

- if the workload is composed entirely of parallel checkdb/checktable work. In this case the max active parallel workers will be 2*DOP (and the session still has an execution context ID 0 thread).  Beware that as of SQL Server 2019 CU9, large scale parallel CHECKDB/CHECKTABLE operations spend a considerable, irreducible amount of time effectively operating at DOP 1.

Monday, April 26, 2021

A Very Circuitous Answer to a Question about #SQLServer PLE

Usually if I use PLE at all, rather than using PLE as a first-tier indicator of an issue, I use it to confirm issues I’ve detected or suspected based on other evidence.  Below I give some of the reasons.  And ideas to look at rather than PLE.

~~

If the vm has multiple vNUMA nodes, in addition to the [\NUMA Node Memory(*)\*] counters which describe memory utilization at the vNUMA node level, you’ll see [\SQLServer:Buffer Node(*)\*] and [SQLServer:Memory Node(*)\*] counters to describe activity within the SQLOS memory nodes (unless an instance has trace flag 8015 enabled, which disables #sqlserver NUMA detection).

From the Buffer Nodes counters, the counter I use most frequently is [\SQLServer:Buffer Node(*)\Page life expectancy].

[\SQLServer:Buffer Manager\Page life expectancy] is equal to [\SQLServer:Buffer Node(000)\Page life expectancy] if there is only 1 vNUMA node (or if NUMA detection is disabled with T8015).  Otherwise, [\SQLServer:Buffer Manager\Page life expectancy] is equal to the harmonic mean of the [\SQLServer:Buffer Node(*)\Page life expectancy] values.

Here’s a blog post from 4 ½ years ago where I try to derive overall PLE as harmonic mean of node PLEs. Plus you get to see what my graphs looked like 4 ½ years ago 😊

Harmonic mean of SQLOS Buffer Node PLEs on NUMA servers 

http://sql-sasquatch.blogspot.com/2016/10/harmonic-mean-of-sqlos-buffer-node-ples.html

The main thing is, cache churn on a single vNUMA node can have a pretty significant impact on instance-wide PLE.
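A quick sketch of that harmonic-mean behavior, with made-up node PLE values, to show how badly one churning node drags down the instance-wide number:

```python
# Overall PLE as the harmonic mean of per-node PLEs (the relationship
# described above).  The harmonic mean is dominated by the smallest value.
def overall_ple(node_ples):
    return len(node_ples) / sum(1.0 / p for p in node_ples)

# Four nodes, all healthy:
print(overall_ple([10000, 10000, 10000, 10000]))  # -> 10000.0
# Three healthy nodes plus one churning node: the simple average would be
# 7525, but the harmonic mean crashes below 400.
print(overall_ple([10000, 10000, 10000, 100]))
```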

And the formula for determining PLE at the SQLOS memory node/vNUMA node level isn’t public.

But, let’s think through the things that would have to go into a page life expectancy calculation.

First would be size of the database cache.  The larger the database cache with a fixed workload, the longer we’d expect a given page to last in database cache.

Then we’d also need a database page insertion rate.  Database pages can be inserted due to page reads, or due to the first row insert to a previously empty page.  Improving performance of a given workload means increasing the pace of page reads, or increasing the pace of first row inserts, or both.  For a given database cache size, increasing the pace of work decreases PLE; decreasing the pace of work increases PLE.  That’s about as short an explanation as I can provide for why PLE isn’t a direct measure of performance or resource utilization.
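The real PLE formula isn't public, but the naive steady-state model implied by the paragraph above can be sketched like this (pure illustration, not SQL Server's actual calculation):

```python
# Naive steady-state model: with a fixed cache size and a fixed page
# insertion rate, an average page survives cache_pages / inserts_per_sec
# seconds.  Doubling the pace of work halves this naive PLE even though
# nothing is "wrong" -- the server is simply doing more.
def naive_ple_seconds(cache_size_pages, inserts_per_second):
    return cache_size_pages / inserts_per_second

print(naive_ple_seconds(1_000_000, 500))   # -> 2000.0 seconds
print(naive_ple_seconds(1_000_000, 1000))  # -> 1000.0 seconds, same cache
```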

Then there’s the change in database cache size.  The database cache isn’t a fixed size.  Once it has crossed a minimum size of 2% of [Target Node Memory] on each node, it won’t shrink below that.  And SQL Server tries to keep it at least 20% of [Target Server Memory].  But for a given workload, the larger the database cache, the higher the PLE.

Now, the formula for PLE isn’t publicly documented, so this is speculation on my part.  But I believe that not only are observed values for sizes and rates used to calculate PLE; I believe “velocity”, or rate of change, is used as well.  I’m pretty sure PLE is not only adaptive but predictive.

OK. Well, if I don’t look at PLE until I look at other stuff, what other stuff would I look at first?

I’d start with a stacked graph of [Database Cache] + [Free Memory] + [Stolen Memory].  Put in [Target Memory], too, as a line graph.

I’ll use some graphs from some work last year.

Especially if there is no use of paging space, I’d move on to memory grants. (If there is use of paging space, like our colleague noticed, it can be valuable to investigate whether MSM is too high, or whether another memory-consuming app is on the VM.  The memory consumer could even be a SQL Server backup being sent to a backup manager like Data Domain, Commvault, etc., if using a method that doesn’t bypass file cache – aka not using unbuffered IO. Let's talk 'bout paging space another day.)

Let’s add in CPU for the resource pool.

The part in the blue box looks a bit suspicious. It’s got fewer active memory grants than other “busy” times.  But CPU utilization is higher.

 

Huh. If we look at overall SQL Server memory and this time bring in PLE as a line graph, that period looks pretty exceptional, too. High CPU, and growing PLE.

 

Oh. A huge amount of backup traffic.  (SQL Server backups can use a lot of CPU if they are compressed or encrypted.)   So backup traffic, some active memory grants, some pending memory grants.

And if I look at granted memory vs the maximum allowed workspace memory, I see the dynamic that is leading to pending grants.


Max Server Memory on this instance is 960 GB.  But memory grants for a given resource pool are not calculated directly against MSM.  Rather, they are calculated against the maximum target for the relevant resource semaphore for the resource pool.  In this SQL Server instance, no resource pools have been added beyond default (which contains all of the user workload sessions) and the internal resource pool, so the graph above works out and makes sense.  (If there are multiple non-internal resource pools – e.g. default + ORM_unrealistic – that simultaneously have memory grants, a graph like this might not make sense, and info from the DMVs may be needed, because [Reserved Memory] in perfmon is global across all resource pools.)

Notice how when [Reserved Memory] is layered in front of [Granted Memory], only a small portion of [Granted Memory] is visible at the top.  Looking across all the queries and workload groups, as a whole, the queries are asking for wayyy more workspace memory than they need.  Lots of room for tuning.

Now there’s one more unusual thing about the area in the blue box.  That’s a lot of room between the granted memory and the maximum workspace memory. Why are there still so many pending memory grants for so long?  It has to do with what query is at the front of the waiting line, and how big of a grant it wants.

At least one of the workload groups at this time still had a 25% maximum grant per query.  But that difference between max workspace memory and granted memory is way more than 25%, isn’t it?

Here’s a rule about memory grants that matters when grants start getting close to the max: for a grant to succeed, 1.5x the grant must be available for granting.  That way, at least half the size of the most recent grant is still available for grants afterward.  It’s a way to make sure a small number of very grant-hungry queries don’t permanently squeeze out queries that request smaller grants.
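The rule can be sketched as a tiny check (a hypothetical helper, not an actual engine function), which also shows why a big waiter at the front of the queue can leave a wide gap below max workspace memory:

```python
# The 1.5x headroom rule described above: a grant of size g succeeds only
# if 1.5 * g is still available, which guarantees at least 0.5 * g remains
# available for later, smaller requests.
def grant_can_succeed(requested_gb, available_gb):
    return available_gb >= 1.5 * requested_gb

# A query wanting a 100 GB grant needs 150 GB free -- so it can sit
# pending even while 149 GB (well over its own request) looks "unused":
print(grant_can_succeed(100, 150))  # -> True
print(grant_can_succeed(100, 149))  # -> False
```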

OK, that’s probably more than anyone needed to see today 😊

Just remember – PLE by itself doesn’t tell me much of what’s going on in SQL Server.

In the box below PLE is constantly rising.  But the workloads suffer from the same performance problem inside the box as outside the box – really long resource_semaphore waits for memory grants.