It's been a while since I looked at 'dbcc dbtable' lc_state values to see how close a database is to experiencing log buffer waits. I wrote a stored procedure to grab the info at 5-minute intervals, investigating the effects of synchronous SAN replication for transaction log LUNs. But that was back in January 2014, and the instance was SQL Server 2008 R2.
Today I was poking around in SQL Server 2016 and noticed that the lc_state values are friendlier! I'll capture all 7 of the new(er) text values and then write a whole new blog post about this. A colleague who was working with my January 2014 SP to capture this information told me that even in SQL Server 2012, lc_state was reported as a text value rather than a numeric 0-6. Oh well.
Don't pay too much attention :) I'm just trying to work this out. I've mentioned a high write latency SQL Server instance before; two LUNs on that instance stay pretty much pegged at a "current disk queue length" of 32.
You can see some of that fun here.
I was able to find a couple of sources on the web that mention both the maximum of 32 outstanding async writes per txlog and the higher total number of in-memory log buffers (128).
Paul Randal was nice enough to tell me that I could check on free txlog buffers with 'dbcc dbtable(db_name()) with tableresults'. So I put together a stored procedure to grab the tableresults and count the rows with Field = 'lc_state', grouped by VALUE. Now I've got more questions :)
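Here's a minimal sketch of that capture. Fair warning: dbcc dbtable is undocumented, and I'm assuming the usual 'with tableresults' column names (ParentObject, Object, Field, VALUE), so treat this as exploratory only.

```sql
-- Sketch only: dbcc dbtable is undocumented; column names assumed from
-- the typical 'with tableresults' shape (ParentObject, Object, Field, VALUE).
CREATE TABLE #dbtable
( ParentObject varchar(255),
  Object       varchar(255),
  Field        varchar(255),
  VALUE        varchar(255) );

DECLARE @cmd nvarchar(200) =
    N'dbcc dbtable (' + QUOTENAME(DB_NAME(), '''') + N') with tableresults, no_infomsgs';

INSERT INTO #dbtable
EXEC (@cmd);

-- One row per lc_state value, with a count of txlog buffers in that state
SELECT VALUE AS lc_state, COUNT(*) AS buffer_count
FROM   #dbtable
WHERE  Field = 'lc_state'
GROUP  BY VALUE
ORDER  BY VALUE;

DROP TABLE #dbtable;
```

My SP just wraps something like this and logs the counts to a table every 5 minutes.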
These captures are from SQL Server 2008 R2. Here's what I think is happening.
lc_state=0 is a zeroed, free buffer, ready to become the current buffer. Under pressure there may be no state 0 buffer at all. But in a database idle since SQL Server startup, I've seen 127 buffers in state 0 and a single buffer in state 1. At startup all buffers are initialized and zeroed... but once a buffer has been written (lc_state=5) it is marked free (lc_state=6) and only zeroed again on demand, one at a time. That's similar to how Windows and other operating systems handle RAM: a page goes onto a "zeroable" list, and is then zeroed, before being allocated to another use.
lc_state=1 is the buffer currently accepting txlog records. In one of my 5-minute interval captures there was no buffer with lc_state=1. In another capture, with the txlog buffers under pressure (at approximately 5:55 am), there were two buffers with lc_state=1. It would make sense that under pressure there may be no "current buffer", and that in a race condition dbcc dbtable may report two current buffers (i.e. a 'nolock'-type result?).
lc_state=2 buffers are full. At the next trigger they'll be queued to be written, and the log writer will write them as soon as it can.
lc_state=5 buffers have writes currently outstanding. At a trigger (such as a commit or checkpoint) the log writer will issue up to 32 buffer writes asynchronously. Due to the total limit of 3840KB of outstanding writes for the log writer, fewer than 32 writes may be issued even if 32 or more buffers are ready to be written. Or there may simply be fewer than 32 buffers ready for write when the log writer trigger (commit, checkpoint, etc.) occurs.
lc_state=6 is write complete: free, but not yet zeroed. These are only zeroed on demand; that's why, once the database is active, there's never more than one buffer in lc_state=0 (free, zeroed, ready to become current).
So I don't have a clue yet about lc_state=3 or lc_state=4.
If I've correctly deduced the meaning of the txlog buffer states, and the "on demand" zeroing of one buffer at a time from the "zeroable" list of previously written buffers, then zeroing only one buffer at a time could be a bottleneck during heavy (optimized) ETL or other high change rate activity.
In this case, I think I may also have to monitor the 3840KB total limit for log writer async writes. There are two busy user databases, tempdb is getting some activity... and I think there's one more database generating some activity. So at some times, the 3840KB total txlog async write limit may also be throttling activity.
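Some back-of-the-envelope arithmetic on why that instance-wide limit could bite here. This assumes the commonly cited 60KB maximum log buffer size; actual buffer sizes vary up to that limit, so these are ceilings, not measurements.

```sql
-- Rough arithmetic only; assumes the commonly cited 60KB max log buffer size.
SELECT 32 * 60     AS max_outstanding_kb_one_db,  -- 1920KB if one database's 32 in-flight buffers are all full-size
       3840 / 60   AS full_size_buffers_at_limit, -- 64 full-size buffers exhaust the instance-wide 3840KB limit
       3840 / 1920 AS busy_dbs_to_hit_limit;      -- two databases at their per-log max could saturate it
```

So with two busy user databases plus tempdb and one more active database, the instance-wide cap is plausibly in play even when no single database hits its own 32-write limit.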