Saturday, April 13, 2013

All SQL Server database files on a single LUN

OK... I admit, I'm horrible at the 'making a nice, readable blog thing'. But, it doesn't have to be pretty.

{*UPDATE* 4/18/2013 --Well, if I knew you were comin', I'da baked a cake and cleaned up a bit 'round here :)  Honestly, this post was mainly to see if I could figure out how to post a graph from a spreadsheet.  I got more :)  In fact, I have about 5 more graphs from the same data to illustrate additional aspects of the "too much on one LUN" condition.  I'll put all 6 graphs together in a follow-up post. --Sasquatch}

I've read SQL Server test results from storage vendors with all database files on a single LUN, or even a single database file on a single LUN. There are consequences to doing this, even if the single LUN is solid state storage. Regardless of the ultra-low latency of the underlying media, there are various elements in the storage architecture that have maximum queue depth, maximum bandwidth... or sometimes just get tired :)

SSD is outside the scope of this particular post, though. The graph below represents 24 hours of 15-second perfmon samples from a fairly busy SQL Server system. The workload is heavy ETL plus a batch report window. All of the report source database files are on a single Windows drive/LUN. The SAN hosting this lonely LUN is often referred to as "one of the biggest, baddest SANs on the planet". But the SAN's reputation can't change the mechanics of the situation. The underlying media in this case is 15k large form factor disks in RAID5. The comparative benefits of RAID5, RAID10, RAID-DP, RAID6, and RAIDZ are also outside the scope of this particular post, weisenheimers. :)

As read queue depth rises due to increased read IOPs, read latency increases. Really low read latency can be very important for OLTP systems; the target average read latency for an OLTP system may be 15 ms or lower. But the system I am observing is not an OLTP system. It's consumed by batch ETL and scheduled report batches. For this type of workload, maintaining a maximum target latency at full target bandwidth is the important idea. The target latency is 100 ms. These results are from less than 50% of the target bandwidth. Keep driving the IOPs and bandwidth up, and crazy things start to happen that make the relationship between current queue depth and latency less clear. And other residents of the SAN might get mad at you, too, if you don't have the luxury of working on a SAN dedicated to your own evil schemes.
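If you want a quick look at per-file read latency without firing up perfmon, something like this rough query against sys.dm_io_virtual_file_stats will do. The counters are cumulative since instance startup, so either diff two samples taken a few minutes apart or treat the result as a long-run average - a sketch, not a monitoring solution:

-- Rough per-file read latency; counters are cumulative since instance startup.
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.physical_name,
       vfs.num_of_reads,
       vfs.io_stall_read_ms,
       CASE WHEN vfs.num_of_reads = 0 THEN 0
            ELSE vfs.io_stall_read_ms / vfs.num_of_reads
       END AS avg_read_latency_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id
 AND mf.file_id = vfs.file_id
ORDER BY avg_read_latency_ms DESC;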

There are other reasons to have the database files spread across more than one LUN, database level stuff: dbcc checkdb IO, etc. But from my perspective, the mechanics of external shared storage for large scale, high traffic databases mean there better be a doggone good reason to try to cram a whole user database onto a single LUN.

Over the next year, I expect to assist a number of transitions to more advantageous database data layouts: several equally sized LUNs for the main filegroup of the database, each holding equally sized database files with equal data footprints. I usually start the negotiations at 8 and let folks talk me up or down :)
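To make that target concrete, it ends up looking something like this - a sketch only, with a made-up database name, drive letters, and sizes:

-- Hypothetical target layout: one filegroup, 8 equally sized files, one per LUN.
-- Database name, drive letters, and sizes are placeholders.
ALTER DATABASE BigReportDB ADD FILEGROUP fg_main;
ALTER DATABASE BigReportDB ADD FILE
    (NAME = fg_main_01, FILENAME = 'E:\sqldata\fg_main_01.ndf', SIZE = 200GB, FILEGROWTH = 10GB),
    (NAME = fg_main_02, FILENAME = 'F:\sqldata\fg_main_02.ndf', SIZE = 200GB, FILEGROWTH = 10GB),
    -- ...files 03 through 07, each on its own LUN...
    (NAME = fg_main_08, FILENAME = 'L:\sqldata\fg_main_08.ndf', SIZE = 200GB, FILEGROWTH = 10GB)
TO FILEGROUP fg_main;

Equal SIZE and FILEGROWTH settings matter here: proportional fill allocates based on free space in each file, so keeping the files the same size is what keeps the data footprints even.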

How will we get there? Two options, and it will depend on the priority of personnel time, project elapsed time, and hardware resources in each case:
1. bcp the contents out of the current layout, then bcp them into the nice, new layout.
2. Use a series of index rebuild operations, capitalizing on SQL Server's proportional fill algorithm, to redistribute contents across the new layout, which incorporates the currently existing LUN and file(s). With this option, the end state will either have a significant amount of free space in each database file... or we might just toy with the idea of a file shrink operation. The shrink would of course be followed by another round of index rebuilds to eliminate the fragmentation injected by the shrink. Without the shrink, I'm confident in this method. With the shrink... we'd definitely try it first in a test environment with a full copy of the data before attempting it in a production context.
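Roughly, option 2 boils down to something like the sketch below - made-up database, table, and file names, with sizes that are placeholders rather than recommendations, not a runbook:

-- Hypothetical sketch of option 2.
-- 1. Add new equally sized files (one per new LUN) to the existing filegroup.
ALTER DATABASE BigReportDB ADD FILE
    (NAME = data_02, FILENAME = 'F:\sqldata\BigReportDB_data_02.ndf', SIZE = 200GB, FILEGROWTH = 10GB),
    (NAME = data_03, FILENAME = 'G:\sqldata\BigReportDB_data_03.ndf', SIZE = 200GB, FILEGROWTH = 10GB)
TO FILEGROUP [PRIMARY];

-- 2. Rebuild the big indexes; proportional fill writes the rebuilt pages
--    across all files in the filegroup, old and new.
ALTER INDEX ALL ON dbo.big_fact_table REBUILD;

-- 3. Optional, and only after a dress rehearsal on a test copy: shrink the
--    original file, then rebuild again to clean up the fragmentation the
--    shrink introduces.
DBCC SHRINKFILE (BigReportDB_data, 204800);  -- target size in MB
ALTER INDEX ALL ON dbo.big_fact_table REBUILD;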

[Graph: read queue depth vs. read latency on the single LUN, from 24 hours of 15-second perfmon samples]
3 comments:

  1. Rats! I forgot to point out that database file expansions and filesystem fragmentation make multiple database files on a single LUN potentially worse than all data in a single file on a single LUN.

  2. Drat!! I also didn't mention that transaction log files on the same LUN as database files for a mission-critical database are even worse than multiple database files on the same LUN. Not only is the risk of performance degradation due to file interleaving/filesystem fragmentation present, but the risk in system failure/recovery scenarios is increased as well.

  3. By the way, this is my new strategy for drumming up interest in my blog. Why edit my posts when I can comment on them and make it seem like more people are interested? Shoot! I've been commenting as myself - I forgot to make up other personas for my comments, and start including anonymous comments.
    I'm such a rookie.
