Tuesday, January 14, 2014

The Database Tuning Advisor - Schroedinger's Sledgehammer

Database performance tuning is a bit of a dark art to most people. Hell, even the best of us get it wrong sometimes, but that’s usually mitigated by experience. Experience of several late nights and busy days trying to bring the production database back to life, but that’s a whole other story.

We were talking the other day about index fragmentation, and the impact of clustered index inserts on non-clustered indexes and database snapshots. (I know, it doesn’t get much more rock’n’roll than that, right ?)

Trawling through a vendor’s database, we happened to take a look at the indexes on certain tables, and we could see that most of them were prefixed ‘_dta_’. Oh, the humanity ! The dread hand of the Database Tuning Advisor (DTA) was witnessed, and it sent a shudder through all those present. Either that, or the air-con went dappy again.

The DTA is a VERY useful tool when developing and tuning your databases. You get a database load, run it through the DTA, and it’ll recommend some indexes and perhaps some statistics that may help when you go live. Hoopy, right ? I mean, you’d be pretty excited if there was a tool that can solve all your performance problems (real or imagined) BEFORE you unleash it on your estate, right ?

The answer, as always, is… (altogether now) ‘It Depends’. Not all recommendations are to be taken as gospel, and the DTA output should be treated as a starting point for your eventual implementation. The DTA doesn’t know how your data will change, how many users will be hitting it, and, most of all, it doesn’t know about those ‘enhancements’ requested by the business at the last minute.

In this way, the DTA could possibly be described as Schroedinger’s Sledgehammer.

We’re all familiar with the famous cat that was used in the theoretical experiment. Wait, you’re not ? Well, lookee here : https://en.wikipedia.org/wiki/Schr%C3%B6dinger%27s_cat

TL;DR – You can’t tell what’s happened to the cat without opening the box. Sure, you can guess, but you can’t actually know. Theorise all you like, the cat’s state remains undefined until you observe the system (comprised of the box, the radioactive material, the vial, the hammer, and that poor kitty), and you can’t observe the system until you alter it (ironically, by taking a look to see what’s happened).

It is my contention that the DTA could be seen as a quantum thought experiment, in that you have some knowns (the data / the workload / the design), some suppositions (IO performance / CPU speed / constant workload), but you never really know what will happen until you observe those details in the live system.

(The Copenhagen Interpretation (q.v.) would seem to suggest that your database will be both performant and non-performant at the same time, but we know that not to be the case.)

What we DO know is that there is a performance impact from updating multiple non-clustered indexes for every clustered index insert, and that there are ways of identifying indexes that haven’t been used at all and are just sitting there taking up your storage and IO (sys.dm_db_index_usage_stats and a plethora of others).
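As a rough sketch of the sort of thing I mean (treat this as a starting point, not a finished script, and remember the DMV is cleared on every instance restart, so check your uptime before trusting it), finding non-clustered indexes that are written to but never read might look something like this :

```sql
-- Non-clustered indexes in the current database with no recorded reads.
-- user_updates tells you what maintaining the index is costing you.
SELECT OBJECT_NAME(i.object_id) AS table_name,
       i.name                   AS index_name,
       us.user_seeks, us.user_scans, us.user_lookups, us.user_updates
FROM sys.indexes AS i
LEFT JOIN sys.dm_db_index_usage_stats AS us
       ON us.object_id   = i.object_id
      AND us.index_id    = i.index_id
      AND us.database_id = DB_ID()
WHERE i.type_desc = 'NONCLUSTERED'
  AND OBJECTPROPERTY(i.object_id, 'IsUserTable') = 1
  -- Either never touched since restart, or updated but never read:
  AND (us.object_id IS NULL
       OR us.user_seeks + us.user_scans + us.user_lookups = 0)
ORDER BY us.user_updates DESC;
```

Anything that floats to the top of that list with a big user_updates number and zero reads is a candidate for a closer look, not an automatic drop.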

So, there is a risk, as often happens, that the DTA will recommend indexes and stats that turn out to be sub-optimal when implemented in production. Kind of like using a sledgehammer to crack a nut.

By all means, use the DTA to guide you in the right direction. But use it more like a divining rod that points the way than a big stick with which to beat your database into performance.

And if you ever see indexes in a production database that start with ‘_dta_’, spare a thought for the poor developer. They were probably only trying their best, after all !

Back soon…

Thursday, January 9, 2014

Fun things to do in 2014

It's that time of year, now that spring is in the air….

Scratch that – must have been a hangover from an earlier time. But, it *is* a New Year, and I thought I'd jot down some things I'll be thinking about over the next 12 months. Because I can.

1.      SQL 2014 (Hekaton)

      There's been a whole lot of buzz over 2013 about the next release of SQL Server, now commonly called SQL 2014, but sometimes referred to as Hekaton.

      The word Hekaton is derived from the Greek word for ten-fold. Actually, I think it's a hundred-fold, but that doesn't seem to matter that much to those happy guys and gals at Microsoft. The implication of the name is that you get a *massive* performance boost using Hekaton as your core engine.

      Not surprisingly, this is utter cobblers, and there's been quite a bit of back-pedalling from MS on the subject. What you get in Hekaton is 'memory-optimised' tables, columnstore indexes, and VertiPaq / xVelocity structures right in the database. You also get natively compiled stored procedures, but more on that later, I think.

      But it's certainly something we all need to be aware of – Hekaton is coming, whether we like it or not. I'll write a session up sometime later in January. No, I WILL. I promise….


2.      Big Data

      Another buzzword of 2013 was 'Big Data'. Yeah, right. Big Data isn't about having huge amounts of data that you need to do something with. It's more about finding out what data you currently capture but have no use for, and finding out what that data may be able to tell you.

      For example, you know that new account openings are at an all-time high, and that the average balance in a Cash ISA is £1500. What we also capture behind these numbers is demographic and geographic data that can be plotted in any number of software packages and analysed to death : for instance, if you live in Surrey, your Cash ISA is likely to have a balance greater than one held in Sunderland.

      Big Data can also be used to let you know there are things you don't know, and lets you ask business questions. Like *why* is there a low take-up of FlexPlus in Hampstead, yet the highest average Credit Card available spend is on the Wirral ?

      Again, there's a whole lot more to this, but I ask you to bear the following in mind:

          What can our data actually tell us, and how can we design our systems to support Big Data type analysis ?

      It's all in the design – get that right, and your Big Data headache is less of a problem.


3.      Small Data

       You know when you get a spreadsheet from someone that details some 2,500 servers with something wrong, but there's little or no supporting information to let you act quickly ? Don't you just hate having to filter / sort / arrange the data before you can start working on what you need to do ?

       Well, when you provide data to anyone, be that a project / support contact or a huge distribution list, have a think about what you're actually giving them, and try to imagine the way in which they'll use the data. Sometimes, formatting an Excel sheet as a table is the simplest yet most time-saving thing you can do for others.


4.      Information Flow

      I know I've not posted much on the blog recently. There's hods of useful stuff in our heads (presumably), so let's get it out there. I had an interesting discussion with a colleague the other day about how deleting records through a view produced a different, seemingly more efficient, execution plan.

      Just writing stuff down helps you remember it as well !


5.      Be nice to people. Sometimes.

       OK, so I'm not getting fluffy over this. It's hard to remain smiley all the time (for some people), and sometimes someone needs a slap. But I'm trying to make the former my default response, not the latter.


So, Happy New Year, and watch this space for stuff. There's a lot of stuff about, and it's getting stuffier all the time. And if that's not your thing, try holding a chicken in the air and sticking a deckchair up your nose.


Back soon…

Tuesday, May 7, 2013

Process, process, process...

I'm a fan of both documentation and agreed processes. Really. A big fan. There's nothing quite like it. OK, so I'm a DBA and we don't normally document stuff or follow due process, but I really wish we could. It would make things so much easier.

There is a major challenge with this - it's not easy. We all WANT to have everything nice, tidy and available, but we generally rail against processes that don't do what we want, when we want them to. It does seem that a lot of processes and standards exist because then someone can place a tick in a box at audit time (and those times are becoming more and more frequent, I can tell you).

A business process is usually aligned with a methodology. ITIL, COBIT, PRINCE, TOGAF, etc. are all great tools to help you get a handle on what you're doing, how to evidence that you're adhering to the process, and provide auditors with warm feelings inside.

However, when the process just doesn’t make sense (even if it ticks all the boxes), you're less likely to collect good data about your project's performance against key deliverables and KPIs. If the process only exists for its own sake, then you should really take a long hard look at that process and see if it could be done another way or removed altogether. This, I fear, is a stumbling block for many. Wouldn't it be great if we could just do the job, and the process took care of itself ? I know I'd welcome it. I'd also buy in to a process that was designed with my needs in mind, and if the business needs could also be met from the same data, then all the better.

Got a couple of quotes for you here. One from Albert Einstein, and one from a good friend:

Einstein : "Everything should be made as simple as possible, but not simpler."

Friend : "The trick with PRINCE2 is knowing which bits NOT to implement."

Einstein was right on the money there. If the process isn't simple, change the process. All well and good, but try not to get hung up on simplifying EVERY process. There's an iterative approach to evaluating what you're doing, and that's really important. It's not enough to say that you have always done it that way, so that's how it'll be done every time in the future. If you always do what you've always done, you'll always get what you've always got.

My friend was also spot on in his assessment of methodologies - why would you need to run every project to strict principles, when not all of those principles can be applied to every project ? To quote Monty Python, "Universal affirmatives can only be partially converted : All of Alma Cogan is dead, but only some of the class of dead people are Alma Cogan".

I'll give you a quick example, if I may ?

Today I was passed a document for review. It had a RACI matrix, scope, executive summary, all the things that you need to keep your auditors happy. Except content.

That document only contained a link to a spreadsheet that actually had all the data in. The data that was collected and confirmed by those in the RACI matrix.

The problem here is that there's no process for expressing the useful data in the methodology being applied, so the first document only exists to complete the process, and nothing else. It has no other useful meaning. Further, the REAL data has no project metadata associated with it, and as such is considered by the project as 'junk data'. If it's junk, why does it exist ? If it's not junk, how could it be properly classified ?

Perhaps the solution may be to revisit the project processes and standards to make them simple, or perhaps not apply all principles to all projects ?

I'm not sure, but reviewing one appropriate document has to be more efficient than reviewing two that have no actual value.

And it's easier.

Back soon…

Wednesday, April 10, 2013

Multiple Transaction Log Physical Files - Myth punctured

I always thought that there was no really good reason for having multiple transaction log physical files.

OK, so that's a bit of a broad statement, but humour me a while.

The transaction log is cyclical in nature, as we all know. It's well documented in loads of places, not the least of which is

There's a common understanding of the wrap-around nature of the transaction log, in that when one Virtual Log File (VLF) has no free space, the database engine will go a'huntin' for the VLF that’s next in line, and will go back to the first VLF once transactions have been flushed from it. Hence 'wrap-around'.
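If you want to see the VLFs for yourself, the (undocumented, but widely used) DBCC LOGINFO command returns one row per VLF in the current database's log :

```sql
-- One row per VLF in the current database's transaction log.
-- Status = 2 : the VLF is active and can't be reused yet.
-- Status = 0 : the VLF is free, and the log can wrap around into it.
DBCC LOGINFO;
```

Watching the Status column change over time is a nice way of seeing the wrap-around happen with your own eyes.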

It follows, therefore, that when you have multiple transaction log files, the first one will fill up with VLFs until it hits a buffer (be that disk space or auto-growth limits), and the log will then spill into the next available physical file.

This is actually NOT the case. I know, because I've just seen it.

The scenario was thus:

We're creating a new Sharepoint Content Database, and then doing a stsadm -o mergecontentdbs to shift the content across to the new database. A side issue to this is that the new content DB has multiple files in the PRIMARY filegroup, but that's for another time.

As usual, there's a lack of disk space on the drives that we want to use, and the current content is such that the whole database move operation is contained in a single transaction (again, one for another time, I think).

So, being a diligent DBA, I go hunting for space on another drive to hold the second transaction log file, taking the spill once the primary one hits the 35GB cap I put on it to ensure that there's room on the disk for the new content DB in its entirety. I find some space on another drive, once I've deleted a developer's PST backups and old Service Pack installers (<sigh>), so that's where I create my second physical log file. I fully expected this to remain at the size I created it until it was required to take up the spill.

What I ACTUALLY saw is that from the moment I created the secondary log file, it started being used, even though the primary was yet to hit the buffers. Odd, I thought. OK, so maybe it'll just use that one until it fills up and then go back to the primary.

Wrong again.

It seems that what the database engine is doing is using one physical file until it hits the next auto-growth point, then it switches to the secondary. While using the secondary, it then expands the primary by the auto-growth increment. When the secondary hits the next boundary, transactions revert to the primary in the same way as before, so you get this flip-flopping of transaction log file usage, and growth is split 50/50 between the two physical files.
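If you fancy seeing this for yourself, a rough repro sketch would be to add a second log file to a scratch database (all names and paths below are made up) and poll the file sizes while a chunky transaction runs :

```sql
-- Add a second physical log file (database, file names and path are
-- illustrative; use a scratch instance, not production).
ALTER DATABASE ScratchDB
ADD LOG FILE ( NAME = ScratchDB_log2,
               FILENAME = N'D:\Logs\ScratchDB_log2.ldf',
               SIZE = 1GB,
               FILEGROWTH = 512MB );

-- While a large transaction runs, re-run this and watch the growth
-- alternate between the two log files rather than fill-then-spill.
SELECT name, type_desc, size * 8 / 1024 AS size_mb
FROM   ScratchDB.sys.database_files
WHERE  type_desc = 'LOG';
```

On the system I was looking at, the size_mb figures crept up in turns, which is the flip-flopping described above.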

Of course, once the operation is complete, I'll remove the secondary log file, as I don't need it, and it's on a drive it shouldn't be on, but the observed behaviour certainly seems at odds with the accepted wisdom that one will fill, and then spill to the second.

See, you really do learn something every day.

Back soon…

Tuesday, February 12, 2013

Change Data Capture: Keep up...

It would appear that in the great tumult of development, some things take a while to surface. While people are being all 'agile' and 'waterfall', it's easy not to keep up with what's new in the database world.

One thing I'm being asked about a lot these days is Change Data Capture (CDC), which came in with SQL 2008 (I think). The questions generally go along the lines of:

"Can we use CDC as our database audit ?"

To which, of course, the answer has to be "No." Not because your DBA doesn't want you to use new stuff, but because it seems that people are missing the point of CDC. The name kind of gives the game away here : it's 'Capture', not 'History'.

A cursory glance at documentation would tell you that SQL Server can now track changes to your tables, and preserve those changes for later inspection. All good stuff. Honest.

But there are some important points to note about CDC, and you should probably be aware of a few things before you decide which way to jump.

1.      CDC is a log-reader, and uses the same mechanism as Replication. This means that your transactions are committed to the database but cannot be flushed from the transaction log until CDC has processed them. And that's asynchronously. On a high-transaction database, this may lead to a larger transaction log than you would otherwise have enjoyed. Change Tracking is, however, synchronous. Go figure !

2.      CDC can cause index fragmentation if the retention value is high. OK, I'll admit to having to go with the consensus on this one, as I've not proved it myself, but it's fairly easy to see how that would happen, especially when you consider that the change data is meant to be read through a function, not queried from the change table directly.

3.      CDC has a default retention of 2 days. This allows you 2 days to extract your audit information from the ChangeTable, and put it in a proper audit table. I can see that perhaps 7 days might not be a bad idea as a safety net in case your extract falls over in a heap, but any more than that could be considered excessive.

4.      CDC is NOT the same as Change Tracking. CDC captures the actual value changed, Change Tracking only tells you that a value was changed. Think of the difference between 'The price has changed from $5.00 to $4.99' and 'The price is different now'.

5.      Implementing CDC is a non-trivial task to get right. And it's probably going to be painful if you get it wrong :-)

6.      Preserving CDC when restoring a database is complex. Someone else wrote a nice blog about this. You can read it here:
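For completeness, here's roughly what enabling CDC and adjusting that retention (point 3 above) looks like. The database, schema and table names are made up for illustration, and you'll want to check the documentation for the full set of parameters before trying this anywhere that matters :

```sql
USE MyDatabase;  -- illustrative database name
GO

-- Enable CDC at the database level first, then on a specific table.
EXEC sys.sp_cdc_enable_db;
GO

EXEC sys.sp_cdc_enable_table
     @source_schema = N'dbo',
     @source_name   = N'Orders',   -- hypothetical table
     @role_name     = NULL;        -- no gating role on the change data
GO

-- Retention is held (in minutes) against the CDC cleanup job.
-- 10080 minutes = 7 days, the 'safety net' from point 3.
EXEC sys.sp_cdc_change_job
     @job_type  = N'cleanup',
     @retention = 10080;
```

Note that enabling CDC also creates the capture and cleanup Agent jobs for you, which is another reason it's a non-trivial thing to bolt on casually (see point 5).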

I think I've got the salient points outlined here, and I'm sure that there's a debate to be had on these topics and more. Please remember that this is based only on my own personal experience. You or your DBA may have your own opinion.

But I hope I've made a case for NOT using CDC as your only audit option.

Back soon…