Our SQL Server “Denali” experiences – Part 2

Where has the week gone? I finally got round to continuing my SQL Denali testing today – with very promising results in fact!

To give you an idea of the environment (which I glossed over in the previous post) and a little more context: we have a number of SQL servers which sit between our web servers and our main core databases. These act as "read-only" search, browse and caching servers for the Trade Me website. They each hold the same data, have the same configuration, and run on the same hardware, which makes it relatively easy to add new servers (i.e. scale horizontally) when necessary.

Add to the mix some clever networking kit, and a few in-house developed bits & pieces, and we can finely control the amount (and type) of load on each individual server. Very handy for testing new hardware and software, service packs, etc.

So, just after lunchtime this afternoon, I very gingerly put 1% of search load on the Denali test box. To cut a long story short, over the course of the next few hours I increased the load until we were running 50% of all search load on the single Denali test server. Trust me when I say that this is pretty damn awesome. 🙂

The Denali test server is identical, hardware-wise, to the existing SQL 2005 servers. We've tried on a few occasions to upgrade to SQL 2008, but a pretty well-documented full-text performance bottleneck meant we could never get more than 5-10% of search load onto a single box. The issue was the full-text merge process causing excessive locking on the underlying indexed table, which in our environment is constantly changing as well as constantly being queried. That's why we've just stuck with 2005 thus far.
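If you suspect the same full-text merge contention, it's easy to check whether a merge is in flight while queries are blocking. A minimal sketch, assuming a catalog named `ListingsCatalog` (a placeholder; substitute your own catalog name):

```sql
-- Returns 1 if a master merge is currently in progress on the catalog
-- ('ListingsCatalog' is a hypothetical name for this example).
SELECT FULLTEXTCATALOGPROPERTY('ListingsCatalog', 'MergeStatus')    AS MergeInProgress,
       FULLTEXTCATALOGPROPERTY('ListingsCatalog', 'PopulateStatus') AS PopulateStatus; -- 0 = idle

-- On SQL 2008 and later you can also watch population activity via a DMV:
SELECT table_id, population_type_description, status_description
FROM sys.dm_fts_index_population;
```

Correlating merge activity with blocking on the indexed table is a quick way to confirm you're hitting the same issue rather than something else.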

Although everything went very well, there were a few weird things I ran into while getting the server ready. Firstly, I needed to shrink a tempdb data file that had blown out, but the shrinkfile operation never completed (and actually timed out once or twice after around 5 minutes, which I've never come across before). Because I couldn't shrink the data file, I thought I'd just restart the SQL service, which should recreate tempdb at its initial (i.e. smaller) size, but this wasn't the case: the initial size was correctly set, yet after a restart of the SQL service the tempdb file size was unchanged. I'll need to investigate this further next week, so will post any new findings.
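For anyone wanting to try the same steps, here's a rough sketch of what I mean, assuming the default logical file name `tempdev` and a placeholder target of 1 GB (both are assumptions; adjust for your environment):

```sql
USE tempdb;
GO

-- Attempt to shrink the primary tempdb data file to roughly 1 GB.
-- 'tempdev' is the default logical name; 1024 is a placeholder target in MB.
DBCC SHRINKFILE (tempdev, 1024);
GO

-- Check the configured initial size; the 'size' column is in 8 KB pages.
SELECT name, size * 8 / 1024 AS size_mb
FROM sys.master_files
WHERE database_id = DB_ID('tempdb');

-- If the shrink won't complete, explicitly setting the file size and then
-- restarting the SQL Server service is another option:
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, SIZE = 1024MB);
```

In theory a service restart should recreate tempdb at the size recorded in `sys.master_files`, which is what makes the behaviour I saw so odd.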

That’s all for this week’s testing.
DB Dave
