Tag: sql

  • SQLSaturday #136: Done & Dusted!

    Phew! It’s Sunday afternoon and I’m starting to feel relatively normal again after a few hectic days doing the final prep work for yesterday’s SQLSaturday #136 here in Wellington, New Zealand.

    In the Beginning

    Preparation for the event started 3 months ago with Rob Farley asking whether I’d be interested in organising a SQLSaturday event in Wellington, as part of a “SQLSaturday ANZ Tour” that he was keen to co-ordinate.

    There were 5 sister events planned across Australia, and over the following weeks and months the various organisers had several catch-ups (voice conferences, online meetings, etc.) to figure out things like branding, sponsorship levels, and speakers. It was quite useful having a range of opinions and viewpoints to draw from.

    The Venue

    My first issue was finding a venue that wasn’t going to break the bank, and that would comfortably accommodate the 100-or-so attendees I was hoping to attract.

    I’ve attended a few Code Camps and past PASS events at the Whitireia Polytechnic campus in Porirua, and a colleague of mine, Moana – who was my right-hand-(wo)man in organising this thing – was a former student of the Polytech, and so had some good connections to tap into.

    We quickly found out that they were happy to have us back, and wouldn’t charge us a thing! Very nice. I honestly don’t think we could’ve found a better venue regardless of how much we had paid.  Sure, it’s not in the central city, but the combination of spacious facilities, technical know-how, super-star staff willing to bend over backwards to help set everything up, and the free price tag made it a no-brainer.

    The Speakers

    With the venue sorted, my next concern was finding enough quality speakers, and my fears seemed justified when the call for speakers went out and just 2 or 3 sessions dribbled in over the next week or two.

    In fact, I think the speakers were just screwing with my sanity, because most seemed to wait until the call-for-speakers period was nearly over before submitting sessions. :)

    I was pleasantly surprised at not just the number of sessions submitted for our humble little Wellington event, but also the quality.

    I knew quite a few of the speakers, and had seen most of them present at previous PASS, Tech Ed or user group events, and so knew we were well sorted for quality sessions. My biggest concern now was choosing which sessions to cut (fortunately I could squeeze in at least 1 session from most of the speakers who submitted).

    In the end we had (in alphabetical order):

    Massive thanks again to these folks for making the effort to prepare, and in some cases travel long distances, to present at our SQLSaturday event.

    On the Day

    It was an early start, with me wondering why I thought it was a good idea to catch up with some of the speakers (and a couple of my colleagues) for a few drinks the night before. Next time I’ll definitely wait until the Saturday night before “unwinding”. ;)

    I had my 2 older daughters running around with me sticking up signs around the venue, while Moana and my wife organised the registration area. Before I knew it there was a queue of attendees lined up and collecting their name tags, raffle tickets, etc.

    It was all on!

    Moana very kindly did a quick welcome and housekeeping session out in the atrium, after which we jumped straight into the first sessions. We ran 2 tracks of 7 sessions each, one slightly more focused on BI-related topics, although there was some overlap. We didn’t hold back either, opening with the big guns: Peter Ward and Rob Farley going head-to-head, with both getting good turnouts.

    I ran around like a blue-arsed fly (as they say) for most of the day, so didn’t spend more than 10 minutes in any one session, which was a pity since there were at least half a dozen I would’ve loved to see!

    By the end of the day we’d lost probably half of our attendees – it was a long day, and perhaps I tried to fit too much into too little time, but that’s all good to know for next time. So apologies to Reza and Peter, who were speaking in that final slot and so had a smaller audience than the earlier sessions.

    After the Event

    About 20 of us (attendees, volunteers, & speakers) headed to a local Asian restaurant for a meal and a few drinks to celebrate a busy day.

    I can’t report on any misbehaviour because I called it quits by 9:30pm (my bed was calling me) – but I know that at least a few speakers were planning an all-nighter, their early flights making any sleep pointless. I’ll need to make enquiries as to how they got on. ;)


    Thanks Again!

    Yet another big thanks to everyone who made this possible: those who presented, attended, or helped in any other way. Thanks to my employer (Trade Me) for not only being a sponsor, but also for giving me the time to organise things – and apologies to anyone in the office whose print jobs were held up by my many runs to the printer for name badges, tickets, signs, etc. :P

    All in all an enjoyable event, and something I’d (probably) be happy to do again in future – if the attendees got value from it (guess I’ll need to review the feedback forms). If you are reading this and attended (or presented), and have any comments, feel free to leave them below or email them directly to me.

    Cheers,
    Dave

    PS: To see more of my (crappy iPhone) photos of the event, check out our user group Facebook or Flickr pages. Pro tip for anyone else organising an event like this: have a decent camera on hand. :)

     

  • CURSORs and WHILE-Loops

    Yes, I know this is a giant (some might say pointless, and I wouldn’t totally disagree) can of worms I’m opening here, but stay with me for a minute.

    Why?

    We were discussing this at work a few months ago, and I was one of those people who followed “conventional wisdom” which says that SQL cursors are evil, and should be replaced with WHILE-loops if a set-based alternative isn’t possible (or practical).

    So I thought I’d do some testing for myself.  I know this has been done more than once before (I won’t mention any articles specifically, but they’re easy to find using your search engine of choice) – but I like to figure things out for myself rather than read a blog and take the author’s word as infallible.  Some of the other tests I found online also only concentrated on one specific metric when measuring performance (e.g. physical I/O), or were too obviously biased for my taste.

    Living in the Real World

    There are also what I refer to as “non-real-world” arguments against one or the other, for example:

    • Cursors are best because they’re easier/quicker to write (usually fewer variables and/or temp tables required).
    • While-loops are best because they’re inherently faster, since they don’t have to make use of any of the internal mechanisms that cursors use to allow backwards and forwards data-set traversal.

    The reason I call these “non-real-world” arguments is that I don’t think any DBA or developer worth their salt is going to worry too much about how many more characters one takes to type than the other, and I’ve very seldom (if ever) needed anything but a simple forward-only cursor – so these arguments aren’t representative of the majority of use cases in my experience.

    Obviously if you take something like this super-simplified while-loop below, it would be difficult to write a cursor that would perform faster – but why the hell would you?

    DECLARE @i INT = 0;
    WHILE @i <= 1000000 SET @i += 1; -- 405ms on my laptop

    What?

    Alright, down to business. Using the AdventureWorks database on my laptop (running 2008R2, build 10.50.2500.0), I wrote a simple test script which iterates through each SalesOrderHeader row (for online orders) and counts the number of SalesOrderDetail rows for each SalesOrderID. I know… for someone who was just talking about “non-real-world” examples, this is pretty unrealistic – but the point is that it generates a measurable amount of I/O and CPU load, and I can reproduce the logic easily using both a while-loop and a cursor.

    Here’s the code for the while-loop test:

    ------------------------------------------------
    -- WHILE LOOP (using counter)
    ------------------------------------------------
    -- Start from cold caches so each test run is comparable
    CHECKPOINT;
    DBCC DROPCLEANBUFFERS;
    GO
    SET NOCOUNT ON;
    GO
    IF OBJECT_ID('tempdb..#SOID') IS NOT NULL DROP TABLE #SOID;
    CREATE TABLE #SOID (ID INT IDENTITY, SalesOrderID INT NOT NULL);
    DECLARE @i INT = 1, @ii INT, @SalesOrderID INT, @LineCount INT;

    -- Grab the list of online orders to iterate over
    INSERT #SOID (SalesOrderID)
    SELECT SalesOrderID
    FROM Sales.SalesOrderHeader
    WHERE OnlineOrderFlag = 1;

    SET @ii = @@ROWCOUNT;

    WHILE @i <= @ii
    BEGIN
        SET @SalesOrderID = (SELECT SalesOrderID FROM #SOID WHERE ID = @i);

        -- Count the detail lines for this order and add them to the running total
        SET @LineCount = ISNULL(@LineCount, 0) + ISNULL((
            SELECT COUNT(*)
            FROM Sales.SalesOrderDetail
            WHERE SalesOrderID = @SalesOrderID
            ), 0);

        SET @i += 1;
    END

    -- Sanity check: the total should match the known number of detail lines
    IF @LineCount <> 60398
    BEGIN
        RAISERROR('Bad Total', 16, 1);
        PRINT @LineCount;
    END
    GO

    Even for a simple while-loop there are multiple ways to write the logic – I’ve chosen a simple counter here, but didn’t see massive variances in the performance of the 2 or 3 methods I tested.
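
    For reference, one of the alternative patterns is the “seek the next key” style below. This is only a sketch (not the exact script from my test runs), but it should produce the same total against the same AdventureWorks tables:

    -- Alternative while-loop pattern: walk the keys directly rather than using a counter + temp table.
    -- (A sketch only - not the exact script used in the tests above.)
    SET NOCOUNT ON;

    DECLARE @SalesOrderID INT = 0, @LineCount INT = 0;

    WHILE 1 = 1
    BEGIN
        -- Find the next online order after the one we just processed
        SELECT TOP (1) @SalesOrderID = SalesOrderID
        FROM Sales.SalesOrderHeader
        WHERE OnlineOrderFlag = 1
          AND SalesOrderID > @SalesOrderID
        ORDER BY SalesOrderID;

        IF @@ROWCOUNT = 0 BREAK; -- no more orders left

        -- Add this order's detail line count to the running total
        SELECT @LineCount = @LineCount + COUNT(*)
        FROM Sales.SalesOrderDetail
        WHERE SalesOrderID = @SalesOrderID;
    END

    PRINT @LineCount; -- should match the 60398 sanity-check total used above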

    Here’s the code I used to test cursors:

    ------------------------------------------------
    -- CURSOR
    ------------------------------------------------
    -- DECLARE cursor_name CURSOR [ LOCAL | GLOBAL ]
    --     [ FORWARD_ONLY | SCROLL ]
    --     [ STATIC | KEYSET | DYNAMIC | FAST_FORWARD ]
    --     [ READ_ONLY | SCROLL_LOCKS | OPTIMISTIC ]
    --     [ TYPE_WARNING ]
    -- Start from cold caches so each test run is comparable
    CHECKPOINT;
    DBCC DROPCLEANBUFFERS;
    GO
    SET NOCOUNT ON;
    GO
    DECLARE @SalesOrderID INT, @LineCount INT;

    -- Baseline: a plain cursor with no options other than TYPE_WARNING
    DECLARE SOID CURSOR TYPE_WARNING
    FOR
    SELECT SalesOrderID
    FROM Sales.SalesOrderHeader
    WHERE OnlineOrderFlag = 1;

    OPEN SOID;
    FETCH NEXT FROM SOID INTO @SalesOrderID;
    WHILE @@FETCH_STATUS = 0
    BEGIN
        -- Count the detail lines for this order and add them to the running total
        SET @LineCount = ISNULL(@LineCount, 0) + ISNULL((
            SELECT COUNT(*)
            FROM Sales.SalesOrderDetail
            WHERE SalesOrderID = @SalesOrderID
            ), 0);

        FETCH NEXT FROM SOID INTO @SalesOrderID;
    END
    CLOSE SOID;
    DEALLOCATE SOID;

    -- Sanity check: the total should match the known number of detail lines
    IF @LineCount <> 60398
    BEGIN
        RAISERROR('Bad Total', 16, 1);
        PRINT @LineCount;
    END
    GO

    This represents the simplest test (i.e. just a plain cursor declaration with no options other than TYPE_WARNING).  I used this as a baseline, to which I added various combinations of cursor options, such as STATIC vs DYNAMIC, LOCAL vs GLOBAL, etc.
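
    For example, a variation using the options I usually reach for would be declared like this – only the declaration changes, and the fetch loop stays exactly the same as the baseline above:

    -- One of the cursor variations: a local, static, forward-only, read-only cursor.
    -- Only the DECLARE line changes; the rest of the batch is identical to the baseline.
    DECLARE SOID CURSOR LOCAL STATIC FORWARD_ONLY READ_ONLY TYPE_WARNING
    FOR
    SELECT SalesOrderID
    FROM Sales.SalesOrderHeader
    WHERE OnlineOrderFlag = 1;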

    To measure performance I profiled the execution of the main batch (i.e. not including the checkpoint, clearing the caches, etc) and recorded CPU, Duration, Reads, & Writes over 5 executions per test run (I actually ran each more than just 5 times, but results were so consistent that I only recorded the last 5 executions of each test).
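
    If you want to repeat the tests yourself without setting up a trace, wrapping the batch in something like this gives rough duration, CPU and read figures per session (just a sketch – it’s not how the numbers below were captured):

    -- Rough alternative to a profiler trace: compare session stats before and after the batch.
    DECLARE @start DATETIME2 = SYSDATETIME(),
            @cpu_start BIGINT,
            @reads_start BIGINT;

    SELECT @cpu_start = cpu_time,        -- milliseconds of CPU used by this session so far
           @reads_start = logical_reads
    FROM sys.dm_exec_sessions
    WHERE session_id = @@SPID;

    -- ... run the while-loop or cursor logic here (in the same batch) ...

    SELECT DATEDIFF(MILLISECOND, @start, SYSDATETIME()) AS duration_ms,
           cpu_time - @cpu_start                        AS cpu_ms,
           logical_reads - @reads_start                 AS logical_reads
    FROM sys.dm_exec_sessions
    WHERE session_id = @@SPID;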

    Show me the money

    The surprising thing for me (in this test case) was how poorly the while-loop performed. The graph to the right shows the various cursor-based test runs, all of which completed in less than 1 second (click the image to view a larger copy).

    The while-loop consistently took 30 seconds to run.

    By the way, please feel free to point out any mistakes I’ve made.

    The reads show a similar story regarding the stark difference between the while-loop and any of the cursor tests. The cursor read I/O figures are in the graph to the right, and the while-loop averaged well over 1.8 million reads (it dwarfed the others on the graph, so I removed it).

    I guess what I find interesting is that many people have run these tests looking only at logical or physical I/O, without taking CPU or overall duration into account. If my data is sitting in memory anyway, I don’t really care about those logical reads – but the difference between a query taking 100 vs 1000 ms of my CPU’s time is very important to me. Am I neglecting to take anything else (important) into account?

    Conclusion

    I don’t have one…

    These tests are too fickle to make any concrete observations – you could change just a few lines of code and get completely different results to what I’m seeing.

    Personally, I will continue to use both where appropriate.  Before anyone has a heart attack let me clarify that I would always first use a set-based solution before resorting to either.

    But… my rule of thumb is that I use CURSORS if I’m performing data manipulation, and a WHILE loop if I’m performing an action until some condition is met. For cursors, I tend to use the “LOCAL STATIC FORWARD_ONLY READ_ONLY” options, just because they roll off the tongue so nicely. If you’re on the SQL Server team at Microsoft and you’re reading this – in the next version of SQL could you please consolidate these options down to a single “GO_BABY_GO” option instead? ;)

    As a final note, highlighting why a set-based alternative is always better: running this query “properly” (i.e. set-based, not RBR) gave me just over 900 reads, and completed in 29 milliseconds. No contest really.
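
    For reference, the set-based version is just a join and an aggregate – something along these lines (a sketch; the exact query I ran isn’t shown here):

    -- Set-based equivalent of the row-by-row tests above: one join, one aggregate.
    SELECT COUNT(*) AS LineCount   -- should come back as 60398 against the same copy of AdventureWorks
    FROM Sales.SalesOrderDetail sod
    JOIN Sales.SalesOrderHeader soh
        ON soh.SalesOrderID = sod.SalesOrderID
    WHERE soh.OnlineOrderFlag = 1;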

    Cheers,

    Dave

  • What a turn-out!


    I’ve been a little busy over the past few weeks, what with a planned outage at work to perform a fairly major upgrade of both server hardware and clustering software, the Wellington SQL Server User Group keeping me on my toes as usual, and finally the upcoming Wellington SQLSaturday event we’re planning for April.

    That last point is what I’m getting all excited about (well, as excited as I get at 01:20 on a Saturday morning). Our call for speakers closed a few days ago, and we seriously hit the jackpot as far as sessions go. :)

    Initially I was panicking that we wouldn’t have enough content for a single track (let alone the 2 I was hoping to have) – but now my biggest problem is that I have more than twice as many sessions as available timeslots (yes, even across 2 tracks!). This is a good problem to have.

    I’m going to try to finish off and publish the schedule by early next week – but until then here’s a sneak peek at the speaker list (this is by no means a complete list – just a few names I can remember off the top of my head):

    Between the various speakers, we’ve got a nice mix of topics covering everything from core SQL Server internals through to a range of Business Intelligence related sessions.  There should definitely be something for all skill levels.

    Don’t miss out – head on over to the SQLSaturday website to get more info about the event, when and where it is, how to register, etc.  See you there!

    Cheers,
    Dave

  • SQL Saturday is coming to Wellington

    The title pretty much says it all; we’ve got a SQL Saturday event confirmed for Wellington, on the 14th of April 2012! Rob Farley and a few other folks from PASS are organising 6 SQL Saturday events in a row across Australia and New Zealand (well, 1 in New Zealand), and are calling it the “ANZ SQL Saturday Tour”.  Visit the website to register your attendance, or to find more information regarding the venue, sessions, etc.

    I’m helping to organise things from this end, and the thing that’ll make this an awesome event is getting top-notch speakers lined up to present.  So whether you’re a seasoned veteran, or you’re nervously thinking about getting into speaking, get in touch (or go directly to the SQL Saturday website to register your intended session’s details).

    If you’re new to speaking, give me a shout and we can talk about organising a “trial run” at the local SQL user group, or we could look at putting a “lightning talks” session together where each speaker only presents for 5-10 minutes.

    If you need help deciding on a topic to present – keep in mind that we’re hoping to get a good spread of “traditional” DBA and DB dev topics, as well as a range of BI-related stuff – so pretty much anything (SQL-related) goes. 🙂  There’s also a whole raft of SQL 2012 topics that would be very relevant.

    Keep an eye on the Twitter hash tags #sqlsat136 (specific to this event) or #SQLSatANZ (specific to the tour) for more information as it unfolds.

    See you there!

    Cheers
    DB Dave

     

  • Wellington SQL Server User Group

    A few weeks ago I was asked to take over the running of the Wellington SQL Server User Group by the original founder of the group.  I had been wanting to get more involved in the local SQL community, and last year kicked that off by speaking at 3 different user groups and Tech Ed New Zealand. So I was pretty keen to take the reins – even though it’s going to add to an already hectic work life.

    In case you’re wondering what this user group thing is: it’s the Wellington chapter of PASS, and we’re a volunteer group who organise monthly presentations and training sessions on topics related to Microsoft SQL Server – from database administration and development, to business intelligence and data mining.

    I’ve been meeting with a handful of the other volunteers over the past 2 weeks to discuss new ideas, how to freshen things up, etc., and now the job of lining up speakers begins. We usually take a break over December & January, so the first scheduled meeting is the 3rd Thursday in February – which is approaching faster than I’d like (so if you’re keen to present on a topic related to SQL Server in some way, or database development in general, then please get in touch with me!).

    Otherwise, if you live in or around Wellington, New Zealand, then please sign up for our mailing list and come along to our monthly sessions. I’m sure you’ll meet enough new people and learn enough new things to make it worth your while (plus there’s pizza and beer, so what’s not to like?).  You can keep an eye on either our PASS or MS Communities websites for updates on speakers, dates, and venues.

    Cheers
    DB Dave

  • Preventing identity overflow

    Here’s a stored procedure I put together a week or so ago that I thought others might find useful.


    We had an issue which should really never be allowed to happen (but as they say, “shit happens”): a rather important table ran out of integer values for its identity column. Luckily the actual value of the ID column on this table isn’t important, because the table only stores the most recent month’s data and everything before that is archived – so we could simply reseed the identity value back to 0 (those low ID values had long since been purged from the table).
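
    For anyone wondering, the reseed itself is a one-liner (the table name below is obviously made up):

    -- Reseed the identity value back to 0 (hypothetical table name);
    -- subsequent inserts carry on from the new seed.
    DBCC CHECKIDENT ('dbo.ImportantTable', RESEED, 0);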

    To prevent this in future I put together a pretty simple query that checks all identity values against the maximum value for their respective data types. This eventually became a procedure which is now just part of my “utility” database that I deploy to every production server.

    IF OBJECT_ID('dbo.usp_check_max_identity_value') IS NULL
    EXEC('CREATE PROC dbo.usp_check_max_identity_value AS ');
    GO
    ALTER PROC dbo.usp_check_max_identity_value (
    @alert_threshold TINYINT = 60, -- The percentage of the max value over which we will alert.
    @send_alert_as_email BIT = 0, -- Do you want to send any alerts as an email? (Well... do ya, punk?)
    @alert_email_recipients NVARCHAR(MAX) = NULL -- Semicolon-delimited list of email recipients.
    ) AS
    BEGIN
    SET NOCOUNT ON;
    SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
    
    DECLARE @srv_name VARCHAR(200),
    @db_name VARCHAR(200),
    @table_name VARCHAR(200),
    @column_name VARCHAR(200),
    @data_type VARCHAR(200),
    @max_value BIGINT,
    @max_tinyint TINYINT,
    @max_smallint SMALLINT,
    @max_int INT,
    @max_bigint BIGINT,
    @cmd NVARCHAR(4000),
    @percent_of_max INT,
    @alert_subject NVARCHAR(255),
    @alert_body NVARCHAR(MAX),
    @alert_counter INT;
    
    DECLARE @tt_alert_data TABLE (
    row_id INT IDENTITY,
    [db_name] VARCHAR(200),
    table_name VARCHAR(200),
    column_name VARCHAR(200),
    data_type VARCHAR(200),
    percent_of_max INT
    );
    
    -- These are the max values of each integer data type
    SELECT @max_tinyint = 255,
    @max_smallint = 32767,
    @max_int = 2147483647,
    @max_bigint = 9223372036854775807;
    
    -- Get a list of all available user databases to loop through
    DECLARE db_cursor CURSOR LOCAL STATIC FORWARD_ONLY READ_ONLY FOR
    SELECT DISTINCT
    QUOTENAME(name)
    FROM sys.databases
    WHERE database_id > 4
    AND is_read_only = 0
    AND state_desc = 'ONLINE'
    
    OPEN db_cursor;
    FETCH NEXT FROM db_cursor INTO @db_name;
    WHILE @@FETCH_STATUS = 0
    BEGIN
    
    -- Get a list of all user tables in the current database
    SET @cmd = N'
    DECLARE tables_cursor CURSOR GLOBAL STATIC FORWARD_ONLY READ_ONLY FOR
    SELECT DISTINCT
    QUOTENAME(SCHEMA_NAME(t.[schema_id])) + ''.'' + QUOTENAME(t.name) AS table_name,
    QUOTENAME(c.name) AS column_name,
    typ.name AS data_type
    FROM ' + @db_name + '.sys.tables t
    JOIN ' + @db_name + '.sys.columns c ON t.[object_id] = c.[object_id]
    JOIN ' + @db_name + '.sys.types typ ON c.system_type_id = typ.system_type_id
    WHERE t.[type] = ''U''
    AND t.is_ms_shipped = 0
    AND c.is_identity = 1
    AND typ.name IN (''tinyint'',''smallint'',''int'',''bigint'')
    ORDER BY QUOTENAME(SCHEMA_NAME(t.[schema_id])) + ''.'' + QUOTENAME(t.name);
    ';
    
    EXEC sp_executesql @cmd, N'';
    
    -- Loop through the list of tables
    OPEN tables_cursor;
    FETCH NEXT FROM tables_cursor INTO @table_name, @column_name, @data_type;
    WHILE @@FETCH_STATUS = 0
    BEGIN
    SELECT @max_value = NULL, @percent_of_max = NULL;
    SET @cmd = N'SELECT @max = MAX(' + @column_name + ') FROM ' + @db_name + '.' + @table_name + ' WITH (NOLOCK);';
    
    -- Run the SELECT to fetch the maximum value from the IDENTITY column
    EXEC sp_executesql @cmd,
    N'@max BIGINT OUTPUT',
    @max = @max_value OUTPUT;
    
    IF @max_value IS NULL GOTO FetchNext;
    
    -- Work out the percentage this represents of the maximum possible value
    SELECT @percent_of_max = CAST(CASE
    WHEN @data_type = 'tinyint' THEN ((CAST(@max_value AS FLOAT) / CAST(@max_tinyint AS FLOAT)) * 100.0)
    WHEN @data_type = 'smallint' THEN ((CAST(@max_value AS FLOAT) / CAST(@max_smallint AS FLOAT)) * 100.0)
    WHEN @data_type = 'int' THEN ((CAST(@max_value AS FLOAT) / CAST(@max_int AS FLOAT)) * 100.0)
    WHEN @data_type = 'bigint' THEN ((CAST(@max_value AS FLOAT) / CAST(@max_bigint AS FLOAT)) * 100.0)
    END AS INT)
    
    -- If the percentage is greater than our threshold, raise an alert
    IF (@percent_of_max > @alert_threshold)
    BEGIN
    -- Insert the values for use later (if we're going to send an email)
    IF (@send_alert_as_email = 1)
    BEGIN
    INSERT @tt_alert_data ([db_name], table_name, column_name, data_type, percent_of_max )
    VALUES (@db_name, @table_name, @column_name, @data_type, @percent_of_max);
    END
    
    RAISERROR(
    '%s - Identity column %s on table %s is currently at %i%% of an %s''s maximum value.',
    10,
    1,
    @db_name,
    @column_name,
    @table_name,
    @percent_of_max,
    @data_type
    ) WITH LOG;
    
    SET @alert_counter = ISNULL(@alert_counter,0) + 1;
    END
    
    FetchNext:
    FETCH NEXT FROM tables_cursor INTO @table_name, @column_name, @data_type;
    END
    
    CLOSE tables_cursor;
    DEALLOCATE tables_cursor;
    
    FETCH NEXT FROM db_cursor INTO @db_name;
    END
    
    CLOSE db_cursor;
    DEALLOCATE db_cursor;
    
    -- If we want to send an alert email (and there is one to send), then let's do that...
    IF (@send_alert_as_email = 1) AND (ISNULL(@alert_counter, 0) > 0)
    BEGIN
    -- If we haven't provided an email recipient, just carry on regardless
    IF @alert_email_recipients IS NULL RETURN;
    
    SET @srv_name = QUOTENAME(CAST(@@SERVERNAME AS VARCHAR(200)));
    SET @alert_subject = @srv_name + ' - Identity value overflow warning - ' + CAST(@alert_counter AS VARCHAR) + ' columns found.';
    
    -- Build the email body (HTML tags for the benefit of making the email pretty, but not necessary)
    SET @alert_body = '<html><body><style type="text/css"> h3, p, table {font-family:verdana;}</style>';
    SET @alert_body = @alert_body +
    N'<H3>Identity value overflow warning</H3>' +
    N'<p>This alert means that the following tables on the server ' + @srv_name
    + ' have integer-based IDENTITY column values which are currently greater than <strong>'
    + CAST(@alert_threshold AS VARCHAR) + '%</strong> of their respective data-types'' maximum possible values.</p>' +
    N'<p>This is just a warning, but you may want to consider increasing the size of the data-type you use (if possible) '
    + 'or reseeding the table (if the IDENTITY value itself is not important, other than it being unique).</p>' +
    N'<table border="1" cellspacing="0" cellpadding="3">' +
    N'<tr style="background-color:#c9cfff;"><th>Database</th><th>Table</th><th>Column</th><th>Type</th><th>%</th></tr>' +
    CAST ( ( SELECT td = [db_name], '',
    td = [table_name], '',
    td = [column_name], '',
    td = [data_type], '',
    td = [percent_of_max]
    FROM @tt_alert_data
    ORDER BY row_id
    FOR XML PATH('tr'), TYPE
    ) AS NVARCHAR(MAX)) +
    N'</table></body></html>';
    
    -- Send the email (NB: This assumes that you have a default Database Mail profile configured)
    EXEC msdb.dbo.sp_send_dbmail
    @recipients = @alert_email_recipients,
    @subject = @alert_subject,
    @body = @alert_body,
    @body_format = 'HTML',
    @importance = 'High';
    END
    END
    GO

    It’s not very complex – it iterates through all user databases on the server, and then over all user tables that have an identity column, and simply selects the maximum values from each of those identity columns.  This max value is then compared to the max value for the respective data type of the column (i.e. tinyint, smallint, int, or bigint).

    What it does after that is up to you. In this case I send a nicely formatted HTML email listing the offending columns. Obviously you can customise what action to take, and what your threshold is. And yes, the title of this post is a little misleading, since this doesn’t actually prevent anything but rather alerts you to impending doom.
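
    For example, a typical call from a scheduled job might look something like this (the recipient address is a placeholder, and it assumes a default Database Mail profile is configured):

    -- Example: alert by email on any identity column sitting above 80% of its type's maximum.
    EXEC dbo.usp_check_max_identity_value
        @alert_threshold        = 80,
        @send_alert_as_email    = 1,
        @alert_email_recipients = 'dba-team@example.com';  -- placeholder address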

    Warning: Before running this on a production system, make sure that you’re happy for it to be selecting from all of your (potentially large and/or busy) tables that have identity columns.  If you have parallel test/dev systems, then run them there instead to prevent any negative performance impact in production. I also take no responsibility for bugs or unexpected behaviour. Consult your GP before taking.

    Cheers

    DB Dave