This month it’s about “crap code”.
We’ve all seen it, and we’ve all written our fair share of it. I’m not going to concentrate too much on actual crappy code – like using deprecated syntax or functions, or lots of GOTO logic within cursors (always bags of fun) – but will rather look at some of the other steaming piles of “legacy” we leave behind us in the course of our busy DBA lives. None of these is necessarily going to take your production system down, or cause World War 3, but they’re bloody annoying at the very least.
Some of my favourite examples of such landmines include:
- “Temporary” logging tables
- Renamed or “backup” tables
- Hard-coded values in stored procedures
- Scheduled job sprawl (or bloat)
“I’ll just add some logging while I troubleshoot this bug.”
Take one stored procedure, add a handful of intermittent business rule bugs, and you’ve got the perfect excuse to add a new logging table, and some logging code to the procedure. Something similar to this perhaps:
-- note the default on dt; the original insert never populated it,
-- which left you with timestamp-less logging rows
create table dc_logging_tmp (dt datetime default getdate(), m_id int, bal money);

IF @member_id IN (6763, 3884, 8734) AND @ledger_balance < 0
    INSERT dc_logging_tmp (m_id, bal) VALUES (@member_id, @ledger_balance);
Of course the table name is generic, there are no indexes, and the logging code has no comments explaining its existence – but you’ll only need it for a few hours, maybe a day or two at most, and then you’ll rip it all out again. Right?
If you really only need something short-term, and there’s a chance you won’t remember to tidy up after yourself (and it won’t break anything if the table doesn’t exist), then consider creating your logging objects in TempDB. Next time you restart your SQL service: Poof!! All gone.
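As a sketch of that idea (the table and column names here are illustrative, matching the earlier example):

```sql
-- Create the logging table in TempDB instead of the user database.
-- It disappears automatically the next time the SQL service restarts.
USE tempdb;
GO
CREATE TABLE dbo.dc_logging_tmp (
    dt   DATETIME DEFAULT GETDATE(),  -- when the row was logged
    m_id INT,                         -- member being watched
    bal  MONEY                        -- the offending ledger balance
);
GO

-- Back in the procedure, fully qualify the name and check the table
-- still exists, so the logging degrades gracefully once it's gone:
IF OBJECT_ID('tempdb.dbo.dc_logging_tmp') IS NOT NULL
    INSERT tempdb.dbo.dc_logging_tmp (m_id, bal)
    VALUES (@member_id, @ledger_balance);
```

The OBJECT_ID guard is the important bit – it means you can forget to remove the logging code entirely and nothing will break after the restart sweeps the table away.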
“Just to be safe I’ll create a copy of this table before we deploy this application update.”
You’re about to do a deploy to production which involves making some data changes. So, being the sensible, risk-averse DBA that you are, you decide to take a snapshot of the current table and do a simple “SELECT * INTO” before the update.
Of course you’ll remember to drop this temporary backup table tomorrow once you’re sure you won’t need to roll back the update, so calling it [zzz_table_name] seems reasonable. Right? Aaah, no. There’s nothing stopping you from creating tables with names like [ledger_pre_Jul2011_v3_deploy_drop_after_3_weeks].
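Something along these lines (the source table name is made up, obviously):

```sql
-- Snapshot the table before the deploy; the name itself records what
-- it is, which deploy it belongs to, and when it's safe to drop.
SELECT *
INTO   dbo.ledger_pre_Jul2011_v3_deploy_drop_after_3_weeks
FROM   dbo.ledger;
```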
Okay, that might be extreme, but you get the idea. If you feel like being especially thorough you could even make use of extended properties to add comments, descriptions, or whatever else you like.
CREATE TABLE TableWithDescription (col INT);
GO

EXEC sp_addextendedproperty
    @name       = N'Description',
    @value      = 'This table is for XYZ, blah blah blah, etc...',
    @level0type = N'Schema', @level0name = 'dbo',
    @level1type = N'Table',  @level1name = 'TableWithDescription';
GO

SELECT name, value
FROM fn_listextendedproperty(
    'Description', 'Schema', 'dbo',
    'Table', 'TableWithDescription', NULL, NULL);
GO
“It’ll just be easier to hard-code these hourly rate values in this procedure.”
The thing about writing maintainable code is that it should be easy to maintain (the clue is in the name, you see). You shouldn’t need to alter a dozen stored procedures if your company’s rates change – things like that should be stored in a lookup table, or (an option I quite like) in table-valued functions or views. You should obviously still test and choose what’s best suited to your particular situation.
The main advantage of using a TVF or view, in my opinion, is that you can make use of source control and versioning (which is more difficult to do with data stored in a table). This article by Joe Celko gave me a few new ideas in this regard.
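As a sketch of the TVF approach (the function name and rate values are invented for illustration):

```sql
-- An inline table-valued function holding the company's hourly rates.
-- Because it's a module rather than data in a table, it can live in
-- source control, and a rate change is one ALTER FUNCTION – not edits
-- to a dozen stored procedures.
CREATE FUNCTION dbo.fn_HourlyRates()
RETURNS TABLE
AS
RETURN
(
    SELECT 'Standard'      AS rate_name, CAST(95.00  AS MONEY) AS hourly_rate
    UNION ALL
    SELECT 'AfterHours',                 CAST(140.00 AS MONEY)
    UNION ALL
    SELECT 'PublicHoliday',              CAST(190.00 AS MONEY)
);
GO

-- Procedures then join to the function instead of hard-coding numbers:
-- SELECT r.hourly_rate
-- FROM   dbo.fn_HourlyRates() AS r
-- WHERE  r.rate_name = 'Standard';
```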
Unfortunately I still find plenty of examples of developers including static values in stored procedures. Remember kids; every time you get lazy and hard-code something that you shouldn’t, a kitten loses its wings.
“I need this stored proc to run every day. Could you just quickly create a scheduled job for me?”
How many scheduled jobs do you have? Do you know exactly what every single one of them is doing? If you do, then I’m jealous.
Environments that have been around for a while tend to suffer from scheduled job build-up. This can rear its head in one of two ways; either you end up with a million separate jobs (i.e. job sprawl), or a few jobs, each with a hundred steps (i.e. job bloat).
My advice in combating this is to decide how you’re going to group your jobs (e.g. by schedule, function, or target database), and then stick to it. Also remember to religiously maintain your job names and descriptions, schedule names, and step names – this makes your jobs essentially self-documenting (you could even write a few simple queries to actually generate some documentation for you!).
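For instance, a starting point for generating that documentation straight out of msdb:

```sql
-- List every job, its description, and its steps. If you've kept the
-- names and descriptions up to date, this output IS your documentation.
SELECT  j.name        AS job_name,
        j.description AS job_description,
        s.step_id,
        s.step_name,
        s.subsystem,
        s.command
FROM    msdb.dbo.sysjobs     AS j
JOIN    msdb.dbo.sysjobsteps AS s
          ON s.job_id = j.job_id
ORDER BY j.name, s.step_id;
```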
Well, that’s about enough from me on the topic of crap. I’d love to hear your feedback on any of the examples I’ve covered here, and feel free to tell me about your own crap (or the crap you’ve had to deal with).