SQL Server Procedures to make DBA tasks easier and improve database quality
In this Microsoft SQL-Server related blog post, I have included the source code for a few procedures that I have used to make some common database administration / management tasks a bit easier. Certain DBA tasks are simpler when you have procedures on hand that can produce a quick report to:
- List all database tables containing data
- List the Unique Column Definitions and Tables Where Used (and/or Discrepancies in DataTypes for Column Name)
- List all DML INSERT/UPDATE/DELETE trigger names / definitions in Database
SQL-Server Stored Procedure (SP) Source Code — List all database tables containing data
Here is a useful management query to help you quickly see which tables in your database have data in them — i.e., show all tables with a rowcount greater than a specified value. The procedure also makes use of a parameterized ORDER BY that sorts results by one of a few options (see the source-code comments: you can choose table-name, rowcount, or creation date) without using dynamic SQL.
--********************************************************************************
--This source code is Copyright (c) 2007-2017
-- Author: Mike Eberhart
--
--I hereby release this code under the terms of the MIT License (for freeware).
--
--Permission is hereby granted, free of charge, to any person obtaining a copy
--of this software and associated documentation files (the "Software"), to deal
--in the Software without restriction, including without limitation the rights
--to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
--copies of the Software, and to permit persons to whom the Software is
--furnished to do so, subject to the following conditions:
--
--The above copyright notice and this permission notice shall be included in
--all copies or substantial portions of the Software.
--
--THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
--IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
--FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
--AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
--LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
--OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
--THE SOFTWARE.
--********************************************************************************
--**********************************************************************************************
-- This procedure outputs a list of all tables containing data (i.e., non-zero row count), and
-- includes these parameters for further flexibility:
--
-- @MinimumRowCountToInclude : include only those tables containing this minimum number of rows.
-- @SortColumn:
-- 1 = ascending by table name;
-- 2 = descending by row-count, then asc by table name therein.
-- 3 = descending by create-date (yyyy-mm-dd hh:mi:ss.mmm(24h)), then asc by tablename.
--
-- TESTS:
-- uspUtilities_ListTablesWithRows 1, 1 --by table name
-- uspUtilities_ListTablesWithRows 1, 2 --by rowcount
-- uspUtilities_ListTablesWithRows 0, 3 --by creation date
-- uspUtilities_ListTablesWithRows 1000, 1 --only large tables
--
--**********************************************************************************************
CREATE PROCEDURE uspUtilities_ListTablesWithRows
@MinimumRowCountToInclude INT = 1,
@SortColumn TINYINT = 1
AS
BEGIN
SELECT
o.name AS 'Table Name',
s.partition_number AS 'Partition#',
s.row_count AS 'Partition Row Count',
o.create_date AS 'Table Creation Date'
FROM
sys.objects o
JOIN sys.dm_db_partition_stats s
ON (s.object_id = o.object_id)
WHERE
o.type = 'U' --User tables
AND s.index_id IN (0,1) --heap or clustered only
AND s.row_count >= @MinimumRowCountToInclude
ORDER BY
--to obtain proper order of numeric values, we must LEFT-PAD any numerics, and
--all fields we ORDER BY must be seen by SQL-Server as the same DataType,
--which is why we CONVERT / PAD all the fields to the same length / type.
(CASE
WHEN @SortColumn = 1 THEN CONVERT(VARCHAR(100), o.name)
ELSE NULL
END),
(CASE
WHEN @SortColumn = 2 THEN dbo.[udfGetIntegerAsPaddedString](s.row_count, 100, '0')
ELSE NULL
END) DESC,
(CASE
WHEN @SortColumn = 2 THEN CONVERT(VARCHAR(100), o.name)
ELSE NULL
END) ASC,
(CASE
WHEN @SortColumn = 3 THEN CONVERT(VARCHAR(100), o.create_date, 121)
ELSE NULL
END) DESC,
(CASE
WHEN @SortColumn = 3 THEN CONVERT(VARCHAR(100), o.name)
ELSE NULL
END) ASC
END
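Note: the ORDER BY above (and the column-definitions procedure later in this post) relies on dbo.[udfGetIntegerAsPaddedString], a small helper of mine that left-pads an integer with a chosen character to a fixed width so that numeric values sort correctly when compared as strings. If you do not already have that function, a minimal stand-in along these lines should work (the signature here is assumed from the calls above):
--Minimal assumed implementation of the padding helper referenced above:
--left-pads @IntegerValue with @PadCharacter out to @PaddedLength characters.
CREATE FUNCTION dbo.udfGetIntegerAsPaddedString
(
    @IntegerValue  BIGINT,
    @PaddedLength  INT,
    @PadCharacter  CHAR(1)
)
RETURNS VARCHAR(100)
AS
BEGIN
    RETURN RIGHT(REPLICATE(@PadCharacter, @PaddedLength)
                 + CONVERT(VARCHAR(40), @IntegerValue), @PaddedLength);
END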
SQL-Server Stored Procedure (SP) Source Code — List Unique Column Definitions and Tables Where Used (and/or Discrepancies in DataTypes for Column Name)
This is a Microsoft SQL-Server Transact-SQL (T-SQL) stored procedure (SP) that can help with your DBA (Database Administrator) tasks by letting you quickly examine all the column-name definitions that exist in your database. If you are lucky enough to own a formal / professional data-modeling tool (like ERwin, PowerDesigner DataArchitect, Quest Toad Data Modeler, Embarcadero ER/Studio, etc.), those tools often include handy graphical tables / reports that show column names, domains, tables, and their definitions. But such tools are generally quite expensive and/or carry single-machine-install licenses that may keep you from having access to that functionality when you need it in a hurry while managing your database(s). Queries like the one provided by this stored procedure are therefore very helpful for analyzing your database structure / definition.
This query can list all column definitions and/or limit by column-name, and/or limit output to only those column-names where multiple (data type / specification) differences exist for the same column-name. This makes it very easy to quickly locate any column-names in your database that contain potentially conflicting data-types that could otherwise: hide potential errors, truncate data, force excess type-conversions (and thus impact performance), and so forth.
DISCUSSION:
This query / procedure makes use of modern SQL-Server features, including the system catalog views and CTE (Common Table Expression) capabilities. It could easily be adapted to older versions of SQL-Server by replacing the sys.* object references with the older equivalents (e.g., "sysobjects"), and the CTE used in the last part of the procedure could be turned into a multi-step query using temp tables if needed. This procedure also uses a technique borrowed from my Running Subtotals Example 1 in order to create the comma-delimited list of tables that make use of each unique (data type) definition of a column-name (without using a CURSOR to do so).
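For example, on a pre-2005 instance the table / column metadata gathered inside the procedure below could be pulled from the legacy catalog tables instead. A rough, untested sketch of that equivalent metadata SELECT:
--Rough legacy-catalog equivalent (untested sketch) of the metadata SELECT used
--inside the procedure, for instances where the sys.* catalog views are unavailable:
SELECT DISTINCT
    c.name,     --column name
    c.xtype,    --system type id
    c.length,   --max length
    c.xprec,    --precision
    c.xscale,   --scale
    o.name      --table name
FROM
    sysobjects o
    JOIN syscolumns c
        ON (c.id = o.id)
WHERE
    o.xtype = 'U';  --user tables only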
--**********************************************************************************************
-- This procedure outputs a list of all "unique" column definitions in the current database,
-- where "unique" implies the same column-type, length, precision, and scale for a given
-- column-name.
--
-- PARAMETERS:
-- @ColumnNameToExamine : include only those column(s) with name(s) as specified, or all
-- columns if parameter is NULL. Not case-sensitive.
-- @IncludeOnlyVariedDefinitions: when 0, no further filtration of results; when 1, returns
-- only those column names where varied column-definitions (type/scale/etc) exist for
-- the same column name.
--
-- TESTS:
-- uspUtilities_ListUniqueColumnDefinitionsAndTablesWhereUsed --all
-- uspUtilities_ListUniqueColumnDefinitionsAndTablesWhereUsed 'a%' --any columns whose name begins with 'A'
-- uspUtilities_ListUniqueColumnDefinitionsAndTablesWhereUsed 'assetID' --just 'assetID' column
-- uspUtilities_ListUniqueColumnDefinitionsAndTablesWhereUsed NULL, 1 --any column-names with conflicting definitions
--
--**********************************************************************************************
CREATE PROCEDURE uspUtilities_ListUniqueColumnDefinitionsAndTablesWhereUsed
@ColumnNameToExamine VARCHAR(128) = NULL,
@IncludeOnlyVariedDefinitions BIT = 0
AS
BEGIN
--**********************************************************************************************
--Our work-table, where we can ensure the ORDER in which we later access the data: by default,
--the order of the PRIMARY KEY declared here on the UniqueID column.
--*********************************************************************************************
DECLARE @column_defs TABLE
(
UniqueID INT IDENTITY NOT NULL PRIMARY KEY,
ColumnName VARCHAR(128),
ColumnTypeId INT,
ColumnMaxLength SMALLINT,
ColumnPrecision TINYINT,
ColumnScale TINYINT,
tablesList_using_col_def VARCHAR(MAX)
);
--**********************************************************************************************
--Get all the column-specification information to consider as "unique column definitions"
--as well as all the table names that use each unique definition of a column.
--Limit to column-name(s) per parameter value if value is specified at runtime.
--**********************************************************************************************
INSERT INTO @column_defs(
ColumnName,
ColumnTypeId,
ColumnMaxLength,
ColumnPrecision,
ColumnScale,
tablesList_using_col_def
)
SELECT DISTINCT
columns.name,
columns.system_type_id,
columns.max_length,
columns.precision,
columns.scale,
tables.name
FROM
sys.objects tables
JOIN sys.columns columns
ON (tables.object_id = columns.object_id)
WHERE
tables.type = 'u'
AND ((@ColumnNameToExamine IS NULL) OR (columns.name LIKE @ColumnNameToExamine));
--**********************************************************************************************
-- ALGORITHM EXPLANATION:
-- See SET-BASED RUNNING SUBTOTALS Example 1, which this builds upon.
--**********************************************************************************************
DECLARE @tablesList_using_col_def VARCHAR(MAX) = '';
DECLARE @GroupingBreakValue VARCHAR(MAX) = '';
UPDATE
@column_defs
SET
@tablesList_using_col_def = tablesList_using_col_def =
CASE
WHEN @GroupingBreakValue =
(
ColumnName +
dbo.[udfGetIntegerAsPaddedString](ColumnTypeId, 8,'0') +
dbo.[udfGetIntegerAsPaddedString](ColumnMaxLength, 6,'0') +
dbo.[udfGetIntegerAsPaddedString](ColumnPrecision, 3,'0') +
dbo.[udfGetIntegerAsPaddedString](ColumnScale, 3,'0')
)
THEN @tablesList_using_col_def +
CASE
WHEN DATALENGTH(@tablesList_using_col_def) > 0 THEN ', ' ELSE ''
END +
tablesList_using_col_def
ELSE tablesList_using_col_def
END,
@GroupingBreakValue =
ColumnName + --name concatenated with zero-padded numerics so the break value compares consistently
dbo.[udfGetIntegerAsPaddedString](ColumnTypeId, 8,'0') +
dbo.[udfGetIntegerAsPaddedString](ColumnMaxLength, 6,'0') +
dbo.[udfGetIntegerAsPaddedString](ColumnPrecision, 3,'0') +
dbo.[udfGetIntegerAsPaddedString](ColumnScale, 3,'0');
--**********************************************************************************************
-- Group the output by unique Column-definition-per-column-name now, choosing just the row
-- per column-def with the "max" tables-list (comma-delim list of tables using the column def)
-- and use this result for our CTE (common table expression).
-- Next, reference that CTE result set for our final output, and include only those column-names
-- that meet the conditions of our parameter that can be used to filter output to include only
-- those column definitions where *multiple* varied definitions exist for the same column-name.
--**********************************************************************************************
WITH ColumnInfo_CTE -- (ColumnName, ColumnType, MaxLength, Precision, Scale, TableList)
AS
-- Define the CTE query that includes one row per unique-column-definition (with list of tables using def).
(
SELECT
ColumnName,
SUBSTRING(types.name,1,15) AS 'ColumnType',
ColumnMaxLength AS 'MaxLength',
ColumnPrecision AS 'Precision',
ColumnScale AS 'Scale',
MAX(tablesList_using_col_def) AS 'TablesList'
FROM
@column_defs AS columns
JOIN sys.types types
ON (columns.ColumnTypeId = types.system_type_id )
GROUP BY
ColumnName,
SUBSTRING(types.name,1,15),
ColumnMaxLength,
ColumnPrecision,
ColumnScale
)
-- Define the outer query referencing the CTE name: use to potentially filter results.
SELECT
*
FROM
ColumnInfo_CTE
WHERE ColumnName IN (
SELECT ColumnName FROM ColumnInfo_CTE
GROUP BY
ColumnName
HAVING COUNT(ColumnName) > @IncludeOnlyVariedDefinitions
);
END
--EXAMPLE OUTPUT demonstrating how easy it is to spot discrepancies in column-name datatype definitions:
-- EXEC uspUtilities_ListUniqueColumnDefinitionsAndTablesWhereUsed NULL, 1
--
--ColumnName       ColumnType     MaxLength  Precision  Scale  TablesList
--DatePlacedOnWeb datetime 8 23 3 Assets, Assets-old, Assets-old2
--DatePlacedOnWeb smalldatetime 4 16 0 Recipes, Recipes2
--DateUpdated datetime 8 23 3 Assets, Assets-old, Assets-old2
--DateUpdated smalldatetime 4 16 0 Recipes, Recipes2
--LibraryID numeric 9 18 0 Libraries_BeforeColRemove
--LibraryID smallint 2 5 0 Libraries, LibraryCollections
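Once a discrepancy like the DatePlacedOnWeb example above is located, the remediation is usually a straightforward column alteration along these lines (purely illustrative, using the example table shown above; verify the data, nullability, and any dependent indexes/constraints first):
--Illustrative only: align the smalldatetime variant of the column with the
--datetime definition used by the other tables (check data and dependencies first).
ALTER TABLE dbo.Recipes
    ALTER COLUMN DatePlacedOnWeb DATETIME NULL;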
SQL-Server User-Defined-Function (UDF) Source Code — Get Common Columns (column names common to two tables)
This UDF is a nice complement to the prior utility procedure, and its output can be used within various DBA / management queries to help with migrating data between tables and other similar tasks. The function uses metadata from the system catalog views to produce a list of column names that are common between two chosen tables. The output can be used either in dynamic SQL for various management tasks or in hand-coded queries. If examples of how this is helpful prove necessary, perhaps I will add further code here on my blog in the future; but hopefully it will be obvious that the output from this function is simply text (column names, separated by commas) that can be cut/pasted into a query you are writing (see the usage sketch after the function source below).
--****************************************************************************
-- This function will return a comma-delimited list of column-names that are
-- common to two tables (whose names are provided in function parameters).
--
-- The output can be quite useful when performing table-maintenance tasks that
-- require a data unload/reload (e.g., rearranging columns, changing data-types),
-- as it can be used to build SQL commands like INSERT statements, etc.
--
--Input Parms: (these are the two tables to compare)
-- @table_name_1
-- @table_name_2
--
--Returns:
-- comma-delimited list of column-names that are common to both tables
--
-- TESTS:
-- select dbo.udfUtility_GetCommonTableColumnNames('TableName1Here', 'TableName2Here')
--****************************************************************************
CREATE FUNCTION udfUtility_GetCommonTableColumnNames
(
@table_name_1 CHAR(128),
@table_name_2 CHAR(128)
)
RETURNS VARCHAR(MAX)
AS
BEGIN
--****************************************************************************
--Make sure the two tables to compare exist. If not, exit immediately.
--****************************************************************************
IF (
SELECT ISNULL(COUNT(1),0)
FROM
sys.objects
WHERE
(name = @table_name_1 OR name = @table_name_2)
AND (type = 'u' )
) <> 2
BEGIN
RETURN NULL
END
--****************************************************************************
-- Create the comma-delim list of shared column names now
--****************************************************************************
DECLARE @col_list VARCHAR(MAX) = '';
SELECT @col_list = @col_list +
CASE
WHEN DATALENGTH(@col_list) > 0 THEN ',' ELSE ''
END +
c1.name
FROM
sys.objects t1
JOIN sys.columns c1
ON (t1.object_id = c1.object_id),
sys.objects t2
JOIN sys.columns c2
ON (t2.object_id = c2.object_id)
WHERE
t1.name = @table_name_1
AND t1.type = 'u'
AND t2.name = @table_name_2
AND t2.type = 'u'
AND c1.name = c2.name --Match on column name only (data types are not compared)
ORDER BY
c1.column_id ASC --Return in column# order
RETURN @col_list
END
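As a quick illustration of the cut / paste use-case mentioned above, here is how the output typically gets used when reloading a restructured table (the table and column names here are hypothetical):
--Hypothetical example: retrieve the list of columns shared by two table layouts...
SELECT dbo.udfUtility_GetCommonTableColumnNames('Assets_old', 'Assets');
--...which returns something like: AssetID,AssetName,DateUpdated
--That text can then be pasted (twice) into a hand-built reload statement such as:
--INSERT INTO Assets (AssetID, AssetName, DateUpdated)
--SELECT AssetID, AssetName, DateUpdated FROM Assets_old;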
SQL-Server Stored Procedure (SP) Source Code — List all DML INSERT/UPDATE/DELETE trigger names / definitions in Database
Here is another useful Microsoft SQL-Server Transact-SQL (T-SQL) query for the DBA (Database Administration) / management task of identifying tables with DML triggers defined, presenting a report / list of tables with triggers and their source-code definitions. In addition, this query uses a CTE (Common Table Expression) to simplify sorting of the trigger information after it has been categorized into INSERT / UPDATE / DELETE columns.
--**********************************************************************************************
-- This query outputs a list of all DML INSERT/UPDATE/DELETE trigger names / definitions
-- in the current database. Output is grouped by Table Name with separate output columns for
-- each Trigger event-type (Insert, Update, Delete).
--
-- Also includes some logic to make use of "First" and "Last" trigger information,
-- as can be set with a command like this:
-- dbo.sp_settriggerorder @triggername= 'trig_sysmail_server', @order='Last', @stmttype = 'UPDATE';
--
-- That information further sorts "First" to the top in a group and "Last" to the bottom for a
-- bit of visual cue as to the potential order of trigger firing.
--**********************************************************************************************
--**********************************************************************************************
-- QUERY PARAMETER VARIABLES:
-- @TableNameToExamine : include only those table(s) with name(s) as specified, or all
-- tables if parameter is NULL. Not case-sensitive.
--
-- TEST VALUES FOR PARAMETERS BELOW (UN-COMMENT ONLY ONE DECLARE AT A TIME FOR A GIVEN VARIABLE)
--**********************************************************************************************
DECLARE @TableNameToExamine VARCHAR(128) = NULL; --all tables with triggers
--DECLARE @TableNameToExamine VARCHAR(128) = 'S%'; --any tables whose name begins with 'S'
WITH TriggerInfo_CTE
AS
-- Define the CTE query that includes one row per trigger event-type, with the trigger name categorized into an Insert/Update/Delete column.
(
SELECT
@@servername AS ServerName,
db_name() AS DatabaseName,
ISNULL( s.name, '' ) AS SchemaName,
ISNULL( o.name, 'DDL Trigger' ) AS TableName,
-- NOTE: you can use IIF(te.is_first = 1, '!First! ', '') + IIF(te.is_last = 1, '{LAST} ', '')
-- in place of the CHOOSE function (in the next lines) if you prefer; both CHOOSE and IIF
-- require SQL-Server 2012+, so on earlier versions use CASE expressions instead.
CASE WHEN te.type = 1
THEN CHOOSE (1 + te.is_first + te.is_last*2, '', '!First! ', '{LAST} ') + tr.name ELSE NULL
END AS InsertTrigger,
CASE WHEN te.type = 2
THEN CHOOSE (1 + te.is_first + te.is_last*2, '', '!First! ', '{LAST} ') + tr.name ELSE NULL
END AS UpdateTrigger,
CASE WHEN te.type = 3
THEN CHOOSE (1 + te.is_first + te.is_last*2, '', '!First! ', '{LAST} ') + tr.name ELSE NULL
END AS DeleteTrigger,
IIF(tr.type = 'TR', 'SQL', 'CLR') AS Type,
object_definition( tr.object_id ) AS TriggerDefinition
FROM
sys.triggers tr
LEFT OUTER JOIN sys.all_objects o
ON tr.parent_id = o.object_id
LEFT OUTER JOIN sys.schemas s
ON s.schema_id = o.schema_id
JOIN sys.trigger_events te
ON te.object_id = tr.object_id
WHERE
tr.type IN ('TR', 'TA') --TR = SQL trigger, TA = Assembly (CLR) trigger
AND o.type = 'U' --User tables
AND ((@TableNameToExamine IS NULL) OR (o.name LIKE @TableNameToExamine))
)
-- Define the outer query referencing the CTE name: use to sort results AFTER triggers are categorized.
SELECT
*
FROM
TriggerInfo_CTE
--This will sort ASC by Insert, Update, Delete trigger-name order within each TableName
--(seriously): the NULLs in each categorized column sort first, pushing non-NULL values downward.
ORDER BY
SchemaName,
TableName,
DeleteTrigger,
UpdateTrigger,
InsertTrigger
;
--EXAMPLE OUTPUT WHEN RUN AGAINST 'ReportServer' DB on SQL2012 instance (Trigger Definitions omitted due to size):
--ServerName DatabaseName SchemaName TableName InsertTrigger UpdateTrigger DeleteTrigger Type
--SQL2012DEV ReportServer dbo History History_Notifications NULL NULL SQL
--SQL2012DEV ReportServer dbo History NULL NULL HistoryDelete_SnapshotRefcount SQL
--SQL2012DEV ReportServer dbo ReportSchedule NULL NULL ReportSchedule_Schedule SQL
--SQL2012DEV ReportServer dbo Schedule NULL Schedule_UpdateExpiration NULL SQL
--SQL2012DEV ReportServer dbo Schedule NULL NULL Schedule_DeleteAgentJob SQL
--SQL2012DEV ReportServer dbo Subscriptions NULL NULL Subscription_delete_DataSource SQL
--SQL2012DEV ReportServer dbo Subscriptions NULL NULL Subscription_delete_Schedule SQL
A very basic speed-testing loop...
This is nothing but a simple framework that can be useful when optimizing and fine-tuning the execution speed of various SQL Statements, procedure/function calls, and the like. The idea is to remove the effect of caching and such, at least prior to beginning a multi-iteration loop, whereby an average run-time can then be observed over a desired number of iterations.
/***************************************************************************************/
-- This is a test-jig useful when optimizing and fine-tuning the execution speed
-- of various SQL Statements, procedure/function calls, and the like.
-- This is especially useful when there are multiple ways to accomplish the same result,
-- and you need to see which methods are the fastest under different iteration-counts.
-- Just insert code to test, set the Iterations, execute, and the resulting elapsed
-- time in Milliseconds (MS) will be displayed.
--
-- For optimal speed comparisons, the data-cache and procedure-cache are cleared
-- before running. Depending on your performance-measurement requirements, you may
-- actually want to move the first 3 lines here (the checkpoint / DBCC stuff) to
-- within the WHILE loop if you want to observe average run times of statement(s)
-- without the effect any caching has on them.
--
-- NOTE: you probably do not want to run this on a *production* system and/or should
-- consider the impact of flushing your procedure cache, etc in production.
/***************************************************************************************/
CHECKPOINT; --to produce a cold buffer cache
DBCC DROPCLEANBUFFERS WITH NO_INFOMSGS; -- Remove all buffers from the buffer pool / clear data cache
DBCC FREEPROCCACHE WITH NO_INFOMSGS; -- Clears the procedure (plan) cache
SET NOCOUNT ON;
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
DECLARE @starttime DATETIME = GetDate();
DECLARE @Counter INT = 0;
DECLARE @Iterations INT = 100; --Set to desired number of iterations
WHILE @Counter < @Iterations
BEGIN
--*********************************************************
--EXECUTE STATEMENT(S) YOU WANT TO SPEED-TEST HERE (Begin)
--*********************************************************
SELECT 'Placeholder for statements to test here instead';
--*********************************************************
--EXECUTE STATEMENT(S) YOU WANT TO SPEED-TEST HERE (End)
--*********************************************************
SELECT @Counter += 1;
END; --While loop
SELECT DateDiff(ms, @starttime, GetDate()) AS 'Elapsed Milliseconds';
User-Defined Function (UDF) — Elapsed Run-Time / Debug / Tracing Utility (simplified timing and tracing info; prints well-formatted too)
A common requirement when developing custom SQL-Server Transact-SQL Stored-Procedures, Queries, and Functions is to test performance (execution speed) and display such timing information for debugging and tuning purposes at various points throughout your source code. One option for displaying this type of run-time information is to include a simple SELECT and/or PRINT statement like this everywhere you need to display such information:
DECLARE @starttime DATETIME;
SELECT @starttime = GetDate();
-- ...the SQL code you wish to time here...
SELECT DateDiff(ms, @starttime, GetDate()); --Display milliseconds elapsed from value in @starttime
...or, alternatively and arguably better (aside from the dependency on the function call), you can adopt a common timing / tracing / debugging function like the one presented here (SQL source code appears below) to standardize the look and feel of your tracing information and display elapsed execution times in a more consistent and easily readable format than raw milliseconds.
This function will display:
- Start Time (DateTime)
- End Time (DateTime)
- Elapsed Time (as both ms — milliseconds — and the equivalent hours, minutes, and seconds)
my_debugging_Label_value_will_show_here:
 Start Time: 2006-05-03 11:04:27.280
   End Time: 2006-05-03 11:08:28.403
Elapsed Time: 241,123 Milliseconds ( i.e., 4 Minutes, 1 Seconds)
See my Running Subtotals Examples for this function being used within a multi-part query.
--**********************************************************************************************
--A function to help make tracing / timing simpler while producing clearer elapsed-time reports.
--
--PARAMETERS:
-- @vDebuggingLabel : an arbitrary label to display with elapsed-time information to help
-- with understanding the timing information context within your SQL execution.
-- @dtStartTime : the date/time (typically from GETDATE()) an operation you are timing began.
-- @dtEndTime : the date/time an operation you are timing ended; if not specified, uses current date-time via GETDATE().
--
--RETURNS:
-- Nicely formatted run-time information ready to print (by way of SELECT or PRINT statement).
-- The output of this function is a string value whose content will PRINT like this:
--
-- my_debugging_Label_value_will_show_here:
-- Start Time: 2006-05-03 11:04:27.280
-- End Time: 2006-05-03 11:08:28.403
-- Elapsed Time: 241,123 Milliseconds ( i.e., 4 Minutes, 1 Seconds)
--
--EXAMPLES / TESTS:
--PRINT [dbo].[udfUtility_PrintElapsedRuntimeDebugInfo]('TEST1 - Unspecified Start/End Dates', DEFAULT, DEFAULT);
--PRINT [dbo].[udfUtility_PrintElapsedRuntimeDebugInfo]('TEST2 - Unspecified End Date', GETDATE(), DEFAULT);
--PRINT [dbo].[udfUtility_PrintElapsedRuntimeDebugInfo]('TEST3 - Round to 1 second', '2013-10-17 18:18:03.653', '2013-10-17 18:18:04.853');
--PRINT [dbo].[udfUtility_PrintElapsedRuntimeDebugInfo]('TEST4 - Round to 2 seconds', '2013-10-17 18:18:03.653', '2013-10-17 18:18:05.453');
--PRINT [dbo].[udfUtility_PrintElapsedRuntimeDebugInfo]('TEST5 - Round to 1 minute, 2 seconds', '2013-10-17 18:18:03.653', '2013-10-17 18:19:05.453');
--PRINT [dbo].[udfUtility_PrintElapsedRuntimeDebugInfo]('TEST6 - Round to 49 Hours, 2 Minutes, 2 Seconds', '2013-10-15 18:18:03.653', '2013-10-17 19:20:05.453');
--PRINT [dbo].[udfUtility_PrintElapsedRuntimeDebugInfo]('TEST7 - backwards start/end adjusted - 1min, 2sec ', '2013-10-17 18:19:05.453', '2013-10-17 18:18:03.653');
--PRINT [dbo].[udfUtility_PrintElapsedRuntimeDebugInfo]('TEST8 - BAD DATE PARM FORMAT ERROR', 'X', DEFAULT); --Will bomb.
--**********************************************************************************************
CREATE FUNCTION [dbo].[udfUtility_PrintElapsedRuntimeDebugInfo]
(@vDebuggingLabel VARCHAR(500) = '',
@dtStartTime DATETIME = NULL,
@dtEndTime DATETIME = NULL)
RETURNS VARCHAR(MAX)
AS
BEGIN
DECLARE @dtTemp DATETIME;
--If specific start and/or end date-time not provided, use current time.
SELECT @dtStartTime = ISNULL(@dtStartTime, GETDATE());
SELECT @dtEndTime = ISNULL(@dtEndTime, GETDATE());
--Handle "backwards" start/end date-times condition
IF (@dtEndTime < @dtStartTime)
BEGIN
SELECT @dtTemp = @dtStartTime;
SELECT @dtStartTime = @dtEndTime;
SELECT @dtEndTime = @dtTemp;
END;
DECLARE @biElapsedMS BIGINT;
SELECT @biElapsedMS = DATEDIFF(ms, @dtStartTime, @dtEndTime);
DECLARE @iElapsedHH INT;
SELECT @iElapsedHH = @biElapsedMS / 3600000;
DECLARE @iElapsedMM INT;
SELECT @iElapsedMM = (@biElapsedMS - (@iElapsedHH * 3600000)) / 60000;
DECLARE @iElapsedSS INT;
SELECT @iElapsedSS = ROUND( (@biElapsedMS - (@iElapsedHH * 3600000) - (@iElapsedMM * 60000) ), -3) / 1000;
DECLARE @vElapsedMS VARCHAR(24);
--use MONEY type to place commas every three digits
SELECT @vElapsedMS = CONVERT(VARCHAR(24), CONVERT(MONEY,DATEDIFF(ms, @dtStartTime, @dtEndTime)), 1);
--Remove the decimal and trailing zeroes
SELECT @vElapsedMS = LEFT(@vElapsedMS, LEN(@vElapsedMS) - 3);
--NOTE: if the following assignment does not copy/paste correctly, it is simply a CR (Carriage Return) within a string.
DECLARE @vCR VARCHAR(1)
SELECT @vCR = '
'
--This section removes useless output of hours/minutes (if runtime was less than hour, don't show hours, etc).
DECLARE @vHoursMsg VARCHAR(30);
IF @biElapsedMS >= 3600000
SELECT @vHoursMsg = CONVERT(VARCHAR(3), @iElapsedHH) + ' Hours, '
ELSE
SELECT @vHoursMsg = '';
DECLARE @vMinutesMsg VARCHAR(30)
IF @biElapsedMS >= 60000
SELECT @vMinutesMsg = CONVERT(VARCHAR(2), @iElapsedMM) + ' Minutes, '
ELSE
SELECT @vMinutesMsg = '';
--Send the resulting message back to the calling routine now in print-ready form
RETURN @vDebuggingLabel + ': ' + @vCR +
' Start Time: ' + CONVERT(CHAR(24), CONVERT(DATETIME, @dtStartTime) , 21) + @vCR +
' End Time: ' + CONVERT(CHAR(24), CONVERT(DATETIME, @dtEndTime) , 21) + @vCR +
'Elapsed Time: ' + @vElapsedMS + ' Milliseconds (i.e., ' +
@vHoursMsg +
@vMinutesMsg +
CONVERT(VARCHAR(2), @iElapsedSS) + ' Seconds)' + @vCR;
END
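And here is how the function typically gets used inline while timing a block of work (the WAITFOR is just a placeholder for the statements you actually want to time):
--Simple inline usage of the function defined above:
DECLARE @starttime DATETIME = GETDATE();

WAITFOR DELAY '00:00:02'; --placeholder for the real work being timed

PRINT dbo.udfUtility_PrintElapsedRuntimeDebugInfo('My timed block', @starttime, DEFAULT);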