
Posts Categorized / T-SQL Scripts

  • May 07 / 2009
  • 0
dbDigger, Dynamic Management Views DMV, Monitoring and Analysis, T-SQL Scripts

Using sys.dm_exec_requests to monitor slow response of SQL Server

It is very important for a DBA to monitor the response time of SQL Server for connected users. Finer details like reads, writes, and resource consumption may be analyzed later, but the first priority should be to pinpoint slow response times. Several DMVs and other system data may be utilized to monitor slow response in SQL Server. For this post I have used sys.dm_exec_requests:

 
USE master
GO
SELECT start_time, DB_NAME(database_id) AS [database], st.TEXT AS [statement],
[status], command, wait_type, wait_time,
wait_resource, last_wait_type,
CASE transaction_isolation_level
WHEN 0 THEN 'Unspecified'
WHEN 1 THEN 'ReadUncommitted'
WHEN 2 THEN 'ReadCommitted'
WHEN 3 THEN 'RepeatableRead'
WHEN 4 THEN 'Serializable'
WHEN 5 THEN 'Snapshot'
END AS [transaction isolation],
executing_managed_code
FROM sys.dm_exec_requests
CROSS APPLY sys.dm_exec_sql_text(sql_handle) st
ORDER BY wait_type
GO

(Screenshot: sys.dm_exec_requests result)
The [wait_type] column is important when a wait type is shown for a request, and it becomes critical if the wait is caused by a lock; such wait types begin with the LCK prefix. The T-SQL statement being executed is also returned, so the DBA gets a clue about which statement is experiencing the slow response.
The above script just provides a clue. If you notice a problem, further columns from sys.dm_exec_requests may be included, and other ways to track down the problem may be used; a filtered variant is sketched below.
For details of the columns in sys.dm_exec_requests, see its topic in Books Online.
Since a DMV is used, the script applies to SQL Server 2005 and later.
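
When lock waits do appear, a quick next step is to filter the same DMV down to them. The following is a minimal sketch using only documented columns of sys.dm_exec_requests; the [blocking_session_id] column shows which session holds the contested resource.

USE master
GO
-- List only the requests currently waiting on locks (wait types with the LCK prefix)
SELECT session_id, blocking_session_id,
wait_type, wait_time, wait_resource,
DB_NAME(database_id) AS [database], st.TEXT AS [statement]
FROM sys.dm_exec_requests
CROSS APPLY sys.dm_exec_sql_text(sql_handle) st
WHERE wait_type LIKE 'LCK%'
ORDER BY wait_time DESC
GO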

  • Apr 23 / 2009
  • 0
dbDigger, System Functions, T-SQL Interview Questions, T-SQL Scripts

Select random records from a table by using NEWID()

Normally a SELECT TOP 1000 style query is used to retrieve sample data from a table with a large number of records. But the records retrieved through this approach do not come from all over the table, so they cannot be considered a random sample of that table's data. A simple approach to get sample data from all over the table is to use NEWID() with the following syntax:

USE AdventureWorks
GO

SELECT TOP 100 FirstName, LastName, EmailAddress, Phone
FROM Person.Contact
ORDER BY NEWID()
GO

The above script will generate a random sample of data from the Person.Contact table. You may also use a WHERE clause to further filter the records if required, as the sketch below shows.
I have also noted that ORDER BY NEWID() costs about the same as ordering by any other non-indexed column. The difference shows up with a clustered index: ordering by a clustered index column is more economical than ORDER BY NEWID().
This was one of many uses of NEWID(). We will look at other uses of NEWID() in coming posts.
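
As a minimal sketch of the WHERE clause filtering mentioned above (the filter value is only for illustration), the sample can be restricted to a subset of rows before the random ordering is applied:

USE AdventureWorks
GO
-- Random sample of 100 contacts, drawn only from last names starting with S
SELECT TOP 100 FirstName, LastName, EmailAddress, Phone
FROM Person.Contact
WHERE LastName LIKE 'S%' -- illustrative filter only
ORDER BY NEWID()
GO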

  • Apr 21 / 2009
  • 0
Constraints and Keys, dbDigger, T-SQL Scripts

How to add a unique key constraint

A primary key is used to avoid duplication of records in a table. There may be business requirements where a single column, or a group of columns other than the primary key, is required to be unique. Along with implementing this logic in application code, it is better to also enforce the constraint at the database layer.
To walk through a small example, create a test table:

 
USE AdventureWorks
GO
CREATE TABLE testTable
(RecordID SMALLINT IDENTITY(1,1) PRIMARY KEY,
courseName VARCHAR(30),
startDate DATETIME,
endDate DATETIME)
GO

TestTable now exists with four columns in total. RecordID is the primary key with the identity property. Now it is required that the combination of course name and start date be unique. To implement this logic, let us alter our table to add a unique key constraint on the two columns [courseName] and [startDate].

 
USE AdventureWorks
GO
ALTER TABLE testTable
ADD CONSTRAINT UK_testTable_testinguniquekeys
UNIQUE NONCLUSTERED (courseName, startDate)
GO

Now let us check that the constraint enforces the required uniqueness:

 
USE AdventureWorks
GO
INSERT INTO testTable (courseName, startDate, endDate)
VALUES ('Course1', '2009-04-21', '2009-06-15')
GO
INSERT INTO testTable (courseName, startDate, endDate)
VALUES ('Course1', '2009-04-21', '2009-06-20')
GO

The first row is inserted successfully, but the second row fails with the following error because its courseName and startDate are the same as the first row's:

(1 row(s) affected)
Msg 2627, Level 14, State 1, Line 2
Violation of UNIQUE KEY constraint 'UK_testTable_testinguniquekeys'. Cannot insert duplicate key in object 'dbo.testTable'.
The statement has been terminated.

Similarly, a unique key constraint may be implemented for a single column or for a group of columns, as the sketch below shows. Another example of creating a unique key constraint through SSMS was covered in an earlier post.
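
For completeness, a single-column version on the same test table might look like the following; the constraint name here is just an example:

USE AdventureWorks
GO
-- A unique key constraint on a single column; existing data must already be unique in that column
ALTER TABLE testTable
ADD CONSTRAINT UK_testTable_courseName
UNIQUE NONCLUSTERED (courseName)
GO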

  • Apr 16 / 2009
  • 0
dbDigger, Monitoring and Analysis, System Stored Procedures, T-SQL Scripts

Get number of rows and size of a table through sp_spaceused

sp_spaceused is a system stored procedure that provides the number of rows in a table and the space it uses. For example, to get space-used information for Sales.SalesOrderDetail in the AdventureWorks database:

USE AdventureWorks
GO
EXEC sp_spaceused 'Sales.SalesOrderDetail'
GO

(Screenshot: sp_spaceused output)
It returns six columns:

  • [name] is the name of the table that we provided as the parameter
  • [rows] is the number of rows in that table
  • [reserved] is the total space allocated to the table and its indexes
  • [data] is the amount of space used by the data in the table
  • [index_size] is the amount of space used by the table's indexes
  • [unused] is the amount of space that is allocated but not currently in use by the table

data + index_size + unused = reserved
If no parameter is provided, sp_spaceused generates space statistics for the database as a whole. To get more out of it, we may combine sp_spaceused with sp_MSforeachdb or sp_MSforeachtable, as sketched below.
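
As a minimal sketch of the sp_MSforeachtable combination (an undocumented procedure, so its behavior may change between versions), the '?' placeholder is substituted with each user table name in turn:

USE AdventureWorks
GO
-- Run sp_spaceused against every user table in the current database
EXEC sp_MSforeachtable 'EXEC sp_spaceused ''?'''
GO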

  • Apr 15 / 2009
  • 1
dbDigger, T-SQL Enhancements, T-SQL Interview Questions, T-SQL Scripts, T-SQL Tips and Tricks

Use INTERSECT to select common values between two tables

The INTERSECT operator was introduced in SQL Server 2005. It returns the distinct rows that are common to the result sets on both of its sides, so it may be used to get the common rows between two tables.
Consider a case where we want to select the common rows containing name, designation, and department from two tables named staff and officers.
In this scenario we can use the INTERSECT operator in the following way:

SELECT nameofStaff, designationofStaff, departmentofStaff
FROM staff
INTERSECT
SELECT nameofOfficer, designationofOfficer, departmentofOfficer
FROM Officers

The above script will provide us the distinct rows that are common on the given columns. The number of columns must be the same on both sides of INTERSECT, and the data types of corresponding columns must be compatible. For more details please see BOL.
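
As a self-contained sketch, the staff and officers tables are not part of any sample database, so the definitions and rows below are assumptions made only to demonstrate the operator:

-- Hypothetical tables and data to demonstrate INTERSECT
CREATE TABLE staff (nameofStaff VARCHAR(50), designationofStaff VARCHAR(50), departmentofStaff VARCHAR(50))
CREATE TABLE Officers (nameofOfficer VARCHAR(50), designationofOfficer VARCHAR(50), departmentofOfficer VARCHAR(50))
GO
INSERT INTO staff VALUES ('Alice', 'Manager', 'Sales')
INSERT INTO staff VALUES ('Bob', 'Clerk', 'Accounts')
INSERT INTO Officers VALUES ('Alice', 'Manager', 'Sales')
GO
-- Returns the single common row: Alice, Manager, Sales
SELECT nameofStaff, designationofStaff, departmentofStaff FROM staff
INTERSECT
SELECT nameofOfficer, designationofOfficer, departmentofOfficer FROM Officers
GO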

  • Mar 30 / 2009
  • 1
DBCC Commands, dbDigger, T-SQL Scripts, T-SQL Tips and Tricks, Transaction log files

How to truncate the transaction log

In one of the previous posts, Dealing with the growth of Transaction Log Files, I briefly discussed the function of transaction log files in SQL Server and covered some measures to prevent a transaction log file from growing too large.
There may be a situation where we have to get rid of an already grown file. In that case the obvious workaround is to truncate and shrink the transaction log. To do so for the AdventureWorks database:

ALTER DATABASE AdventureWorks
SET RECOVERY SIMPLE
GO
-- Use the logical name of the AdventureWorks log file here (see the lookup query at the end of this post)
DBCC SHRINKFILE (LogFileNameofAdventureWorks)
GO
ALTER DATABASE AdventureWorks
SET RECOVERY FULL
GO

In simple recovery mode SQL Server does not retain the full transaction log for the database, so the inactive portion of the log is truncated once the recovery model is set to SIMPLE, and the file can then be shrunk.
It is important to mention that ongoing transactions, and the break this causes in the log backup chain, should be considered before changing recovery models.
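
If the logical name of the log file is not known, it can be looked up first; a minimal sketch using the sys.database_files catalog view:

USE AdventureWorks
GO
-- Find the logical name of the transaction log file for use with DBCC SHRINKFILE
SELECT name, physical_name
FROM sys.database_files
WHERE type_desc = 'LOG'
GO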
