
Posts Categorized / Performance Tuning and Optimization

  • Dec 24 / 2013
  • 0
DBCC Commands, dbDigger, Maintenance plans, Performance Tuning and Optimization, SQL Server Agent scheduled Jobs

Optimal run of DBCC CHECKDB on production servers

DBCC CHECKDB is a built-in SQL Server command that analyzes and reports on database integrity from several angles. It checks the logical and physical integrity of all the objects in the specified database by performing the following operations:

  • Runs DBCC CHECKALLOC on the database
  • Runs DBCC CHECKTABLE on every table and view in the database
  • Runs DBCC CHECKCATALOG on the database
  • Validates the contents of every indexed view in the database

It is a useful command and should not be abandoned just because its execution is resource hungry. However, under a default deployment it may take significant time on production databases, and as the time window grows, the risk of performance degradation increases. You may have faced these issues several times on your production servers. The following shortcomings may be noticed in a default implementation:

  • Error or informational messages are not stored anywhere when the scheduled job runs
  • A check is required to exclude any databases other than ONLINE ones from the target databases list
  • A strategy is required to minimize the activity on the server

Solution

We may take a few steps to optimize the process and log its output. The following points are recommended:

  • Save a log file of the DBCC CHECKDB scheduled job output to disk. It may contain just the error messages, if any, or the informational messages as well.
  • Exclude the databases whose CHECKDB is not required. CHECKDB is performed only on ONLINE databases, so include a check in the script that filters out all databases that are not ONLINE.
  • Use the PHYSICAL_ONLY option. It greatly reduces the processing time and only considers the data as it is stored on the physical layer (a combined sketch follows this list).
  • Run the logical checks on a freshly restored copy of the same databases on another server. Logical checks are not dependent on the machine or disk, so this completely removes the load of logical checks from the production server and also validates the backups.
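As a starting point, here is a minimal T-SQL sketch of the first three points: it loops over ONLINE user databases and runs DBCC CHECKDB with PHYSICAL_ONLY. Writing the messages to disk is handled by the Output file setting of the SQL Server Agent job step; the NO_INFOMSGS option and the system database filter are assumptions you may adjust.

 -- Minimal sketch: run DBCC CHECKDB (PHYSICAL_ONLY) on every ONLINE user database.
 -- Schedule this as a SQL Server Agent job step and point its "Output file"
 -- setting to a path on disk to capture the messages.
 DECLARE @db  SYSNAME,
         @sql NVARCHAR(MAX)

 DECLARE db_cursor CURSOR FAST_FORWARD FOR
     SELECT name
     FROM sys.databases
     WHERE state_desc = 'ONLINE'      -- filter out databases that are not ONLINE
       AND database_id > 4            -- skip system databases (assumption)

 OPEN db_cursor
 FETCH NEXT FROM db_cursor INTO @db

 WHILE @@FETCH_STATUS = 0
 BEGIN
     SET @sql = N'DBCC CHECKDB (' + QUOTENAME(@db)
              + N') WITH PHYSICAL_ONLY, NO_INFOMSGS'   -- drop NO_INFOMSGS to keep informational messages
     PRINT @sql
     EXEC (@sql)
     FETCH NEXT FROM db_cursor INTO @db
 END

 CLOSE db_cursor
 DEALLOCATE db_cursor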

Page Verification Settings and DBCC CheckDB

It seems relevant here to discuss the relationship between the CHECKSUM page verification setting and DBCC CheckDB. PAGE_VERIFY is a database-level setting related to data recovery and integrity. Its value may be NONE, TORN_PAGE_DETECTION or CHECKSUM. For SQL Server 2005 and onwards, CHECKSUM is the default option for new databases. CHECKSUM is a more detailed and granular algorithm than TORN_PAGE_DETECTION and covers all the aspects of TORN_PAGE_DETECTION, although it may require more CPU resources. With CHECKSUM, SQL Server calculates a checksum over the contents of the whole page and stores the value in the page header when the page is written to disk.
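To see or change this setting, a small sketch follows; the database name is only an example.

 -- Check the current page verification setting of each database
 SELECT name, page_verify_option_desc
 FROM sys.databases

 -- Switch a database to CHECKSUM page verification (database name is illustrative)
 ALTER DATABASE AdventureWorks SET PAGE_VERIFY CHECKSUM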

A common misconception is that if we enable CHECKSUM page verification then we may skip the DBCC CheckDB command, since CHECKSUM will be evaluating data integrity.
That is not true at all. Page verification CHECKSUM is not an alternative to DBCC CheckDB; rather, it effectively enhances the scope of DBCC CheckDB. Page verification CHECKSUM is a limited, page-level verification, whereas DBCC CheckDB covers far more areas than that.
The page CHECKSUM is calculated and updated only when a data page is written back to disk. If a data page becomes corrupt after it has been saved to disk with its CHECKSUM calculated, we will not get a report of it until the page is read again. To verify pages through the stored CHECKSUM we still have to run DBCC CheckDB.
A detailed technical comparison is beyond the scope of this post. For a better verification framework we should use both features.

  • Apr 11 / 2012
  • 0
DBA best practices, dbDigger, Performance Tuning and Optimization, Publications of Atif Shehzad on MSSQLTips.com, SQL Server Filegroups

Configuring the default filegroup for a database with multiple filegroups

By default a single filegroup exists for a SQL Server database; it is the primary and default filegroup for that database. However, for better manageability and performance optimization we may be required to configure additional filegroups for a particular database, so that data may be spread over multiple files in different filegroups. SQL Server provides an option to change the default filegroup from the primary filegroup to any other. Read my article on Using Multiple File groups for a Database and Changing the Default File group.
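As a quick illustration, here is a minimal sketch of creating an additional filegroup and making it the default; the database, filegroup and file names are only examples.

 -- Add a new filegroup to the database (all names are illustrative)
 ALTER DATABASE AdventureWorks ADD FILEGROUP FG_Data2

 -- Add a data file to the new filegroup
 ALTER DATABASE AdventureWorks
 ADD FILE (NAME = N'AdventureWorks_Data2',
           FILENAME = N'C:\SQLData\AdventureWorks_Data2.ndf',
           SIZE = 100MB)
 TO FILEGROUP FG_Data2

 -- Make the new filegroup the default for future objects
 ALTER DATABASE AdventureWorks MODIFY FILEGROUP FG_Data2 DEFAULT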

  • Mar 30 / 2012
  • 0
DBA best practices, dbDigger, Performance Tuning and Optimization, SSMS tips and tricks, T-SQL Tips and Tricks

Move the columns to required position in SSMS result pane

It is always good to have a clear idea of the filtered data and the order of columns for any required result. However, consider the scenario where you have executed a resource-intensive query and got the data in the SSMS results pane, and after having a look at the result set you realize that moving a column before a specific one would improve the report.
One option would be to change the order of columns in the query and re-generate the result set in the required order, but this would again consume resources for that resource-intensive query. A better solution for changing the column order in the result set is simply to drag the column to the required position.
Here is an example: I retrieved a result set from the AdventureWorks database and later moved the column [Phone] from the last to the second-last position.

Swapping the columns in the SSMS results pane

This simple tip may prevent re-execution of the query just to re-order columns.

  • Mar 07 / 2012
  • 0
Data Modeling and Database Design, DBA best practices, dbDigger, Identity Columns, Performance Tuning and Optimization

Utilize the negative half of identity column data type

Identity columns are auto-generated sequences used to ensure row uniqueness at the table level. The following data types of the numeric family may be used as the data type of an identity column:

  • int
  • bigint
  • smallint
  • tinyint
  • decimal/numeric

All of the above data types of the numeric family may store negative values except TINYINT. It is important to note that half of the storage capacity of these data types is assigned to negative values. For example, SMALLINT can hold values from -32768 to 32767: the negative values -32768 to -1, zero, and the positive values 1 to 32767. Please also have a look at the following table (Microsoft TechNet) to get an idea about all the numeric data types.

Utilize the negative part of data type for identity columns

We can see that, except for the TINYINT data type, all the other numeric data types have half of their capacity reserved for negative values.

Point to Consider

The point to consider here is that the identity column capacity should be fully utilized by including the negative part of the data type as well; otherwise you are going to leave almost half of the data type's capacity unused. The simple trick is to give the lowest negative end of the data type as the SEED value of the identity column. Consider the following demo:

 -- Create table for demo
 IF EXISTS (SELECT * FROM sys.objects WHERE name = 'IdentityDemo')
     DROP TABLE IdentityDemo
 GO
 CREATE TABLE IdentityDemo (id SMALLINT IDENTITY(1,1), EmpName VARCHAR(50))
 GO
 -- Insert more rows than the positive range of SMALLINT allows
 INSERT INTO IdentityDemo VALUES ('Atif')
 GO 32769

In the above demo we created the table using the SMALLINT data type with both seed and increment set to 1. So the insert failed once the values exceeded 32767 and an error was generated.

Error when identity column is saturated

The same may be verified by selecting the data from the table:

 -- Verify the inserted rows
 SELECT * FROM IdentityDemo ORDER BY id DESC

At this point the identity column is saturated while the negative portion of the data type sits there unused. The following demo utilizes the negative portion as well and provides double the storage capacity:

 -- Create table for demo with a negative seed
 IF EXISTS (SELECT * FROM sys.objects WHERE name = 'IdentityDemo')
     DROP TABLE IdentityDemo
 GO
 CREATE TABLE IdentityDemo (id SMALLINT IDENTITY(-32768,1), EmpName VARCHAR(50))
 GO
 -- Insert more rows than the positive range of SMALLINT alone would allow
 INSERT INTO IdentityDemo VALUES ('Atif')
 GO 34769

This time the insertion succeeds for all 34769 rows, which can be verified by:

 -- Verify the inserted rows
 SELECT * FROM IdentityDemo ORDER BY id DESC

The same applies to INT, BIGINT and DECIMAL/NUMERIC as well. So treat the identity column as a valuable resource and fully utilize it to avoid early saturation.

  • Jun 18 / 2010
  • 0
Data Modeling and Database Design, dbDigger, Performance Tuning and Optimization

Better approach for storing images in SQL Server

Storing images in the database rather than storing their paths (on disk) is mostly avoided for performance reasons. The obvious benefit is that the images are backed up along with the database, so there is no need to take care of image backups separately.
On the other hand, storing only the image paths and retrieving the images from the disk storage system performs better. However, these are design considerations that arise rarely compared to taking over an existing database.

Suppose you take over an existing database and find that, instead of paths to images on disk, the images themselves are stored in the tables. This situation demands some extra consideration while working with images stored in binary format inside the tables.

Some examples of these limitations are

  • Images may not be included in a DISTINCT list
  • Images cannot be used with UNION; however, you may use them with UNION ALL
  • LEN() may not be used with images; you may use DATALENGTH() to get the bytes used by an image (see the sketch after this list)
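For instance, here is a minimal sketch of measuring the size of stored images with DATALENGTH(); the table and column names are only examples.

 -- DATALENGTH() returns the number of bytes stored in a binary image column
 -- (table and column names are illustrative)
 SELECT PhotoID, DATALENGTH(Photo) AS PhotoSizeInBytes
 FROM dbo.ProductPhotos
 ORDER BY PhotoSizeInBytes DESC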

My Suggestion:

If insert and update operations for images are rare in your scenario, then a better option would be to store the image in binary format and, at the same time, store the path of the image on disk. Later the image may be retrieved from disk through the path stored in the database, while the binary copy makes sure the images are also backed up along with the database.
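A minimal sketch of a table designed along these lines, assuming purely illustrative names:

 -- Table storing both the binary copy of the image and its path on disk
 -- (all names are illustrative)
 CREATE TABLE dbo.ProductPhotos
 (
     PhotoID   INT IDENTITY(1,1) PRIMARY KEY,
     Photo     VARBINARY(MAX) NULL,   -- binary copy, backed up with the database
     PhotoPath NVARCHAR(260)  NULL    -- path of the image file on disk, used for retrieval
 )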