
Posts Categorized / DBCC Commands

  • Dec 24 / 2013
  • 0
DBCC Commands, dbDigger, Maintenance plans, Performance Tuning and Optimization, SQL Server Agent scheduled Jobs

Optimal run of DBCC CHECKDB on production servers

DBCC CHECKDB is a built-in SQL Server command that analyzes and reports on database integrity in various aspects. It checks the logical and physical integrity of all the objects in the specified database by performing the following operations:

  • Runs DBCC CHECKALLOC on the database
  • Runs DBCC CHECKTABLE on every table and view in the database
  • Runs DBCC CHECKCATALOG on the database
  • Validates the contents of every indexed view in the database
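
As a simple illustration, a minimal run against a single database looks like this (AdventureWorks is used only as a sample name); NO_INFOMSGS suppresses the informational messages so that only errors are reported:

DBCC CHECKDB ('AdventureWorks') WITH NO_INFOMSGS
GO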

It is a useful command and should not be abandoned because of its resource-hungry execution. However, under a default deployment it may take significant time on production databases, and as the time window grows, so does the risk of performance degradation. You may have faced these issues several times on your production servers. The following shortcomings may be noticed in a default implementation:

  • Error and informational messages are not stored anywhere when the scheduled job runs
  • A check is required to exclude any databases other than ONLINE ones from the target databases list
  • A strategy is required to minimize the activity on the server

Solution

We may take a few steps to optimize the process and log its output. The following points are recommended:

  • Save a log file of the DBCC CHECKDB scheduled job output to disk. It may contain just the error messages, if any, or the informational messages as well.
  • Exclude the databases for which CHECKDB is not required. CHECKDB runs only against ONLINE databases, so include a check in the script that filters out all databases that are not ONLINE (see the sketch after this list).
  • Use the PHYSICAL_ONLY option. It greatly reduces processing time by considering only the data as it is stored on the physical layer.
  • Perform the logical checks on a freshly restored copy of the same databases on another server. Logical checks do not depend on the machine or disk, so this completely removes their load from the production server. This process also validates the backups.
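
A minimal sketch of such a job step is shown below. It relies only on the system catalog view sys.databases; schedule it as a SQL Server Agent job and point the job step output to a log file on disk (Job Step properties > Advanced > Output file) to cover the logging recommendation:

DECLARE @name sysname
DECLARE dbs CURSOR FOR
    SELECT name FROM sys.databases
    WHERE state_desc = 'ONLINE'   -- filter out OFFLINE, RESTORING, SUSPECT etc.
OPEN dbs
FETCH NEXT FROM dbs INTO @name
WHILE @@FETCH_STATUS = 0
BEGIN
    -- PHYSICAL_ONLY keeps the production run short;
    -- full logical checks can run on a restored copy elsewhere
    EXEC ('DBCC CHECKDB ([' + @name + ']) WITH PHYSICAL_ONLY, NO_INFOMSGS')
    FETCH NEXT FROM dbs INTO @name
END
CLOSE dbs
DEALLOCATE dbs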

Page Verification Settings and DBCC CheckDB

It seems relevant here to discuss the relationship between the CHECKSUM page verification setting and DBCC CheckDB. PAGE_VERIFY is a database-level setting related to data recovery and integrity. Its value may be NONE, TORN_PAGE_DETECTION, or CHECKSUM. From SQL Server 2005 onwards, CHECKSUM is the default option for new databases. CHECKSUM is a more detailed and granular algorithm than TORN_PAGE_DETECTION and covers all aspects of TORN_PAGE_DETECTION, although it may require more CPU resources. With CHECKSUM enabled, SQL Server calculates a checksum over the contents of the whole page and stores the value in the page header whenever the page is written to disk.
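
To illustrate, the current setting can be inspected and changed as follows (AdventureWorks is again just a sample database name):

SELECT name, page_verify_option_desc FROM sys.databases
GO
ALTER DATABASE AdventureWorks SET PAGE_VERIFY CHECKSUM
GO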

A common misconception is that if we enable CHECKSUM page verification then we may skip the DBCC CheckDB command, since CHECKSUM will be evaluating data integrity.
This is not true at all. Page verification CHECKSUM is not an alternative to DBCC CheckDB; rather, it effectively enhances the scope of DBCC CheckDB. Page verification CHECKSUM is a limited-scope, page-level verification, whereas DBCC CheckDB covers far more areas than that.
The page checksum is calculated and updated only when a data page is written back to disk. If a page becomes corrupt after it has been written to disk with its checksum calculated, we will not get a report of it until the page is read again. To verify the stored checksums proactively, we still have to run DBCC CheckDB.
The technical details of the differences are beyond the scope of this post. For a better verification framework, we should use both features.

  • Oct 04 / 2013
  • 0
DBCC Commands, dbDigger, Monitoring and Analysis, Transaction log files

Get total size of log file along with used space percentage for each SQL Server database

Each SQL Server database has a transaction log file (there may be more than one, but at least one always exists). The log file keeps growing when the database is in the full or bulk-logged recovery model, so DBAs implement strategies to manage the log file size. There are two aspects of log file size:

  • One is the log file size that it occupies on disk, as shown at the OS level.
  • The second relates to the filled portion of the overall log file.

Suppose a log file is 20 GB on disk; it is quite possible that only 1 GB of it is filled with log records and the remaining 19 GB is empty. SQL Server will keep populating the remaining part until it is full, and will then claim more space according to the file growth settings.
It is a basic requirement to have an idea of the log file size and its used space. The following is a very simple way to get this information. You may also insert the information into a table on a daily or weekly basis and process alerts from it (a sketch of this follows the result below).

DBCC SQLPERF (LOGSPACE)
GO

[Screenshot: output of DBCC SQLPERF (LOGSPACE)]

As a result we get the database name and total log file size, as well as the percentage of log space used. The last column, Status, is always zero.
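
As a sketch of the logging idea mentioned above, the command's output can be captured into a table with INSERT ... EXEC; dbo.LogSpaceHistory is a hypothetical table name used only for illustration:

CREATE TABLE dbo.LogSpaceHistory
(
    CapturedAt      datetime NOT NULL DEFAULT GETDATE(),
    DatabaseName    sysname NOT NULL,
    LogSizeMB       decimal(18, 5) NOT NULL,
    LogSpaceUsedPct decimal(9, 5) NOT NULL,
    Status          int NOT NULL
)
GO
-- capture one snapshot per run; schedule daily or weekly
INSERT INTO dbo.LogSpaceHistory (DatabaseName, LogSizeMB, LogSpaceUsedPct, Status)
EXEC ('DBCC SQLPERF (LOGSPACE)')
GO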

  • May 21 / 2009
  • 0
DBCC Commands, dbDigger, Monitoring and Analysis

List SET options for current connection through DBCC USEROPTIONS

Often I am required to verify one of the batch-level options for my current connection. This task may be easily completed with the informational DBCC command DBCC USEROPTIONS, in the following way:

DBCC USEROPTIONS
GO

Options are listed in a simple two-column format of option name and value.

[Screenshot: list of SET options for the current connection]

It is also relevant to mention here that options may exist at the following levels:

  • SQL Server instance level
  • SQL Server database level
  • T-SQL batch level
  • T-SQL statement level

OLE DB applications can control some ANSI options through their connection properties. Microsoft recommends not changing these options through SET statements; instead, use the connection properties of ODBC/OLE DB or the sp_configure system stored procedure.

  • A database-level option overrides the instance-level option
  • A batch-level option overrides the database-level option
  • A statement-level option overrides the batch-level option (a small illustration follows this list)
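
As a small illustration of this precedence, a session-level SET statement overrides the instance and database defaults for the current connection only. DATEFORMAT is used here because it appears in the DBCC USEROPTIONS output:

SET DATEFORMAT dmy   -- session-level override of the language default
GO
DBCC USEROPTIONS     -- the dateformat row now reports dmy for this connection
GO
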
  • May 21 / 2009
  • 0
DBA thoughts, DBCC Commands, dbDigger, System Functions, System Stored Procedures

Choosing between system objects and customized scripts

Most of the time I am amazed to notice that there are several commands, stored procedures, and functions provided by SQL Server that may be used effectively during many tasks. We know these objects as DBCC commands, system stored procedures, DMVs, and system functions. For efficiency and effectiveness it is important for a DBA to stay up to date on all such system-provided objects; it prevents reinventing the wheel.

  • Click here to get a list of SQL Server T-SQL system stored procedures
  • Click here to get a list of SQL Server T-SQL system functions
  • Click here to get a list of SQL Server T-SQL DBCC commands
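
Before writing a custom script, a quick way to browse what the engine already ships is to query the catalog. This small sketch lists all Microsoft-shipped objects by type:

SELECT name, type_desc
FROM sys.all_objects
WHERE is_ms_shipped = 1   -- system-supplied objects only
ORDER BY type_desc, name
GO
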
  • Mar 30 / 2009
  • 1
DBCC Commands, dbDigger, T-SQL Scripts, T-SQL Tips and Tricks, Transaction log files

How to truncate the transactional log

In one of my previous posts, Dealing with the growth of Transactional Log Files, I briefly discussed the function of transaction log files in SQL Server, along with some measures to prevent a transaction log file from growing by a large amount.
There may be situations where we have to get rid of an already grown file. In that case the obvious workaround is to truncate the transaction log. To truncate the transaction log for the AdventureWorks database:

ALTER DATABASE AdventureWorks
SET RECOVERY SIMPLE
GO
DBCC SHRINKFILE (LogFileNameofAdventureWorks)   -- logical name of the log file
GO
ALTER DATABASE AdventureWorks
SET RECOVERY FULL
GO

Under the simple recovery model, SQL Server does not retain the transaction log for that database, so the inactive portion of the log is truncated once the recovery model is set to simple.
It is important to mention that ongoing transactions should be considered before changing recovery models. Also note that switching to simple breaks the log backup chain, so a full (or differential) backup should be taken after returning to full recovery.
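
A sketch of that follow-up step, with a placeholder disk path:

BACKUP DATABASE AdventureWorks
TO DISK = N'D:\Backups\AdventureWorks_full.bak'   -- placeholder path
GO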

  • Mar 21 / 2009
  • 0
DBA best practices, DBA Interview questions, DBCC Commands, dbDigger, Transaction log files

How to Prevent the Unexpected Growth of transactional log files in SQL Server

To prevent the transaction log files from growing unexpectedly, consider using one of the following methods:

  • Set the size of the transaction log files to a large value to avoid the automatic expansion of the transaction log files.
  • Configure the automatic expansion of the transaction log files by using fixed memory units (for example, MB) instead of a percentage, after you thoroughly evaluate the optimum size.
  • Change the recovery model. With the simple recovery model, you can recover your database only to the most recent backup of the database. With the full or bulk-logged recovery model, you can recover to the point when the failure occurred by restoring the database along with transaction log backups. By default, in SQL Server 2000 and SQL Server 2005 the recovery model for a database is set to Full. Under the full recovery model, regular backups of the transaction log prevent the log file size from growing out of proportion to the database size. However, if those log backups are not performed, the transaction log grows until it fills the disk, and you may not be able to perform any data modification operations on the database. You can change the recovery model from full to simple if you do not want to use transaction log backups during a disaster recovery operation.
  • Back up the transaction log regularly to remove the inactive transactions from the log (a minimal example follows this list).
  • Design the transactions to be small.
  • Make sure that no uncommitted transactions continue to run for an indefinite time.
  • Schedule the Update Statistics option to occur daily.
  • To defragment the indexes in your production environment, use the DBCC INDEXDEFRAG Transact-SQL statement instead of the DBCC DBREINDEX statement. If you run DBCC DBREINDEX, the transaction log may expand significantly when the database is in the full recovery model. Additionally, DBCC INDEXDEFRAG does not hold locks for a long time, unlike DBCC DBREINDEX.
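
A minimal sketch of the regular log backup from the list above (schedule it with a SQL Server Agent job; the path is a placeholder):

BACKUP LOG AdventureWorks
TO DISK = N'D:\Backups\AdventureWorks_log.trn'   -- placeholder path
GO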