
Posts Categorized / Performance Tuning and Optimization

  • Aug 06 / 2008
Cursors, DBA Interview questions, dbDigger, Performance Tuning and Optimization

Something good about cursor performance

Cursors are mostly notorious for performance degradation and resource consumption, and are usually treated as a last resort. But there are situations where a cursor can help you boost performance. If your application or procedure requires the repeated use of a set of records, it is faster to create a cursor once and reuse it several times than to repeatedly query the database. But do not forget to properly close the cursor at the end and deallocate its resources.
Follow these steps to create, use, and close a database cursor:

  • Create the cursor
  • Open the cursor for use within the procedure or application
  • Fetch a record’s data one row at a time until you have reached the end of the cursor’s records
  • Close the cursor when you are finished with it
  • Deallocate the cursor to completely discard it

Consider the following example of a cursor created over pubs..authors:

 
DECLARE @fName VARCHAR(40)
DECLARE @lName VARCHAR(40)
DECLARE @city VARCHAR(40)

-- Declare a cursor over the authors table of the pubs sample database
DECLARE OAKLAND CURSOR FOR
SELECT au_fname, au_lname, city
FROM pubs..authors

OPEN OAKLAND

-- Fetch the first row before entering the loop
FETCH NEXT FROM OAKLAND INTO @fName, @lName, @city
WHILE @@FETCH_STATUS = 0
BEGIN
    IF (@city = 'Oakland')
        PRINT @city
    -- Fetch the next row at the end of each iteration,
    -- so every row is checked exactly once
    FETCH NEXT FROM OAKLAND INTO @fName, @lName, @city
END

CLOSE OAKLAND
DEALLOCATE OAKLAND
GO

You may find more detailed coverage of cursor usage in Books Online.

  • Jul 09 / 2008
Cursors, DBA Interview questions, dbDigger, Performance Tuning and Optimization

Cursor performance

Question: Of course, we all know that cursors are slow on SQL Server and should be avoided when possible. However, there are a few cases where they can’t be avoided, for instance because an external process has to be invoked for each row, or because you run into one of the very rare problems where a cursor actually runs faster than a set-based solution.

So, suppose you find yourself in a situation where you do need a cursor – which cursor option must be specified in order to assure maximum performance?

Answer: STATIC

Explanation: The default options are actually the slowest, since they create a dynamic cursor. In spite of its name, FAST_FORWARD does not result in the fastest performance; STATIC has been shown to be faster. Specifying FORWARD_ONLY or READ_ONLY on a static cursor will not affect performance.
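
For illustration, here is a minimal sketch of where the STATIC option fits in the DECLARE CURSOR syntax, reusing the pubs..authors table from the post above (the cursor name authors_cur is just an example):

DECLARE @lName VARCHAR(40)

-- STATIC materializes a private snapshot of the result set in tempdb,
-- so later fetches do not touch the base table
DECLARE authors_cur CURSOR LOCAL STATIC FOR
SELECT au_lname FROM pubs..authors

OPEN authors_cur
FETCH NEXT FROM authors_cur INTO @lName
WHILE @@FETCH_STATUS = 0
BEGIN
    PRINT @lName
    FETCH NEXT FROM authors_cur INTO @lName
END

CLOSE authors_cur
DEALLOCATE authors_cur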

Ref: DECLARE CURSOR

Note: The question of the day is taken from SqlServerCentral.com. I want to compile and share selected QODs.

  • Jul 04 / 2008
DBA best practices, dbDigger, Performance Tuning and Optimization

Considerations for using the TRUNCATE TABLE command

While deleting bulk data from a table, the TRUNCATE TABLE command is preferred over DELETE. The obvious reason is that it is minimally logged and thus more efficient than DELETE, which is a fully logged operation.

The Syntax for this statement is:

TRUNCATE TABLE table_name

table_name is the name of the table to truncate, that is, the table from which all rows are removed.

But while using this powerful alternative, one should be aware of some more facts:

  • TRUNCATE TABLE is not fully unlogged; it removes the data by deallocating the data pages used to store the table's data, and only the page deallocations are recorded in the transaction log, whereas DELETE removes (and logs) the data row by row.
  • Besides deleting all of the records in a table, this command will also reset the seed of any IDENTITY column back to its original value.
  • After you have run the TRUNCATE TABLE statement, it is important to manually update the statistics on the table using UPDATE STATISTICS. Running TRUNCATE TABLE will not reset the statistics for the table, which means that as you add data to the table, the statistics for that table will be incorrect for a period of time. Of course, if you wait long enough, and if you have Auto Update Statistics turned on for the database, the statistics will eventually catch up. But this may not happen quickly, resulting in slow-performing queries because the Query Optimizer is using out-of-date statistics.
  • TRUNCATE TABLE will not work when there are FOREIGN KEY references to the table. A workaround is to drop the constraints before firing the TRUNCATE.
  • Although TRUNCATE TABLE deallocates pages rather than logging individual row deletions, it can still be rolled back when issued inside an explicit transaction; once committed, though, the removed data cannot be recovered without a backup (see the sketch after this list).
  • Because TRUNCATE TABLE does not log individual row deletions, it cannot activate a DELETE trigger.
  • TRUNCATE is categorized as a DDL command.
  • You cannot truncate a table that participates in an indexed view.
  • You cannot truncate a table that is published using transactional replication or merge replication.
  • To truncate a table, you need at least ALTER permission on the table (which is granted by default to the table owner, members of the sysadmin fixed server role, and the db_owner and db_ddladmin fixed database roles).
  • TRUNCATE TABLE cannot be used with a WHERE clause, so no condition can be applied while truncating a table. That makes it a lethal weapon; use it with care!
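
A minimal sketch of the identity-reset and rollback behavior noted above, using a hypothetical throwaway table dbo.DemoRows:

-- Hypothetical demo table
CREATE TABLE dbo.DemoRows (id INT IDENTITY(1,1), val VARCHAR(10))
INSERT INTO dbo.DemoRows (val) VALUES ('a')
INSERT INTO dbo.DemoRows (val) VALUES ('b')

-- TRUNCATE removes all rows and resets the IDENTITY seed
TRUNCATE TABLE dbo.DemoRows
INSERT INTO dbo.DemoRows (val) VALUES ('c')
SELECT id FROM dbo.DemoRows        -- returns 1, not 3

-- Inside an explicit transaction a truncate can still be rolled back
BEGIN TRANSACTION
TRUNCATE TABLE dbo.DemoRows
ROLLBACK TRANSACTION
SELECT COUNT(*) FROM dbo.DemoRows  -- the row inserted above is back

DROP TABLE dbo.DemoRows
GO
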
  • Jul 04 / 2008
dbDigger, Performance Tuning and Optimization, Transaction log files

Dealing with the growth of transaction log files

Transaction log files are important in the optimization of database servers, and a good DBA should have comprehensive knowledge of how to optimize and use them. This area is mostly ignored by DBAs of small databases, because their logs stay minimal compared to the massive storage available and they do not face performance issues or bottlenecks.

What is in the log file?
When SQL Server is operating, the database engine keeps track of almost every change taking place within the database by making entries into the transaction log, so that they can be used later if required. The location of the SQL Server transaction log is configured at the time the database is created. In SQL Server, each database contains at least one data file and one transaction log file. SQL Server stores the data physically in the data file, while the transaction log file stores the details of all the modifications performed on the database and of the transactions that performed each modification. Because this is a fundamental requirement, logging the details of transactions cannot be turned off in SQL Server. Transaction logs are used in backup and recovery, so placing them on a disk other than the one occupied by the primary data files is usually a good idea.

Structure of transaction log file
The transaction log file is logically divided into smaller segments that are referred to as virtual log files (VLFs). Modifications that SQL Server makes to the size of the transaction log file, such as truncating it or growing it, are performed in units of virtual log files. If the transaction log file of a database fills up and you have set the option for transaction log files to grow automatically, the file grows in units of virtual log files.
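
As an aside, the undocumented DBCC LOGINFO command (an unsupported diagnostic, so treat its output as informational only) returns one row per virtual log file, which makes the VLF layout of a log file visible:

-- Replace YourDatabase with the database you want to inspect
USE YourDatabase
GO
-- One row per virtual log file; a Status value of 2 marks the active VLFs
DBCC LOGINFO
GO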

Challenges with the size of transaction log files
Sometimes the transaction log file may become very large and you may run out of disk space. When a transaction log file grows until it uses all the available disk space and cannot expand any more:

  • you can no longer perform any data modification operations on your database
  • SQL Server may mark your database as suspect because of the lack of space for the transaction log expansion

Transaction log expansion may occur in the following scenarios:

  • Uncommitted transactions
  • Extremely large transactions
  • Operations: DBCC DBREINDEX and CREATE INDEX
  • While restoring from transaction log backups
  • Client applications do not process all results
  • Queries time out before a transaction log completes the expansion and you receive false 'Log full' error messages
  • Unreplicated transactions

Dealing with the growing size problem
There are two main approaches to deal with the growing size problem of transaction log files:

  • Reduce the size of the file
  • Reduce the size and shrink the file

Apparently both approaches seem the same, but the important difference is that when we just reduce the size through the truncate or backup option, SQL Server reuses the truncated, inactive space in the transaction log instead of permitting the log to continue to grow and use more space; the physical file stays the same size. In the second approach, after the size is reduced, the shrink option releases the evacuated space and the physical file size is reduced.

Reduce the size of the transaction log:
To recover from a situation where the transaction log grows to an unacceptable limit, you may truncate the inactive transactions in your transaction log, or you may back up your transaction log file. When a backup of the transaction log file is created, SQL Server automatically truncates the inactive part of the log and reuses this truncated, inactive space instead of permitting the log to continue to grow and use more space. You can also delete the inactive transactions from a transaction log file by using the truncate method; in that case, after you manually truncate the transaction log, you must create a full database backup before you create a transaction log backup.
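
As a minimal sketch of both options (the database name SalesDB and the backup paths are hypothetical):

-- Backing up the log automatically truncates its inactive part
BACKUP LOG SalesDB TO DISK = 'D:\Backups\SalesDB_log.trn'
GO

-- The truncate method: empties the inactive part without keeping a backup
-- (deprecated, and removed in SQL Server 2008 and later, where switching
-- the database to the SIMPLE recovery model is the replacement)
BACKUP LOG SalesDB WITH TRUNCATE_ONLY
GO

-- After a manual truncation, take a full database backup before the next
-- log backup, so the log chain starts from a consistent base
BACKUP DATABASE SalesDB TO DISK = 'D:\Backups\SalesDB_full.bak'
GO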

Shrink the transaction log file:
The backup operation and the truncate method do not reduce the physical log file size. To reduce the size of the transaction log file, you must shrink it. To shrink a transaction log file to the requested size and remove the unused pages, use the DBCC SHRINKFILE operation. The DBCC SHRINKFILE Transact-SQL statement can only shrink the inactive part inside the log file; it cannot truncate the log and shrink the used space inside the log file on its own.
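
Continuing the hypothetical SalesDB example, a minimal shrink sketch (the logical file name SalesDB_Log is an assumption; check the output of sp_helpfile for the real name):

USE SalesDB
GO
-- List the logical file names; the log file's logical name is needed below
EXEC sp_helpfile
GO
-- Shrink the log file (logical name assumed to be SalesDB_Log) to 100 MB
DBCC SHRINKFILE (SalesDB_Log, 100)
GO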

