
Posts Categorized / Performance Tuning and Optimization

  • Dec 03 / 2008
  • 0
DBA Interview questions, dbDigger, Performance Tuning and Optimization

Statistics cannot be created on object ‘%.*ls’ because the object is not a user table or view.

The following error message
Msg 1929, Level 16, State 1, Line 1
Statistics cannot be created on object ‘%.*ls’ because the object is not a user table or view.

is a run-time error message of severity level 16. It can occur on all versions of SQL Server and appears when you try to create statistics on an object that is not a table or a view. Errors of severity level 16 are generated by the user's statement and can be corrected by the user. To remove the error, make sure that statistics are created only on table and view objects.
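As a quick sketch (all object names here are hypothetical), the error can be reproduced by pointing CREATE STATISTICS at a non-table object such as a stored procedure, while the same statement succeeds against a user table:

```sql
-- Hypothetical procedure; not a valid target for statistics
CREATE PROCEDURE dbo.usp_Demo AS SELECT 1;
GO
-- The next statement would raise Msg 1929, because usp_Demo
-- is not a user table or view:
-- CREATE STATISTICS st_Demo ON dbo.usp_Demo (col1);

-- Statistics on a user table succeed:
CREATE TABLE dbo.DemoTable (col1 INT);
CREATE STATISTICS st_DemoTable ON dbo.DemoTable (col1);
```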

  • Dec 01 / 2008
  • 0
dbDigger, Performance Tuning and Optimization, Temporary Tables

Considering performance optimization for Temporary tables

Temporary tables are notorious for their intensive disk I/O against tempDB. They are best avoided, but when you do use them, the following considerations may help optimize performance.

  • By default these are created as heaps, so consider adding appropriate indexes
  • Rather than using SELECT INTO to create temporary tables, use DDL statements
  • To avoid recompiles of a stored procedure, create all temporary objects at the start of the procedure
  • Create temporary objects outside of transactions
  • Avoid creating cursors on temporary tables
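The points above can be sketched in one hypothetical procedure (table and column names are illustrative, and dbo.Orders is an assumed source table):

```sql
CREATE PROCEDURE dbo.usp_ProcessOrders
AS
BEGIN
    -- DDL-created temp table, declared at the start of the
    -- procedure and outside any transaction, to limit recompiles
    CREATE TABLE #OrderWork
    (
        OrderID   INT      NOT NULL,
        OrderDate DATETIME NOT NULL
    );

    -- Temp tables are heaps by default; index where it helps
    CREATE CLUSTERED INDEX ix_OrderWork ON #OrderWork (OrderID);

    BEGIN TRANSACTION;
        INSERT INTO #OrderWork (OrderID, OrderDate)
        SELECT OrderID, OrderDate
        FROM dbo.Orders;        -- assumed source table
    COMMIT TRANSACTION;
END;
```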
  • Nov 29 / 2008
  • 0
dbDigger, Locking Mechanism, Performance Tuning and Optimization

Reduce locks in SQL Server

To reduce locks in your database, encapsulate all transactions within stored procedures, including both the BEGIN TRANSACTION and COMMIT TRANSACTION statements in the procedure. This provides two benefits that help to reduce blocking locks.
First, it limits the client application and SQL Server to communications before and after the transaction, thus forcing any messages between the client and the server to occur at a time other than when the transaction is running (reducing transaction time).
Second, it prevents the user from leaving an open transaction (holding locks open) because the stored procedure forces any transactions that it starts to complete or abort.
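A minimal sketch of this pattern, with hypothetical table and parameter names, keeps both BEGIN TRANSACTION and COMMIT TRANSACTION inside the procedure so no client round trip occurs while locks are held:

```sql
CREATE PROCEDURE dbo.usp_TransferFunds
    @FromAccount INT,
    @ToAccount   INT,
    @Amount      MONEY
AS
BEGIN
    SET XACT_ABORT ON;  -- roll back automatically on a run-time error

    BEGIN TRANSACTION;
        UPDATE dbo.Accounts
        SET Balance = Balance - @Amount
        WHERE AccountID = @FromAccount;

        UPDATE dbo.Accounts
        SET Balance = Balance + @Amount
        WHERE AccountID = @ToAccount;
    COMMIT TRANSACTION; -- transaction cannot be left open by the client
END;
```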
  • Nov 28 / 2008
  • 2
dbDigger, Performance Tuning and Optimization, Reporting Services SSRS

Difference between .RDL and .RDLC

Reports that you create in SQL Server 2005 Reporting Services (saved as .rdl files) can be converted to the client report definition (.rdlc) file format used by the ReportViewer controls. This is called an RDL-to-RDLC conversion.
Reports that you create for ReportViewer controls (saved as .rdlc files) can be converted to the report definition (.rdl) file format used in SQL Server Reporting Services. This is called an RDLC-to-RDL conversion.
.RDL files are for Reporting Services and the .RDLC files are for the VS.NET ReportViewer Component. The “C” in .RDLC simply stands for Client and that is the main difference.
The .RDLC files do not store any query specific information (or in other words, how to get your data).

  • Nov 05 / 2008
  • 0
dbDigger, Indexes, Performance Tuning and Optimization

Rebuilding Indexes in SQL Server 2005

When data is added to or updated in the index, page splits occur. These page splits cause the physical structure of the index to become fragmented. In order to restore the structure of the index to an efficient state, the index needs to be rebuilt. The more fragmented the index, the more performance improvement will result from rebuilding the index.

With SQL Server 2005 you can view the fragmentation in an index via the sys.dm_db_index_physical_stats function.

If the percentage of index fragmentation is less than or equal to 30 percent, Microsoft recommends correcting this with the ALTER INDEX REORGANIZE statement. If the percentage of fragmentation is greater than 30 percent, Microsoft recommends using the ALTER INDEX REBUILD WITH (ONLINE = ON) statement.

SQL Server 2005 also adds further methods for rebuilding and reorganizing indexes.
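As a sketch of the workflow above (index and table names are hypothetical), you can query the DMV for fragmentation and then choose between REORGANIZE and REBUILD:

```sql
-- Check fragmentation for indexes in the current database
SELECT OBJECT_NAME(ips.object_id)        AS TableName,
       i.name                            AS IndexName,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id
 AND i.index_id  = ips.index_id;

-- Fragmentation <= 30 percent: reorganize
ALTER INDEX ix_DemoTable ON dbo.DemoTable REORGANIZE;

-- Fragmentation > 30 percent: rebuild
-- (ONLINE = ON requires Enterprise Edition in SQL Server 2005)
ALTER INDEX ix_DemoTable ON dbo.DemoTable REBUILD WITH (ONLINE = ON);
```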

  • Nov 05 / 2008
  • 0
dbDigger, Indexes, Performance Tuning and Optimization

Using the Index Fill Factor in SQL Server

If you know that your index will have constant updates or insertions, you can reduce the number of page splits that occur by leaving some extra space in the leaf pages of the index. This is done via the index fill factor. The fill factor specifies how full the index should be when it is created.
By specifying the fill factor, free space is left in the index pages, leaving room for new rows to be added. The fill factor specifies the percentage of each index page to fill and can take an integer value from 1 to 100. Setting the fill factor to 75 fills each page to 75 percent, leaving 25 percent of the page free for new rows.
The disadvantage of setting the fill factor is that the indexes will be larger and less efficient than a tightly packed index.
The advantage of setting the fill factor is that page splits can be reduced. The fill factor setting appropriate for you really depends on your configuration and the load on the system. Remember that indexes, like all data, are stored both on disk and in memory in 8K pages. Setting the fill factor of an index to 50 causes the index to take up twice as many pages as a fill factor of 100. Keep this in mind when setting the fill factor.
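A short sketch of applying a fill factor (the index and table names are hypothetical):

```sql
-- Create an index with each leaf page 75 percent full
CREATE NONCLUSTERED INDEX ix_Orders_OrderDate
ON dbo.Orders (OrderDate)
WITH (FILLFACTOR = 75);

-- The fill factor is only applied when the index is created or
-- rebuilt; rebuilding restores the free space after pages fill up
ALTER INDEX ix_Orders_OrderDate ON dbo.Orders
REBUILD WITH (FILLFACTOR = 75);
```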
