
Posts Categorized / Performance Tuning and Optimization

  • Sep 30 / 2008
DBA Interview questions, dbDigger, Indexes, Performance Tuning and Optimization

Why create more than one spatial index on a SQL Server table

Question: Why would you create more than one spatial index on the same spatial column in SQL Server 2008?

Answer: To set different tessellation parameters

Explanation: The primary reason for creating multiple spatial indexes on the same spatial column is to apply different tessellation parameters to each index. The tessellation process fits the data in the column into a grid, allowing faster queries. Different parameters result in coarser or finer grids being applied.
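
As a rough sketch of what this looks like, the statements below create two spatial indexes with different grid densities on the same geometry column. The table, index names, and bounding box are illustrative assumptions, not taken from the original post.

    -- Illustrative table; a spatial index requires a clustered primary key.
    CREATE TABLE dbo.Cities (
        CityID   int IDENTITY(1,1) PRIMARY KEY,
        Boundary geometry
    );

    -- A coarse-grid index, suited to queries over large areas.
    CREATE SPATIAL INDEX SIdx_Cities_Boundary_Low
    ON dbo.Cities (Boundary)
    USING GEOMETRY_GRID
    WITH (BOUNDING_BOX = (0, 0, 500, 500),
          GRIDS = (LEVEL_1 = LOW, LEVEL_2 = LOW, LEVEL_3 = LOW, LEVEL_4 = LOW));

    -- A fine-grid index on the same column, suited to queries over small areas.
    CREATE SPATIAL INDEX SIdx_Cities_Boundary_High
    ON dbo.Cities (Boundary)
    USING GEOMETRY_GRID
    WITH (BOUNDING_BOX = (0, 0, 500, 500),
          GRIDS = (LEVEL_1 = HIGH, LEVEL_2 = HIGH, LEVEL_3 = HIGH, LEVEL_4 = HIGH));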

Ref: Spatial Indexing Overview

Note: The question is taken from SQLServerCentral.com.

  • Aug 12 / 2008
DBA best practices, dbDigger, Performance Tuning and Optimization, T-SQL Tips and Tricks, User Defined Stored Procedures

DBA Best Practices for Queries and Stored Procedures

Queries and stored procedures are a DBA's close companions, so make your life easier by following the baseline guidelines below whenever you write them.

  1. Maintain all code in a source control system.
  2. Keep transactions as short as possible. This reduces locking and increases application concurrency, which helps to boost performance.
  3. Avoid using query hints unless you know exactly what you are doing, and you have verified that the hint actually boosts performance.
  4. Encapsulate all transactions within stored procedures, including both the BEGIN TRANSACTION and COMMIT TRANSACTION statements in the procedure.
  5. Use the least restrictive transaction isolation level possible for your user connection, instead of always using the default READ COMMITTED.
  6. SET NOCOUNT ON at the beginning of every stored procedure you write; it suppresses the "rows affected" messages and saves unnecessary network traffic (see the sketch after this list).
  7. Before you are done with your stored procedure code, review it for any unused code, parameters, or variables that you may have forgotten to remove while you were making changes, and remove them.
  8. For best performance, all objects that are called within the same stored procedure should be owned by the same object owner or schema, preferably dbo, and should be referred to in the format object_owner.object_name or schema_owner.object_name.
  9. One way to help ensure that stored procedure query plans are reused from execution to execution is to ensure that user connection information, SET options, database options, and SQL Server configuration options don’t change between executions of the same stored procedure. If they do change, SQL Server may consider these same stored procedures to be different, and not be able to reuse the current query plan stored in cache.
  10. Whenever a client application needs to send Transact-SQL to SQL Server, send it in the form of a stored procedure instead of a script or embedded Transact-SQL. Stored procedures offer many benefits, including:
       • Reduced network traffic and latency, boosting application performance.
       • Stored procedure execution plans can be reused, staying cached in SQL Server’s memory, reducing server overhead.
       • Client execution requests are more efficient. For example, if an application needs to INSERT a large binary value into an image data column without using a stored procedure, it must convert the binary value to a character string (which doubles its size) and send it to SQL Server. When SQL Server receives it, it must convert the character value back to the binary format. This is a lot of wasted overhead. A stored procedure eliminates this issue because parameter values stay in the binary format all the way from the application to SQL Server, reducing overhead and boosting performance.
       • Stored procedures help promote code reuse. While this does not directly boost an application’s performance, it can boost the productivity of developers by reducing the amount of code required, along with reducing debugging time.
       • Stored procedures can encapsulate logic. You can change stored procedure code without affecting clients (assuming you keep the parameters the same and don’t remove any result set columns). This saves developer time.
       • Stored procedures provide better security for your data.
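
As a minimal sketch of several of these practices together, the procedure below uses SET NOCOUNT ON, keeps a short transaction fully encapsulated inside the procedure, and schema-qualifies every object. The dbo.Accounts table, its columns, and the procedure name are illustrative assumptions.

    CREATE PROCEDURE dbo.usp_TransferFunds
        @FromAccountID int,
        @ToAccountID   int,
        @Amount        money
    AS
    BEGIN
        SET NOCOUNT ON;     -- practice 6: suppress "rows affected" messages
        SET XACT_ABORT ON;  -- roll the whole transaction back on a runtime error

        BEGIN TRANSACTION;  -- practice 4: BEGIN and COMMIT both live in the procedure

        UPDATE dbo.Accounts -- practice 8: schema-qualified object names
        SET Balance = Balance - @Amount
        WHERE AccountID = @FromAccountID;

        UPDATE dbo.Accounts
        SET Balance = Balance + @Amount
        WHERE AccountID = @ToAccountID;

        COMMIT TRANSACTION; -- practice 2: the transaction stays as short as possible
    END;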

A part chosen from SQL Server DBA Best Practices by Brad M. McGehee

  • Aug 11 / 2008
Data Modeling and Database Design, DBA best practices, dbDigger, Performance Tuning and Optimization

DBA Best Practices for Database Design

Good database design pays off for the lifetime of a system. Follow these basic guidelines to produce a good database design.

  1. Bad logical database design results in bad physical database design, and generally results in poor database performance. So, if it is your responsibility to design a database from scratch, be sure you take the necessary time and effort to get the logical database design right. Once the logical design is right, then you also need to take the time to get the physical design right.
  2. Normalize your data to ensure best performance.
  3. Take advantage of SQL Server’s built-in referential integrity. You don’t need to write your own.
  4. Always specify the narrowest columns you can. In addition, always choose the smallest data type you need to hold the data you will store in a column. The narrower the column, the less data SQL Server has to store, and the faster SQL Server is able to read and write data (see the sketch after this list).
  5. Try to avoid performing both OLTP and OLAP transactions within the same database.
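
As a minimal sketch of guidelines 3 and 4, the tables below use built-in referential integrity and the narrowest types that fit the data. All table names, column names, and sizes are illustrative assumptions.

    CREATE TABLE dbo.OrderStatus (
        StatusID   tinyint     NOT NULL PRIMARY KEY, -- 1 byte covers a handful of statuses
        StatusName varchar(30) NOT NULL              -- no need for nvarchar(255) here
    );

    CREATE TABLE dbo.Orders (
        OrderID   int IDENTITY(1,1) PRIMARY KEY,
        StatusID  tinyint  NOT NULL
            REFERENCES dbo.OrderStatus (StatusID),   -- built-in referential integrity
        OrderDate datetime NOT NULL
    );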

Chosen from SQL Server DBA Best Practices by Brad M. McGehee

  • Aug 11 / 2008
DBA best practices, dbDigger, Indexes, Performance Tuning and Optimization

DBA Best Practices for Indexing

Indexing is the nervous system of database optimization: how you configure it has a terrific impact on performance. Follow these useful guidelines to optimize the performance of a SQL Server database.

  1. Periodically, run the Index Tuning Wizard or Database Engine Tuning Advisor against current Profiler traces to identify potentially missing indexes.
  2. Remove indexes that are never used (a query to find candidates is sketched after this list).
  3. Don’t accidentally create redundant indexes.
  4. As a rule of thumb, every table should have at least a clustered index. Generally, but not always, the clustered index should be on a column that monotonically increases — such as an identity column, or some other column where the value is increasing — and is unique. In many cases, the primary key is the ideal column for a clustered index.
  5. Since you can only create one clustered index per table, take extra time to carefully consider how it will be used. Consider the type of queries that will be used against the table, and make an educated guess as to which query (the most common one run against the table, perhaps) is the most critical, and if this query will benefit from having a clustered index.
  6. If a column in a table is not at least 95% unique, then most likely the query optimizer will not use a non-clustered index based on that column. Because of this, you generally don’t want to add non-clustered indexes to columns that aren’t at least 95% unique.
  7. Keep the “width” of your indexes as narrow as possible. This reduces the size of the index and reduces the number of disk I/O reads required to read the index, boosting performance.
  8. If possible, avoid adding a clustered index to a GUID column (uniqueidentifier data type). GUIDs take up 16 bytes of storage, more than an identity column, which makes the index larger, which increases I/O reads, which can hurt performance.
  9. Indexes should be considered on all columns that are frequently accessed by the JOIN, WHERE, ORDER BY, GROUP BY, TOP, and DISTINCT clauses.
  10. Don’t automatically add indexes on a table because it seems like the right thing to do. Only add indexes if you know that they will be used by the queries run against the table.
  11. When creating indexes, try to make them unique indexes if at all possible. SQL Server can often search through a unique index faster than a non-unique index because in a unique index, each row is unique, and once the needed record is found, SQL Server doesn’t have to look any further.
  12. If you perform regular joins between two or more tables in your queries, performance will be optimized if each of the joined columns have appropriate indexes.
  13. Don’t automatically accept the default value of 100 for the fill factor for your indexes. It may or may not best meet your needs. A high fill factor is good for seldom changed data, but highly modified data needs a lower fill factor to reduce page splitting.
  14. Don’t over index your OLTP tables, as every index you add increases the time it takes to perform INSERTS, UPDATES, and DELETES. There is a fine line between having the ideal number of indexes (for SELECTs) and the ideal number to minimize the overhead that occurs with indexes during data modifications.
  15. If you know that your application will be performing the same query over and over on the same table, consider creating a non-clustered covering index on the table (see the sketch after this list). A covering index, which is a form of a composite index, includes all of the columns referenced in the SELECT, JOIN, and WHERE clauses of a query. Because of this, the index contains the data you are looking for and SQL Server doesn’t have to look up the actual data in the table, reducing logical and/or physical I/O, and boosting performance.
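
Two of these guidelines sketched in T-SQL; the dbo.Orders table and its columns are illustrative assumptions, not part of the original list.

    -- Guideline 2: indexes that show no reads since the last restart are
    -- candidates for removal (confirm across a full business cycle first).
    SELECT OBJECT_NAME(s.object_id) AS TableName, i.name AS IndexName
    FROM sys.dm_db_index_usage_stats AS s
    JOIN sys.indexes AS i
      ON i.object_id = s.object_id AND i.index_id = s.index_id
    WHERE s.database_id = DB_ID()
      AND s.user_seeks = 0 AND s.user_scans = 0 AND s.user_lookups = 0;

    -- Guidelines 13 and 15: a covering index whose key columns serve the
    -- WHERE/ORDER BY, whose INCLUDE column covers the rest of the SELECT,
    -- and whose fill factor leaves room on a frequently modified table.
    CREATE NONCLUSTERED INDEX IX_Orders_CustomerID_OrderDate
    ON dbo.Orders (CustomerID, OrderDate)
    INCLUDE (TotalDue)
    WITH (FILLFACTOR = 80);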

A part chosen from SQL Server DBA Best Practices by Brad M. McGehee

  • Aug 11 / 2008
DBA best practices, dbDigger, Hardware and Platform, Performance Tuning and Optimization

DBA Best Practices for Hardware Performance Tuning

You have upgraded your hardware to a cutting-edge server machine. Congratulations, but do not forget that the person at the machine matters more than the machine itself. Now is the right time to tune your new machine to get the most out of it; poor tuning leads to poor performance no matter how sophisticated or state-of-the-art your server is.

  1. Although heavy-duty hardware can help SQL Server’s performance, application and database design can play a greater part in overall performance than hardware. Keep this in mind, as throwing good money after bad on server hardware does not always fix SQL Server performance problems. Before getting faster hardware, be sure you have thoroughly tuned your applications, Transact-SQL, and database indexing.
  2. In many cases, adding RAM to a server is the cheapest and fastest way to boost hardware performance of a SQL Server. But before adding more RAM to a SQL Server, first ensure that it will be used by SQL Server; adding more RAM doesn’t mean that SQL Server will always use it. If the current Buffer Cache Hit Ratio is consistently above 99% and you have well more than 10 MB of Available RAM, your server won’t benefit from adding additional RAM (a query to check the ratio is sketched after this list).
  3. If your SQL Server’s total CPU utilization is consistently above 80% or more, you need more CPUs, faster CPUs, or you need to find a way to reduce the load on the current server.
  4. If the Physical Disk Object: % Disk Time counter exceeds 55%, and the Physical Disk Object: Avg. Disk Queue Length exceeds a count of 2 for each individual disk drive in your disk storage subsystem, then you are most likely experiencing a disk I/O performance issue and need to start looking for solutions.
  5. Don’t run any applications on your server other than SQL Server, with the exception of necessary utilities.
  6. NTFS-formatted partitions should not exceed 80% of their capacity. For example, a 100GB logical drive should never hold more than 80GB. Why? NTFS needs room to work, and when you exceed 80% capacity, NTFS becomes less efficient and I/O can suffer for it.
  7. If your SQL Server database is mostly reads, then a RAID 5 array offers good protection and adequate performance. If your SQL Server database is mostly writes, then use a RAID 10 array for best protection and performance.
  8. If your SQL Server’s tempdb database is heavily used by your application(s), consider locating it on an array of its own (such as RAID 1 or RAID 10). This will allow disk I/O to be more evenly distributed, reducing disk I/O contention issues, and speeding up SQL Server’s overall performance.
  9. The more spindles you have in an array, the faster disk I/O will be.
  10. Ensure that all hardware is running the latest, approved drivers.
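
As a sketch of how to check the Buffer Cache Hit Ratio from T-SQL rather than Performance Monitor, the query below reads sys.dm_os_performance_counters; the raw counter has to be divided by its base counter to get a percentage.

    SELECT CAST(100.0 * r.cntr_value / b.cntr_value AS decimal(5, 2))
               AS BufferCacheHitRatioPct
    FROM sys.dm_os_performance_counters AS r
    JOIN sys.dm_os_performance_counters AS b
      ON b.object_name = r.object_name
    WHERE r.counter_name = 'Buffer cache hit ratio'
      AND b.counter_name = 'Buffer cache hit ratio base'
      AND r.object_name LIKE '%Buffer Manager%';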

Chosen from SQL Server DBA Best Practices by Brad M. McGehee

  • Aug 09 / 2008
DBA best practices, dbDigger, Performance Tuning and Optimization

DBA Best Practices for Database Settings

If you are going to play with database settings, pause and read the following baselines first; they are offered as best practices.

  1. Unless you know exactly what you are doing and have already performed impartial experiments that prove that making SQL Server configuration changes helps you in your particular environment, do not change any of the SQL Server configuration settings.
  2. In almost all cases, leave the “auto create statistics” and “auto update statistics” options on for all user databases (the equivalent ALTER DATABASE statements are sketched after this list).
  3. In most cases, the settings for the “maximum server memory” and the “minimum server memory” should be left to their default values. This is because the default values allow SQL Server to dynamically allocate memory in the server for the best overall optimum performance. If you use AWE memory, then this recommendation is to be ignored, and maximum memory needs to be set manually.
  4. Many databases need to be shrunk periodically in order to free up disk space as older data is deleted from the database. But don’t be tempted to use the “auto shrink” database option, as it can waste SQL Server resources unnecessarily. Instead, shrink databases manually.
  5. Don’t rely on AUTOGROWTH to automatically manage the size of your databases. Instead, proactively monitor and alter database size as circumstances dictate. Only use AUTOGROWTH to deal with unexpected growth.
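
A minimal sketch of practices 2 and 4 as ALTER DATABASE statements; “YourDatabase” is an illustrative placeholder for the target database name.

    ALTER DATABASE YourDatabase SET AUTO_CREATE_STATISTICS ON;
    ALTER DATABASE YourDatabase SET AUTO_UPDATE_STATISTICS ON;
    ALTER DATABASE YourDatabase SET AUTO_SHRINK OFF;  -- shrink manually instead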

Chosen from SQL Server DBA Best Practices by Brad M. McGehee
