

  • Aug 11 / 2008
Data Modeling and Database Design, DBA best practices, dbDigger, Performance Tuning and Optimization

DBA Best Practices for Database Design

Good database design pays dividends for the lifetime of an application. Follow these basic guidelines to produce a good database design.

  1. Bad logical database design results in bad physical database design, and generally results in poor database performance. So, if it is your responsibility to design a database from scratch, be sure you take the necessary time and effort to get the logical database design right. Once the logical design is right, then you also need to take the time to get the physical design right.
  2. Normalize your data to ensure best performance.
  3. Take advantage of SQL Server’s built-in referential integrity. You don’t need to write your own.
  4. Always specify the narrowest columns you can. In addition, always choose the smallest data type you need to hold the data you need to store in a column. The narrower the column, the less amount of data SQL Server has to store, and the faster SQL Server is able to read and write data.
  5. Try to avoid performing both OLTP and OLAP transactions within the same database.
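Points 3 and 4 above can be sketched together in one script. This is a minimal illustration with hypothetical table and column names: narrow data types are chosen deliberately, and referential integrity is declared with a built-in FOREIGN KEY constraint rather than hand-written trigger code.

```sql
-- Hypothetical order-entry tables: narrow columns, built-in referential integrity
CREATE TABLE dbo.Customers
(
    CustomerID   int IDENTITY(1,1) NOT NULL PRIMARY KEY,
    CustomerName varchar(100)      NOT NULL,        -- varchar, not nvarchar, if Unicode is not needed
    IsActive     bit               NOT NULL DEFAULT (1)  -- bit (1 byte max), not int, for a flag
);

CREATE TABLE dbo.Orders
(
    OrderID    int IDENTITY(1,1) NOT NULL PRIMARY KEY,
    CustomerID int               NOT NULL,
    OrderDate  smalldatetime     NOT NULL,          -- 4 bytes; use datetime only if you need its precision
    -- Declarative foreign key: SQL Server enforces it, no custom code required
    CONSTRAINT FK_Orders_Customers FOREIGN KEY (CustomerID)
        REFERENCES dbo.Customers (CustomerID)
);
```

The narrower each column, the more rows fit on an 8 KB data page, so SQL Server reads and writes less to satisfy the same queries.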

Chosen from SQL Server DBA Best Practices by Brad M. McGehee

  • Aug 11 / 2008
DBA best practices, dbDigger, Indexes, Performance Tuning and Optimization

DBA Best Practices for Indexing

Indexing is the nervous system of database optimization; how you configure it has a dramatic impact on performance. Follow these useful guidelines to optimize the performance of a SQL Server database.

  1. Periodically, run the Index Wizard or Database Engine Tuning Advisor against current Profiler traces to identify potentially missing indexes.
  2. Remove indexes that are never used.
  3. Don’t accidentally create redundant indexes.
  4. As a rule of thumb, every table should have at least a clustered index. Generally, but not always, the clustered index should be on a column that monotonically increases — such as an identity column, or some other column where the value is increasing — and is unique. In many cases, the primary key is the ideal column for a clustered index.
  5. Since you can only create one clustered index per table, take extra time to carefully consider how it will be used. Consider the type of queries that will be used against the table, and make an educated guess as to which query (the most common one run against the table, perhaps) is the most critical, and if this query will benefit from having a clustered index.
  6. If a column in a table is not at least 95% unique, then most likely the query optimizer will not use a non-clustered index based on that column. Because of this, you generally don’t want to add non-clustered indexes to columns that aren’t at least 95% unique.
  7. Keep the “width” of your indexes as narrow as possible. This reduces the size of the index and reduces the number of disk I/O reads required to read the index, boosting performance.
  8. If possible, avoid adding a clustered index to a GUID column (uniqueidentifier data type). GUIDs take up 16 bytes of storage, more than an identity column, which makes the index larger, which increases I/O reads, which can hurt performance.
  9. Indexes should be considered on all columns that are frequently accessed by the JOIN, WHERE, ORDER BY, GROUP BY, TOP, and DISTINCT clauses.
  10. Don’t automatically add indexes on a table because it seems like the right thing to do. Only add indexes if you know that they will be used by the queries run against the table.
  11. When creating indexes, try to make them unique indexes if at all possible. SQL Server can often search through a unique index faster than a non-unique index because in a unique index, each row is unique, and once the needed record is found, SQL Server doesn’t have to look any further.
  12. If you perform regular joins between two or more tables in your queries, performance will be optimized if each of the joined columns have appropriate indexes.
  13. Don’t automatically accept the default value of 100 for the fill factor for your indexes. It may or may not best meet your needs. A high fill factor is good for seldom changed data, but highly modified data needs a lower fill factor to reduce page splitting.
  14. Don’t over index your OLTP tables, as every index you add increases the time it takes to perform INSERTS, UPDATES, and DELETES. There is a fine line between having the ideal number of indexes (for SELECTs) and the ideal number to minimize the overhead that occurs with indexes during data modifications.
  15. If you know that your application will be performing the same query over and over on the same table, consider creating a non-clustered covering index on the table. A covering index, which is a form of a composite index, includes all of the columns referenced in the SELECT, JOIN, and WHERE clauses of a query. Because of this, the index contains the data you are looking for and SQL Server doesn’t have to look up the actual data in the table, reducing logical and/or physical I/O, and boosting performance.
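Point 15 can be sketched with the INCLUDE syntax available from SQL Server 2005 onward; the table, column names, and fill factor below are hypothetical examples:

```sql
-- Suppose this query runs constantly:
--   SELECT OrderID, OrderDate, TotalDue
--   FROM dbo.Orders
--   WHERE CustomerID = @CustomerID;

-- A narrow non-clustered index keyed on the WHERE column, with the
-- selected columns INCLUDEd at the leaf level, lets SQL Server answer
-- the query from the index alone (a covering index) with no lookup
-- into the base table.
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID_Covering
    ON dbo.Orders (CustomerID)
    INCLUDE (OrderDate, TotalDue)
    WITH (FILLFACTOR = 90);   -- point 13: choose a fill factor to suit the workload
```

Keeping only CustomerID in the index key (point 7) keeps the key narrow, while INCLUDE carries the extra columns without widening the key itself.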

A part chosen from SQL Server DBA Best Practices by Brad M. McGehee

  • Aug 11 / 2008
DBA best practices, dbDigger, Hardware and Platform, Performance Tuning and Optimization

DBA Best Practices for Hardware Performance Tuning

You have upgraded your hardware to a cutting-edge server machine. Congratulations, but do not forget that it is the person at the machine who matters more than the machine. Now is the right time to tune your new server to get the most out of it: poor tuning leads to poor performance no matter how sophisticated or state-of-the-art your server is.

  1. Although heavy-duty hardware can help SQL Server’s performance, application and database design can play a greater part in overall performance than hardware. Keep this in mind, as throwing good money after bad on server hardware does not always fix SQL Server performance problems. Before getting faster hardware, be sure you have thoroughly tuned your applications, Transact-SQL, and database indexing.
  2. In many cases, adding RAM to a server is the cheapest and fastest way to boost hardware performance of a SQL Server. But before adding more RAM to a SQL Server, ensure first that it will be used by SQL Server. Adding more RAM doesn’t mean that SQL Server will always use it. If the current Buffer Cache Hit Ratio is consistently above 99% and you have well more than 10 MB of Available RAM, your server won’t benefit from adding additional RAM.
  3. If your SQL Server’s total CPU utilization is consistently above 80% or more, you need more CPUs, faster CPUs, or you need to find a way to reduce the load on the current server.
  4. If the Physical Disk Object: % Disk Time counter exceeds 55%, and the Physical Disk Object: Avg. Disk Queue Length exceeds a count of 2 for each individual disk drive in your disk storage subsystem, then you are most likely experiencing a disk I/O performance issue and need to start looking for solutions.
  5. Don’t run any applications on your server other than SQL Server, with the exception of necessary utilities.
  6. NTFS-formatted partitions should not exceed 80% of their capacity. For example, if you have a 100GB logical drive, it should never be fuller than 80GB. Why? NTFS needs room to work, and when you exceed 80% capacity, NTFS become less efficient and I/O can suffer for it.
  7. If your SQL Server database is mostly reads, then a RAID 5 array offers good protection and adequate performance. If your SQL Server database is mostly writes, then use a RAID 10 array for best protection and performance.
  8. If your SQL Server’s tempdb database is heavily used by your application(s), consider locating it on an array of its own (such as RAID 1 or RAID 10). This will allow disk I/O to be more evenly distributed, reducing disk I/O contention issues, and speeding up SQL Server’s overall performance.
  9. The more spindles you have in an array, the faster disk I/O will be.
  10. Ensure that all hardware is running the latest, approved drivers.
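The Buffer Cache Hit Ratio from point 2 can be read in System Monitor, or (on SQL Server 2005 and later) queried from the sys.dm_os_performance_counters DMV. One detail the raw counter hides: the ratio must be computed from the counter value and its companion base counter, as in this sketch:

```sql
-- Buffer Cache Hit Ratio = counter value / base counter value * 100
SELECT CAST(a.cntr_value AS float) / b.cntr_value * 100 AS BufferCacheHitRatio
FROM sys.dm_os_performance_counters AS a
JOIN sys.dm_os_performance_counters AS b
    ON a.[object_name] = b.[object_name]
WHERE a.counter_name = 'Buffer cache hit ratio'
  AND b.counter_name = 'Buffer cache hit ratio base'
  AND a.[object_name] LIKE '%Buffer Manager%';
```

A value consistently above 99% suggests, per point 2, that adding more RAM is unlikely to help.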

Chosen from SQL Server DBA Best Practices by Brad M. McGehee

  • Aug 11 / 2008
DBA best practices, dbDigger, Monitoring and Analysis

DBA Best Practices for Performance Monitoring

Hardware bottlenecks are not always responsible for performance degradation; there are various possible causes. To catch the right culprit, a DBA should pay special attention to performance monitoring. For this purpose:

  1. Regularly monitor your SQL Servers for blocked transactions.
  2. Regularly monitor system performance using System Monitor. Use System Monitor for both real-time analysis and for historical/baseline analysis.
  3. If running SQL Server 2005, SP2 or later, install the free SQL Server Performance Dashboard. It can be used for real-time monitoring and performance troubleshooting.
  4. Regularly monitor activity using Profiler.
  5. Be sure that traces are taken during the busiest times of the day so you get a more representative trace of what is going on in each server. When running the Profiler, do not collect more data than you need to collect.
  6. Perform performance monitoring from a computer that is not the SQL Server you are monitoring. Run monitoring tools on a separate desktop or server.
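The blocked-transaction check from point 1 can be scripted rather than eyeballed. A minimal sketch, using the sys.dm_exec_requests DMV available from SQL Server 2005 onward:

```sql
-- Sessions currently blocked, and the session blocking each of them.
-- A non-zero blocking_session_id means the request is waiting on a
-- lock held by that other session.
SELECT r.session_id          AS BlockedSession,
       r.blocking_session_id AS BlockingSession,
       r.wait_type,
       r.wait_time           AS WaitTimeMs
FROM sys.dm_exec_requests AS r
WHERE r.blocking_session_id <> 0;
```

Run it from your monitoring workstation (point 6) on a schedule, and alert when WaitTimeMs stays high.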

Chosen from SQL Server DBA Best Practices by Brad M. McGehee

  • Aug 11 / 2008
Backup and Restore, DBA best practices, dbDigger

DBA Best Practices for Backup

Backup is a skill that no one in the organization needs more than the DBA. It is the ultimate shield, and an optimized backup plan can be of great advantage. Following are some baselines for database backups.

  1. All production databases should be set to use the full recovery model. This way, you can create transaction log backups on a periodic basis.
  2. Whenever possible, perform a daily full backup of all system and user databases.
  3. For all production databases, perform regular transaction log backups, at least once an hour.
  4. Perform full backups during periods of low user activity in order to minimize the impact of backups on users.
  5. Periodically test backups to ensure that they are good and can be restored.
  6. Backup first to disk, then move to tape or some other form of backup media.
  7. Store backups offsite.
  8. If using SQL Server 2005 encryption, be sure to backup the service master key, database master keys, and certificates.
  9. If you find that backup times take longer than your backup window, or if backup file sizes are taking up too much space on your storage device, consider a third party backup program, such as SQL Backup Pro or SQL tool belt.
  10. Document, step-by-step, the process to restore system and user databases onto the same, or a different server. You don’t want to be looking this information up during an emergency.
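Points 1 through 3 and 5 can be sketched in a few statements; the database name and file paths below are hypothetical:

```sql
-- Point 1: full recovery model, so transaction log backups are possible
ALTER DATABASE SalesDB SET RECOVERY FULL;

-- Point 2: daily full backup (point 6: to disk first)
BACKUP DATABASE SalesDB
    TO DISK = N'D:\Backups\SalesDB_Full.bak'
    WITH INIT, CHECKSUM;

-- Point 3: transaction log backup, scheduled at least hourly
BACKUP LOG SalesDB
    TO DISK = N'D:\Backups\SalesDB_Log.trn'
    WITH CHECKSUM;

-- Point 5: periodically verify that the backup is readable
RESTORE VERIFYONLY FROM DISK = N'D:\Backups\SalesDB_Full.bak';
```

Note that RESTORE VERIFYONLY checks that the backup set is complete and readable; only an actual test restore proves the database can be recovered.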

Chosen from SQL Server DBA Best Practices by Brad M. McGehee

  • Aug 09 / 2008
DBA best practices, dbDigger, Disaster Recovery

DBA Best Practices for Disaster Recovery

Keeping in view the sensitivity of the topic, a DBA must pay full attention to being prepared for recovery from any type of disaster. The following are just very general guidelines; the task requires planning and practice.

  1. You must create a disaster recovery plan and include every detail you will need to rebuild your servers.
  2. As your SQL Servers change over time, don’t forget to update your disaster recovery plan.
  3. Write the disaster recovery plan so that any computer literate person will be able to read and follow it. Do not assume a DBA will be rebuilding the servers.
  4. Fully test your disaster recovery plan at least once a year.
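A disaster recovery plan written for point 3's audience should spell out the restore sequence statement by statement. A minimal sketch, with a hypothetical database name and paths: restore the full backup without recovering, apply each log backup in order, then bring the database online.

```sql
-- Step 1: restore the most recent full backup, leaving the database
-- in a restoring state so log backups can still be applied.
RESTORE DATABASE SalesDB
    FROM DISK = N'D:\Backups\SalesDB_Full.bak'
    WITH NORECOVERY, REPLACE;

-- Step 2: apply each transaction log backup, oldest first.
RESTORE LOG SalesDB
    FROM DISK = N'D:\Backups\SalesDB_Log.trn'
    WITH NORECOVERY;

-- Step 3: after the last log backup, recover and open the database.
RESTORE DATABASE SalesDB WITH RECOVERY;
```

Rehearsing this exact script during the annual test (point 4) is what keeps the written plan trustworthy.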

Chosen from SQL Server DBA Best Practices by Brad M. McGehee
