
Posts Categorized / DBA best practices

  • Aug 09 / 2008
DBA best practices, dbDigger, Deployment

DBA Best Practices for Installation

For the installation process, the following checklist will be helpful:

  1. Always fully document installs so that your SQL Server instances can easily be reproduced in an emergency (a documentation query is sketched after this list).
  2. If possible, install and configure all of your SQL Server instances consistently, following an agreed-upon organization standard.
  3. Don’t install SQL Server services you don’t use, such as Microsoft Full-Text Search, Notification Services, or Analysis Services.
  4. For best performance of SQL Server running under Windows, turn off any operating system services that aren’t needed.
  5. For optimum SQL Server performance, dedicate each physical server to running a single instance of SQL Server and no other applications.
  6. For best I/O performance, locate the database files (.mdf) and log files (.ldf) on separate arrays on your server to isolate potentially conflicting reads and writes.
  7. If tempdb will be used heavily, also put it on its own separate array.
  8. Do not install SQL Server on a domain controller.
  9. Be sure that SQL Server is installed on an NTFS partition.
  10. Don’t use NTFS data file encryption (EFS) and compression on SQL Server database and log files.
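
To support point 1 above, here is a minimal sketch of a documentation query; it reads a few SERVERPROPERTY values so the build, edition, and collation of an instance can be recorded alongside your install notes. The properties chosen here are only examples.

-- Capture basic instance details for install documentation
SELECT
    SERVERPROPERTY('MachineName')    AS MachineName,
    SERVERPROPERTY('InstanceName')   AS InstanceName,   -- NULL for a default instance
    SERVERPROPERTY('Edition')        AS Edition,
    SERVERPROPERTY('ProductVersion') AS ProductVersion,
    SERVERPROPERTY('ProductLevel')   AS ProductLevel,   -- e.g. RTM, SP2
    SERVERPROPERTY('Collation')      AS ServerCollation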

Chosen from SQL Server DBA Best Practices by Brad M. McGehee

  • Aug 09 / 2008
DBA best practices, DBA Interview questions, dbDigger

DBA Best Practices Day to Day

A DBA may keep a defined set of practices to be carried out daily to detect and overcome any anomaly. The following is a sketch of general daily practices for a DBA.

  1. Check OS Event Logs, SQL Server Logs, and Security Logs for unusual events.
  2. Verify that all scheduled jobs have run successfully.
  3. Confirm that backups have been made and successfully saved to a secure location (both of these checks are sketched after this list).
  4. Monitor disk space to ensure your SQL Servers won’t run out of disk space.
  5. Throughout the day, periodically monitor performance using both System Monitor and Profiler.
  6. Use Enterprise Manager/Management Studio to monitor and identify blocking issues.
  7. Keep a log of any changes you make to servers, including documentation of any performance issues you identify and correct.
  8. Create SQL Server alerts to notify you of potential problems, and have them emailed to you. Take actions as needed.
  9. Run the SQL Server Best Practices Analyzer on each of your SQL Server instances on a periodic basis.
  10. Take some time to learn something new as a DBA to further your professional development.
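
As a rough sketch of points 2 and 3 above, the msdb queries below list jobs whose last outcome was failure and the most recent backup of each database. They are deliberately minimal and meant only as a starting point.

-- Scheduled jobs that reported failure (run_status 0 = failed)
SELECT j.name AS JobName, h.run_date, h.run_time, h.message
FROM msdb.dbo.sysjobs AS j
JOIN msdb.dbo.sysjobhistory AS h ON h.job_id = j.job_id
WHERE h.step_id = 0      -- job outcome rows only
  AND h.run_status = 0

-- Most recent backup of each database
SELECT database_name, MAX(backup_finish_date) AS LastBackupFinished
FROM msdb.dbo.backupset
GROUP BY database_name
ORDER BY LastBackupFinished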

Chosen from SQL Server DBA Best Practices by Brad M. McGehee

  • Jul 04 / 2008
DBA best practices, dbDigger, Performance Tuning and Optimization

Considerations for using the TRUNCATE TABLE command

When deleting bulk data from a table, the TRUNCATE TABLE command is preferred over DELETE. The obvious reason is that it is minimally logged, and thus more efficient than DELETE, which is a fully logged operation.

The syntax for this statement is:

TRUNCATE TABLE table_name

table_name: the name of the table to truncate, that is, the table from which all rows are removed.

But while using this powerful alternative, one should be aware of some further facts:

  • TRUNCATE TABLE is not fully unlogged; it removes the data by deallocating the data pages used to store the table’s data, and only the page deallocations are recorded in the transaction log, whereas DELETE removes the data row by row.
  • Besides deleting all of the records in a table, this command also resets the seed of any IDENTITY column back to its original value.
  • After you have run the TRUNCATE TABLE statement, it is important to manually update the statistics on the table using UPDATE STATISTICS (see the sketch after this list). Running TRUNCATE TABLE does not reset the statistics for the table, which means that as you add data to the table, the statistics for that table will be incorrect for a period of time. Of course, if you wait long enough, and if you have Auto Update Statistics turned on for the database, the statistics will eventually catch up. But this may not happen quickly, resulting in slow queries because the Query Optimizer is using out-of-date statistics.
  • TRUNCATE TABLE will not work when there are Foreign Key references present for that table. A workaround is to DROP the constraints before firing the TRUNCATE.
  • Although TRUNCATE TABLE does not log individual row deletions, it can still be rolled back when issued inside an explicit transaction; once the transaction is committed, however, the removed rows cannot be recovered from the transaction log.
  • Because TRUNCATE TABLE does not log individual row deletions, it cannot activate a DELETE trigger.
  • TRUNCATE is categorized as a DDL command.
  • You cannot truncate a table that participates in an indexed view.
  • You cannot truncate a table that is published using transactional replication or merge replication.
  • To truncate a table, you need at least ALTER permissions on the table (which is granted by default to the table owner, members of the sysadmin fixed server role, and the db_owner and db_ddladmin fixed database roles).
  • TRUNCATE TABLE cannot be used with a WHERE clause, so no condition can be applied while truncating a table. This makes it a lethal weapon; use it with care!
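
The following sketch illustrates the identity reseed and the manual statistics update mentioned in the list above; the table and column names are invented for the example.

-- Hypothetical demo table with an IDENTITY column
CREATE TABLE dbo.TruncateDemo
(
    id   INT IDENTITY(1, 1) NOT NULL,
    info VARCHAR(50) NOT NULL
)

INSERT INTO dbo.TruncateDemo (info) VALUES ('first row')
INSERT INTO dbo.TruncateDemo (info) VALUES ('second row')

TRUNCATE TABLE dbo.TruncateDemo

-- The identity seed is reset, so this row gets id = 1 again
INSERT INTO dbo.TruncateDemo (info) VALUES ('after truncate')

-- Refresh the table's statistics manually after the truncate
UPDATE STATISTICS dbo.TruncateDemo

DROP TABLE dbo.TruncateDemo
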
  • May 21 / 2008
DBA best practices, dbDigger, Performance Tuning and Optimization

SQL Server T-SQL best practices

Transact-SQL, sometimes abbreviated T-SQL, is Microsoft’s and Sybase’s proprietary extension to the SQL language. In order to make it more powerful, SQL has been enhanced with additional features such as control-of-flow language, local variables, various support functions for string processing, date processing, and mathematics, and improvements to the DELETE and UPDATE statements.

Following are some tips for T-SQL scripts:

  • Use the SQL Server Profiler Create Trace Wizard to run the “T-SQL By Duration” trace. You can specify the length of the long running queries you are trying to identify (such as over 1000 milliseconds), and then have these recorded in a log for you to investigate later.
  • In your queries, don’t return column data you don’t need. For example, you should not use SELECT * to return all the columns from a table if you don’t need all the data from each column. In addition, using SELECT * prevents the use of covered indexes, further potentially hurting query performance (a covering-index sketch follows this list).
  • You may have heard of a SET command called SET ROWCOUNT. Like the TOP operator, it is designed to limit how many rows are returned from a SELECT statement. In effect, SET ROWCOUNT and the TOP operator perform the same function. While in most cases either option works equally efficiently, there are some instances (such as rows returned from an unsorted heap) where the TOP operator is more efficient than SET ROWCOUNT. Because of this, using the TOP operator is preferable to using SET ROWCOUNT to limit the number of rows returned by a query (both are sketched after this list).
  • In a WHERE clause, the various operators used directly affect how fast a query is run. This is because some operators lend themselves to speed over other operators. Of course, you may not have any choice of which operator you use in your WHERE clauses, but sometimes you do.
  • Don’t be afraid to make liberal use of in-line and block comments in your Transact-SQL code, they will not affect the performance of your application and they will enhance your productivity when you or others come back to the code and try to modify it.
  • If possible, avoid using SQL Server cursors. They generally use a lot of SQL Server resources and reduce the performance and scalability of your applications. If you need to perform row-by-row operations, try to find another method to perform the task.
  • If you know that there are duplicate records, and this presents a problem for your application, then by all means use the UNION statement to eliminate the duplicate rows. But if not, use UNION ALL, which is less resource intensive.
  • Carefully evaluate whether your SELECT query needs the DISTINCT clause or not. Some developers automatically add this clause to every one of their SELECT statements, even when it is not necessary.
  • Always include a WHERE clause in your SELECT statement to narrow the number of rows returned. Only return those rows you need.
  • When you have a choice of using the IN or the BETWEEN clauses in your Transact-SQL, you will generally want to use the BETWEEN clause, as it is more efficient (see the comparison after this list).
  • If you need to write a SELECT statement to retrieve data from a single table, don’t SELECT the data from a view that points to multiple tables. Instead, SELECT the data from the table directly, or from a view that only contains the table you are interested in. If you SELECT the data from the multi-table view, the query will experience unnecessary overhead, and performance will be hindered.
  • If your application allows users to run queries, but you are unable in your application to easily prevent users from returning hundreds, even thousands, of unnecessary rows of data, consider using the TOP operator within the SELECT statement. This way, you can limit how many rows are returned, even if the user doesn’t enter any criteria to help reduce the number of rows returned to the client.
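
To illustrate the covered-index point above, here is a minimal sketch; the table, index, and column names are invented for the example.

-- Hypothetical covering index on exactly the columns the query needs
CREATE INDEX IX_Orders_CustomerId_OrderDate
    ON dbo.Orders (CustomerId, OrderDate)

-- Covered by the index: only indexed columns are referenced
SELECT CustomerId, OrderDate
FROM dbo.Orders
WHERE CustomerId = 42

-- SELECT * touches every column and cannot be answered from the index alone
SELECT *
FROM dbo.Orders
WHERE CustomerId = 42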
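
The next sketch contrasts the two row-limiting techniques discussed above, again on the invented dbo.Orders table.

-- Preferred: TOP limits the rows within the statement itself
SELECT TOP (10) CustomerId, OrderDate
FROM dbo.Orders

-- Legacy alternative: SET ROWCOUNT limits all statements that follow
SET ROWCOUNT 10
SELECT CustomerId, OrderDate
FROM dbo.Orders
SET ROWCOUNT 0   -- remember to switch the limit off again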
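
Finally, a small sketch of the IN versus BETWEEN choice; the predicate values are arbitrary.

-- Enumerates each value individually
SELECT CustomerId, OrderDate
FROM dbo.Orders
WHERE OrderId IN (100, 101, 102, 103, 104)

-- Generally more efficient for a contiguous range
SELECT CustomerId, OrderDate
FROM dbo.Orders
WHERE OrderId BETWEEN 100 AND 104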

Chosen from SQL Server DBA Best Practices by Brad M. McGehee

  • May 21 / 2008
DBA best practices, dbDigger, T-SQL Interview Questions

ACID properties for Database design

A logical unit of work (TRANSACTION) must exhibit four properties, called the ACID (Atomicity, Consistency, Isolation, and Durability) properties, to qualify as a transaction:
Atomicity: A transaction must be an atomic unit of work; either all of its data modifications are performed or none of them is performed.

Consistency: When completed, a transaction must leave all data in a consistent state. In a relational database, all rules must be applied to the transaction’s modifications to maintain all data integrity. All internal data structures, such as B-tree indexes or doubly-linked lists, must be correct at the end of the transaction.

Isolation: Modifications made by concurrent transactions must be isolated from the modifications made by any other concurrent transactions. A transaction either sees data in the state it was in before another concurrent transaction modified it, or it sees the data after the second transaction has completed, but it does not see an intermediate state. This is referred to as serializability because it results in the ability to reload the starting data and replay a series of transactions to end up with the data in the same state it was in after the original transactions were performed.

Durability: After a transaction has completed, its effects are permanently in place in the system. The modifications persist even in the event of a system failure.
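
As a brief illustration of these properties, the sketch below moves an amount between two invented accounts; either both updates commit as one atomic unit, or on error neither takes effect.

BEGIN TRY
    BEGIN TRANSACTION

    -- Both modifications form a single atomic unit of work
    UPDATE dbo.Accounts SET Balance = Balance - 100 WHERE AccountId = 1
    UPDATE dbo.Accounts SET Balance = Balance + 100 WHERE AccountId = 2

    COMMIT TRANSACTION   -- durability: the committed change survives a restart
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION   -- atomicity: neither update takes effect
END CATCH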
