
Posts Categorized / Performance Tuning and Optimization

  • Oct 27 / 2008
  • 0
dbDigger, Hardware and Platform, Performance Tuning and Optimization, Storage

Windows Server 2003 for SQL Server 2005 with 32-bit and 64-bit architecture

Windows Server 2003 is available for x86, x64, and Itanium platforms. SQL Server 2005 is correspondingly available for Windows Server 2003 32-bit (x86), Windows Server 2003 64-bit (x64), and Windows Server 2003 for Itanium-based systems. Clustering support is also considerably improved in Windows Server 2003.

Windows Server 2003 comes in a variety of editions, including

* Standard Edition
* Enterprise Edition
* Datacenter Edition

The edition of Windows Server 2003 that you choose depends on the number of CPUs and the amount of memory required. It is possible to run 32-bit SQL Server 2005 on 64-bit Windows Server 2003, but you obviously cannot run 64-bit SQL Server 2005 on 32-bit Windows Server 2003.

Click here to read all posts on Hardware and OS related to SQL Server

  • Oct 27 / 2008
  • 0
Buffer, dbDigger, Performance Tuning and Optimization, SQL Server IO

SQL Server I/O operations in buffer

SQL Server handles read and write requests in entirely different ways. Here is a comparison of how each type is handled in the buffer.

When a read request is submitted to SQL Server, the buffer cache is checked first. If the data is found in the buffer cache, it is returned from there; if not, the storage engine reads it from disk into the buffer cache, and it is then returned to the requester from the buffer cache.
If the requested data is found in the buffer cache, it is called a buffer hit.
If the data is not found in the buffer cache and the storage engine has to read it from disk, it is called a buffer miss.
The ratio of buffer hits to total buffer requests is called the buffer cache hit ratio.
The closer the buffer cache hit ratio is to 100%, the better your read performance.
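The read path above can be sketched as a toy read-through cache (a simplification for illustration only; `BufferCache` and its methods are invented names, not SQL Server internals):

```python
class BufferCache:
    """Toy read-through cache illustrating buffer hits, misses, and hit ratio."""

    def __init__(self, disk):
        self.disk = disk          # backing store: page_id -> data
        self.cache = {}           # buffer cache: page_id -> data
        self.hits = 0
        self.misses = 0

    def read(self, page_id):
        if page_id in self.cache:             # buffer hit: served from cache
            self.hits += 1
        else:                                 # buffer miss: fetch from "disk"
            self.misses += 1
            self.cache[page_id] = self.disk[page_id]
        return self.cache[page_id]

    def hit_ratio(self):
        total = self.hits + self.misses
        return self.hits / total if total else 0.0


disk = {1: "page-1", 2: "page-2"}
buf = BufferCache(disk)
buf.read(1)   # miss
buf.read(1)   # hit
buf.read(2)   # miss
buf.read(1)   # hit
print(buf.hit_ratio())  # 2 hits out of 4 requests -> 0.5
```

In a real server you would watch the Buffer cache hit ratio performance counter rather than compute it yourself; the sketch only shows where the number comes from.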

For a write operation against a SQL Server database, data is always modified in the buffer cache first. If the page to be modified is not already in the buffer cache, it is read from disk into the cache and modified there. When the change is complete and a COMMIT is issued, the log writer process writes the change records to the log file, and only then is the COMMIT complete. At this point the changed data pages have still not been written to disk; they are flushed later by either the lazy writer or the checkpoint process. Unlike a read operation, the user session does not wait for the data pages to be written to disk during any of this.
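The ordering described above (modify in cache, make the commit durable via the log, flush data pages later) can be sketched as a toy model; all names here are illustrative, not SQL Server internals:

```python
class ToyDatabase:
    """Toy sketch of write-ahead logging: COMMIT waits only for the log write;
    dirty pages reach disk later via a checkpoint-style flush."""

    def __init__(self):
        self.disk_pages = {1: "old"}   # durable data pages
        self.buffer = {}               # in-memory copies (may be dirty)
        self.log = []                  # transaction log (the durable record)

    def modify(self, page_id, value):
        if page_id not in self.buffer:            # read into cache if absent
            self.buffer[page_id] = self.disk_pages[page_id]
        self.buffer[page_id] = value              # change only the cached copy

    def commit(self, txn_id):
        self.log.append((txn_id, dict(self.buffer)))  # log write makes it durable
        # COMMIT returns here -- data pages on disk are still unchanged

    def checkpoint(self):
        self.disk_pages.update(self.buffer)       # flush dirty pages later


db = ToyDatabase()
db.modify(1, "new")
db.commit("T1")
print(db.disk_pages[1])   # still "old": commit waited only on the log
db.checkpoint()
print(db.disk_pages[1])   # "new" after the later flush
```

The point of the sketch is the gap between `commit` and `checkpoint`: durability comes from the log, not from the data pages on disk.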

Because the user session waits on reads but not on data-page writes, read performance deserves more attention than write performance.

Click here to read all posts related to SQL Server storage systems and I/O operations

  • Oct 27 / 2008
  • 0
dbDigger, Performance Tuning and Optimization, SQL Server IO, Storage

Sequential and Random I/O for Databases

Sequential I/O is an operation in which adjacent data is accessed on a disk drive. Only short track-to-track seeks are required, so throughput is high.

In random I/O, the disk head reads data scattered across various parts of the disk. Non-adjacent tracks are accessed, and the resulting head movement degrades performance.

Keeping these characteristics of sequential and random I/O in mind, it is important to design database disk storage so that as much I/O as possible is sequential. The transaction log is a prime example of sequential I/O in a SQL Server environment. A heavily used log file should be placed on its own disk, and it is a good approach to use just a single log file per database. If you place multiple active log files on the same disk, the access pattern changes from sequential to random I/O. In practice it is not possible to make all I/O sequential on a server, because clients request different operations against different pieces of data. The practical solution is to spread your data across multiple disks so that random access is minimized.
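The cost of head movement can be illustrated by comparing the total seek distance for a sequential versus a random access pattern over the same set of tracks (a toy model that counts one unit per track and ignores rotational latency):

```python
import random

def total_seek_distance(track_sequence):
    """Sum of head movements (in tracks) needed to visit the sequence in order."""
    return sum(abs(b - a) for a, b in zip(track_sequence, track_sequence[1:]))

tracks = list(range(1000))                      # sequential: adjacent tracks
sequential = total_seek_distance(tracks)        # 999 single-track seeks
print(sequential)                               # 999

random.seed(42)
shuffled = random.sample(tracks, len(tracks))   # random: same tracks, shuffled
randomized = total_seek_distance(shuffled)
print(randomized)                               # far larger: long seeks dominate
```

Both patterns touch exactly the same data, yet the shuffled order forces orders of magnitude more head travel, which is the whole argument for keeping log I/O sequential.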

Click here to read all posts related to SQL Server storage systems and I/O operations

  • Oct 17 / 2008
  • 0
dbDigger, Indexes, Monitoring and Analysis, Performance Tuning and Optimization

Clustered indexes can create memory bottlenecks

Indexes are usually encouraged to boost performance, but for very large tables another dimension should also be considered. Incorrect use of indexing is a common root cause of memory pressure. As a database grows, the DBA should consider removing clustered indexes from very large tables. The downside is that changing clustered primary keys to nonclustered primary keys increases the load on the disk subsystem; the main gain, however, is in memory.
Because a clustered index contains the entire table, loading it into memory uses far more memory than loading a nonclustered version of the index. Switching to a nonclustered index can therefore dramatically reduce the memory requirements for that table.
Consider a very wide table (an average of 250 bytes per row) that holds a total of 65 GB of data. The clustered index is then also 65 GB in size. By changing the primary key from a clustered index to a nonclustered index, the index space requirement drops from 65 GB to approximately 1 GB (including padding). This greatly reduces the storage required, and when index scans are performed against the primary key, only about 1 GB of data has to be loaded into memory, which means the index pages can stay in memory much longer.

If a production server has only 16 GB of RAM installed, keeping a 65 GB index in memory is not an effective use of memory. After this change to the primary key, disk load increases by a small percentage, but memory load decreases dramatically, allowing SQL Server to keep the index in cache much longer while also caching other objects in memory.
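The arithmetic behind these figures can be checked directly. The sketch below assumes, for illustration, a 4-byte integer key per row at the leaf level of the nonclustered index, ignoring row-locator and upper-level overhead, so it is a rough lower bound rather than an exact sizing:

```python
GB = 1024 ** 3

table_bytes = 65 * GB          # total table size = clustered index size
row_bytes = 250                # average row width
rows = table_bytes // row_bytes
print(rows)                    # roughly 279 million rows

# Nonclustered PK: assumed 4-byte integer key per row (leaf level only).
key_bytes = 4
nc_index_bytes = rows * key_bytes
print(nc_index_bytes / GB)     # about 1.04 GB, matching the ~1 GB figure
```

Real nonclustered index size also depends on the row locator, fill factor, and non-leaf levels, so the true figure lands somewhat above this lower bound.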

SQL Server Memory Bottleneck
