
  • May 21 / 2008
DBCC Commands, dbDigger

T-SQL Database Console Command statements

DBCC
The Transact-SQL programming language provides DBCC statements that act as Database Console Commands for Microsoft SQL Server. These statements check the physical and logical consistency of a database. Many DBCC statements can fix detected problems.
Database Console Command statements are grouped into these categories.
Maintenance Statements:
  • DBCC DBREINDEX
  • DBCC DBREPAIR
  • DBCC INDEXDEFRAG
  • DBCC SHRINKDATABASE
  • DBCC SHRINKFILE
  • DBCC UPDATEUSAGE
Miscellaneous Statements:
  • DBCC dllname (FREE)
  • DBCC HELP
  • DBCC PINTABLE
  • DBCC ROWLOCK
  • DBCC TRACEOFF
  • DBCC TRACEON
  • DBCC UNPINTABLE
Status Statements:
  • DBCC INPUTBUFFER
  • DBCC OPENTRAN
  • DBCC OUTPUTBUFFER
  • DBCC PROCCACHE
  • DBCC SHOWCONTIG
  • DBCC SHOW_STATISTICS
  • DBCC SQLPERF
  • DBCC TRACESTATUS
  • DBCC USEROPTIONS
Validation Statements:
  • DBCC CHECKALLOC
  • DBCC CHECKCATALOG
  • DBCC CHECKCONSTRAINTS
  • DBCC CHECKDB
  • DBCC CHECKFILEGROUP
  • DBCC CHECKIDENT
  • DBCC CHECKTABLE
  • DBCC NEWALLOC
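As a quick illustration, a routine consistency check plus a couple of status commands might look like the following (the database and table names here are examples, not fixed requirements):

```sql
-- Validate the logical and physical integrity of an entire database
DBCC CHECKDB ('pubs')
GO
-- Report the current identity value of a table (example table name)
DBCC CHECKIDENT ('authors')
GO
-- Report transaction-log space usage for all databases
DBCC SQLPERF (LOGSPACE)
GO
```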
  • May 21 / 2008
Data Modeling and Database Design, DBA Interview questions, dbDigger, T-SQL Interview Questions

Codd’s 12 rules for relational database design

These rules are defined by Dr. Codd, the pioneer of relational databases. The rules primarily address implementation requirements for relational database management system (RDBMS) vendors.

1. Information Rule: All information in the database should be represented in one and only one way — as values in a table.

2. Guaranteed Access Rule: Each and every datum (atomic value) is guaranteed to be logically accessible by resorting to a combination of table name, primary key value, and column name.
3. Systematic Treatment of Null Values: Null values (distinct from empty character string or a string of blank characters and distinct from zero or any other number) are supported in the fully relational DBMS for representing missing information in a systematic way, independent of data type.
4. Dynamic Online Catalog Based on the Relational Model: The database description is represented at the logical level in the same way as ordinary data, so authorized users can apply the same relational language to its interrogation as they apply to regular data.
5. Comprehensive Data Sublanguage Rule: A relational system may support several languages and various modes of terminal use. However, there must be at least one language whose statements are expressible, per some well-defined syntax, as character strings and that is comprehensive in supporting all of the following:
  • data definition
  • view definition
  • data manipulation (interactive and by program)
  • integrity constraints
  • authorization
  • transaction boundaries (begin, commit, and rollback).
6. View Updating Rule: All views that are theoretically updatable are also updatable by the system.
7. High-Level Insert, Update, and Delete: The capability of handling a base relation or a derived relation as a single operand applies not only to the retrieval of data, but also to the insertion, update, and deletion of data.
8. Physical Data Independence: Application programs and terminal activities remain logically unimpaired whenever any changes are made in either storage representation or access methods.
9. Logical Data Independence: Application programs and terminal activities remain logically unimpaired when information-preserving changes of any kind that theoretically permit unimpairment are made to the base tables.
10. Integrity Independence: Integrity constraints specific to a particular relational database must be definable in the relational data sublanguage and storable in the catalog, not in the application programs.
11. Distribution Independence: The data manipulation sublanguage of a relational DBMS must enable application programs and terminal activities to remain logically unimpaired whether and whenever data are physically centralized or distributed.
12. Nonsubversion Rule: If a relational system has or supports a low-level (single-record-at-a-time) language, that low-level language cannot be used to subvert or bypass the integrity rules or constraints expressed in the higher-level (multiple-records-at-a-time) relational language.
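Rule 7 is worth a concrete illustration: it is what makes set-based SQL possible, since a single statement can treat a whole relation as one operand rather than looping record by record. A minimal sketch (the table and column names here are hypothetical):

```sql
-- One set-based statement updates every qualifying row at once,
-- handling the relation as a single operand (Codd's rule 7)
UPDATE employees
SET salary = salary * 1.10
WHERE department = 'Sales'
```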
  • May 21 / 2008
DBA best practices, dbDigger, T-SQL Interview Questions

ACID properties for Database design

A logical unit of work (TRANSACTION) must exhibit four properties, called the ACID (Atomicity, Consistency, Isolation, and Durability) properties, to qualify as a transaction:
Atomicity: A transaction must be an atomic unit of work; either all of its data modifications are performed or none of them is performed.
Consistency: When completed, a transaction must leave all data in a consistent state. In a relational database, all rules must be applied to the transaction’s modifications to maintain all data integrity. All internal data structures, such as B-tree indexes or doubly-linked lists, must be correct at the end of the transaction.
Isolation: Modifications made by concurrent transactions must be isolated from the modifications made by any other concurrent transactions. A transaction either sees data in the state it was in before another concurrent transaction modified it, or it sees the data after the second transaction has completed, but it does not see an intermediate state. This is referred to as serializability because it results in the ability to reload the starting data and replay a series of transactions to end up with the data in the same state it was in after the original transactions were performed.
Durability: After a transaction has completed, its effects are permanently in place in the system. The modifications persist even in the event of a system failure.
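In T-SQL, atomicity and durability are delivered by explicit transactions. A minimal sketch of an atomic funds transfer, assuming a hypothetical accounts table:

```sql
BEGIN TRY
  BEGIN TRANSACTION
    UPDATE accounts SET balance = balance - 100 WHERE acct_id = 1
    UPDATE accounts SET balance = balance + 100 WHERE acct_id = 2
  COMMIT TRANSACTION   -- both modifications become permanent together
END TRY
BEGIN CATCH
  ROLLBACK TRANSACTION -- on any error, neither modification is applied
END CATCH
```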

  • May 21 / 2008
Analysis Services SSAS, dbDigger, Hardware and Platform, Performance Tunning and Optimization

Memory Settings for SQL Server Analysis Services

Processes (such as Analysis Services) running in Windows 2000 Server or Windows Server 2003 Standard Edition can address a maximum of 2 gigabytes (GB) of RAM in the main process space. If you are working with large or complex cubes, Analysis Services may require more than 2 GB to load dimensions into memory, process dimensions, load replica dimensions, and still have sufficient memory for an effective query results cache. To allow Analysis Services to address more than 2 GB of RAM in a single process, you must install Windows 2000 Advanced Server, Windows 2000 Datacenter Server, Windows Server 2003 Enterprise Edition, or Windows Server 2003 Datacenter Edition.

Windows Server 2003 Enterprise Edition and Windows Server 2003 Datacenter Edition are available in 32-bit and 64-bit versions; the 64-bit versions support the 64-bit version of Analysis Services. Because Windows 2000 Advanced Server and Windows 2000 Datacenter Server are 32-bit operating systems, only the 32-bit version of Analysis Services can be installed on them. The 64-bit version of Analysis Services can address all available memory in the main process space without any special configuration (up to 64 GB with Enterprise Edition and up to 512 GB with Datacenter Edition).

The 32-bit version of Analysis Services can address up to 3 GB of memory in the main process space if you enable Application Memory Tuning; otherwise, no process can address more than 2 GB in the main process space. To enable Application Memory Tuning on the Analysis Services computer, set the /3GB switch in the boot.ini file and then use Analysis Manager to set an appropriate Memory conservation threshold value for Analysis Services. If you set the /3GB switch in boot.ini, the computer on which Analysis Services is running should have at least 4 GB of memory to ensure that the Windows operating system has sufficient memory for system services.
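For reference, a boot.ini operating-system entry with the /3GB switch enabled might look like the following; the ARC path and description are illustrative and vary by installation:

```
[operating systems]
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Windows Server 2003 Enterprise" /fastdetect /3GB
```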
If you are running other applications on the same computer, you must factor in their memory requirements as well. For example, if the SQL Server service and Analysis Services are installed on the same computer, SQL Server can address memory above 4 GB because SQL Server supports Address Windowing Extensions (AWE). In this case, you could install and use 8 GB or more on the server. However, because Analysis Services does not support AWE, Analysis Services cannot access more than 3 GB of memory in the main process space unless the 64-bit version is used. Having sufficient memory for Analysis Services increases query responsiveness and processing performance. Properly configuring available memory will maximize the use of memory, limit the use of disk resources for processing, and prevent the cleaner thread from evicting cache entries too quickly. The amount of memory used by Analysis Services for various purposes is regulated by a number of memory settings:

  • High and low memory settings
  • Very Large Dimension Memory (VLDM) threshold setting
  • Process buffer settings

These settings are configured using default values or based on the amount of physical memory in the computer during installation. Changing some of these memory settings is generally recommended.

High and Low Memory Settings
Analysis Services employs a number of mechanisms to keep the amount of memory allocated to it within the range that is defined by two settings on the Environment tab of the Server Properties dialog box in Analysis Manager: the Memory conservation threshold and the Minimum allocated memory settings (the HighMemoryLimit and the LowMemoryLimit values in the registry). The default value for the Memory conservation threshold setting is the amount of physical memory on the computer at the time of installation. The default value for the Minimum allocated memory setting is half the amount of physical memory on the computer at the time of installation. If you change the amount of memory on the computer after installation, you must manually modify these values. Otherwise, Analysis Services will not properly utilize the actual amount of physical memory on the computer.
When the amount of memory used by Analysis Services exceeds the Memory conservation threshold setting, Analysis Services raises the priority of the cleaner thread to normal in order to quickly reduce the allocated memory to the Minimum allocated memory setting. If the total memory allocated to all Analysis Services tasks exceeds the memory conservation threshold by more than approximately 6.25 percent, Analysis Services immediately begins dropping the cache entries for entire cubes in order to quickly reduce the amount of memory used by Analysis Services. In this scenario, because Analysis Services is shedding memory extremely quickly, the total amount of allocated memory may drop below the Minimum allocated memory setting. If you set the Minimum allocated memory setting too low, the cleaner thread removes too many cached entries from the query results cache.

  • May 20 / 2008
dbDigger, Joins

T-SQL joins refresher for beginners

Joins are an essential T-SQL concept. I have prepared the following refresher for beginners. First, create the tables used throughout the refresher.

 
USE pubs
GO
CREATE TABLE t1(f1 INT)
CREATE TABLE t2(f1 INT)
CREATE TABLE t3(f1 INT)
GO

Now populate the tables:

 
--========= for t1 ==================
SET NOCOUNT ON
DECLARE @j INT
SET @j = 1
WHILE @j <= 10
BEGIN
  INSERT INTO t1 VALUES(@j)
  SET @j = @j + 1
END
GO
--========= for t2 ==================
SET NOCOUNT ON
DECLARE @k INT
SET @k = 6
WHILE @k <= 15
BEGIN
  INSERT INTO t2 VALUES(@k)
  SET @k = @k + 1
END
GO
--========= for t3 ==================
INSERT INTO t3 VALUES(6)
INSERT INTO t3 VALUES(2)
INSERT INTO t3 VALUES(3)
INSERT INTO t3 VALUES(12)
INSERT INTO t3 VALUES(13)
GO

Now t1, t2, and t3 hold the following values:

t1   t2   t3
1    6    6
2    7    2
3    8    3
4    9    12
5    10   13
6    11
7    12
8    13
9    14
10   15

  • Inner Join Query

Inner joins return a row only when there is at least one row in each table that matches the join condition.
Rows that do not match a row from the other table are eliminated.

 
SELECT * FROM
t1 JOIN t2
ON t1.f1 = t2.f1
GO

Inner Join

 
SELECT * FROM  
 t1 INNER JOIN t2  
 ON t1.f1 = t2.f1  
 GO  

EQUI Join

 
SELECT * FROM  
 t1,t2  
 WHERE  
 t1.f1 = t2.f1  
 GO  
  • Outer Join Query

Outer joins return all rows from at least one of the tables or views mentioned in the FROM clause, as long as those rows meet any WHERE or HAVING search conditions.
A left outer join retrieves all rows from the left table, and a right outer join retrieves all rows from the right table.
A full outer join returns all rows from both tables.
The two queries below produce the same result, as both use a left outer join.

 
SELECT t1.f1, t2.f1  
 FROM t1 LEFT OUTER JOIN t2  
 ON (t1.f1 = t2.f1)  
 GO  
 --==========================  
 SELECT t1.f1, t2.f1  
 FROM t1 LEFT JOIN t2  
 ON (t1.f1 = t2.f1)  
 GO  

A right outer join returns all rows from the right table, whether or not they match any row in the left table.

The two queries below produce the same result, as both use a right outer join.

 SELECT t1.f1, t2.f1  
 FROM t1 RIGHT OUTER JOIN t2  
 ON (t1.f1 = t2.f1)  
 GO  
 --==============================  
 SELECT t1.f1, t2.f1  
 FROM t1 RIGHT JOIN t2  
 ON (t1.f1 = t2.f1)  
 GO  

A full outer join returns rows from both tables, whether or not the values match in the other table.

The two queries below produce the same result, as both use a full outer join.

 
SELECT t1.f1, t2.f1  
 FROM t1 FULL OUTER JOIN t2  
 ON (t1.f1 = t2.f1)  
 ORDER BY t1.f1  
 GO  
 --======================  
 SELECT t1.f1, t2.f1  
 FROM t1 FULL JOIN t2  
 ON (t1.f1 = t2.f1)  
 ORDER BY t1.f1  
 GO  
  • Multiple Joins

Joins are most natural between two tables, but they extend readily to more than two.
A full outer join across three tables returns values from all three:

 
SELECT a.f1, b.f1, c.f1  
 FROM t1 a  
 FULL OUTER JOIN t2 b ON (a.f1 = b.f1)  
 FULL OUTER JOIN t3 c ON (a.f1 = c.f1)  
 GO  
  • Self Joins

When the same table is used in a query under two different aliases, the join is referred to as a self-join.
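A self-join against t1 can be sketched as follows; the same table appears twice under the aliases a and b, pairing each value with its successor:

```sql
-- Self-join: t1 aliased twice, matching each value to the next one
SELECT a.f1, b.f1
FROM t1 a JOIN t1 b
ON a.f1 = b.f1 - 1
GO
```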

  • Cross Joins

A cross join produces the Cartesian product of the two tables: each row of the first table is paired with every row of the second.
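Using the refresher tables, a cross join of t1 (10 rows) and t3 (5 rows) returns 10 x 5 = 50 rows:

```sql
-- Cartesian product: every row of t1 paired with every row of t3
SELECT a.f1, c.f1
FROM t1 a CROSS JOIN t3 c
GO
```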
