Java database connectivity (JDBC)

Call-level interfaces such as JDBC are programming interfaces that allow external access to SQL database query and update commands. They allow SQL calls to be integrated into a general programming environment by providing library routines that interface with the database. In particular, the Java-based JDBC API has a rich collection of routines that make such an interface simple and intuitive.
JDBC helps you write Java applications that manage these three programming activities:
  • Connect to a data source, like a database
  • Send queries and update statements to the database
  • Retrieve and process the results received from the database in answer to your query
The JDBC 1.0 API provided the basic framework for data access, consisting primarily of the following interfaces and classes:
  • Driver
  • DriverManager
  • Connection
  • Statement
  • PreparedStatement
  • CallableStatement
  • ResultSet
  • DatabaseMetaData
  • ResultSetMetaData
  • Types
The following code fragment gives a simple example of these three steps:

Connection con = DriverManager.getConnection(
    "jdbc:myDriver:wombat", "myLogin", "myPassword");
Statement stmt = con.createStatement();
ResultSet rs = stmt.executeQuery("SELECT a, b, c FROM Table1");
while (rs.next()) {
    int x = rs.getInt("a");
    String s = rs.getString("b");
    float f = rs.getFloat("c");
}

JDBC includes four components:
1. The JDBC API: The JDBC™ API provides programmatic access to relational data from the Java™ programming language. Using the JDBC API, applications can execute SQL statements, retrieve results, and propagate changes back to an underlying data source. The JDBC API can also interact with multiple data sources in a distributed, heterogeneous environment. The JDBC API is part of the Java platform, which includes the Java™ Standard Edition (Java™ SE) and the Java™ Enterprise Edition (Java™ EE). The JDBC 4.0 API is divided into two packages: java.sql and javax.sql. Both packages are included in the Java SE and Java EE platforms.
2. JDBC Driver Manager: The JDBC DriverManager class defines objects which can connect Java applications to a JDBC driver. DriverManager has traditionally been the backbone of the JDBC architecture. It is quite small and simple. The Standard Extension packages javax.naming and javax.sql let you use a DataSource object registered with a Java Naming and Directory Interface™ (JNDI) naming service to establish a connection with a data source. You can use either connecting mechanism, but using a DataSource object is recommended whenever possible.
3. JDBC Test Suite: The JDBC driver test suite helps you determine whether a JDBC driver will run your program. These tests are not comprehensive or exhaustive, but they do exercise many of the important features in the JDBC API.
4. JDBC-ODBC Bridge: The Java Software bridge provides JDBC access via ODBC drivers. Note that you need to load ODBC binary code onto each client machine that uses this driver. As a result, the ODBC driver is most appropriate on a corporate network where client installations are not a major problem, or for application server code written in Java in a three-tier architecture.
Here is an easy way of visualizing what happens in a call level interface: You are writing a normal Java program. Somewhere in the program, you need to interact with a database. Using standard library routines, you open a connection to the database. You then use JDBC to send your SQL code to the database, and process the results that are returned. When you are done, you close the connection. Such an approach has to be contrasted with the precompilation route taken with Embedded SQL. The latter has a precompilation step, where the embedded SQL code is converted to the host language code (C/C++). Call-level interfaces do not require precompilation and thus avoid some of the problems of Embedded SQL. The result is increased portability and a cleaner client-server relationship.
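Here is a minimal end-to-end sketch of that lifecycle as a complete program (the driver URL and credentials are placeholders from the earlier fragment, and explicit driver loading, covered later in this post, is omitted):

import java.sql.*;

public class JdbcLifecycle {
    public static void main(String[] args) throws SQLException {
        // Open a connection; URL and credentials are placeholders.
        Connection con = DriverManager.getConnection(
            "jdbc:myDriver:wombat", "myLogin", "myPassword");
        try {
            // Send SQL to the database and process the results.
            Statement stmt = con.createStatement();
            ResultSet rs = stmt.executeQuery("SELECT a FROM Table1");
            while (rs.next()) {
                System.out.println(rs.getInt("a"));
            }
        } finally {
            con.close(); // Close the connection when done.
        }
    }
}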
The JDBC API supports both two-tier and three-tier processing models for database access.
In the two-tier model, a Java applet or application talks directly to the data source. This requires a JDBC driver that can communicate with the particular data source being accessed. A user’s commands are delivered to the database or other data source, and the results of those statements are sent back to the user. The data source may be located on another machine to which the user is connected via a network. This is referred to as a client/server configuration, with the user’s machine as the client, and the machine housing the data source as the server. The network can be an intranet, which, for example, connects employees within a corporation, or it can be the Internet.
In the three-tier model, commands are sent to a “middle tier” of services, which then sends the commands to the data source. The data source processes the commands and sends the results back to the middle tier, which then sends them to the user. MIS directors find the three-tier model very attractive because the middle tier makes it possible to maintain control over access and the kinds of updates that can be made to corporate data. Another advantage is that it simplifies the deployment of applications. Finally, in many cases, the three-tier architecture can provide performance advantages.
Result Sets and Cursors
The rows that satisfy the conditions of a query are called the result set. The number of rows returned in a result set can be zero, one, or many. A user can access the data in a result set one row at a time, and a cursor provides the means to do that. A cursor can be thought of as a pointer into a file that contains the rows of the result set, with the ability to keep track of which row is currently being accessed. A cursor allows a user to process each row of a result set from top to bottom and consequently may be used for iterative processing. Most DBMSs create a cursor automatically when a result set is generated. Later versions of the JDBC API added new capabilities for a result set's cursor, allowing it to move both forward and backward and also to move to a specified row or to a row whose position is relative to another row.
Stored Procedures
A stored procedure is a group of SQL statements that can be called by name. In other words, it is executable code, a mini-program, that performs a particular task that can be invoked the same way one can call a function or method. Traditionally, stored procedures have been written in a DBMS-specific programming language. The latest generation of database products allows stored procedures to be written using the Java programming language and the JDBC API. Stored procedures written in the Java programming language are bytecode portable between DBMSs. Once a stored procedure is written, it can be used and reused because a DBMS that supports stored procedures will, as its name implies, store it in the database. The following code is an example of how to create a very simple stored procedure using the Java programming language. Note that the stored procedure is just a static Java method that contains normal JDBC code. It accepts two input parameters and uses them to change an employee’s car number.
import java.sql.*;

public class UpdateCar {
    public static void UpdateCarNum(int carNo, int empNo)
            throws SQLException {
        Connection con = null;
        PreparedStatement pstmt = null;
        try {
            con = DriverManager.getConnection("jdbc:default:connection");
            pstmt = con.prepareStatement(
                "UPDATE EMPLOYEES SET CAR_NUMBER = ? " +
                "WHERE EMPLOYEE_NUMBER = ?");
            pstmt.setInt(1, carNo);
            pstmt.setInt(2, empNo);
            pstmt.executeUpdate();
        }
        finally {
            if (pstmt != null) pstmt.close();
        }
    }
}
A stored procedure is a group of SQL statements that form a logical unit and perform a particular task. Stored procedures are used to encapsulate a set of operations or queries to execute on a database server. For example, operations on an employee database (hire, fire, promote, lookup) could be coded as stored procedures executed by application code. Stored procedures can be compiled and executed with different parameters and results, and they may have any combination of input, output, and input/output parameters.
The stored procedure defined next has no parameters. Even though most stored procedures do something more complex than this example, it serves to illustrate some basic points about them. As previously stated, the syntax for defining a stored procedure differs from one DBMS to another. For example, some use begin . . . end, or other keywords to indicate the beginning and ending of the procedure definition. In some DBMSs, the following SQL statement creates a stored procedure:
create procedure SHOW_SUPPLIERS
as
select SUPPLIERS.SUP_NAME, COFFEES.COF_NAME
from SUPPLIERS, COFFEES
where SUPPLIERS.SUP_ID = COFFEES.SUP_ID
order by SUP_NAME
The following code puts the SQL statement into a string and assigns it to the variable createProcedure, which we will use later:
String createProcedure = "create procedure SHOW_SUPPLIERS " +
    "as " +
    "select SUPPLIERS.SUP_NAME, COFFEES.COF_NAME " +
    "from SUPPLIERS, COFFEES " +
    "where SUPPLIERS.SUP_ID = COFFEES.SUP_ID " +
    "order by SUP_NAME";
The following code fragment uses the Connection object con to create a Statement object, which is used to send the SQL statement creating the stored procedure to the database:
Statement stmt = con.createStatement();
stmt.executeUpdate(createProcedure);
The procedure SHOW_SUPPLIERS is compiled and stored in the database as a database object that can be called, similar to the way you would call a method.
Calling a Stored Procedure from JDBC
JDBC allows you to call a database stored procedure from an application written in the Java programming language. The first step is to create a CallableStatement object. As with Statement and PreparedStatement objects, this is done with an open Connection object. A CallableStatement object contains a call to a stored procedure; it does not contain the stored procedure itself. The first line of code below creates a call to the stored procedure SHOW_SUPPLIERS using the connection con. The part enclosed in curly braces is the escape syntax for stored procedures. When the driver encounters "{call SHOW_SUPPLIERS}", it translates this escape syntax into the native SQL used by the database to call the stored procedure named SHOW_SUPPLIERS.
CallableStatement cs = con.prepareCall("{call SHOW_SUPPLIERS}");
ResultSet rs = cs.executeQuery();
The ResultSet rs will be similar to the following:
SUP_NAME            COF_NAME
------------------  ------------------
Acme, Inc.          Colombian
Acme, Inc.          Colombian_Decaf
Superior Coffee     French_Roast
Superior Coffee     French_Roast_Decaf
The High Ground     Espresso
Note that the method used to execute cs is executeQuery, because cs calls a stored procedure that contains one query and thus produces one result set. If the procedure had contained one update or one DDL statement, the method executeUpdate would have been the one to use. It is sometimes the case, however, that a stored procedure contains more than one SQL statement, in which case it will produce more than one result set, more than one update count, or some combination of result sets and update counts. In this case, where there are multiple results, the method execute should be used to execute the CallableStatement.
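As a sketch of handling multiple results (assuming a hypothetical CallableStatement cs whose procedure returns a mix of result sets and update counts), the standard execute/getMoreResults loop looks like this:

boolean isResultSet = cs.execute();
while (true) {
    if (isResultSet) {
        ResultSet rs = cs.getResultSet();
        // ... process this result set ...
    } else {
        int count = cs.getUpdateCount();
        if (count == -1) break; // no more results of any kind
        // ... process this update count ...
    }
    isResultSet = cs.getMoreResults(); // advance to the next result
}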
The class CallableStatement is a subclass of PreparedStatement, so a CallableStatement object can take input parameters just as a PreparedStatement object can. In addition, a CallableStatement object can take output parameters, or parameters that are for both input and output. INOUT parameters and the method execute are used rarely.
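As a brief sketch of output parameters (the procedure GET_SUPPLIER_COUNT and its signature are hypothetical, not part of the example schema above), an OUT parameter is registered with its SQL type before execution and read afterwards:

CallableStatement cs = con.prepareCall("{call GET_SUPPLIER_COUNT(?, ?)}");
cs.setString(1, "Colombian");                        // IN parameter
cs.registerOutParameter(2, java.sql.Types.INTEGER);  // OUT parameter
cs.execute();
int supplierCount = cs.getInt(2);                    // read the OUT value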
Metadata in JDBC
JDBC provides the interface DatabaseMetaData, which a driver writer must implement so that its methods return information about the driver and/or DBMS for which the driver is written. For example, a large number of methods return whether or not the driver supports a particular functionality. This interface gives users and tools a standardized way to get metadata. In general, developers writing tools and drivers are the ones most likely to be concerned with metadata.
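For instance, a fragment like the following (a minimal sketch reusing the open connection con from earlier snippets) asks the driver and database to describe themselves:

DatabaseMetaData dbmd = con.getMetaData();
System.out.println("Driver: " + dbmd.getDriverName() + " " + dbmd.getDriverVersion());
System.out.println("Database: " + dbmd.getDatabaseProductName());
System.out.println("Supports transactions: " + dbmd.supportsTransactions());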
Types of Drivers
There are many possible implementations of JDBC drivers. These implementations are categorized as follows:
  • Type 1 – drivers that implement the JDBC API as a mapping to another data access API, such as ODBC. Drivers of this type are generally dependent on a native library, which limits their portability. The JDBC-ODBC Bridge driver is an example of a Type 1 driver.
  • Type 2 – drivers that are written partly in the Java programming language and partly in native code. These drivers use a native client library specific to the data source to which they connect. Again, because of the native code, their portability is limited.
  • Type 3 – drivers that use a pure Java client and communicate with a middleware server using a database-independent protocol. The middleware server then communicates the client requests to the data source.
  • Type 4 – drivers that are pure Java and implement the network protocol for a specific data source. The client connects directly to the data source.
Establishing A Connection
This involves two steps:
1. Load the vendor-specific driver
To ensure portability and code reuse, the API was designed to be as independent of the version or vendor of a database as possible. Since different DBMSs behave differently, we need to tell the driver manager which DBMS we wish to use, so that it can invoke the correct driver.
An Oracle driver is loaded using the following code snippet:

Class.forName("oracle.jdbc.driver.OracleDriver")

SQL Server is commonly reached through the JDBC-ODBC bridge driver, which is loaded with the following code:

Class.forName("sun.jdbc.odbc.JdbcOdbcDriver")
2. Make the connection
Once the driver is loaded and ready for a connection to be made, you may create an instance of a Connection object using:
String url = "jdbc:odbc:CMS";
con = DriverManager.getConnection(url, username, password);

The first string is the database URL, made up of the protocol (jdbc), the subprotocol (odbc), and the data source name (CMS). The username and password are the same credentials you would enter into SQL Server to access your account.
That's it! The connection returned in the last step is an open connection, which we will use to pass SQL statements to the database. In this code snippet, con is an open connection, and we will use it below.
Creating JDBC Statements
A JDBC Statement object is used to send your SQL statements to the DBMS, and should not be confused with an SQL statement. A JDBC Statement object is associated with an open connection, not with any single SQL statement. You can think of a JDBC Statement object as a channel sitting on a connection, passing one or more of your SQL statements (which you ask it to execute) to the DBMS.
An active connection is needed to create a Statement object. The following code snippet, using our Connection object con, does it for you:
Statement stmt = con.createStatement();
At this point, a Statement object exists, but it does not have an SQL statement to pass on to the DBMS. We learn how to do that in a following section.
Creating JDBC PreparedStatement
The PreparedStatement object is a close relative of the Statement object; both accomplish roughly the same thing: running SQL statements. PreparedStatement, however, allows you to precompile your SQL and run it repeatedly, adjusting specific parameters as necessary. Since processing SQL strings is a large part of a database's overhead, getting compilation out of the way at the start can significantly improve performance. With proper use, it can also simplify otherwise tedious database tasks. The main feature that distinguishes it from its superclass Statement is that, unlike Statement, it is given an SQL statement when it is created. This SQL statement is then sent to the DBMS right away, where it is compiled. Thus, in effect, a PreparedStatement is associated as a channel with a connection and a compiled SQL statement.
The advantage is that if you need to use the same or a similar query with different parameters multiple times, the statement can be compiled and optimized by the DBMS just once. Contrast this with a normal Statement, where each use of the same SQL statement requires a fresh compilation.
PreparedStatements are also created with a Connection method. The following snippet shows how to create a parameterized SQL statement with three input parameters:
PreparedStatement prepareUpdatePrice = con.prepareStatement(
    "UPDATE Sells SET price = ? WHERE bar = ? AND beer = ?");
Before we can execute a PreparedStatement, we need to supply values for the parameters. This is done by calling one of the setXXX methods defined in the class PreparedStatement. The most frequently used are setInt, setFloat, setDouble, and setString.
You can set these values before each execution of the prepared statement.
Continuing the above example, we would write:
prepareUpdatePrice.setInt(1, 3);
prepareUpdatePrice.setString(2, "Bar Of Foo");
prepareUpdatePrice.setString(3, "BudLite");
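Once the parameters are set, the statement is run with executeUpdate. A short sketch of the reuse benefit described above: the statement is prepared once, then bound and executed repeatedly (the price and beer values here are illustrative):

// Prepared once above; bind new values and execute for each update.
int[] prices = {3, 4};
String[] beers = {"BudLite", "Miller"};
for (int i = 0; i < beers.length; i++) {
    prepareUpdatePrice.setInt(1, prices[i]);
    prepareUpdatePrice.setString(2, "Bar Of Foo");
    prepareUpdatePrice.setString(3, beers[i]);
    prepareUpdatePrice.executeUpdate(); // no recompilation needed
}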
Executing CREATE/INSERT/UPDATE Statements
Executing SQL statements in JDBC varies depending on the “intention” of the SQL statement. DDL (data definition language) statements such as table creation and table alteration statements, as well as statements to update the table contents, are all executed using the method executeUpdate. These commands change the state of the database, hence the name of the method contains “Update”.
The following snippet has examples of executeUpdate statements.
Statement stmt = con.createStatement();
stmt.executeUpdate("CREATE TABLE Sells " +
    "(bar VARCHAR2(40), beer VARCHAR2(40), price REAL)");
stmt.executeUpdate("INSERT INTO Sells " +
    "VALUES ('Bar Of Foo', 'BudLite', 2.00)");
String sqlString = "CREATE TABLE Bars " +
    "(name VARCHAR2(40), address VARCHAR2(80), license INT)";
stmt.executeUpdate(sqlString);
When executeUpdate is used to execute DDL statements, the return value is always zero, while data modification statements return the number of rows affected in the relation, which is greater than or equal to zero.
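A small sketch of those return values, reusing stmt from the snippet above (the Prices table and the row counts are illustrative):

int ddlResult = stmt.executeUpdate(
    "CREATE TABLE Prices (beer VARCHAR2(40), price REAL)");   // DDL: returns 0
int rowsAffected = stmt.executeUpdate(
    "UPDATE Sells SET price = 3.00 WHERE beer = 'BudLite'");  // rows changed
System.out.println(ddlResult + " / " + rowsAffected);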
Executing SELECT Statements
As opposed to the statements in the previous section, a query is expected to return a set of tuples as its result, not to change the state of the database. Not surprisingly, there is a corresponding method called executeQuery, which returns its results as a ResultSet object:
String bar, beer;
float price;
ResultSet rs = stmt.executeQuery("SELECT * FROM Sells");
while (rs.next()) {
    bar = rs.getString("bar");
    beer = rs.getString("beer");
    price = rs.getFloat("price");
    System.out.println(bar + " sells " + beer + " for " + price + " Dollars.");
}
The bag of tuples resulting from the query is contained in the variable rs, which is an instance of ResultSet. A set is not of much use to us unless we can access each row and the attributes in each row. The ResultSet provides a cursor, which can be used to access each row in turn. The cursor is initially set just before the first row. Each invocation of the method next moves the cursor to the next row, if one exists, and returns true, or returns false if no row remains. We use the getXXX method of the appropriate type to retrieve the attributes of a row. In the previous example, we used the getString and getFloat methods to access the column values. Notice that we provided the name of the column whose value is desired as a parameter to the method. Also note that the VARCHAR2 columns bar and beer have been converted to Java String, and the REAL to Java float.
Equivalently, we could have specified the column number instead of the column name, with the same result. Thus the relevant statements would be:
bar = rs.getString(1);
price = rs.getFloat(3);
beer = rs.getString(2);
While working with a PreparedStatement, we execute it by first plugging in the values of the parameters and then invoking executeQuery (for a query) or executeUpdate (for a data modification). For the UPDATE statement prepared earlier, the call is:

prepareUpdatePrice.executeUpdate();
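For a query-type PreparedStatement the pattern is the same but ends in executeQuery. A minimal sketch (the findBeers statement is illustrative, not from the text above):

PreparedStatement findBeers = con.prepareStatement(
    "SELECT beer, price FROM Sells WHERE bar = ?");
findBeers.setString(1, "Bar Of Foo");
ResultSet rs = findBeers.executeQuery();
while (rs.next()) {
    System.out.println(rs.getString("beer") + " costs " + rs.getFloat("price"));
}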
Accessing ResultSet
JDBC also offers you a number of methods to find out where you are in the result set using getRow, isFirst, isBeforeFirst, isLast, isAfterLast.
Scrollable cursors allow free access to any row in the result set. By default, cursors scroll forward only and are read-only. When creating a Statement for a Connection, you can request a more flexible scrollable or updatable ResultSet:
Statement stmt = con.createStatement(
    ResultSet.TYPE_SCROLL_INSENSITIVE, ResultSet.CONCUR_READ_ONLY);
ResultSet rs = stmt.executeQuery("SELECT * FROM Sells");
The different options for the type are TYPE_FORWARD_ONLY, TYPE_SCROLL_INSENSITIVE, and TYPE_SCROLL_SENSITIVE. You can choose whether the cursor is read-only or updatable using the options CONCUR_READ_ONLY and CONCUR_UPDATABLE. With the default cursor, you can scroll forward only, using rs.next(). With scrollable cursors you have more options:
rs.absolute(3); // moves to the third tuple
rs.previous(); // moves back one tuple (tuple 2)
rs.relative(2); // moves forward two tuples (tuple 4)
rs.relative(-3); // moves back three tuples (tuple 1)
Transactions
JDBC allows SQL statements to be grouped together into a single transaction, so we can ensure the ACID properties using JDBC transactional features. Transaction control is performed by the Connection object. When a connection is created, it is in auto-commit mode by default. This means that each individual SQL statement is treated as a transaction by itself, and is committed as soon as its execution finishes. (This is not exactly precise, but we can gloss over the subtlety for most purposes.)
We can turn off auto-commit mode for an active connection with:

con.setAutoCommit(false);

and turn it on again with:

con.setAutoCommit(true);

Once auto-commit is off, no SQL statements are committed (that is, the database is not permanently updated) until you explicitly tell the connection to commit by invoking the commit() method: con.commit();
At any point before the commit, we may invoke rollback() to roll back the transaction and restore values to the last commit point (before the attempted updates).
Here is an example which ties these ideas together:
con.setAutoCommit(false);
Statement stmt = con.createStatement();
stmt.executeUpdate("INSERT INTO Sells VALUES ('Bar Of Foo', 'BudLite', 1.00)");
con.rollback();
stmt.executeUpdate("INSERT INTO Sells VALUES ('Bar Of Joe', 'Miller', 2.00)");
con.commit();
con.setAutoCommit(true);
We can also set transaction isolation levels as desired. For example, we can set the transaction isolation level to TRANSACTION_READ_COMMITTED, which does not allow a value to be read until after it has been committed, forbidding dirty reads. The Connection interface defines five such isolation-level constants. The default isolation level depends on the driver and the underlying database. JDBC allows us to find out the isolation level the connection is set to (using the Connection method getTransactionIsolation) and to set the appropriate level (using the Connection method setTransactionIsolation).
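A short sketch of both calls (TRANSACTION_READ_COMMITTED is one of the constants defined on Connection):

int currentLevel = con.getTransactionIsolation();  // what the connection uses now
con.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED);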
Usually rollback will be used in combination with Java’s exception handling ability to recover from (un)predictable errors. Such a combination provides an excellent and easy mechanism for handling data integrity. We study error handling using JDBC in the next section.
Handling Errors with Exceptions
Any JDBC object that encounters an error serious enough to halt execution throws a SQLException. For example, database connection errors, malformed SQL statements, and insufficient database privileges all throw SQLException objects.
The SQLException class extends the normal java.lang.Exception class and defines an additional method called getNextException(). This allows JDBC classes to chain a series of SQLException objects together. SQLException also defines the getSQLState() and getErrorCode() methods to provide additional information about an error. The value returned by getSQLState() is one of the ANSI-92 SQL state codes. An extremely conscientious application might have a catch block that looks something like this:
try {
    // Actual database code
}
catch (SQLException e) {
    while (e != null) {
        System.out.println("\nSQL Exception:");
        System.out.println(e.getMessage());
        System.out.println("ANSI-92 SQL State: " + e.getSQLState());
        System.out.println("Vendor Error Code: " + e.getErrorCode());
        e = e.getNextException();
    }
}
SQL Warnings
JDBC classes also have the option of generating (but not throwing) a SQLWarning exception when something is not quite right, but at the same time, not sufficiently serious to warrant halting the entire program. For example, attempting to set a transaction isolation mode that is not supported by the underlying database might generate a warning rather than an exception. Remember, exactly what qualifies as a warning condition varies by database.
SQLWarning encapsulates the same information as SQLException and is used in a similar fashion. However, unlike SQLException objects, which are caught in try/catch blocks, warnings are retrieved using the getWarnings() methods of the Connection, Statement, ResultSet, CallableStatement, and PreparedStatement interfaces. SQLWarning implements the getMessage(), getSQLState(), and getErrorCode() methods in the same manner as SQLException.
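Warnings chain together just as exceptions do, via getNextWarning. A minimal sketch of draining a Statement's warning chain:

SQLWarning warning = stmt.getWarnings();
while (warning != null) {
    System.out.println("SQL Warning: " + warning.getMessage());
    System.out.println("SQL State: " + warning.getSQLState());
    warning = warning.getNextWarning();
}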
JDBC gives access to such information by providing two levels of error conditions: SQLException and SQLWarning. SQLExceptions are Java exceptions which, if not handled, terminate the application. SQLWarnings are subclasses of SQLException, but they represent nonfatal errors or unexpected conditions and, as such, can be ignored.
In Java, statements that are expected to "throw" an exception or a warning are enclosed in a try block. If a statement in the try block throws an exception or a warning, it can be "caught" by one of the corresponding catch statements; each catch statement specifies which exceptions it is ready to catch.

T-SQL Database Console Command statements

DBCC
The Transact-SQL programming language provides DBCC statements that act as Database Console Commands for Microsoft SQL Server. These statements check the physical and logical consistency of a database. Many DBCC statements can fix detected problems.
Database Console Command statements are grouped into these categories.
Maintenance Statements:
  • DBCC DBREINDEX
  • DBCC DBREPAIR
  • DBCC INDEXDEFRAG
  • DBCC SHRINKDATABASE
  • DBCC SHRINKFILE
  • DBCC UPDATEUSAGE
Miscellaneous Statements:
  • DBCC dllname (FREE)
  • DBCC HELP
  • DBCC PINTABLE
  • DBCC ROWLOCK
  • DBCC TRACEOFF
  • DBCC TRACEON
  • DBCC UNPINTABLE
Status Statements:
  • DBCC INPUTBUFFER
  • DBCC OPENTRAN
  • DBCC OUTPUTBUFFER
  • DBCC PROCCACHE
  • DBCC SHOWCONTIG
  • DBCC SHOW_STATISTICS
  • DBCC SQLPERF
  • DBCC TRACESTATUS
  • DBCC USEROPTIONS
Validation Statements:
  • DBCC CHECKALLOC
  • DBCC CHECKCATALOG
  • DBCC CHECKCONSTRAINTS
  • DBCC CHECKDB
  • DBCC CHECKFILEGROUP
  • DBCC CHECKIDENT
  • DBCC CHECKTABLE
  • DBCC NEWALLOC

Codd’s 12 rules for relational database design

These rules were defined by Dr. E. F. Codd, the pioneer of relational databases. They primarily address implementation requirements for relational database management system (RDBMS) vendors.

1. Information Rule: All information in the database should be represented in one and only one way — as values in a table.

2. Guaranteed Access Rule: Each and every datum (atomic value) is guaranteed to be logically accessible by resorting to a combination of table name, primary key value, and column name.
3. Systematic Treatment of Null Values: Null values (distinct from empty character string or a string of blank characters and distinct from zero or any other number) are supported in the fully relational DBMS for representing missing information in a systematic way, independent of data type.
4. Dynamic Online Catalog Based on the Relational Model: The database description is represented at the logical level in the same way as ordinary data, so authorized users can apply the same relational language to its interrogation as they apply to regular data.
5. Comprehensive Data Sublanguage Rule: A relational system may support several languages and various modes of terminal use. However, there must be at least one language whose statements are expressible, per some well-defined syntax, as character strings, and that is comprehensive in supporting all of the following:
  • data definition
  • view definition
  • data manipulation (interactive and by program)
  • integrity constraints
  • authorization
  • transaction boundaries (begin, commit, and rollback).
6. View Updating Rule: All views that are theoretically updatable are also updatable by the system.
7. High-Level Insert, Update, and Delete: The capability of handling a base relation or a derived relation as a single operand applies not only to the retrieval of data, but also to the insertion, update, and deletion of data.
8. Physical Data Independence: Application programs and terminal activities remain logically unimpaired whenever any changes are made in either storage representation or access methods.
9. Logical Data Independence: Application programs and terminal activities remain logically unimpaired when information-preserving changes of any kind that theoretically permit unimpairment are made to the base tables.
10. Integrity Independence: Integrity constraints specific to a particular relational database must be definable in the relational data sublanguage and storable in the catalog, not in the application programs.
11. Distribution Independence: The data manipulation sublanguage of a relational DBMS must enable application programs and terminal activities to remain logically unimpaired whether and whenever data are physically centralized or distributed.
12. Nonsubversion Rule: If a relational system has or supports a low-level (single-record-at-a-time) language, that low-level language cannot be used to subvert or bypass the integrity rules or constraints expressed in the higher-level (multiple-records-at-a-time) relational language.

ACID properties for Database design

A logical unit of work (a transaction) must exhibit four properties, called the ACID (Atomicity, Consistency, Isolation, and Durability) properties, to qualify as a transaction:
Atomicity: A transaction must be an atomic unit of work; either all of its data modifications are performed or none of them is performed.
Consistency: When completed, a transaction must leave all data in a consistent state. In a relational database, all rules must be applied to the transaction’s modifications to maintain all data integrity. All internal data structures, such as B-tree indexes or doubly-linked lists, must be correct at the end of the transaction.
Isolation: Modifications made by concurrent transactions must be isolated from the modifications made by any other concurrent transactions. A transaction either sees data in the state it was in before another concurrent transaction modified it, or it sees the data after the second transaction has completed, but it does not see an intermediate state. This is referred to as serializability because it results in the ability to reload the starting data and replay a series of transactions to end up with the data in the same state it was in after the original transactions were performed.

Durability: After a transaction has completed, its effects are permanently in place in the system; the modifications persist even in the event of a system failure.


Memory Settings for SQL Server Analysis Services

Processes (such as Analysis Services) running on Windows 2000 Server or Windows Server 2003 Standard Edition can address a maximum of 2 gigabytes (GB) of RAM in the main process space. If you are working with large or complex cubes, Analysis Services may require more than 2 GB to load dimensions into memory, process dimensions, load replica dimensions, and still have sufficient memory for an effective query results cache. To allow Analysis Services to address more than 2 GB of RAM in a single process, you must install Windows 2000 Advanced Server, Windows 2000 Datacenter Server, Windows Server 2003 Enterprise Edition, or Windows Server 2003 Datacenter Edition.

Windows Server 2003 Enterprise Edition and Windows Server 2003 Datacenter Edition are available in 32-bit and 64-bit versions; the 64-bit versions support the 64-bit version of Analysis Services. Because Windows 2000 Advanced Server and Windows 2000 Datacenter Server are 32-bit operating systems, only the 32-bit version of Analysis Services can be installed on them. The 64-bit version of Analysis Services can address all available memory in the main process space without any special configuration (up to 64 GB with Enterprise Edition and up to 512 GB with Datacenter Edition).

The 32-bit version of Analysis Services can address up to 3 GB of memory in the main process space if you enable Application Memory Tuning; otherwise, no process can address more than 2 GB in the main process space. To enable Application Memory Tuning on the Analysis Services computer, set the /3GB switch in the boot.ini file and then use Analysis Manager to set an appropriate Memory conservation threshold value for Analysis Services. If you set the /3GB switch in boot.ini, the computer on which Analysis Services is running should have at least 4 GB of memory to ensure that the Windows operating system has sufficient memory for system services. If you are running other applications on the same computer, you must factor in their memory requirements as well. For example, if the SQL Server service and Analysis Services are installed on the same computer, SQL Server can address memory above 4 GB because SQL Server supports Address Windowing Extensions (AWE); in this case, you could install and use 8 GB or more on the server. However, because Analysis Services does not support AWE, it cannot access more than 3 GB of memory in the main process space unless the 64-bit version is used.

Having sufficient memory for Analysis Services increases query responsiveness and processing performance. Properly configuring available memory will maximize the use of memory, limit the use of disk resources for processing, and prevent the cleaner thread from evicting cache entries too quickly. The amount of memory used by Analysis Services for various purposes is regulated by a number of memory settings:

  • High and low memory settings
  • Very Large Dimension Memory (VLDM) threshold setting
  • Process buffer settings

These settings are configured using default values or based on the amount of physical memory in the computer during installation. Changing some of these memory settings is generally recommended.

High and Low Memory Settings
Analysis Services employs a number of mechanisms to keep the amount of memory allocated to it within the range that is defined by two settings on the Environment tab of the Server Properties dialog box in Analysis Manager: the Memory conservation threshold and the Minimum allocated memory settings (the HighMemoryLimit and the LowMemoryLimit values in the registry). The default value for the Memory conservation threshold setting is the amount of physical memory on the computer at the time of installation. The default value for the Minimum allocated memory setting is half the amount of physical memory on the computer at the time of installation. If you change the amount of memory on the computer after installation, you must manually modify these values. Otherwise, Analysis Services will not properly utilize the actual amount of physical memory on the computer.
When the amount of memory used by Analysis Services exceeds the Memory conservation threshold setting, Analysis Services raises the priority of the cleaner thread to normal in order to quickly reduce the allocated memory to the Minimum allocated memory setting. If the total memory allocated to all Analysis Services tasks exceeds the memory conservation threshold by more than approximately 6.25 percent, Analysis Services immediately begins dropping the cache entries for entire cubes in order to quickly reduce the amount of memory used. In this scenario, because Analysis Services is shedding memory extremely quickly, the total amount of allocated memory may drop below the Minimum allocated memory setting. If you set the Minimum allocated memory setting too low, the cleaner thread removes too many cached entries from the query results cache.


T-SQL joins refresher for beginners

Joins are a fundamental T-SQL concept. I have prepared the following refresher for those beginning to learn T-SQL. First, create the tables used in the refresher:

 
USE pubs  
 GO  
 CREATE TABLE t1(f1 INT)  
 CREATE TABLE t2(f1 INT)  
 CREATE TABLE t3(f1 INT)  
 GO  

Now populate the tables:

 
--=========for t1==================  
 SET NOCOUNT ON  
 DECLARE @j INT  
 SET @j = 1  
 WHILE @j <= 10   
 BEGIN  
  INSERT INTO t1 VALUES(@j) SET @j = @j + 1   
 END   
 GO   
 --=========for t2==================   
 SET NOCOUNT ON   
 DECLARE @k INT   
 SET @k = 6   
 WHILE @k <= 15   
 BEGIN   
 INSERT INTO t2 VALUES(@k)   
 SET @k = @k + 1   
 END   
 GO  
  --==========for t3=================   
 INSERT INTO t3 VALUES(6)   
 INSERT INTO t3 VALUES(2)  
 INSERT INTO t3 VALUES(3)  
 INSERT INTO t3 VALUES(12)   
 INSERT INTO t3 VALUES(13)   
 GO   

Now t1, t2, and t3 have the following values:

t1    t2    t3
1     6     6
2     7     2
3     8     3
4     9     12
5     10    13
6     11
7     12
8     13
9     14
10    15

  • Inner Join Query

Inner joins return rows only when there is at least one row from both tables that matches the join condition; rows that do not match a row from the other table are eliminated.

 
SELECT * FROM  
 t1 JOIN t2  
 ON t1.f1 = t2.f1  
 GO  

Inner Join

 
SELECT * FROM  
 t1 INNER JOIN t2  
 ON t1.f1 = t2.f1  
 GO  

EQUI Join

 
SELECT * FROM  
 t1,t2  
 WHERE  
 t1.f1 = t2.f1  
 GO  
  • Outer Join Query

Outer joins return all rows from at least one of the tables or views mentioned in the FROM clause, as long as those rows meet any WHERE or HAVING search conditions.
All rows are retrieved from the left table in a left outer join, and all rows from the right table in a right outer join.
All rows from both tables are returned in a full outer join.
Both queries below will have the same result, as both use a left outer join:

 
SELECT t1.f1, t2.f1  
 FROM t1 LEFT OUTER JOIN t2  
 ON (t1.f1 = t2.f1)  
 GO  
 --==========================  
 SELECT t1.f1, t2.f1  
 FROM t1 LEFT JOIN t2  
 ON (t1.f1 = t2.f1)  
 GO  

A right outer join returns all rows from the table on the right side of the join, whether or not they match rows in the table on the left.

Both queries below will have the same result, as both use a right outer join:

 SELECT t1.f1, t2.f1  
 FROM t1 RIGHT OUTER JOIN t2  
 ON (t1.f1 = t2.f1)  
 GO  
 --==============================  
 SELECT t1.f1, t2.f1  
 FROM t1 RIGHT JOIN t2  
 ON (t1.f1 = t2.f1)  
 GO  

A full outer join returns rows from both tables, whether or not they match rows in the other table.

Both queries below will have the same result, as both use a full outer join:

 
SELECT t1.f1, t2.f1  
 FROM t1 FULL OUTER JOIN t2  
 ON (t1.f1 = t2.f1)  
 ORDER BY t1.f1  
 GO  
 --======================  
 SELECT t1.f1, t2.f1  
 FROM t1 FULL JOIN t2  
 ON (t1.f1 = t2.f1)  
 ORDER BY t1.f1  
 GO  
  • Multiple Joins

Joins make the most sense between two tables, but can be extended to more than two. A full outer join across three tables returns values from all three:

 
SELECT a.f1, b.f1, c.f1  
 FROM t1 a  
 FULL OUTER JOIN t2 b ON (a.f1 = b.f1)  
 FULL OUTER JOIN t3 c ON (a.f1 = c.f1)  
 GO  
  • Self Joins

When the same table is used in a query under two different aliases, the join is referred to as a self-join.

  • Cross Joins

A cross join produces a Cartesian product: each row in set A is paired with every row in set B.
