
Posts Categorized / T-SQL Interview Questions

  • Jul 29 / 2008
Cursors, dbDigger, T-SQL Interview Questions

Server-side cursors

Question: Of course, we all know that server-side cursors are slow on SQL Server and should be avoided when possible. However, there are a few cases where they can’t be avoided, for instance because an external process has to be invoked for each row, or because you run into one of the very rare problems where a cursor actually runs faster than a set-based solution.
So, suppose you find yourself in a situation where you do need a cursor, and you also need to change the data retrieved by the cursor. How can you ensure maximum performance?

 

Answer: Do not use FOR UPDATE in the cursor declaration, and use WHERE keycolumn = @keyvalue in the UPDATE statement.

 

Explanation: Though not documented by Microsoft, extensive testing has shown that reading a STATIC cursor with no FOR UPDATE option and using the primary key to update the row just read is faster than specifying a FOR UPDATE option (either with or without a column list) and using the WHERE CURRENT OF clause in the UPDATE statement.
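
A minimal sketch of the pattern, assuming a hypothetical dbo.Orders table with primary key OrderID and a Status column:

DECLARE @OrderID int

-- STATIC cursor, deliberately declared without FOR UPDATE
DECLARE order_cur CURSOR STATIC FOR
    SELECT OrderID FROM dbo.Orders

OPEN order_cur
FETCH NEXT FROM order_cur INTO @OrderID

WHILE @@FETCH_STATUS = 0
BEGIN
    -- update through the primary key instead of WHERE CURRENT OF
    UPDATE dbo.Orders
    SET Status = 'Processed'
    WHERE OrderID = @OrderID

    FETCH NEXT FROM order_cur INTO @OrderID
END

CLOSE order_cur
DEALLOCATE order_cur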

Note: The question of the day is taken from SQLServerCentral.com. I want to compile selected QODs to share.

  • Jul 08 / 2008
dbDigger, Identity Columns, T-SQL Interview Questions

Do you know the T-SQL INSERT statement?

Question: Consider the following table.

 
CREATE TABLE Test  
 (  
 Col1 int IDENTITY(1,1)  
 )  

Write the INSERT statement to populate the above table with sample values. Note: you are not allowed to use SET IDENTITY_INSERT.

Answer: INSERT dbo.Test DEFAULT VALUES

Explanation: The table contains only one column, and it is an IDENTITY column. You cannot explicitly insert a value into an IDENTITY column: either you have to SET IDENTITY_INSERT ON before inserting the value, or you have to use the DEFAULT VALUES clause. The singular form DEFAULT VALUE is not valid syntax.
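
A quick sketch to verify the behavior; the identity values are generated automatically:

CREATE TABLE dbo.Test
 (
 Col1 int IDENTITY(1,1)
 )

-- each DEFAULT VALUES insert generates the next identity value
INSERT dbo.Test DEFAULT VALUES
INSERT dbo.Test DEFAULT VALUES
INSERT dbo.Test DEFAULT VALUES

SELECT Col1 FROM dbo.Test -- returns 1, 2 and 3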

Note: The question of the day is taken from SQLServerCentral.com. I want to compile selected QODs to share.

  • May 21 / 2008
dbDigger, T-SQL Interview Questions

What is a T-SQL batch?

A batch is a group of one or more Transact-SQL statements sent at one time from an application to Microsoft® SQL Server™ for execution.
SQL Server compiles the statements of a batch into a single executable unit, called an execution plan. The statements in the execution plan are then executed one at a time. A compile error, such as a syntax error, prevents the compilation of the execution plan, so none of the statements in the batch are executed. A run-time error, such as an arithmetic overflow or a constraint violation, has one of two effects:

1. Most run-time errors stop the current statement and the statements that follow it in the batch.

2. A few run-time errors, such as constraint violations, stop only the current statement. All the remaining statements in the batch are executed.

The statements executed before the one that encountered the run-time error are not affected. The only exception is if the batch is in a transaction and the error causes the transaction to be rolled back. In this case, any uncommitted data modifications made before the run-time error are rolled back.
It is important to note that CREATE DEFAULT, CREATE PROCEDURE, CREATE RULE, CREATE TRIGGER, and CREATE VIEW statements cannot be combined with other statements in a batch. The CREATE statement must begin the batch. All other statements that follow in that batch will be interpreted as part of the definition of the first CREATE statement. A table cannot be altered and then the new columns referenced in the same batch.

If an EXECUTE statement is the first statement in a batch, the EXECUTE keyword is not required. The EXECUTE keyword is required if the EXECUTE statement is not the first statement in the batch.
Each batch is compiled into a single execution plan. If the batch contains multiple SQL statements, all of the optimized steps needed to perform all the statements are built into a single execution plan.

GO: Signals the end of a batch of Transact-SQL statements to the SQL Server™ utilities. GO is not a Transact-SQL statement; it is a command recognized by the osql and isql utilities and SQL Query Analyzer. SQL Server utilities interpret GO as a signal that they should send the current batch of Transact-SQL statements to SQL Server. The current batch of statements is composed of all statements entered since the last GO, or since the start of the ad hoc session or script if this is the first GO. SQL Query Analyzer and the osql and isql command prompt utilities implement GO differently.

A T-SQL statement cannot occupy the same line as a GO command; however, the line can contain comments. Users must follow the rules for batches: for example, any execution of a stored procedure after the first statement in a batch must include the EXECUTE keyword. The scope of local (user-defined) variables is limited to a batch; they cannot be referenced after a GO command. GO is a utility command that requires no permissions and can be executed by any user.
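
A minimal sketch of these batch rules, assuming a throwaway dbo.Demo table; each GO marks the end of a batch:

CREATE TABLE dbo.Demo (Col1 int)
GO -- batch 1: the CREATE TABLE is sent and executed here

DECLARE @msg varchar(50) -- local variable, visible only in this batch
SET @msg = 'inside batch 2'
PRINT @msg
GO -- batch 2 ends; @msg can no longer be referenced

ALTER TABLE dbo.Demo ADD Col2 int
GO -- the new column must not be referenced in the same batch

SELECT Col1, Col2 FROM dbo.Demo -- legal now, in a later batch
GO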

  • May 21 / 2008
DBA Interview questions, dbDigger, T-SQL Interview Questions

Basics and definition of OLE DB

OLE DB is a low-level COM API that is used for accessing data. OLE DB is recommended for developing tools, utilities, or low-level components that need high performance.
The OLE DB Provider for SQL Server (SQLOLEDB) is a native, high performance provider that accesses the SQL Server TDS protocol directly.
SQLOLEDB exposes interfaces to consumers wanting access to data on one or more computers running an instance of Microsoft® SQL Server™ 2000 or SQL Server version 7.0 or earlier.
SQLOLEDB is installed with SQL Server 2000 and is recommended when developing new applications. MSDASQL is provided for backward compatibility only.
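
As an illustration in T-SQL, the SQLOLEDB provider can be invoked directly through OPENROWSET; a minimal sketch, assuming a local server with Windows authentication (the ad hoc distributed queries option may need to be enabled):

-- connect through the SQLOLEDB provider and run a pass-through query
SELECT a.name
FROM OPENROWSET('SQLOLEDB',
     'Server=(local);Trusted_Connection=yes;',
     'SELECT name FROM master.dbo.sysdatabases') AS a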

  • May 21 / 2008
Data Modeling and Database Design, DBA Interview questions, dbDigger, T-SQL Interview Questions

Codd’s 12 rules for relational database design

These rules were defined by Dr. E. F. Codd, the pioneer of relational databases. They primarily address implementation requirements for relational database management system (RDBMS) vendors.

1. Information Rule: All information in the database should be represented in one and only one way — as values in a table.

2. Guaranteed Access Rule: Each and every datum (atomic value) is guaranteed to be logically accessible by resorting to a combination of table name, primary key value, and column name (see the query sketch after this list).
3. Systematic Treatment of Null Values: Null values (distinct from empty character string or a string of blank characters and distinct from zero or any other number) are supported in the fully relational DBMS for representing missing information in a systematic way, independent of data type.
4. Dynamic Online Catalog Based on the Relational Model: The database description is represented at the logical level in the same way as ordinary data, so authorized users can apply the same relational language to its interrogation as they apply to regular data.
5. Comprehensive Data Sublanguage Rule: A relational system may support several languages and various modes of terminal use. However, there must be at least one language whose statements are expressible, per some well-defined syntax, as character strings, and that is comprehensive in supporting all of the following:
  • data definition
  • view definition
  • data manipulation (interactive and by program)
  • integrity constraints
  • authorization
  • transaction boundaries (begin, commit, and rollback).
6. View Updating Rule: All views that are theoretically updatable are also updatable by the system.
7. High-Level Insert, Update, and Delete: The capability of handling a base relation or a derived relation as a single operand applies not only to the retrieval of data, but also to the insertion, update, and deletion of data.
8. Physical Data Independence: Application programs and terminal activities remain logically unimpaired whenever any changes are made in either storage representation or access methods.
9. Logical Data Independence: Application programs and terminal activities remain logically unimpaired when information-preserving changes of any kind that theoretically permit unimpairment are made to the base tables.
10. Integrity Independence: Integrity constraints specific to a particular relational database must be definable in the relational data sublanguage and storable in the catalog, not in the application programs.
11. Distribution Independence: The data manipulation sublanguage of a relational DBMS must enable application programs and terminal activities to remain logically unimpaired whether and whenever data are physically centralized or distributed.
12. Nonsubversion Rule: If a relational system has or supports a low-level (single-record-at-a-time) language, that low-level language cannot be used to subvert or bypass the integrity rules or constraints expressed in the higher-level (multiple-records-at-a-time) relational language.
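
A query sketch for rules 2 and 4 in T-SQL, assuming a hypothetical dbo.Employees table with primary key EmployeeID:

-- Rule 2: one atomic value, addressed by table name, primary key value,
-- and column name (dbo.Employees, EmployeeID and LastName are hypothetical)
SELECT LastName
FROM dbo.Employees
WHERE EmployeeID = 42

-- Rule 4: the catalog is interrogated with the same relational language
SELECT TABLE_NAME
FROM INFORMATION_SCHEMA.TABLES
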
  • May 21 / 2008
DBA best practices, dbDigger, T-SQL Interview Questions

ACID properties for Database design

A logical unit of work must exhibit four properties, called the ACID (Atomicity, Consistency, Isolation, and Durability) properties, to qualify as a transaction:
Atomicity: A transaction must be an atomic unit of work; either all of its data modifications are performed or none of them is performed.
Consistency: When completed, a transaction must leave all data in a consistent state. In a relational database, all rules must be applied to the transaction’s modifications to maintain all data integrity. All internal data structures, such as B-tree indexes or doubly-linked lists, must be correct at the end of the transaction.
Isolation: Modifications made by concurrent transactions must be isolated from one another. A transaction either sees data in the state it was in before another concurrent transaction modified it, or it sees the data after the second transaction has completed, but it does not see an intermediate state. This is referred to as serializability, because it results in the ability to reload the starting data and replay a series of transactions to end up with the data in the same state it was in after the original transactions were performed.

Durability: After a transaction has completed, its effects are permanently in place in the system. The modifications persist even in the event of a system failure.
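
A minimal T-SQL sketch of atomicity at work, assuming a hypothetical dbo.Accounts table; both updates succeed together or are rolled back together:

BEGIN TRY
    BEGIN TRANSACTION

    UPDATE dbo.Accounts SET Balance = Balance - 100 WHERE AccountID = 1
    UPDATE dbo.Accounts SET Balance = Balance + 100 WHERE AccountID = 2

    COMMIT TRANSACTION -- durability: the changes survive a system failure
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION -- atomicity: all modifications are undone
END CATCH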
