
Posts Categorized / Data Modeling and Database Design

  • Aug 26 / 2014
Data Modeling and Database Design, dbDigger, SQL Server Collations

Cannot resolve the collation conflict between “SQL_Latin1_General_CP1_CI_AS” and “Latin1_General_CI_AS” in the equal to operation.

Today one of our development team members reported the following error on a newly restored database in the staging environment.

Cannot resolve the collation conflict between “SQL_Latin1_General_CP1_CI_AS” and “Latin1_General_CI_AS” in the equal to operation.

The error was generated in a stored procedure. First of all, I found that the collation of the subject database was different from SQL_Latin1_General_CP1_CI_AS. Using this method, I changed the collation of the subject database to the required SQL_Latin1_General_CP1_CI_AS.
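The linked method is not reproduced here; as a rough sketch, a database collation change generally looks like the following. It requires exclusive access to the database, and the database name below is only a placeholder.

 -- Put the database in single-user mode to get the exclusive access the change needs
 ALTER DATABASE StagingDB SET SINGLE_USER WITH ROLLBACK IMMEDIATE
 GO
 ALTER DATABASE StagingDB COLLATE SQL_Latin1_General_CP1_CI_AS
 GO
 ALTER DATABASE StagingDB SET MULTI_USER
 GO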
I was hopeful that changing the database collation would solve the issue, but that was not the case. I opened the stored procedure code and noted the tables used in it.
Suppose the procedure uses table1, table2 and table3. We now have to find all columns whose collation differs from the required collation. The following script did the task and returned the columns with a different collation.

 SELECT OBJECT_NAME(object_id) AS TableName,
        name AS ColumnName,
        collation_name
 FROM sys.columns
 WHERE object_id IN (SELECT object_id
                     FROM sys.objects
                     WHERE name IN ('table1', 'table2', 'table3'))
   AND collation_name <> 'SQL_Latin1_General_CP1_CI_AS'
 GO

Once we have the list of columns, we can change their collation to match the required one. This can be done with a script like the following.

ALTER TABLE table1 ALTER COLUMN ColumnNameHere nvarchar(512) COLLATE SQL_Latin1_General_CP1_CI_AS NULL  
GO
ALTER TABLE table2 ALTER COLUMN ColumnNameHere varchar(64) COLLATE SQL_Latin1_General_CP1_CI_AS NULL  
GO
ALTER TABLE table3 ALTER COLUMN ColumnNameHere varchar(512) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL
GO

This time I found that the issue was resolved and there was no error.

  • Dec 03 / 2013
Constraints and Keys, Data Modeling and Database Design, dbDigger, DDL, SQL Server Error messages

The ALTER TABLE statement conflicted with the FOREIGN KEY constraint

Today I was working on creating foreign keys on a few tables. Some of the ADD CONSTRAINT statements failed with the following error message.

The ALTER TABLE statement conflicted with the FOREIGN KEY constraint “%”. The conflict occurred in database “%”, table “%”, column ‘%’.

If we look closely at the message, it points to the very foreign key constraint we are trying to create. The reason is that while creating the constraint, SQL Server validates the existing data against the new constraint. Some records in the child table have no matching row in the referenced primary key table, and those records are the cause of the error.

SOLUTION

We have a couple of options. If you want to make sure no such orphaned records remain in the subject table, you can simply detect and delete them; the ADD CONSTRAINT statement will then work without any error. However, deleting data will rarely be a suitable option. The alternative is to use the ADD CONSTRAINT statement with the NOCHECK option, which prevents the existing data from being validated against the constraint at creation time.
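As a sketch with hypothetical table and column names (a child Orders table referencing Customers), the two options look roughly like this.

 -- Option 1: find orphaned child rows so they can be deleted or fixed before adding the constraint
 SELECT o.*
 FROM dbo.Orders AS o
 LEFT JOIN dbo.Customers AS c ON c.CustomerID = o.CustomerID
 WHERE c.CustomerID IS NULL
 GO
 -- Option 2: add the constraint without validating the existing rows
 ALTER TABLE dbo.Orders WITH NOCHECK
 ADD CONSTRAINT FK_Orders_Customers
     FOREIGN KEY (CustomerID) REFERENCES dbo.Customers (CustomerID)
 GO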

WITH CHECK | WITH NOCHECK

Here is some informative text on these options from BOL.

Specifies whether the data in the table is or is not validated against a newly added or re-enabled FOREIGN KEY or CHECK constraint. If not specified, WITH CHECK is assumed for new constraints, and WITH NOCHECK is assumed for re-enabled constraints.
If you do not want to verify new CHECK or FOREIGN KEY constraints against existing data, use WITH NOCHECK. Any constraint violations that are suppressed by WITH NOCHECK when the constraint is added may cause future updates to fail if they update rows with data that does not comply with the constraint.

QUERY OPTIMIZER BEHAVIOR

The query optimizer does not consider constraints that are defined WITH NOCHECK. Such constraints are ignored until they are re-enabled by using ALTER TABLE table WITH CHECK CHECK CONSTRAINT ALL.
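So once the orphaned rows have been cleaned up, it is worth re-validating the constraint so that the optimizer can trust it again. Continuing the hypothetical example above:

 -- Re-validate the constraint after the offending rows have been fixed, so it becomes trusted again
 ALTER TABLE dbo.Orders WITH CHECK CHECK CONSTRAINT FK_Orders_Customers
 GO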

  • Aug 29 / 2013
Data Modeling and Database Design, dbDigger, Security and Permissions, T-SQL Interview Questions, T-SQL Tips and Tricks

Drawbacks of using Dynamic SQL

I have been doing some knowledge discovery on the possible bad effects of using dynamic SQL in a SQL Server environment. While going through some good articles on the subject, I have identified the following major issues/side effects of dynamic SQL.

  • Query plans of ad hoc dynamic SQL are rarely reused, since a cached plan is matched on the exact statement text
  • Dynamic SQL can leave you vulnerable to SQL injection attacks (see the example below)
  • Messy quotation marks and spacing complicate query writing
  • Network traffic is increased compared to executing a stored procedure
  • Generating dependencies through the various methods becomes unreliable, as objects used in dynamic SQL cannot be traced by the system views
  • Ownership chaining does not apply, so callers need direct permissions on the underlying objects

These reasons are enough to make you think twice before using dynamic SQL. Use it wisely, and look for options to avoid it.
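To illustrate the injection and plan-reuse points, here is a minimal sketch. The dbo.Employees table and the input value are hypothetical and only serve to show the two patterns.

 -- Vulnerable pattern: user input concatenated straight into the statement text
 DECLARE @name nvarchar(100) = N'Smith''; DROP TABLE dbo.Employees; --'
 DECLARE @sql nvarchar(max) = N'SELECT * FROM dbo.Employees WHERE LastName = ''' + @name + N''''
 -- EXEC (@sql)  -- running this would also execute the injected DROP TABLE statement

 -- Safer pattern: parameterized dynamic SQL via sp_executesql; the input stays data and the plan can be reused
 EXEC sp_executesql
      N'SELECT * FROM dbo.Employees WHERE LastName = @LastName',
      N'@LastName nvarchar(100)',
      @LastName = @name
 GO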

  • Mar 07 / 2012
Data Modeling and Database Design, DBA best practices, dbDigger, Identity Columns, Performance Tuning and Optimization

Utilize the negative half of identity column data type

Identity columns are auto-generated sequences used to ensure row uniqueness at the table level. The following data types from the numeric family may be used as the data type of an identity column.

  • int
  • bigint
  • smallint
  • tinyint
  • decimal/numeric

All of the numeric data types mentioned above can store negative values, except TINYINT. It is important to note that roughly half of the storage capacity of these data types is assigned to negative values. For example, SMALLINT can hold values from -32768 to 32767: the negative values -32768 to -1, a zero, and the positive values 1 to 32767. The ranges below (from Microsoft TechNet) give an idea of all the numeric data types.

  • tinyint: 0 to 255
  • smallint: -32,768 to 32,767
  • int: -2,147,483,648 to 2,147,483,647
  • bigint: -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807
  • decimal/numeric: -10^38 + 1 to 10^38 - 1, depending on precision

Notice that, except for the TINYINT data type, every numeric data type reserves roughly half of its capacity for negative values.

Point to Consider

The point to consider here is that the capacity of an identity column should be fully utilized by including the negative part of the data type as well; otherwise almost half of the data type's capacity is left unused. The simple trick is to use the lowest negative value of the data type as the SEED of the identity column. Consider the following demo.

 -- Create table for demo
 IF EXISTS
    (SELECT * FROM sys.objects WHERE name = 'IdentityDemo')
    DROP TABLE IdentityDemo
 GO
 CREATE TABLE IdentityDemo (id SMALLINT IDENTITY(1, 1), EmpName varchar(50))
 GO
 -- Insert more values than the positive part of SMALLINT can hold
 INSERT INTO IdentityDemo VALUES ('Atif')
 GO 32769

In the above demo we created the table with a SMALLINT identity column using a seed and increment of 1. The inserts therefore failed once the value exceeded 32767, and an arithmetic overflow error was generated because the identity column was saturated.

The same can be verified by selecting the data from the table.

 -- Verify the inserted rows  
 SELECT * FROM IdentityDemo order by ID DESC  

The identity column is saturated while the negative portion of the data type sits unused. The following demo utilizes the negative portion as well, doubling the storage capacity.

 -- Create table for demo with negative seed
 IF EXISTS
    (SELECT * FROM sys.objects WHERE name = 'IdentityDemo')
    DROP TABLE IdentityDemo
 GO
 CREATE TABLE IdentityDemo (id SMALLINT IDENTITY(-32768, 1), EmpName varchar(50))
 GO
 -- Insert more values than the positive part of SMALLINT alone could hold
 INSERT INTO IdentityDemo VALUES ('Atif')
 GO 34769

This time the insertion is successful for all 34769 values, which can be verified by:

 -- Verify the inserted rows  
 SELECT * FROM IdentityDemo order by ID DESC  

The same applies to int, bigint and decimal/numeric. So treat the identity column as a valuable resource and utilize it fully to avoid early saturation.

  • Feb 29 / 2012
Data Modeling and Database Design, Data Types, dbDigger, DDL, SQL Server Error messages

Table creation failed due to row size limit

The following error messages may be generated when trying to create a table whose row size would be greater than 8060 bytes. If an index is being created on such a table, this error is generated:
Index ‘%’ row length exceeds the maximum permissible length of ‘8060’ bytes.

When the table is created without an index and the row size is greater than 8060 bytes, this error is generated instead:
Creating or altering table ‘%’ failed because the minimum row size would be 16011, including 7 bytes of internal overhead. This exceeds the maximum allowable table row size of 8060 bytes.

A SQL Server data page is 8 KB, and a single row must fit within about 8060 bytes of it (an extent is a group of eight such pages). Keeping each table row under 8060 bytes is therefore a requirement for table creation. Large object data types such as image or text do not count toward this maximum, because they are stored off the data row. If the row size would cross the limit, the error message is generated and creation fails.

Work Around

Some workarounds may be applied to solve the issue.

  • Try to cut down the number of columns through normalization
  • Try to optimize the data types
  • Study the feasibility of replacing wide columns such as char(8000) with the text data type (or another large-object type), as sketched after the demos below

Following are some scenarios in which the row-size error messages are generated.

 -- Row size error message without an index
 IF EXISTS (SELECT name FROM sys.objects WHERE name = 'tblMaxRowDemo')
    DROP TABLE tblMaxRowDemo
 GO
 CREATE TABLE tblMaxRowDemo
 (ID INT IDENTITY(1,1),
  col1 CHAR(6000), col2 CHAR(6000))
 GO
 -- Row size error message with an index
 IF EXISTS (SELECT name FROM sys.objects WHERE name = 'tblMaxRowDemo')
    DROP TABLE tblMaxRowDemo
 GO
 CREATE TABLE tblMaxRowDemo
 (ID INT PRIMARY KEY IDENTITY(1,1),
  col1 CHAR(6000), col2 CHAR(6000))
 GO
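Applying the third workaround, a variant like the following (a sketch, not part of the original demos) creates successfully, because large-object columns such as text or varchar(max) are stored off the data row and do not count toward the 8060-byte limit.

 -- Same shape of table, but the wide columns are large-object types stored off-row
 IF EXISTS (SELECT name FROM sys.objects WHERE name = 'tblMaxRowDemo')
    DROP TABLE tblMaxRowDemo
 GO
 CREATE TABLE tblMaxRowDemo
 (ID INT PRIMARY KEY IDENTITY(1,1),
  col1 VARCHAR(MAX), col2 VARCHAR(MAX))
 GO
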
  • Feb 07 / 2012
Data Modeling and Database Design, DBA Interview questions, dbDigger, Identity Columns, SQL Server Error messages, T-SQL Interview Questions

Multiple identity columns specified for table ‘%’. Only one identity column per table is allowed

The following error message is generated if you try to define more than one identity column in a table.
Multiple identity columns specified for table ‘%’. Only one identity column per table is allowed
SQL Server does not allow more than one identity column in a single table. Consider the following design, which generates the error message.

 CREATE TABLE identTest  
 (ID1 SMALLINT IDENTITY(1,1),   
 ID2 SMALLINT IDENTITY(1,1))  
 GO  

There is hardly any scenario in which such a design is required. However, similar functionality can be achieved with a couple of workarounds.

  • Use a trigger to insert values into the second column
  • Configure the second column as a computed column that derives its value from the identity column (see the sketch below)
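
A minimal sketch of the computed-column workaround; the offset of 1000 is only an illustrative choice.

 CREATE TABLE identTest
 (ID1 SMALLINT IDENTITY(1,1),        -- the real identity column
  ID2 AS (ID1 + 1000))               -- second "sequence" derived from it as a computed column
 GO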