I struggled for almost a week to find a way to list the permissions granted to a database user. Google turned up suggestions on various sites, but none of them met my requirement. Finally I arrived at the following:
1) SELECT A.UID, A.NAME, A.HASDBACCESS, B.* FROM SYSUSERS A INNER JOIN SYSPERMISSIONS B ON A.UID = B.ID
2) EXEC sp_helprotect NULL, 'UserName'
To assign a role to a user, we can use system procedures such as
sp_addrolemember and sp_change_users_login.
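As a quick sketch of those two procedures (the user name 'AppUser' below is a placeholder, not from the post):

```sql
-- Hypothetical names: db_datareader is a built-in fixed database role;
-- 'AppUser' is a placeholder database user.
-- Add the user to a database role (SQL Server 2005 syntax):
EXEC sp_addrolemember N'db_datareader', N'AppUser';

-- Re-map an orphaned database user to the login of the same name
-- (useful after restoring a database onto another server):
EXEC sp_change_users_login 'Auto_Fix', 'AppUser';
```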
Tuesday, March 31, 2009
Tuesday, March 24, 2009
Mirror the Database using SQL Server 2005
Mirroring is a new feature introduced in SQL Server 2005. It is very useful for maintaining a standby database in case a disaster occurs. The following is the process for configuring and testing mirroring.
1) Create the test_mirror database, then take a full backup: BACKUP DATABASE test_mirror TO DISK = 'd:\test_mirror.bkp'
2) Back up the log: BACKUP LOG test_mirror TO DISK = 'd:\test_mirror_log.bkp'
3) Move the backup files to the destination (mirror) server.
4) Restore the database WITH NORECOVERY:
RESTORE DATABASE [test_mirror] FROM DISK = N'd:\test_mirror.bkp' WITH FILE = 1,
NORECOVERY, NOUNLOAD, REPLACE, STATS = 10
GO
5) Restore the log WITH NORECOVERY:
RESTORE log [test_mirror] FROM DISK = N'd:\test_mirror_log.bkp' WITH FILE = 1,
NORECOVERY, NOUNLOAD, REPLACE, STATS = 10
6) Configure the mirroring
a) Right-click the test_mirror database, go to Tasks, and select Mirror.
b) Configure security using the Configure Security button. It will prompt for the
connections and authentication. You can use different servers, or two instances on
one server, for mirroring.
c) The first step asks for a witness server; this is optional. A witness can be
configured on a third server to monitor the mirroring session.
d) The principal and mirror servers are configured in the next steps.
e) Select the operating mode: Asynchronous (high performance) or Synchronous (high protection).
In asynchronous mode, transactions commit on the principal without waiting, and the log records are then sent on to the mirror.
In synchronous mode, a transaction does not commit until its log has been hardened on both servers.
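The GUI steps above can also be scripted. A minimal sketch, assuming Windows authentication between the instances and placeholder server names PRINCIPAL_SRV and MIRROR_SRV (port 5022 is also an assumption):

```sql
-- Sketch only: server names and the port are placeholders, not from the post.
-- On each server, create a database mirroring endpoint:
CREATE ENDPOINT Mirroring
    STATE = STARTED
    AS TCP (LISTENER_PORT = 5022)
    FOR DATABASE_MIRRORING (ROLE = PARTNER);
GO
-- On the MIRROR server, point at the principal:
ALTER DATABASE test_mirror SET PARTNER = 'TCP://PRINCIPAL_SRV:5022';
-- On the PRINCIPAL server, point at the mirror (this starts the session):
ALTER DATABASE test_mirror SET PARTNER = 'TCP://MIRROR_SRV:5022';
```

If both instances run on the same machine, each endpoint needs its own port number.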
7) Create a test table before starting the mirroring:
create table test_1(regno numeric)
8) Start the mirroring.
9) Create the test_2 table after mirroring has started:
create table test_2(regno numeric)
10) Create a snapshot copy on the destination (mirror) server:
CREATE DATABASE test_mirror_copy ON
(
NAME = test_mirror,
FILENAME = 'd:\testmirror.ss'
)
AS SNAPSHOT OF test_mirror
11) In the snapshot copy I can see both the test_1 and test_2 tables.
12) Insert data into the test_1 table:
insert into test_1 values(1)
13) Drop the existing snapshot:
EXEC msdb.dbo.sp_delete_database_backuphistory @database_name = N'test_mirror_copy'
GO
USE [master]
GO
/****** Object: Database [test_mirror_copy] Script Date: 03/18/2009 05:38:05 ******/
DROP DATABASE [test_mirror_copy]
GO
14) Create a new snapshot:
CREATE DATABASE test_mirror_copy ON
(
NAME = test_mirror,
FILENAME = 'd:\testmirror.ss'
)
AS SNAPSHOT OF test_mirror
15) Now we can see one row in the test_1 table. This confirms that changes are flowing from the
principal to the mirror server.
Note :
** If you are using SQL Server 2005 with Service Pack 2, no trace flags need to be configured. Otherwise, trace flag 1400 must be added to the startup parameters:
- Open Configuration Tools
- Go to SQL Server Configuration Manager
- Go to SQL Server 2005 Services
- Open the SQL Server service properties
- Go to the Advanced tab
- In Startup Parameters, append ;-T1400 (the T must be uppercase)
- Restart the service for the setting to take effect
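Once the session is running, its state can be checked from either partner; a small sketch using the sys.database_mirroring catalog view (new in SQL Server 2005):

```sql
-- Sketch: show the mirroring role, state, and safety level
-- of every mirrored database on this instance.
SELECT DB_NAME(database_id) AS database_name,
       mirroring_role_desc,
       mirroring_state_desc,
       mirroring_safety_level_desc
FROM sys.database_mirroring
WHERE mirroring_guid IS NOT NULL;
```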
Tuesday, March 17, 2009
SQL Server 2005 New Features for improving Performance
TABLE OF CONTENTS
1. SCHEMA BINDING VIEW (Materialized View) :
2. CTE (Common Table Expression) :
3. TABLE PARTITION :
4. OUTPUT Keyword :
5. PIVOT / UNPIVOT :
6. APPLY Operator :
7. RANKING FUNCTIONS :
8. SQL SERVER 2005 INDEX FEATURES :
9. sp_executesql
1. SCHEMA BINDING VIEW (Materialized View) :
A schema-bound (indexed) view is still a view, but its schema is bound like a table's. Once a unique clustered index is created on it, the view's result set is materialized and stored, and further indexes can be created on it. Just as with indexing a table, this improves query performance.
Benefits of Using Indexed Views :
• It is possible to create a unique clustered index on a view, as well as nonclustered indexes, to improve data access performance on the most complex queries.
• Aggregations can be precomputed and stored in the index to minimize expensive computations during query execution.
• Tables can be prejoined and the resulting data set stored.
• Combinations of joins or aggregations can be stored.
Considerations : the following should be kept in mind while creating schema-bound views:
• The view, and all tables referenced in the view, must be in the same database and have the same owner.
• The indexed view does not need to contain all the tables referenced in the query to be used by the optimizer.
• A unique clustered index must be created before any other indexes can be created on the view.
• The view must be created using schema binding and any user-defined functions referenced in the view must also be created with the SCHEMABINDING option.
• Additional disk space will be required to hold the data defined by the indexed view.
• The DTA (Database Engine Tuning Advisor) tool, new in SQL Server 2005, can be used to find out where indexed views would help.
Creating Indexed Views:
The steps required to create an indexed view are critical to the successful implementation of the view.
1. Verify the setting of ANSI_NULLS is correct for all existing tables that will be referenced in the view.
2. Verify ANSI_NULLS is set correctly for the current session as shown in the table below before creating any new tables.
3. Verify ANSI_NULLS and QUOTED_IDENTIFIER are set correctly for the current session as shown in the table below before creating the view.
4. Verify the view definition is deterministic.
5. Create the view using the WITH SCHEMABINDING option.
6. Verify your session's SET options are set correctly as shown in the table below before creating the unique clustered index on the view.
7. Create the unique clustered index on the view.
8. The OBJECTPROPERTY function can be used to check the value of ANSI_NULLS and QUOTED_IDENTIFIER on an existing table or view.
Note : The indexed view may contain float and real columns; however, such columns cannot be included in the clustered index key if they are non-persisted computed columns.
GROUP BY Restrictions
If GROUP BY is present, the VIEW definition:
• Must contain COUNT_BIG(*).
• Must not contain HAVING, CUBE, ROLLUP, or GROUPING().
These restrictions apply only to the indexed view definition. A query can use an indexed view in its execution plan even if the query itself does not satisfy these GROUP BY restrictions.
Example :
CREATE VIEW Vdiscount1 WITH SCHEMABINDING AS
SELECT SUM(UnitPrice*OrderQty) AS SumPrice,
SUM(UnitPrice*OrderQty*(1.00-UnitPriceDiscount)) AS SumDiscountPrice,
COUNT_BIG(*) AS Count, ProductID
FROM Sales.SalesOrderDetail
GROUP BY ProductID
GO
CREATE UNIQUE CLUSTERED INDEX VDiscountInd ON Vdiscount1 (ProductID)
Go
Once the unique clustered index exists, many nonclustered indexes can also be created on the view.
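On editions other than Enterprise, the optimizer does not automatically match queries to indexed views; the NOEXPAND hint forces use of the view's index. A sketch against the Vdiscount1 view above (the ProductID value is a placeholder):

```sql
-- Sketch: query the indexed view directly, forcing use of its index.
-- NOEXPAND is required for index use on non-Enterprise editions.
SELECT ProductID, SumPrice, SumDiscountPrice
FROM Vdiscount1 WITH (NOEXPAND)
WHERE ProductID = 707;  -- 707 is a placeholder ProductID
```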
2. CTE (Common Table Expression) :
A CTE is a named temporary result set based on a SELECT query. By introducing it with WITH &lt;name&gt;, the result set can be referenced, even multiple times, in the statement that follows. One main benefit of a CTE is that it can often replace a temporary table.
Advantages :
• Result set can be used in SELECT, INSERT, UPDATE, or DELETE
• Queries with derived tables become more readable
• Provide traversal of recursive hierarchies
Example :
WITH TopSales (SalesPersonID, NumSales) AS
(SELECT SalesPersonID, Count(*)
FROM Sales.SalesOrderHeader GROUP BY SalesPersonId)
SELECT LoginID, NumSales
FROM HumanResources.Employee e INNER JOIN TopSales
ON TopSales.SalesPersonID = e.EmployeeID
ORDER BY NumSales DESC
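The recursive-hierarchy point above can be sketched as follows, assuming the AdventureWorks 2005 HumanResources.Employee table with EmployeeID and ManagerID columns:

```sql
-- Sketch: walk a manager/report hierarchy with a recursive CTE.
WITH EmpTree (EmployeeID, ManagerID, Level) AS
(
    -- Anchor member: top-level employees (no manager)
    SELECT EmployeeID, ManagerID, 0
    FROM HumanResources.Employee
    WHERE ManagerID IS NULL
    UNION ALL
    -- Recursive member: direct reports of the previous level
    SELECT e.EmployeeID, e.ManagerID, t.Level + 1
    FROM HumanResources.Employee e
    INNER JOIN EmpTree t ON e.ManagerID = t.EmployeeID
)
SELECT EmployeeID, ManagerID, Level
FROM EmpTree
ORDER BY Level, EmployeeID;
```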
3. TABLE PARTITION :
Partition Tables
Vertical Table Partitioning :
You can use vertical table partitioning to move infrequently used columns into another table. Moving the infrequently used columns makes the main table narrower and allows more rows to fit on a page.
Horizontal Table Partitioning :
Horizontal table partitioning is a bit more complicated. But when tables that use horizontal table partitioning are designed correctly, you may obtain huge scalability gains. One of the most common scenarios for horizontal table partitioning is to support history or archive databases where partitions can be easily delineated by date. A simple method that you can use to view the data is to use partitioned views in conjunction with check constraints.
How to partition a table using Horizontal Partitioning :
Step 1: Create the partition function
CREATE PARTITION FUNCTION emailPF (nvarchar(30))
AS RANGE RIGHT FOR VALUES ('G', 'N')
Step 2: Create the partition scheme
CREATE PARTITION SCHEME emailPS
AS PARTITION emailPF TO (fg1, fg2, fg3)
Step 3 : Create the partitioned table
CREATE TABLE Sales.CustomerEmail
(CustID int, email nvarchar(30))
ON EMailPS (email)
* fg1, fg2, and fg3 are filegroups, ideally located on different drives.
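To check which partition a given value maps to, the $PARTITION function can be applied to the partition function created above; a small sketch (the e-mail addresses are made-up sample values):

```sql
-- Sketch: returns the partition number each sample value maps to
-- under emailPF (RANGE RIGHT with boundary values 'G' and 'N').
SELECT N'alice@example.com' AS email,
       $PARTITION.emailPF(N'alice@example.com') AS partition_number
UNION ALL
SELECT N'maria@example.com',
       $PARTITION.emailPF(N'maria@example.com')
UNION ALL
SELECT N'zoe@example.com',
       $PARTITION.emailPF(N'zoe@example.com');
```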
4. OUTPUT Keyword :
With the help of the OUTPUT clause we can capture the values affected by an INSERT, UPDATE, or DELETE directly into a table variable, avoiding extra declarations and a separate SELECT to read the values back for use in further queries.
Example :
DECLARE @InsertDetails TABLE
(ProductModelID int,
InsertedBy sysname)
INSERT INTO Production.ProductModel(Name, ModifiedDate)
OUTPUT inserted.ProductModelID, suser_name()
INTO @InsertDetails
VALUES
('Racing Bike', getdate())
SELECT * FROM @InsertDetails
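OUTPUT works the same way for DELETE and UPDATE; a sketch of auditing deleted rows (the @Deleted variable name is illustrative, and the WHERE value matches the row inserted above):

```sql
-- Sketch: capture deleted rows into a table variable for auditing.
DECLARE @Deleted TABLE
    (ProductModelID int,
     Name nvarchar(50),
     DeletedBy sysname);

DELETE FROM Production.ProductModel
OUTPUT deleted.ProductModelID, deleted.Name, SUSER_NAME()
INTO @Deleted
WHERE Name = 'Racing Bike';

SELECT * FROM @Deleted;
```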
5. PIVOT / UNPIVOT :
PIVOT – converts values to columns :
Cust Prod Qty
Mike Bike 3
Mike Chain 2
Mike Bike 5
Lisa Bike 3
Lisa Chain 3
Lisa Chain 4
SELECT * FROM Sales.Order
PIVOT (SUM(Qty) FOR Prod IN ([Bike],[Chain])) PVT
Cust Bike Chain
Mike 8 2
Lisa 3 7
UNPIVOT – converts columns to values :
Cust Bike Chain
Mike 8 2
Lisa 3 7
SELECT Cust, Prod, Qty
FROM Sales.PivotedOrder
UNPIVOT (Qty FOR Prod IN ([Bike],[Chain])) UnPVT
Cust Prod Qty
Mike Bike 8
Mike Chain 2
Lisa Bike 3
Lisa Chain 7
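The snippets above assume Sales.Order and Sales.PivotedOrder tables exist; a self-contained sketch with a table variable reproduces the PIVOT result:

```sql
-- Sketch: self-contained version of the PIVOT example above.
DECLARE @Orders TABLE (Cust varchar(20), Prod varchar(20), Qty int);
-- SQL Server 2005 needs one INSERT per row (no multi-row VALUES).
INSERT INTO @Orders VALUES ('Mike', 'Bike', 3);
INSERT INTO @Orders VALUES ('Mike', 'Chain', 2);
INSERT INTO @Orders VALUES ('Mike', 'Bike', 5);
INSERT INTO @Orders VALUES ('Lisa', 'Bike', 3);
INSERT INTO @Orders VALUES ('Lisa', 'Chain', 3);
INSERT INTO @Orders VALUES ('Lisa', 'Chain', 4);

SELECT Cust, [Bike], [Chain]
FROM (SELECT Cust, Prod, Qty FROM @Orders) AS src
PIVOT (SUM(Qty) FOR Prod IN ([Bike], [Chain])) AS pvt;
-- Mike: Bike 8, Chain 2; Lisa: Bike 3, Chain 7
```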
6. APPLY Operator :
With the help of the APPLY operator we can join a table-valued function to a table and get the combined result. In SQL Server 2000, the function's result first had to be inserted into a table before it could be joined; with APPLY, the function is invoked directly for each row of the outer table.
Advantages : Invokes a table-valued function once per row
CROSS APPLY – only rows with matching function results
OUTER APPLY – all rows, regardless of matching function results
Example :
CREATE FUNCTION Sales.MostRecentOrders
(@CustID AS int) RETURNS TABLE AS
RETURN
SELECT TOP(3) SalesOrderID, OrderDate
FROM Sales.SalesOrderHeader
WHERE CustomerID = @CustID
ORDER BY OrderDate DESC
SELECT Name AS Customer, MR.*
FROM Sales.Store
CROSS APPLY Sales.MostRecentOrders(CustomerID) AS MR
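OUTER APPLY, mentioned above, keeps outer rows even when the function returns nothing; a sketch reusing the same function:

```sql
-- Sketch: OUTER APPLY returns every store, with NULLs in the
-- function's columns for stores that have no orders.
SELECT Name AS Customer, MR.SalesOrderID, MR.OrderDate
FROM Sales.Store
OUTER APPLY Sales.MostRecentOrders(CustomerID) AS MR;
```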
7. RANKING FUNCTIONS :
Function Description
RANK Returns a rank for each row within a specified partition in a result set
DENSE_RANK Returns a consecutive rank for each row within a specified partition in a result set
ROW_NUMBER Returns the ordinal row position of each row in a grouping within a result set
NTILE Divides the rows in each partition of a result set into a specified number of ranks based on a given value.
With the help of ranking functions we can number and rank rows directly within a query, avoiding workarounds based on TOP, self-joins, or temporary tables.
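A sketch showing the four functions side by side, assuming the AdventureWorks Sales.SalesOrderHeader table used in the earlier examples:

```sql
-- Sketch: compare the four ranking functions over the same ordering.
SELECT SalesOrderID,
       TotalDue,
       ROW_NUMBER() OVER (ORDER BY TotalDue DESC) AS RowNum,
       RANK()       OVER (ORDER BY TotalDue DESC) AS Rnk,
       DENSE_RANK() OVER (ORDER BY TotalDue DESC) AS DenseRnk,
       NTILE(4)     OVER (ORDER BY TotalDue DESC) AS Quartile
FROM Sales.SalesOrderHeader;
```

RANK leaves gaps after ties, DENSE_RANK does not, ROW_NUMBER is always unique, and NTILE(4) splits the ordered rows into quartiles.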
8. SQL SERVER 2005 INDEX FEATURES :
ONLINE INDEXING :
The online index option allows concurrent modifications (updates, deletes, and inserts) to the underlying table or clustered index data and any associated indexes during index data definition language (DDL) execution. For example, while a clustered index is being rebuilt, you can continue to make updates to the underlying data and perform queries against the data.
Eg :
CREATE CLUSTERED INDEX [PK_Employee_EmployeeID] ON [HumanResources].[Employee] ([EmployeeID] ASC)
WITH (PAD_INDEX = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, ONLINE = ON) ON [PRIMARY]
ALTER INDEX ALL on HumanResources.Employee REBUILD WITH (ONLINE=ON)
INCLUDE CLAUSE :
With the help of the INCLUDE clause we can add non-key columns to a nonclustered index so that it covers more queries. The Database Engine does not count included columns when calculating the number of index key columns or the index key size, so they can be used without exceeding the limits of a maximum of 16 key columns and a maximum index key size of 900 bytes. Included columns are stored only in the leaf nodes of the index.
Eg :
create unique index XUpersonInterest_firstName_inclInterest
on personInterest(firstName) include (interest)
CREATE NONCLUSTERED INDEX IX_Address_PostalCode
ON Person.Address (PostalCode)
INCLUDE (AddressLine1, AddressLine2, City, StateProvinceID)
ALTER INDEX :
Disabling
ALTER INDEX IX_Customer_TerritoryID ON Sales.Customer DISABLE
Rebuilding
ALTER INDEX PK_Customer_CustomerID ON Sales.Customer REBUILD
Reorganizing
ALTER INDEX PK_Customer_CustomerID ON Sales.Customer REORGANIZE
Setting of options
ALTER INDEX PK_Customer_CustomerId ON Sales.Customer SET(...)
Index related considerations for improving performance :
• Create indexes based on use :
Do not create indexes if a table is rarely queried, or if a table never seems to be used by the optimizer. Avoid indexes on bit, text, ntext, or image columns, because such indexes are rarely useful to the optimizer. Avoid very wide indexes and indexes that are not selective.
• Keep clustered index keys as small as possible.
• Consider range data for clustered indexes.
• Create an index on all foreign keys.
• Create highly selective indexes.
Create indexes that exhibit high selectivity. In other words, create indexes that have many distinct values. For example, an index on a region column may have a small number of distinct values. Therefore, there may not be enough distinct values for the optimizer to use. Another example of an item that may not have enough distinct values is a bit column. Since there are only two values, an index cannot be very selective and as a result, the index may not be used.
• Consider a covering index for often-used, high-impact queries.
• Use multiple narrow indexes rather than a few wide indexes.
• Create composite indexes with the most restrictive column first.
• Consider indexes on columns used in WHERE, ORDER BY, GROUP BY, and DISTINCT clauses.
• Remove unused indexes.
• Use the Index Tuning Wizard to tune the indexes.
• Keep statistics up to date.
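The "remove unused indexes" point can be investigated with the sys.dm_db_index_usage_stats DMV, new in SQL Server 2005; a sketch:

```sql
-- Sketch: indexes with no recorded seeks/scans/lookups since the last
-- restart are candidates for removal (usage stats reset on restart).
SELECT OBJECT_NAME(i.object_id) AS table_name,
       i.name AS index_name,
       s.user_seeks, s.user_scans, s.user_lookups, s.user_updates
FROM sys.indexes i
LEFT JOIN sys.dm_db_index_usage_stats s
       ON s.object_id = i.object_id
      AND s.index_id = i.index_id
      AND s.database_id = DB_ID()
WHERE OBJECTPROPERTY(i.object_id, 'IsUserTable') = 1
ORDER BY s.user_seeks, s.user_scans;
```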
9. sp_executesql :
This system procedure can be used to execute dynamic SQL queries instead of the EXEC keyword. Because sp_executesql supports parameterization, the compiled plan can be reused across executions, which generally gives better performance than EXEC.
Example :
DECLARE @SQLString NVARCHAR(500);
--Set column list. CHAR(13) is a carriage return
SET @SQLString = N'SELECT FirstName, LastName, JobTitle' + CHAR(13);
-- Set FROM clause with carriage return, line feed.
SET @SQLString = @SQLString + N'FROM HumanResources.vEmployee' + CHAR(13);
--Set WHERE clause.
SET @SQLString = @SQLString + N'WHERE LastName LIKE ''D%''';
EXEC sp_executesql @SQLString;
* The declared data type of the statement must be a Unicode type (nvarchar, nchar, or ntext)
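The plan-reuse benefit shows up when parameters are used, since the same plan serves any parameter value; a sketch reworking the example above:

```sql
-- Sketch: parameterized dynamic SQL; the plan is reused for any @Pattern.
DECLARE @SQLString nvarchar(500);
SET @SQLString = N'SELECT FirstName, LastName, JobTitle
                   FROM HumanResources.vEmployee
                   WHERE LastName LIKE @Pattern';
EXEC sp_executesql @SQLString,
                   N'@Pattern nvarchar(50)',   -- parameter definition
                   @Pattern = N'D%';           -- parameter value
```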
Tuesday, March 10, 2009
Brief about TSQL (Transact SQL)
Transact-SQL (T-SQL) is the dialect of Structured Query Language used by SQL Server, just as PL/SQL is used when working with Oracle. A brief overview of the T-SQL language.........
DDL – DATA DEFINITION LANGUAGE
DML – DATA MANIPULATION LANGUAGE
DCL – DATA CONTROL LANGUAGE
TCL – TRANSACTION CONTROL LANGUAGE
DDL – CREATE, ALTER, DROP, TRUNCATE
Eg : CREATE DATABASE SampleDatabase
ON
( NAME = MyDatabase,
FILENAME = 'C:\program files\MyData\mydatabasefile.mdf',
SIZE = 10MB,
MAXSIZE = UNLIMITED,
FILEGROWTH = 1MB)
LOG ON
( NAME = MyDatabase_LOG,
FILENAME = 'C:\program files\MyData\mydatabaselog.ldf',
SIZE = 5MB,
MAXSIZE = 25MB,
FILEGROWTH = 5MB )
Eg :
CREATE TABLE MyTable (
Field1 int PRIMARY KEY,
Field2 char(10) NOT NULL,
Field3 datetime
)
Eg : ALTER DATABASE SampleDatabase
ADD FILE
(
NAME = MyDatabase1,
FILENAME = 'c:\program files\MyData\mydatabasefile2.ndf',
SIZE = 10MB,
MAXSIZE = 50MB,
FILEGROWTH = 1MB
)
Eg : ALTER TABLE MyTable ADD Field4 VARCHAR(10) NULL
ALTER TABLE MyTable DROP COLUMN Field4
Eg : DROP DATABASE SampleDatabase
Eg : DROP TABLE MyTable
DML – SELECT, INSERT, UPDATE, DELETE
SELECT :
Select * from [ABCD]
INSERT :
Insert into ABCD (ColumnA, ColumnB) values (5, 6)
Insert into ABCD values (5, 6)
DELETE
Delete from ABCD where ColumnA = 5
UPDATE
Update ABCD set ColumnA = 8 where ColumnA = 5
DCL – GRANT, REVOKE
CREATE LOGIN [venky] WITH PASSWORD=N'111'
CREATE USER [venky] FOR LOGIN [venky]
EXEC sp_addrolemember N'db_owner', N'venky'
DROP LOGIN [venky]
Grant select to venkat
Revoke select from venkat
TCL – COMMIT, ROLLBACK
Commit
Rollback
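A sketch tying TCL together with the DML above (the ABCD table comes from the earlier examples):

```sql
-- Sketch: explicit transaction; roll back on error, otherwise commit.
BEGIN TRANSACTION;

UPDATE ABCD SET ColumnA = 8 WHERE ColumnA = 5;

IF @@ERROR <> 0
    ROLLBACK TRANSACTION;   -- undo the change on failure
ELSE
    COMMIT TRANSACTION;     -- make the change permanent
```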
CONSTRAINTS IN SQL SERVER
CONSTRAINTS :
Primary Key, Foreign Key, Unique, Default, Check, Null / Not Null
primary key
NEW TABLE : create table abc (sl int identity(1,1), name varchar(100), constraint PK_sl Primary key(Sl))
ON EXISTING : ALTER TABLE ABC ADD CONSTRAINT pk_sL PRIMARY KEY (SL)
DROPPING : ALTER TABLE ABC DROP CONSTRAINT pk_sL
Foreign Key
NEW TABLE : CREATE TABLE ABCD (SL INT, PLACE VARCHAR(100),DISTRICT VARCHAR(50), CITY VARCHAR(50), CONSTRAINT FK_sl$abc$sl Foreign Key (Sl) references abc(Sl))
ON EXISTING : alter table abcd ADD constraint FK_Sl$abc$sl FOREIGN KEY (SL) REFERENCES ABC(SL)
DROPPING : alter table abcd drop constraint FK_Sl$abc$sl
unique
NEW TABLE : create table abcde (sl int, rollnum int, classname varchar(10),
Constraint UQ_rollnum Unique(rollnum))
ON EXISTING : ALTER TABLE ABCDE ADD CONSTRAINT uq_ROLLNUM UNIQUE CLUSTERED (ROLLNUM)
DROPPING : ALTER TABLE ABCDE DROP CONSTRAINT UQ_ROLLNUM
check
NEW TABLE : CREATE TABLE ABCDEF (SL INT, AMOUNT INT, CONSTRAINT ch_AMOUNT CHECK(AMOUNT>5))
ON EXISTING : ALTER TABLE ABCDEF ADD CONSTRAINT CH_AMOUNT CHECK(AMOUNT>5)
DROPPING : ALTER TABLE ABCDEF DROP CONSTRAINT CH_AMOUNT
default
CREATE TABLE ABCDEFF (SL INT, MARKS INT DEFAULT 35)
Null / Not Null : specified per column in the table definition, e.g. NAME VARCHAR(100) NOT NULL