Monday, December 27, 2010

New index features in SQL Server 2005 & 2008

1. Indexes in SQL Server 2005: Index features:

INCLUDE (column [,... n ] )

 Specifies the nonkey columns to be added to the leaf level of the nonclustered index.
 The maximum number of included nonkey columns is 1,023 columns; the minimum number is 1 column.
 Column names cannot be repeated in the INCLUDE list and cannot be used simultaneously as both key and nonkey columns.
 All data types are allowed except text, ntext, and image.
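For example, a nonclustered index with included columns could look like the following minimal sketch (the table and column names are made up for illustration only):

-- key column LastName; the wider columns are stored only at the leaf level,
-- so queries covered by this index avoid lookups without hitting key-size limits
CREATE NONCLUSTERED INDEX IX_Employee_LastName
ON dbo.Employee (LastName)
INCLUDE (FirstName, Address, Phone)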

Index arguments

ONLINE = {ON|OFF}
 Specifies whether underlying tables and associated indexes are available for queries and data modification during the index operation.
 The default is OFF.
 Online index operations are available only in SQL Server 2005 Enterprise Edition.

ON
Long-term table locks are not held for the duration of the index operation. During the main phase of the index operation, only an Intent Share (IS) lock is held on the source table. This enables queries or updates to the underlying table and indexes to proceed. At the start of the operation, a Shared (S) lock is held on the source object for a very short period of time. At the end of the operation, for a short period of time, an S (Shared) lock is acquired on the source if a nonclustered index is being created; or an SCH-M (Schema Modification) lock is acquired when a clustered index is created or dropped online and when a clustered or nonclustered index is being rebuilt. ONLINE cannot be set to ON when an index is being created on a local temporary table.

OFF
Table locks are applied for the duration of the index operation. An offline index operation that creates, rebuilds, or drops a clustered index, or rebuilds or drops a nonclustered index, acquires a Schema modification (Sch-M) lock on the table. This prevents all user access to the underlying table for the duration of the operation. An offline index operation that creates a nonclustered index acquires a Shared (S) lock on the table. This prevents updates to the underlying table but allows read operations, such as SELECT statements.


ALLOW_ROW_LOCKS = {ON | OFF}

 Specifies whether row locks are allowed.
 The default is ON.

ON

 Row locks are allowed when accessing the index.
 The Database Engine determines when row locks are used.

OFF
 Row locks are not used.

ALLOW_PAGE_LOCKS = {ON | OFF}

 Specifies whether page locks are allowed.
 The default is ON.

ON
 Page locks are allowed when accessing the index.
 The Database Engine determines when page locks are used.

OFF
 Page locks are not used.

MAXDOP = max_degree_of_parallelism

 Overrides the max degree of parallelism configuration option for the duration of the index operation.
 Use MAXDOP to limit the number of processors used in a parallel plan execution.
 The maximum is 64 processors.
 The table below shows the possible max_degree_of_parallelism values and their descriptions.

max_degree_of_parallelism   Description
1                           Suppresses parallel plan generation.
>1                          Restricts the maximum number of processors used in a parallel index operation to the specified number.
0 (default)                 Uses the actual number of processors or fewer, depending on the current system workload.
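As an illustration of how these arguments combine, the sketch below (Sales.OrderDetail and its columns are assumed names, not from any particular database) builds an index online, keeps row and page locks enabled, and caps the build at 4 processors:

CREATE NONCLUSTERED INDEX IX_OrderDetail_ProductID
ON Sales.OrderDetail (ProductID)
INCLUDE (Quantity, UnitPrice)
WITH (ONLINE = ON,          -- online build; Enterprise Edition only
      ALLOW_ROW_LOCKS = ON,
      ALLOW_PAGE_LOCKS = ON,
      MAXDOP = 4)           -- limit the parallel index build to 4 processors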






2. Indexes in SQL Server 2008: Index features:

WHERE

 Creates a filtered index by specifying which rows to include in the index.
 The filtered index must be a nonclustered index on a table.
 Filtered indexes do not apply to XML indexes and full-text indexes.
 Filtered indexes do not allow the IGNORE_DUP_KEY option.
 The filter predicate uses simple comparison logic and cannot reference a computed column, a UDT column, a spatial data type column, or a hierarchyID data type column. Comparisons using NULL literals are not allowed with the comparison operators.
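A minimal sketch of a filtered index, assuming an illustrative dbo.Orders table in which the unshipped orders are the rows of interest:

-- only rows where ShippedDate IS NULL are included in the index,
-- keeping it small and cheap to maintain
CREATE NONCLUSTERED INDEX IX_Orders_Unshipped
ON dbo.Orders (OrderDate)
WHERE ShippedDate IS NULL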

FILESTREAM_ON { filestream_filegroup_name | partition_scheme_name | NULL}

 Specifies the placement of FILESTREAM data for the table when a clustered index is created.
 The FILESTREAM_ON clause allows FILESTREAM data to be moved to a different FILESTREAM filegroup or partition scheme.
 filestream_filegroup_name is the name of a FILESTREAM filegroup.
 The filegroup must have one file defined for the filegroup otherwise an error is raised.
 If the table is partitioned, the FILESTREAM_ON clause must be included and must specify a partition scheme of FILESTREAM filegroups that uses the same partition function and partition columns as the partition scheme for the table. Otherwise, an error is raised.
 If the table is not partitioned, the FILESTREAM column cannot be partitioned. FILESTREAM data for the table must be stored in a single filegroup that is specified in the FILESTREAM_ON clause.
 FILESTREAM_ON NULL can be specified in a CREATE INDEX statement if a clustered index is being created and the table does not contain a FILESTREAM column.

Index arguments

DATA_COMPRESSION
 Specifies the data compression option for the specified index, partition number, or range of partitions. The options are as follows:
NONE
 Index or specified partitions are not compressed.
ROW
 Index or specified partitions are compressed by using row compression.
PAGE
 Index or specified partitions are compressed by using page compression.
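For example, an existing index can be rebuilt with page compression as in the sketch below (the index and table names are illustrative, continuing the earlier example):

ALTER INDEX IX_Orders_Unshipped ON dbo.Orders
REBUILD WITH (DATA_COMPRESSION = PAGE)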

Saturday, December 11, 2010

FIND LIST OF JOBS AND THEIR SCHEDULES WITH STEP DETAILS

SELECT SJ.NAME, SJS.STEP_ID,SJS.STEP_NAME,SJS.COMMAND,
SJ.DESCRIPTION,
(SELECT NAME FROM MASTER.DBO.SYSLOGINS WHERE SID IN (SJ.OWNER_SID)) JOB_OWNER,
SJ.DATE_CREATED,SJ.DATE_MODIFIED, SJS.DATABASE_NAME, case when d.freq_type = 8 then 'Weekly' when d.freq_type = 4 then 'Daily' else 'None' end Schedule,
d.Freq_Subday_Interval Interval_in_Minutes,
case when len(d.active_start_time) = 5 then convert(varchar,left(d.active_start_time,1)) + ':' + convert(varchar,left(right(d.active_start_time,4),2)) + ':' + convert(varchar,right (d.active_start_time,2))
when len(d.active_start_time) = 4 then '00' + ':'+convert(varchar,left(d.active_start_time,2)) + ':'+convert(varchar,right(d.active_start_time,2))
when len(d.active_start_time) = 3 then '00:0' + convert(varchar,left(d.active_start_time,1)) + ':'+convert(varchar,right(d.active_start_time,2))
when len(d.active_start_time) = 2 then '00:00:' + convert(varchar,left(d.active_start_time,2))
when len(d.active_start_time) = 1 then '00:00:0' + convert(varchar,left(d.active_start_time,1))
else
convert(varchar,left(d.active_start_time,2)) + ':' + convert(varchar,left(right(d.active_start_time,4),2)) + ':' + convert(varchar,right (d.active_start_time,2)) End [Scheduled_Time]
FROM MSDB.DBO.SYSJOBS SJ INNER JOIN MSDB.DBO.SYSJOBSTEPS SJS
ON SJ.JOB_ID = SJS.JOB_ID
LEFT OUTER JOIN MSDB.DBO.SYSJOBSCHEDULES SJSC ON SJ.JOB_ID = SJSC.JOB_ID
LEFT OUTER JOIN MSDB.DBO.SYSSCHEDULES d ON SJSC.SCHEDULE_ID = d.SCHEDULE_ID
ORDER BY SJ.NAME

Thursday, October 14, 2010

Sql Monitoring - Job Failure information

A common requirement in a DBA environment is to find out which jobs have failed on the servers. Jobs perform various activities such as maintenance, performance-related tasks, project-related work, backups/restores, disaster recovery setup, etc. If a job fails and no solution is provided for it, regular activities can be affected.

The following query helps to find out which jobs have failed so that, with this information, a solution can be provided. If required, it can be created as a procedure in MSDB to simplify the task.

----------------------------------------------------------------------------------
Create Proc Pr_MonitorFailure as
Begin
select b.name Job,
case when len(a.run_time) = 5 then convert(varchar,left(a.run_time,1)) + ':' + convert(varchar,left(right(a.run_time,4),2)) + ':' + convert(varchar,right (a.run_time,2))
when len(a.run_time) = 4 then '00' + ':'+convert(varchar,left(a.run_time,2)) + ':'+convert(varchar,right(a.run_time,2))
when len(a.run_time) = 3 then '00:0' + convert(varchar,left(a.run_time,1)) + ':'+convert(varchar,right(a.run_time,2))
when len(a.run_time) = 2 then '00:00:' + convert(varchar,left(a.run_time,2))
when len(a.run_time) = 1 then '00:00:0' + convert(varchar,left(a.run_time,1))
else
convert(varchar,left(a.run_time,2)) + ':' + convert(varchar,left(right(a.run_time,4),2)) + ':' + convert(varchar,right (a.run_time,2)) End [Last Run Datetime],
--run_duration,
ISNULL(SUBSTRING(CONVERT(varchar(7),run_duration+1000000),2,2) + ':'
+ SUBSTRING(CONVERT(varchar(7),run_duration+1000000),4,2) + ':'
+ SUBSTRING(CONVERT(varchar(7),run_duration+1000000),6,2),'') AS [Last Run Duration], Message,
case when run_status = 0 then 'Fail' else 'Other Reason' end run_status from
Sysjobhistory a inner join Sysjobs b on a.job_id = b.job_id
where run_status <> 1 and message not like '%The Job was invoked%'
End

----------------------------------------------------------------------------------
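Once the procedure has been created in MSDB, it can be run as follows:

EXEC msdb.dbo.Pr_MonitorFailure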

Tuesday, September 14, 2010

Basic information about joins in SQL Server

Joins are really important for writing queries in any database language. In SQL Server there are a few join types available which help to construct SQL statements. Joins can be thought of loosely in terms of the set operations Intersect, Union, and Union All that we learned in school. A join usually matches 2 or more tables, but you can also match a single table against itself, which is called a self join.

In Sql server we have mainly
Inner Join
Left Outer Join
Right Outer Join
Full Outer Join
Cross Join
Self Join


For a better understanding, create the following tables and insert the data in a sample database, so that you can easily see how each join condition behaves.

----------------------------
Create table Table1 (id int, name varchar(10))
Create table Table2 (id int, Name varchar(10))

insert into table1 (id, name)
values (1,'A')
insert into table1 (id, name)
values (2,'AB')
insert into table1 (id, name)
values (3,'ABC')

insert into table2 (id, name)
values (1,'A')
insert into table2 (id, name)
values (2,'AB')

----------------------------
JOINS :
Inner join : like A Intersect B; it returns the rows that match in both tables.
Eg :
SELECT a.id, a.Name,b.id, b.Name from
Table1 A inner join Table2 B on a.id = b.id

Left outer Join : all the rows from table A plus the matching data from table B; rows in A with no match in B show NULLs on the B side.
Eg :
SELECT a.id, a.Name,b.id, b.Name from
Table1 A Left outer join Table2 B on a.id = b.id

Right outer Join : all the rows from table B plus the matching data from table A; rows in B with no match in A show NULLs on the A side.
Eg :
SELECT a.id, a.Name,b.id, b.Name from
Table1 A Right outer join Table2 B on a.id = b.id

Full outer Join : all the rows from both A and B tables; rows without a match on the other side appear with NULLs.
Eg :
SELECT a.id, a.Name,b.id, b.Name from
Table1 A Full outer join Table2 B on a.id = b.id

Cross Join : A X B; each row in A is combined with every row in B (a Cartesian product).
SELECT a.id, a.Name,b.id, b.Name from
Table1 A cross join Table2 B

Self Join : by joining a table to itself we can match rows of the same table against each other.
SELECT a.id, a.Name,b.id, b.Name from
Table1 A join Table1 B on a.id = b.id
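For the sample data above, the inner join returns only the matching ids, while the left outer join also returns the unmatched row from Table1 with NULLs on the Table2 side:

Inner join result (2 rows):
id  name   id    name
1   A      1     A
2   AB     2     AB

Left outer join result (3 rows):
id  name   id    name
1   A      1     A
2   AB     2     AB
3   ABC    NULL  NULL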

Monday, July 12, 2010

Reindex all tables which are highly fragmented in the database

Reindexing database tables to keep fragmentation low is a common database maintenance activity. For this we could identify, one by one, every object whose fragmentation has reached our threshold; instead I have designed 2 procedures to find out the fragmented tables and perform a reindex on all indexes of the identified tables.

Hopefully this will reduce the maintenance time spent writing queries for reindexing.
You can create these procs in MSDB and change the content to refer to the actual databases.


-- Identifying Fragmentation of Tables (1)

Create Proc Pr_View_Defragmentation (@database sysname, @Frag int)
as
Declare @dbid int
select @dbid = dbid from sys.sysdatabases where name = @database
select distinct xtype, name,object_name(object_id),* from sys.dm_db_index_physical_stats (@dbid,NULL,NULL,NULL,'SAMPLED')
a inner join sys.sysobjects b on a.object_id = id where xtype = 'u' and name not like '%sys%'
and avg_fragmentation_in_percent >= @frag


-- Perform Reindexing to decrease the Fragmentation (2)

Create Proc Pr_Alter_index (@database sysname, @frag int) as
--DECLARE @DATABASE SYSNAME
--DECLARE @FRAG INT
Declare @a int
Declare @b int
Declare @sql varchar(1000)
Declare @Table table(id int identity(1,1),Object varchar(100))
Declare @sql1 varchar(100)
--set @database = 'DATABASE'
--set @frag = 10
SEt @a = 1
select @B = count(DISTINCT object_id) from sys.dm_db_index_physical_stats (db_id(@database),NULL,NULL,NULL,'SAMPLED')
a inner join sys.sysobjects b on a.object_id = id where xtype = 'u' and name not like '%sys%'
and avg_fragmentation_in_percent >= @frag
insert into @table(object)
Select distinct object_name(object_id) from sys.dm_db_index_physical_stats (db_id(@database),NULL,NULL,NULL,'SAMPLED')
a inner join sys.sysobjects b on a.object_id = id where xtype = 'u' and name not like '%sys%'
and avg_fragmentation_in_percent >= @frag
while @a <= @b
Begin
select @SQL1 = OBJECT FROM @TABLE where id = @a
SET @SQL = 'ALTER INDEX ALL ON ' + @database + '..' + @sql1 + ' REBUILD WITH (ONLINE = ON)' -- online rebuild requires Enterprise Edition
exec (@sql)
set @a = @a+1
End



exec Pr_View_Defragmentation 'database name', fragmentation percentage
exec Pr_Alter_index 'database name', fragmentation percentage

Eg : exec Pr_View_Defragmentation 'abcd', 70
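The rebuild proc takes the same parameters, for example:

exec Pr_Alter_index 'abcd', 70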

Thursday, June 10, 2010

Maintain all jobs with a default owner account in Sql server

This is a common requirement in a DBA environment: all jobs created on the server should run under a specified username. Normally, in a production environment, users will not run jobs manually, and even scheduled jobs should run under a service account. When a job is created it gets the user who created it as its default owner, and there is a chance of forgetting to change the job owner afterwards.

The following query helps to ensure all jobs are owned by the default owner account.

declare @username varchar(100)
set @username = 'sa'
update msdb.dbo.sysjobs set owner_sid =
(select sid from master.dbo.syslogins where name = @username)
where name in ('job_a', 'job_b',...... 'job_n')


* you can change the parameters as per your requirement
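To verify the result afterwards, a quick check along these lines (just a sketch) lists each job with its current owner:

select j.name as job_name, l.name as job_owner
from msdb.dbo.sysjobs j
inner join master.dbo.syslogins l on j.owner_sid = l.sid
order by j.name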

Monday, June 7, 2010

Find out Database Restoration Details

I was confused: had I restored my database using the latest backup, or was it yet to be restored? Because I have a number of databases in different environments which need to be restored with the latest backups arriving from the client databases.

To find out these details for a database, such as when it was restored and whether it was a full database restore (or) only a filegroup or some files, the following query can be used:
USE MSDB
GO

select BS.user_name,
destination_database_name Database_name,
restore_date Date,
BS.database_name Actual_Database,
BS.server_name,
BS.name,
physical_name,
backup_start_date,
BF.backup_size
from RestoreHistory RH inner join BackupSet BS on RH.backup_set_id = BS.backup_set_id
inner join BackupFile BF on BF.backup_set_id = BS.backup_set_id
order by RH.Restore_Date

* Database_name and Actual_Database are generally the same, but Actual_Database refers to the database name recorded in the backup; even if the database is renamed after restoration, it still shows the name it had when it was restored.

Tuesday, May 25, 2010

How PERFMON (Windows tool) helps DBA to identify the Performance Issues and take necessary solutions

In a SQL Server DBA environment, there are several ways to improve the performance of SQL Server database applications, such as query execution plans, SQL Profiler, DTA (Database Tuning Advisor), server-level properties, object-level improvements (indexes, statistics, other maintenance tasks) and Windows-level tools. In other words, database performance can be examined at 3 levels: Server Level (operating system, networking protocols); Database Level (SQL Server database engine); Object Level (objects within the database).


So if we are facing performance degradation, we need to check all levels to take the necessary actions. Here I will discuss the server-level tool PERFMON, which is very useful for identifying server-level activity; based on the results we can take appropriate action to improve performance. It can be hardware, memory (or) system changes.


PERFMON : PERFMON is a built-in Windows tool which shows the workload of the resources running on the system. It can be used to gather data on Windows resources as well as SQL Server resources.


Main Benefits Of The Tool :
• Understand your workload and its effect on your system's resources.
• Observe changes and trends in workloads and resource usage so you can plan for future upgrades.
• Test configuration changes or other tuning efforts by monitoring the results.


System Monitor and Performance Logs and Alerts provide detailed data about the resources used by specific components of the operating system and by programs that have been designed to collect performance data.

Choosing the data to monitor :
Start by monitoring the activity of the following components in order:
• Memory
• Processors
• Disks
• Network


Following counters can be helpful to trace the data

1:
Component : Disk
Performance aspect being monitored : Usage
Counters to monitor :
Physical Disk\Disk Reads/sec, Physical Disk\Disk Writes/sec, LogicalDisk\% Free Space. Interpret the % Disk Time counter carefully: because the _Total instance of this counter may not accurately reflect utilization on multiple-disk systems, it is important to use the % Idle Time counter as well. Note that these counters cannot display a value exceeding 100%.


2 :
Component : Disk
Performance aspect being monitored : Hindrances
Counters to Monitor : Physical Disk\Avg. Disk Queue Length (all instances)


3:
Component : Memory
Performance aspect being monitored : Usage
Counters to Monitor : Memory\Available Bytes, Memory\Cache Bytes


4:
Component : Memory
Performance aspect being monitored : Hindrances
Counters to Monitor : Memory\Pages/sec, Memory\Page Reads/sec, Memory\Transition Faults/sec, Memory\Pool Paged Bytes, Memory\Pool Nonpaged Bytes.
Although not specifically Memory object counters, the following are also useful for memory analysis: Paging File\% Usage object (all instances), Cache\Data Map Hits %, Server\Pool Paged Bytes and Server\Pool Nonpaged Bytes


5:
Component : Network
Performance aspect being monitored : Throughput
Counters to Monitor : Protocol transmission counters (varies with networking protocol); for TCP/IP: Network Interface\Bytes total/sec, Network Interface\ Packets/sec, Server\Bytes Total/sec, or Server\Bytes Transmitted/sec and Server\Bytes Received/sec


6:
Component : Processor
Performance aspect being monitored : Usage
Counters to Monitor : Processor\% Processor Time (all instances)


7:
Component : Processor
Performance aspect being monitored : Hindrances
Counters to Monitor : System\Processor Queue Length (all instances),
Processor\ Interrupts/sec, System\Context switches/sec



How to Create and perform :
1. Go to RUN and type PERFMON then Enter
2. Double-click Performance Logs and Alerts, and then double-click Counter Logs. Any existing logs will be listed in the details pane. A green icon indicates that a log is running; a red icon indicates that a log has been stopped.
3. Right-click a blank area of the details pane, and click New Log Settings.
4. In Name, type the name of the log, and then click OK.
5. On the General tab, click Add Objects and select the performance objects you want to add, or click Add Counters to select the individual counters you want to log.


Configure the other settings as you require. The log can be run for a certain time period, i.e., 10 hours, 1 day, or 1 week, to identify how the processes are running on the system. The data can be saved as .CSV or text files. Once you have the data you can build charts using Excel, and from these it becomes clear what action needs to be taken to improve performance.

Thursday, May 20, 2010

Understand how shrinking a LOG file works :


Many of us have faced this while shrinking a log file in a database: even after applying the DBCC
SHRINKFILE option, the file size is not reduced. What is the mechanism behind it, and how does it work?

For example, you have a database consisting of data and log files, and the log file is growing abnormally, exceeding the hard drive capacity. So you need to reduce the file to fit within the available space. Most people would perform one of the following options to decrease the file size.

1) Either truncating the log file directly and then shrinking
Backup log DatabaseName with truncate_only
DBCC shrinkdatabase ('DatabaseName', 10) (or)
DBCC shrinkfile ('LogFileName')

2) or Applying the shrinkdatabase option directly on the database.

3) or Keeping Full / Log backup jobs in the server with some time intervals and shrinking the database.

Are the above options sufficient for maintaining the log file on the server? No, they are manageable only up to some extent; but if you follow the above activities with slight changes you can expect 100% results.


What are the drawbacks in above processes :

1) If you truncate the log file directly and then shrink the database, you do get a reduced log file, but if you then need to restore your database using a log backup.......... no backup is available.

2) Applying the shrink option directly doesn't give you much benefit.

3) Keeping a log backup job and then performing the shrink will give you a good result, but the two must happen in sequence.


The process behind log backup and shrink :
1) I have a database whose log file size is 100 MB, and the used space in the log file is 94 MB.
This can be found out using DBCC sqlperf(logspace).
2) I performed DBCC shrinkfile ('log file') to decrease the size, but it increased instead of decreasing. Why? Because the log file still contained active log records, and the shrink operation itself wrote more log on top of them, so the file grew. Use DBCC sqlperf(logspace) to check the details.
3) Then I performed a log backup and afterwards the shrink operation. The file was reduced to its initial size and my drive now has free space.
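A minimal sketch of that sequence, assuming a database named MyDB whose log file has the logical name MyDB_log and a local backup folder (adjust all three to your environment):

USE MyDB
GO
DBCC SQLPERF(LOGSPACE)                               -- check log size and used space %
BACKUP LOG MyDB TO DISK = 'D:\Backups\MyDB_log.trn'  -- back up the log first
DBCC SHRINKFILE ('MyDB_log', 10)                     -- then shrink the log file to ~10 MB
DBCC SQLPERF(LOGSPACE)                               -- confirm the reduced size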


Additional options to findout :

A) Run the following query
DBCC loginfo
If the Status in the result below is 2, that portion of the log is still waiting for a backup;
if it is 0, it can be shrunk.

result :
FileId  FileSize  StartOffset  FSeqNo  Status  Parity  CreateLSN
2       2555904   8192         98682   2       64      0
2       2555904   2564096      98683   2       128     0
2       2555904   5120000      98685   2       64      0
2       2809856   7675904      98684   2       128     0
2       253952    10485760     98686   2       64      98685000000407400016
2       253952    10739712     98687   2       64      98685000000407400016
2       253952    10993664     98688   2       64      98685000000407400016



B) Run the following query to check whether a log backup is pending or not
select log_reuse_wait_desc from sys.databases where name = 'database'

if the result is 'LOG_BACKUP' then the log is waiting for a backup
if the result is 'NOTHING' then the log can be shrunk


Note : In SQL Server 2008 the truncate-only log backup option is no longer available; instead you can alter the recovery model to SIMPLE, then shrink, and then change the model back to FULL.

Tuesday, April 13, 2010

Log file Maintenance in SQL SERVER 2000/2005 & 2008 :

In earlier versions of SQL Server, if the database log is full, we can free up the log file space by truncating the log file. For example, you have allocated 5 GB for the log file and it crosses the limit after a certain period. As a result, the database no longer allows transactions and continuously raises errors. In SQL Server 2000 and 2005 we can free up the space by using the following commands.

USE DatabaseName
* DatabaseName is your user database.

BACKUP LOG DatabaseName WITH TRUNCATE_ONLY
* it truncates the contents of the log file (the inactive portion)

DBCC SHRINKFILE (LogFileName)
* it shrinks the file back to its initial size.


But the above process doesn't work in SQL Server 2008. These features were discontinued in that version, and to achieve the same result you need to use the following method.

First alter the database from the FULL recovery model to the SIMPLE recovery model, then shrink the log file, and then change the model back to FULL.

ALTER DATABASE DatabaseName SET RECOVERY SIMPLE
* This changes the database recovery model to SIMPLE

DBCC SHRINKFILE (LogFileName)
* It shrinks the file back to its initial size.

ALTER DATABASE DatabaseName SET RECOVERY FULL
* This changes the database recovery model back to FULL

The above process can also be done using SQL Server Management Studio.

To check the database and its file details you can use the system procs sp_helpdb and sp_helpdb 'databasename'.

Thursday, March 18, 2010

Partition a Table for better performance in Sql Server 2005 / 2008

In earlier versions we didn't have the option of partitioning a table to distribute its data across multiple filegroups. This was introduced with SQL Server 2005 along with other valuable features. Partitioning is performed with the help of a partition function and a partition scheme.

Partitioning is nothing but distributing the data across multiple filegroups; as we all know, if we spread the data over multiple drives, the performance of the database automatically improves. Let's assume we have a big table comprising millions of records, and fetching data from it takes too long. To increase the performance of such a table we can implement the partitioning concept and achieve better performance.

HOW TO PARTITION A TABLE :
Requirement : A table is to be defined with multiple columns, out of which we need to fetch the data based on the column "Name". If you select data based on Name, there will be many records with different names starting with letters from A-Z. Using the following method, the data can be distributed into different filegroups.

Step 1 : As a first step you need to create a Partition function :
CREATE PARTITION FUNCTION NameFunc (nvarchar(30))
AS RANGE RIGHT FOR VALUES ('H', 'N')


Step 2 : Create a partition scheme based on the partition function :

CREATE PARTITION SCHEME NameScheme
AS PARTITION NameFunc TO (fg1, fg2, fg3)


Step 3 : Create a table based on Function and Scheme

CREATE TABLE dbo.Employees
(EmpID int, Name nvarchar(30))
ON NameScheme (Name)



Explanation :
In the first step, the boundary values 'H' and 'N' with RANGE RIGHT define
names before 'H' (A-G) as the first group,
names from 'H' up to (but not including) 'N' (H-M) as the second group,
names from 'N' onwards (N-Z) as the third group.

The above 3 groups will be stored on the fg1, fg2 and fg3 filegroups respectively (these filegroups must already exist in the database).
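To double-check which partition (and therefore which filegroup) a particular Name value maps to, the partition function can be queried directly; the sample values below are just illustrations:

SELECT $PARTITION.NameFunc(N'Daniel') AS Partition_Number   -- returns 1 (before 'H')
SELECT $PARTITION.NameFunc(N'Harry')  AS Partition_Number   -- returns 2 ('H' up to 'N')
SELECT $PARTITION.NameFunc(N'Robert') AS Partition_Number   -- returns 3 ('N' onwards)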

Wednesday, January 6, 2010

Online Indexing in Sql Server 2005

In SQL Server 2005, an index can be created while inserts/updates/deletes are happening on the table. The index is created without blocking the table or degrading its performance. This concept is called online indexing in SQL Server.

I have taken following sample queries to test online indexing :

Step 1 : Create sample tables to hold the data
create table Index_Check (id int identity(1,1), name varchar(50), join_date datetime, Amount float)
Create table names (name varchar(50))
insert into names values ('Software company,Bangalore,India')


Step 2 : Execute a cursor to insert the data continuously
declare venkat cursor local
for
Select NAME FROM NAMES
open venkat
Declare @aaa VARCHAR(50)
fetch next from venkat into @aaa
while @@fetch_status = 0
begin
-- keeps inserting the fetched value repeatedly to simulate a continuous write load;
-- cancel the batch manually once the index test in Step 3 is done
exec ('insert into Index_Check (name,join_date) select '''+@aaa+''', GETDATE()')
end
close venkat
deallocate venkat


Step 3 : Connect to the same server and the same table from another connection and try to create a clustered / nonclustered index online

Create clustered index IDX_Check on Index_Check(id) with (online=on)
Create nonclustered index IDX_Check1 on Index_Check(id) with (online=on)


The index is created successfully without disturbing the ongoing inserts on the table.


You can schedule index rebuilds depending on the data usage in the database. A rebuild can be configured to run online or offline. Online means it doesn't block operations on the table. But it has some restrictions too; for example, we can't perform online reindexing on indexes that contain text data type columns, etc.

Monday, January 4, 2010

RAID : Understanding of RAID and its functionality

Understanding of RAID......Redundant Array of Independent Disks (RAID)
RAID is a combination of multiple disks used to improve the performance and safety of the data. It can be configured using SCSI controllers; SCSI is nothing but a card which provides the facility to connect multiple disks. Once RAID is configured, the data is available on an ARRAY comprising the available disks. There are mainly 3 terms to remember while working with RAID levels: mirroring, striped disks and fault tolerance.

There are different RAID levels for various requirements. We can implement the appropriate RAID level to achieve the best results.

RAID 0 : It stripes the data across multiple disks. Data retrieval is faster as the data is divided across multiple disks. But this level doesn't have any data safety, meaning if one disk fails you automatically lose the data.

RAID 1 : Mirrors the data from one disk to another. Here we have more safety: if one disk fails, the other disk, which already has the mirrored data, automatically takes over.

RAID 3 (OR) 4 : This level requires a minimum of 3 disks, including 2 for data distribution and 1 for fault tolerance. Here we have a data safety mechanism with good performance, including redundant data identification. For this level, 1 dedicated disk is required to store the redundant data.

RAID 5 : This level also requires 3 or more disks and protects the data in case any one disk fails. Here we don't require a dedicated disk for storing the redundant data; it is distributed across all the disks. This level has higher data safety but allows only one disk to fail. If more than one disk fails then the data will not be available on the array.

RAID 6 : Similar to RAID 5, but it requires a minimum of 4 disks and tolerates 2 disk failures. That means 2 disks can fail at this RAID level; if more than 2 disks fail then the data won't be available.


There are 2 more new levels introduced i.e....,

RAID 1+0 : A combination of RAID 1 + RAID 0, incorporating the features of both levels. The sequence is: first it mirrors the data (safety), then it stripes the data across drives (performance).

RAID 0+1 : A combination of RAID 0 + RAID 1, incorporating the features of both levels. The sequence is: first it stripes the data across drives (performance), then it mirrors the data (safety).


When considering a SQL Server installation, we can mostly use RAID 5 for database backups, as it requires only 3 disks (less hardware) and gives high safety with good performance. If the data is extremely important and your company can spend the money, because the data is worth more than anything, you can suggest RAID 6, which comprises 4 disks with dual-disk safety.

RAID 1+0 is good for hosting the tempdb database, since tempdb requires data safety along with good performance.