Tuesday, May 25, 2010

How PERFMON (Windows tool) helps a DBA identify performance issues and take the necessary action

In a SQL Server DBA environment there are several ways to improve the performance of SQL Server database applications, such as query execution plans, SQL Profiler, DTA (Database Tuning Advisor), server-level properties, object-level improvements (indexes, statistics and other maintenance work) and Windows-level tools. In other words, database performance can be examined at three levels: the server level (operating system, networking protocols), the database level (the SQL Server database engine itself) and the object level (the objects within the database).


So if we are facing performance degradation we need to check all of these levels before taking action. Here I will discuss the server-level tool PERFMON, which is very useful for identifying server-level activity; based on the results we can take the appropriate action to improve performance, whether that is a hardware, memory or system change.


PERFMON : PERFMON is a built-in Windows tool that shows the workload on the resources running in the system. It can be used to collect data on Windows resources as well as SQL Server resources.


Main Benefits Of The Tool :
• Understand your workload and its effect on your system's resources.
• Observe changes and trends in workloads and resource usage so you can plan for future upgrades.
• Test configuration changes or other tuning efforts by monitoring the results.


System Monitor and Performance Logs and Alerts provide detailed data about the resources used by specific components of the operating system and by programs that have been designed to collect performance data.

Choosing the data to monitor :
Start by monitoring the activity of the following components in order:
• Memory
• Processors
• Disks
• Network


The following counters can be helpful for tracing this data :

1:
Component : Disk
Performance aspect being monitored : Usage
Counters to monitor :
PhysicalDisk\Disk Reads/sec, PhysicalDisk\Disk Writes/sec, LogicalDisk\% Free Space, PhysicalDisk\% Disk Time.
Interpret the % Disk Time counter carefully: because the _Total instance of this counter may not accurately reflect utilization on multiple-disk systems, it is important to use the % Idle Time counter as well. Note that these counters cannot display a value exceeding 100%.


2 :
Component : Disk
Performance aspect being monitored : Hindrances
Counters to Monitor : PhysicalDisk\Avg. Disk Queue Length (all instances)


3:
Component : Memory
Performance aspect being monitored : Usage
Counters to Monitor : Memory\Available Bytes, Memory\Cache Bytes


4:
Component : Memory
Performance aspect being monitored : Hindrances
Counters to Monitor : Memory\Pages/sec, Memory\Page Reads/sec, Memory\Transition Faults/sec, Memory\Pool Paged Bytes, Memory\Pool Nonpaged Bytes.
Although not specifically Memory object counters, the following are also useful for memory analysis: Paging File\% Usage object (all instances), Cache\Data Map Hits %, Server\Pool Paged Bytes and Server\Pool Nonpaged Bytes


5:
Component : Network
Performance aspect being monitored : Throughput
Counters to Monitor : Protocol transmission counters (these vary with the networking protocol); for TCP/IP: Network Interface\Bytes Total/sec, Network Interface\Packets/sec, Server\Bytes Total/sec, or Server\Bytes Transmitted/sec and Server\Bytes Received/sec


6:
Component : Processor
Performance aspect being monitored : Usage
Counters to Monitor : Processor\% Processor Time (all instances)


7:
Component : Processor
Performance aspect being monitored : Hindrances
Counters to Monitor : System\Processor Queue Length (all instances), Processor\Interrupts/sec, System\Context Switches/sec
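Besides the Windows-level counters above, SQL Server exposes its own counters through the SQLServer: performance objects, and the same values can also be read from inside the engine using the sys.dm_os_performance_counters DMV (SQL Server 2005 and later). A minimal sketch; the three counter names below are just common examples, so substitute the ones relevant to your investigation:

-- View a few SQL Server-specific counters without leaving Management Studio
SELECT object_name, counter_name, instance_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name IN ('Page life expectancy', 'Batch Requests/sec', 'Buffer cache hit ratio');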



How to create and run a counter log :
1. Go to Run, type PERFMON and press Enter.
2. Double-click Performance Logs and Alerts, and then double-click Counter Logs. Any existing logs will be listed in the details pane. A green icon indicates that a log is running; a red icon indicates that a log has been stopped.
3. Right-click a blank area of the details pane, and click New Log Settings.
4. In Name, type the name of the log, and then click OK.
5. On the General tab, click Add Objects and select the performance objects you want to add, or click Add Counters to select the individual counters you want to log.


Configure the other settings as required. The log can be run for a certain period (for example 10 hours, 1 day or 1 week) to see how the processes on the system behave over time. The data can be saved as .CSV or text files; once you have the data you can build charts in Excel and work out what action needs to be taken to improve performance.

Thursday, May 20, 2010

Understand how shrinking a LOG file works :

Many people have faced this while shrinking a log file in a database: even after applying the DBCC SHRINKFILE option the file size is not reduced. What is the mechanism behind this, and how does it work?

For example, you have a database consisting of data and log files, and the log file is growing abnormally and threatening to exceed the hard drive's capacity. You need to reduce the file so that it fits within the available space. Most people would use one of the following options to decrease the file size.

1) Either truncate the log file directly and then shrink:
BACKUP LOG DatabaseName WITH TRUNCATE_ONLY
DBCC SHRINKDATABASE ('DatabaseName', 10)
(or)
DBCC SHRINKFILE ('LogFileLogicalName')

2) Or apply the SHRINKDATABASE option directly on the database.

3) Or keep full / log backup jobs on the server at some interval and shrink the database.

Are the above options sufficient for maintaining the log file on the server? No, they are manageable only up to a point; but if you follow the same activities with slight changes you can expect consistent results.


What are the drawbacks of the above processes?

1) If you truncate the log file directly and then shrink the database, you do get a reduced log file, but the log backup chain is broken; if you ever need to restore your database using a log backup.......... no backup is available.

2) Applying the shrink option directly does not have much impact, because the active portion of the log cannot be released.

3) Keeping log backups and then performing the shrink gives a good result, but the steps have to happen in the right sequence.


The process behind log backup and shrink :
1) I have a database whose log file is 100 MB, with 94 MB of used space in the log file. This can be found using DBCC SQLPERF(LOGSPACE).
2) I performed DBCC SHRINKFILE ('logfilename') to decrease the size, but the file grew instead of shrinking. Why? Because the log file still held active log records, and the shrink activity itself wrote more log on top of them, so the file increased. Use DBCC SQLPERF(LOGSPACE) to check the details.
3) I then took a log backup and performed the shrink again. The file reduced to its initial size and my drive space is free now. The full sequence is sketched below.
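A minimal sketch of that sequence, assuming a database named MyDB with a logical log file name MyDB_log and a backup folder D:\Backups (substitute your own names and paths):

-- Check how much of the log file is actually in use
DBCC SQLPERF(LOGSPACE);

-- Back up the log so the inactive portion can be reused
BACKUP LOG MyDB TO DISK = 'D:\Backups\MyDB_log.trn';

-- Now the shrink can release the freed space (target size in MB)
DBCC SHRINKFILE ('MyDB_log', 10);

-- Confirm the new size and used percentage
DBCC SQLPERF(LOGSPACE);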


Additional options to find out the status :

A) Run the following query:
DBCC LOGINFO
If the Status value in the result below is 2, that portion of the log is still active and waiting for a backup;
if it is 0, it is free and the file can be shrunk.

Result :
FileId  FileSize  StartOffset  FSeqNo  Status  Parity  CreateLSN
2       2555904   8192        98682   2       64      0
2       2555904   2564096     98683   2       128     0
2       2555904   5120000     98685   2       64      0
2       2809856   7675904     98684   2       128     0
2       253952    10485760    98686   2       64      98685000000407400016
2       253952    10739712    98687   2       64      98685000000407400016
2       253952    10993664    98688   2       64      98685000000407400016



B) Run the following query to check whether a log backup is pending:
SELECT log_reuse_wait_desc FROM sys.databases WHERE name = 'DatabaseName'

If the result is 'LOG_BACKUP', the log is waiting for a backup.
If the result is 'NOTHING', the log can be shrunk.


Note : In SQL Server 2008 the BACKUP LOG ... WITH TRUNCATE_ONLY option is no longer available. Instead you can switch the database's recovery model to SIMPLE, shrink the log file, and then change the recovery model back to FULL (and take a full backup afterwards so that log backups can resume).
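A minimal sketch of that approach, again assuming a database named MyDB, a logical log file name MyDB_log and a backup folder D:\Backups:

-- Switch to SIMPLE recovery so the inactive log is cleared automatically
ALTER DATABASE MyDB SET RECOVERY SIMPLE;

-- Shrink the log file down to roughly 10 MB
DBCC SHRINKFILE ('MyDB_log', 10);

-- Switch back to FULL recovery
ALTER DATABASE MyDB SET RECOVERY FULL;

-- Take a full backup so a new log backup chain can start
BACKUP DATABASE MyDB TO DISK = 'D:\Backups\MyDB_full.bak';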