
Wednesday 23 March 2011

SET VS SELECT

There are a couple of differences between using the SET and SELECT statements to assign values to local variables. Some are trivial, while others can cause bugs that are hard to track down in your code.

One issue to be aware of: when you use SELECT to assign a value to a variable and the query returns more than one row, SELECT silently assigns one of the returned values to your local variable, which may not be the one you want.

SET, on the other hand, raises an error if more than one row is returned, which in my opinion is the better behaviour. Here is a demo:


DECLARE @Id INT
DECLARE @Temp TABLE (ID INT, NAME VARCHAR(25))

INSERT INTO @Temp (ID, NAME) VALUES (1, 'one')
INSERT INTO @Temp (ID, NAME) VALUES (2, 'one')
INSERT INTO @Temp (ID, NAME) VALUES (3, 'Two')
INSERT INTO @Temp (ID, NAME) VALUES (4, 'Three')

-- With SELECT you don't get an error

SELECT @Id = ID FROM @Temp WHERE NAME = 'one'
SELECT @Id AS RETURNS_one_of_the_records_Without_any_errors

-- No error is raised, and ONE of the matching rows is assigned, which may not be what you want.

-- With SET you will get an error if more than one row is returned
SET @Id = (SELECT ID FROM @Temp WHERE NAME = 'one')

-- You get an error, which is better
SELECT @Id AS RETURNS_errors_as_There_Are_More_than_one_records

With SET, you can check @@ERROR and handle the failure accordingly; or, if you have to use SELECT, you may be able to pick the exact row you need by using TOP 1 and an ORDER BY clause in the SELECT statement.
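For example, here is a minimal sketch of the TOP 1 variant, continuing with the @Temp table from the demo above. The ordering column is an assumption: pick whatever rule makes the "right" row come first for your data.

-- Deterministically pick one row: here, the highest ID wins (the ordering rule is an assumption)
SELECT TOP 1 @Id = ID
FROM @Temp
WHERE NAME = 'one'
ORDER BY ID DESC

SELECT @Id AS Picked_Deterministically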

The second issue is that when the SELECT returns no rows, it does not assign anything to the variable, not even NULL. So if your variable held the value X before the SELECT statement was executed, and the SELECT matched no rows, the variable will still hold X, as shown below.


DECLARE @Id INT
DECLARE @Temp TABLE (ID INT, NAME VARCHAR(25))

-- Assume @Id had a value of 100
SET @Id = 100

-- Try to get a new value of @Id from the table using SELECT
SELECT @Id = ID FROM @Temp WHERE NAME = 'Does not exist'

-- You get the wrong value: @Id should be NULL, not 100
SELECT @Id AS Id_Value_Should_be_Null

SET @Id = (SELECT ID FROM @Temp WHERE NAME = 'Does not exist')

-- This shows the correct value, i.e. NULL, because the scalar subquery evaluates to NULL
SELECT @Id AS Id_Value_Should_be_Null

-- Reset to 100 to demonstrate the next variant
SET @Id = 100

-- A SELECT assignment from a scalar subquery also assigns NULL, because the outer SELECT
-- has no FROM clause and therefore always executes exactly once
SELECT @Id = (SELECT ID FROM @Temp WHERE NAME = 'Does not exist')

-- This also shows the correct value, i.e. NULL
SELECT @Id AS Id_Value_Should_be_Null
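
If you prefer SELECT assignment, a simple defensive pattern is to initialise the variable yourself immediately before the query. This is just a sketch of the idea, continuing the batch above, rather than the only possible fix:

-- Explicitly reset the variable so a no-row SELECT cannot leave a stale value behind
SET @Id = NULL
SELECT @Id = ID FROM @Temp WHERE NAME = 'Does not exist'

-- @Id is now NULL whether or not the SELECT matched any rows
SELECT @Id AS Id_Is_Null_Either_Way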


Hopefully knowing these subtle differences will save you some debugging time!

Monday 14 March 2011

Why “Extent Scan Fragmentation” remains high after rebuilding an index

According to BOL, extent scan fragmentation is: “Percentage of out-of-order extents in scanning the leaf pages of an index. This number is not relevant to heaps. An out-of-order extent is one for which the extent that contains the current page for an index is not physically the next extent after the extent that contains the previous page for an index.”

It is also referred to as “physical fragmentation”, which basically means that the extents on the disk are not contiguous.
If rebuilding an index does not bring this value down, it means your physical volume is fragmented.
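
As an aside, DBCC SHOWCONTIG (used below) is deprecated in later versions of SQL Server; its replacement, sys.dm_db_index_physical_stats, is worth knowing, though note that it reports logical fragmentation rather than extent scan fragmentation. A rough sketch, assuming the table name from this post:

-- Check fragmentation for the indexes on one table (table name assumed from this post)
SELECT index_id, index_type_desc, avg_fragmentation_in_percent, fragment_count, page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.Objects'), NULL, NULL, 'LIMITED')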

In my case, I have a SQL Server instance running on a NetApp SAN (FAS 3140).

I have a table called Objects containing more than 12 million rows, with a clustered index and a couple of non-clustered indexes.

When running the following command, I get an extent scan fragmentation of 60.59%:

DBCC SHOWCONTIG (Objects, IX_ObjectsExpires)

DBCC SHOWCONTIG scanning 'Objects' table...
Table: 'Objects' (309576141); index ID: 12, database ID: 17
LEAF level scan performed.
- Pages Scanned................................: 30593
- Extents Scanned..............................: 3852
- Extent Switches..............................: 3917
- Avg. Pages per Extent........................: 7.9
- Scan Density [Best Count:Actual Count].......: 97.63% [3825:3918]
- Logical Scan Fragmentation ..................: 0.88%
- Extent Scan Fragmentation ...................: 60.59%
- Avg. Bytes Free per Page.....................: 683.2
- Avg. Page Density (full).....................: 91.56%
DBCC execution completed. If DBCC printed error messages, contact your system administrator.

After rebuilding the index:

ALTER INDEX IX_ObjectsExpires ON [Objects] REBUILD;

the value of extent scan fragmentation has actually gone up!

DBCC SHOWCONTIG scanning 'Objects' table...
Table: 'Objects' (309576141); index ID: 12, database ID: 17
LEAF level scan performed.
- Pages Scanned................................: 31109
- Extents Scanned..............................: 3889
- Extent Switches..............................: 3888
- Avg. Pages per Extent........................: 8.0
- Scan Density [Best Count:Actual Count].......: 100.00% [3889:3889]
- Logical Scan Fragmentation ..................: 0.01%
- Extent Scan Fragmentation ...................: 69.17%
- Avg. Bytes Free per Page.....................: 806.1
- Avg. Page Density (full).....................: 90.04%
DBCC execution completed. If DBCC printed error messages, contact your system administrator.

As can be seen, the logical scan fragmentation has dropped to almost zero, but the extent scan fragmentation has gone up!

I then ran the “reallocate” command on the volume containing the database. Please note that this is a NetApp-specific command that reorders the blocks on a volume; other SAN vendors will provide their own equivalents.

reallocate start -f /vol/ascio_otherdb_data_vol

During the reallocation, CPU usage on the filer went up from about 1% to around 27%, but no performance hit was noticed on the database. After the reallocation completed, I rebuilt the index again:

ALTER INDEX IX_ObjectsExpires ON [Objects] REBUILD;

And hooray: the extent scan fragmentation dropped back down to 0.15%.

DBCC SHOWCONTIG scanning 'Objects' table...
Table: 'Objects' (309576141); index ID: 12, database ID: 17
LEAF level scan performed.
- Pages Scanned................................: 31109
- Extents Scanned..............................: 3889
- Extent Switches..............................: 3888
- Avg. Pages per Extent........................: 8.0
- Scan Density [Best Count:Actual Count].......: 100.00% [3889:3889]
- Logical Scan Fragmentation ..................: 0.00%
- Extent Scan Fragmentation ...................: 0.15%
- Avg. Bytes Free per Page.....................: 806.0
- Avg. Page Density (full).....................: 90.04%
DBCC execution completed. If DBCC printed error messages, contact your system administrator.


For a volume size of 400 GB, it took around 1 hour to complete the reallocation command on the filer.

Mon Mar 7 17:28:52 GMT [LDN-HA-SAN01: wafl.scan.start:info]: Starting file reallocating on volume ascio_otherdb_data_vol.
Mon Mar 7 18:26:41 GMT [LDN-HA-SAN01: wafl.reallocate.full.done:info]: Full reallocation on '/vol/ascio_otherdb_data_vol' is complete.

CXPACKET wait times and Degree of Parallelism

I have heard many times that the quickest way to improve the performance of a SQL Server instance is to throw more hardware at it. You would think this makes sense, but I have seen doubling the number of CPUs from 8 to 16 decrease a server's performance!

The reason is that, under the default server settings, SQL Server will try to execute queries across all the CPUs in parallel. Depending on the type of workload (OLTP or OLAP), queries may take much longer to execute, as splitting the query plan and combining the results again can take a lot longer than executing the query serially or with fewer CPUs.

Generally you will notice the parallelism issue on servers with more than 8 CPUs.
A quick way to check whether your server is suffering from excessive parallelism is to look at the CXPACKET wait times.

This query will give you the top 10 wait types.

SELECT TOP 10 * FROM sys.dm_os_wait_stats ORDER BY wait_time_ms DESC
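
To put CXPACKET in context, a variant that expresses each wait as a share of the total can be useful. This is just a sketch (the window function requires SQL Server 2005 or later):

-- Show each wait type as a percentage of all wait time recorded since the last restart/clear
SELECT TOP 10 wait_type, wait_time_ms,
       100.0 * wait_time_ms / SUM(wait_time_ms) OVER () AS pct_of_total_waits
FROM sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC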

If you find CXPACKET at the top of the list and you have more than 8 CPUs, you will probably benefit from lowering the degree of parallelism.
Also note that completely disabling parallelism (setting the value to 1) may adversely affect large (OLAP) queries, so you need to find the best value for your environment.
You can also set MAXDOP for individual queries, which overrides the server setting, as shown below.
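
For example, a per-query override uses the OPTION (MAXDOP n) hint; the query itself is just a hypothetical illustration:

-- Force this query to run serially regardless of the server-wide MAXDOP setting
SELECT COUNT(*)
FROM dbo.SomeLargeTable   -- hypothetical table
OPTION (MAXDOP 1)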

To change the degree of parallelism from the default of 0 (which means use all available CPUs) to, say, 4:

-- set to number of physical processors
EXEC sys.sp_configure N'max degree of parallelism', 4
GO
RECONFIGURE WITH OVERRIDE
GO

This will take effect immediately and no service restart is required.
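
To verify that the change has taken effect, you can query sys.configurations; the value_in_use column reflects the running value:

-- Confirm the running value of the setting
SELECT name, value, value_in_use
FROM sys.configurations
WHERE name = N'max degree of parallelism'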

Another option related to the degree of parallelism is the cost threshold SQL Server checks before deciding to use parallelism.
The default value is 5, i.e. SQL Server will only consider a parallel plan if the estimated cost of executing the query serially exceeds 5 (the cost units roughly correspond to seconds of elapsed time on a reference machine).

The following statement modifies this setting so that SQL Server will only use parallelism for queries whose estimated serial cost exceeds 10:

-- estimated serial plan cost (roughly, seconds elapsed on a reference machine)
EXEC sys.sp_configure N'cost threshold for parallelism', 10
GO
RECONFIGURE WITH OVERRIDE
GO

The pictures below show the effect of lowering MAXDOP on the processor queue length on one of my servers.

There is no fixed rule for what value you should use; it all depends on your specific environment. As a guideline, create a baseline of your server so that you can compare its performance before and after the change.
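
One simple way to baseline the wait picture, assuming you are happy to reset the cumulative counters, is to clear sys.dm_os_wait_stats before a representative workload window and snapshot it afterwards:

-- Reset the cumulative wait statistics (otherwise the counters accumulate since the last restart)
DBCC SQLPERF (N'sys.dm_os_wait_stats', CLEAR)
GO
-- ... run your representative workload, then snapshot the results for comparison
SELECT TOP 10 wait_type, wait_time_ms, signal_wait_time_ms, waiting_tasks_count
FROM sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC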

Friday 11 March 2011

Upgrading SQL Server 2008 to SP2 breaks reports

After upgrading an instance of SQL Server 2008 to SP2, I noticed that none of the reports in Management Studio were accessible, including the reports I frequently look at under the Management Data Warehouse.

The error message I was getting was:

The file 'Microsoft.ReportViewer.WinForms, Version=9.0.0.0, Culture=neutral, PublicKeyToken=b03xxxxxxx' cannot
be opened. Do you want to remove the reference to it from the Recent list?




To solve this, you need to download and install the Reporting Services Report Viewer 9.0, available from the following link:

http://www.microsoft.com/downloads/details.aspx?displaylang=en&FamilyID=6ae0aa19-3e6c-474c-9d57-05b2347456b1


The installation was very quick and did not require any restart.