
Wednesday, March 21, 2012

query performance question

Hello. I have a query performance question.
I need to optimize this procedure:

CREATE PROCEDURE dbo.SECUQUSRCOMPACCES
@P1 VARCHAR(50),
@P2 INTEGER
AS
DECLARE @IORGANIZATIONID INTEGER
EXECUTE dbo.ORGNQGETORGID @PORGUNIQUEID = @IORGANIZATIONID OUTPUT

SELECT TSECCOMP.ID,
CASE TSECPROFILEGRP.ACCESSTYPE
WHEN -1 THEN
CASE TSECCLASS.DEFAULTACCESS
WHEN -1 THEN
CASE TSECGROUPCOMP.DEFAULTACCESS
WHEN -1 THEN
TSECCOMP.DEFAULTACCESS
ELSE
TSECGROUPCOMP.DEFAULTACCESS
END

ELSE
TSECCLASS.DEFAULTACCESS
END
ELSE TSECPROFILEGRP.ACCESSTYPE
END AS EXPR1
FROM TSECCOMP
INNER JOIN ((TSECPROFILE
INNER JOIN (TSECCLASS
INNER JOIN TSECPROFILEGRP
ON TSECCLASS.UNIQUEID = TSECPROFILEGRP.SECURITYGROUPID)
ON TSECPROFILE.UNIQUEID = TSECPROFILEGRP.PROFILEID) INNER JOIN
TSECGROUPCOMP ON TSECCLASS.UNIQUEID = TSECGROUPCOMP.SECURITYGROUPID)
ON TSECCOMP.UNIQUEID = TSECGROUPCOMP.SECCOMPID
WHERE
(
CASE TSECPROFILEGRP.ACCESSTYPE
WHEN -1 THEN
CASE TSECCLASS.DEFAULTACCESS
WHEN -1 THEN
CASE TSECGROUPCOMP.DEFAULTACCESS
WHEN -1 THEN
TSECCOMP.DEFAULTACCESS
ELSE
TSECGROUPCOMP.DEFAULTACCESS
END
ELSE
TSECCLASS.DEFAULTACCESS
END
ELSE TSECPROFILEGRP.ACCESSTYPE
END > 0 ) AND (TSECPROFILE.KEYVALUE = @P1) AND (TSECCOMP.TYPE = @P2)
AND TSECCOMP.ORGANIZATIONID = @IORGANIZATIONID
GO
Thank you in advance.

Make sure you have indexed the join keys. You can try running the Index
Tuning Wizard for advice.

Gert-Jan

inna wrote:
> Hello. I have a query performance question.
> I need to optimize this procedure
> [quoted post snipped]

|||

Hi

Check out the query execution plan

http://www.sql-server-performance.c...an_analysis.asp

http://www.sql-server-performance.com/transact_sql.asp

John

"inna" <mednyk@hotmail.com> wrote in message
news:347a408b.0309131204.19c074e8@posting.google.com...
> Hello. I have a query performance question.
> I need to optimize this procedure
> [quoted post snipped]
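A side note on the query itself: the cascading "-1 means fall through to the next default" logic can also be written with COALESCE and NULLIF, which keeps the rule in one place instead of two nested-CASE copies. A sketch only, assuming -1 is the only sentinel and the access columns are otherwise NOT NULL (not tested against this schema; the plan is likely the same, the gain is maintainability):

NULLIF maps the -1 sentinel to NULL, and COALESCE then picks the first non-sentinel access value, exactly like the nested CASE:

SELECT TSECCOMP.ID,
       COALESCE(NULLIF(TSECPROFILEGRP.ACCESSTYPE, -1),
                NULLIF(TSECCLASS.DEFAULTACCESS, -1),
                NULLIF(TSECGROUPCOMP.DEFAULTACCESS, -1),
                TSECCOMP.DEFAULTACCESS) AS EXPR1
FROM TSECCOMP
INNER JOIN TSECGROUPCOMP ON TSECCOMP.UNIQUEID = TSECGROUPCOMP.SECCOMPID
INNER JOIN TSECCLASS ON TSECCLASS.UNIQUEID = TSECGROUPCOMP.SECURITYGROUPID
INNER JOIN TSECPROFILEGRP ON TSECCLASS.UNIQUEID = TSECPROFILEGRP.SECURITYGROUPID
INNER JOIN TSECPROFILE ON TSECPROFILE.UNIQUEID = TSECPROFILEGRP.PROFILEID
WHERE COALESCE(NULLIF(TSECPROFILEGRP.ACCESSTYPE, -1),
               NULLIF(TSECCLASS.DEFAULTACCESS, -1),
               NULLIF(TSECGROUPCOMP.DEFAULTACCESS, -1),
               TSECCOMP.DEFAULTACCESS) > 0
  AND TSECPROFILE.KEYVALUE = @P1
  AND TSECCOMP.TYPE = @P2
  AND TSECCOMP.ORGANIZATIONID = @IORGANIZATIONID

The repeated COALESCE in the WHERE clause could also be computed once in a derived table and filtered on in the outer query.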

Friday, March 9, 2012

Query Optimize.

Hi all,

Table 'A' has 105 columns and 233,000 records,
with one clustered primary key and 10 nonclustered indexes.

If I join to table 'A', or run SELECT * FROM A, the query takes a long time, so please let me know the best way to structure the query, table, indexes, etc.

Please reply ASAP.

Regards,
M. G.

Without the DDL (the CREATE TABLE and CREATE INDEX statements in particular) and the queries that are running slow, I'm pretty severely handicapped in giving you much advice.

The only real insight that I can give without more information is to be sure that your statistics are current, using the UPDATE STATISTICS (http://msdn.microsoft.com/library/default.asp?url=/library/en-us/tsqlref/ts_dbcc_4vxn.asp) statement.

-PatP
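To make that advice concrete, here is a minimal sketch; the second statement's table and column names are placeholders, since the real DDL was never posted:

-- Refresh statistics on the slow table so the optimizer has current data.
UPDATE STATISTICS A WITH FULLSCAN;

-- On a 105-column table, prefer an explicit column list over SELECT *:
-- only the columns the report needs are read and returned.
-- (Table B, column names, and the join key are hypothetical.)
SELECT a.ID, a.Name, b.OrderDate
FROM A AS a
INNER JOIN B AS b ON b.A_ID = a.ID;  -- the join key b.A_ID should be indexed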

Query optimize

can this be rewritten in a more optimized way?

DELETE FROM A WHERE ID NOT IN (SELECT ID FROM @ABC)

|||

DELETE T FROM A AS T WHERE NOT EXISTS (SELECT ID FROM @ABC WHERE ID = T.ID)

Madhivanan

|||

You cannot use a variable table name. Indexes will help, but the
optimizer is smart enough to pick a good plan.

|||

You could possibly try:
DELETE def
WHERE NOT EXISTS
(
SELECT 1
FROM abc
WHERE abc.ID = def.ID
)
Frank
"Sunny" <Sunny@discussions.microsoft.com> wrote in message
news:1BF75571-9841-41EC-95EF-16D6837095C9@microsoft.com...
> can this be rewritten in more optimized way..
> DELETE FROM A WHERE ID Not In (SELECT ID FROM @ABC)

|||

delete A
from A
left join B
on B.CommonKey = A.CommonKey
where (B.CommonKey is null)

ML

|||

On Thu, 11 Aug 2005 06:54:01 -0700, Sunny wrote:

> can this be rewritten in more optimized way..
> DELETE FROM A WHERE ID Not In (SELECT ID FROM @ABC)

As Celko said, you can't use a variable table name directly. The best you
can do is dynamic SQL:

declare @abc nvarchar(32)
declare @sql nvarchar(500)
set @abc = 'MyTable'
set @sql = N'DELETE FROM A WHERE NOT EXISTS (SELECT * FROM ' + @abc + ' B WHERE A.ID = B.ID)'
execute sp_executesql @sql

|||

Ah, I must have just breezed through the DML statement. I didn't catch the
@. Sunny, if you were using a declared table variable by design, you should
follow the advice in the posts that followed mine.
Frank Castora.
"Sunny" <Sunny@discussions.microsoft.com> wrote in message
news:1BF75571-9841-41EC-95EF-16D6837095C9@microsoft.com...
> can this be rewritten in more optimized way..
> DELETE FROM A WHERE ID Not In (SELECT ID FROM @ABC)

|||

Before you issue blanket statements like the one below, you should check
your facts. You can indeed use a variable table name, provided the variable
table name refers to an existing table variable. Table variables are a
feature of SQL Server 2000. You refer to a table variable in the same way
as a scalar variable.

"--CELKO--" <jcelko212@earthlink.net> wrote in message
news:1123769246.878870.169080@g44g2000cwa.googlegroups.com...
> You cannot use a variable table name. Indexes will help, but the
> optimizer is smart enough to pick a good plan.
>
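One more difference between the NOT IN and NOT EXISTS forms discussed above is worth spelling out: they behave differently when the table variable contains a NULL. A small sketch (table A and the data are hypothetical):

DECLARE @ABC TABLE (ID INT)
INSERT INTO @ABC (ID) VALUES (1)
INSERT INTO @ABC (ID) VALUES (NULL)

-- NOT IN: "ID NOT IN (1, NULL)" evaluates to UNKNOWN for every ID other
-- than 1, so this deletes nothing at all.
DELETE FROM A WHERE ID NOT IN (SELECT ID FROM @ABC)

-- NOT EXISTS: the NULL row can never satisfy b.ID = A.ID, so this
-- deletes every row of A whose ID is not 1, as intended.
DELETE FROM A WHERE NOT EXISTS (SELECT 1 FROM @ABC b WHERE b.ID = A.ID)

This is a second reason, beyond performance, to prefer NOT EXISTS (or the LEFT JOIN form) when the subquery's column is nullable.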

Query optimization question...

I'm trying to optimize some queries on an existing system, and I'm
noticing some odd behavior. I'm performing a join between several
tables, the final table being joined by the optimizer has a clustered
index on the field that it is using to join to the rest of the query,
but for some reason SQL Server doesn't seem to actually use this index
(it's doing an index scan instead of an index seek). Is there some
reason why SQL Server would not use a valid Clustered Index? I've
dropped and re-added the index, but that doesn't seem to help. I don't
know if it would be relevant, but the tables I'm working on are fairly
fat (2 to 7K bytes/row).

This is happening for several tables. I've been able to get around it
for some of the tables by creating a non-clustered index on all the
fields that are being queried so that the leaf pages don't need to be
loaded, but this isn't a valid solution for all of the tables I'm
struggling with.

Any ideas? (and no, they aren't willing to redesign any of the
tables).

"Mathew Relick" <ticars@yahoo.com> wrote in message
news:dd84d8a7.0311181017.39d1c69@posting.google.com...
> I'm trying to optimize some queries on an existing system, and I'm
> noticing some odd behavior. I'm performing a join between several
> tables, the final table being joined by the optimizer has a clustered
> index on the field that it is using to join to the rest of the query,
> but for some reason SQL Server doesn't seem to actually use this index
> (it's doing an index scan instead of an index seek). Is there some
> reason why SQL Server would not use a valid Clustered Index? I've
> dropped and readded the index, but that doesn't seem to help. I don't

This can happen if your table statistics are out of date. If SQL Server
determines that the join will require more than x % of the table to be
retrieved, then doing a clustered index scan may be faster than doing a
clustered index seek (faster because an index scan can read the rows in a
page sequentially, whereas an index seek has to traverse the B-tree
structure. Sequential read is faster because you cut down on seek time, and
also because you may be able to read more than one page in a single I/O
operation, since the data is sequential.) Try updating statistics and see
if it helps:

UPDATE STATISTICS <table_name> WITH FULLSCAN
GO

> know if it would be relevant, but the tables I'm working on are fairly
> fat (2 to 7K bytes/row).
> This is happening for several tables. I've been able to get around it
> for some of the tables by creating a non-clustered index on all the
> fields that are being queried so that the leaf pages don't need to be
> loaded, but this isn't a valid solution for all of the tables I'm
> struggling with.

This is because when you create a new index, new statistics are generated
for that index, so you have the most up-to-date statistics with your new
index. Same if you rebuild your existing indexes.

HTH,
Dave

>
> Any ideas? (and no, they aren't willing to redesign any of the
> tables).

|||

"Dave Hau" <nospam_dave_nospam_123@nospam_netscape_nospam.net_nospam> wrote
in message news:gCvub.33788$yj4.5497@newssvr27.news.prodigy.com...
> [Dave's reply quoted in full; snipped]

I'm going to chime in because I think Dave has some good points here.
However, as I understand it, the original poster did rebuild the clustered
index, so there may be more to this problem than meets the eye.

One other thing that can happen is that the optimizer decides it's faster to
do a scan instead of a seek. This is particularly true if the result it
expects to return is a large percentage of the index. (i.e. if you have 100
rows and will return 80).

I'll be honest, I'm not sure exactly how this applies with a clustered
index.

> HTH,
> Dave
> > Any ideas? (and no, they aren't willing to redesign any of the
> > tables).
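For what it's worth, the non-clustered "index on all the queried fields" workaround described in this thread is what later versions of SQL Server (2005 and up) express directly with included columns, which keep the index key small while still covering the query. A sketch with hypothetical table and column names:

-- Key on the join column only; the other queried columns ride along in
-- the leaf level, so the query is answered without touching the fat
-- (2-7 KB/row) clustered index at all.
CREATE NONCLUSTERED INDEX IX_BigTable_JoinKey
ON dbo.BigTable (JoinKey)
INCLUDE (ColA, ColB, ColC);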

Query optimization in query that uses CTE.

Hi Guys,

I want to optimize a query which uses a CTE (a SQL Server 2005 feature).
I am sending you the abstract query.

Currently this query takes 4-5 seconds to execute,
but I want to reduce it to 1 second.
Please help me if you know how to do it.


--
DECLARE @X INT
DECLARE @LowerGradeRange INT
DECLARE @UpperGradeRange INT
DECLARE @Keyword NVARCHAR(500)

SET @X = 11500001
SET @LowerGradeRange = NULL
SET @UpperGradeRange = NULL
SET @Keyword = ''

IF ISNULL(@Keyword,'')=''
SET @Keyword='';


WITH SelPath (path_id,x,y,z,r)
AS
(

-- Anchor member definition (returns base result set)
SELECT path_id,x,y,z,r

FROM tab1 a
INNER JOIN tab2 b ON a.x= b.x

WHERE
a.x = @X
-- AND (a.parent IS NULL OR a.parent = 0)
AND
CASE
WHEN ISNULL(@LowerGradeRange,'')='' THEN 1
WHEN ISNULL(@LowerGradeRange,'')<>'' AND b.lgr >= @LowerGradeRange THEN 1
END=1
AND
CASE
WHEN ISNULL(@UpperGradeRange,'')='' THEN 1
WHEN ISNULL(@UpperGradeRange,'')<>'' AND b.ugr <= @UpperGradeRange THEN 1
END=1
AND
CASE
WHEN @Keyword <>'' AND b.y LIKE @Keyword THEN 1
ELSE 1
END =1


UNION ALL

-- Recursive member definition
-- (returns the direct subordinate(s) of the activity in the anchor member result set)


SELECT path_id,x,y,z,r
FROM SelPath b
INNER JOIN tab1 a ON a.parent = b.path_id
INNER JOIN tab2 c ON a.x = c.x
WHERE
CASE
WHEN ISNULL(@LowerGradeRange,'')='' THEN 1
WHEN ISNULL(@LowerGradeRange,'')<>'' AND c.lgr >= @LowerGradeRange THEN 1
END=1
AND
CASE
WHEN ISNULL(@UpperGradeRange,'')='' THEN 1
WHEN ISNULL(@UpperGradeRange,'')<>'' AND c.ugr <= @UpperGradeRange THEN 1
END=1
AND
CASE
WHEN @Keyword <>'' AND c.y LIKE @Keyword THEN 1
ELSE 1
END =1

)

-- Statement that executes the CTE
SELECT path_id,x,y,z,r

FROM SelPath a
INNER JOIN pce.qq c ON a.r = c.r

ORDER BY x

--

Reply soon...
bye
take care

Regards,
-Surendra


Go to the T-SQL forum. You'll get more help there.

-Jamie
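A general observation on queries like this one: wrapping optional filters in CASE ... END = 1 usually keeps the optimizer from seeking on the filtered columns. Writing them as plain OR predicates is often faster and simpler. A sketch of the anchor member's WHERE clause in that style (assuming an unused range is passed as NULL, not 0; note that the original @Keyword CASE has ELSE 1, so as posted it filters nothing, whereas this version actually applies the LIKE):

WHERE a.x = @X
  AND (@LowerGradeRange IS NULL OR b.lgr >= @LowerGradeRange)
  AND (@UpperGradeRange IS NULL OR b.ugr <= @UpperGradeRange)
  AND (@Keyword = '' OR b.y LIKE @Keyword)

The recursive member's filters can be rewritten the same way against alias c.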

Query Optimization - Please Help

Hi,

Can anyone help me optimize the SELECT statement in the 3rd step? I am writing a monthly report: for each employee (500 employees), one row displays his attendance totals for all days in a month. The problem is that the 3rd step actually runs 31 SELECT statements whose results are assigned to 31 variables. After assigning these variables, I insert them into a table (4th step) and display it. The troublesome part is the 3rd step: with 500 employees, the variables are assigned and inserted into the table 500x31 times. This takes more than 4 minutes, which I know should not be necessary :). Can anyone help me optimize the SELECT statements in the 3rd step, or suggest something better?

DECLARE @EmpID, @DateFrom, @Total1 ... // Declaring different variables

SELECT @DateFrom = // Set to start of any month e.g. 2007-06-01 ..... 1st

Loop (condition -- Get all employees, working fine)

BEGIN

SELECT @EmpID = // Get EmployeeID ..... 2nd

SELECT @Total1 = SUM (Abences) ..... 3rd

FROM Attendance

WHERE employee_id_fk = @EmpID (from 2nd step)

AND Date_Absent = DATEADD ("day", 0, Convert (varchar, @DateFrom)) (from 1st step)

SELECT @Total2 ..................... same as above

SELECT @Total3 ..................... same as above

INSERT IN @TABLE (@EmpID, @Total1, ..... @Total31) ..... 4th

Iterate (condition) to next employee ..... 5th

END

It's only the loop which consumes the 4 minutes. If I can somehow optimize this part, I will be most satisfied. Thanks to anyone helping me...

What does the Attendance table look like? I have some ideas for you but I need to know how the attendance is stored. Can you give us the schema of that table please? Thanks!

|||

See this sample example-->

========================================================================================

Declare @fromDate datetime,
@toDate datetime

Set @fromDate = '1-Aug-2007'
Set @toDate = '4-Aug-2007'


Select distinct a.dtAttendate,

'STATUS'=(select
(case
when lv_status='EL' then 'EL'
when lv_status='CL' then 'CL'
when lv_status='SL' then 'SL'
when lv_status='ML' then 'ML'
when (wk_status='S' AND log_status='P') then 'SB'
when (wk_status='S' AND log_status='A') then 'Absent (SB)'
when wk_status='N' then 'Weekend'
when bIsHoliday=1 then 'Holiday'
when log_status='A' then 'Absent'
--when mnyLateHrs>0 then 'Late'
when log_status='P' then 'Present'
end)
from tblLogAbsent
where dtAttendate = a.dtAttendate and intEmpCode=a.intEmpCode),

'TIMEIN'=(select dtEmpTimeIn from tblLogStatus where dtAttendate=a.dtAttendate and intEmpCode=a.intEmpCode),

'TIMEOUT'=(select dtEmpTimeOUT from tblLogStatus where dtAttendate=a.dtAttendate and intEmpCode=a.intEmpCode)

from tblLogStatus a
where a.dtAttendate between @fromDate and @toDate

group by a.dtAttendate, intEmpCode

========================================================================================

Here I use two different tables: "tblLogAbsent" for the attendance status and "tblLogStatus" for the additional information. This should not be a problem; if you can't understand any line of the code, ask me again. Hopefully this will be helpful to you.

|||

Hmmm... I'm not sure how to use this information. In your initial code, you do a SELECT from a table called Attendance, which seems to have columns like Abences, employee_id_fk and Date_Absent. Can you give us more details on this table please? Thanks.

|||

Hi johram,

Thanks for replying. Yes you are right. The attendance table for employees has columns like EmpID_fk, Attendance_Date, ... , Attendance_Total. The Attendance_Total column is dependent on our business rules which include reason for signing in/out. e.g. If an employee has signed out for some official task, 1 is added to his Attendance_Total column. If he is going away for a business tour, 2 may be added to his Attendance_Total column. So in one day, an employee can have more than one record. The records for an employee may look like the following:

EmpID Attendance_Date ... Attendance_Total

1001 04/28/2006 1

1001 04/28/2006 2

1001 04/28/2006 1

So on 28th April, the total for Emp (1001) = 4. There are other columns in there but im only concerned with Attendance_Total. I need to display the SUM (Attendance_Total) for each day in a month for each employee.

If I further elaborate my report based on the above example, it may look something like:

EmpID EmpName D1 D2 D3 D4 D5 D6 ......

1001 ABC 1 0 4 4 1 0

I have tried a few techniques (within my experience :)), but when it comes to computing the sum for each day, it takes almost 4-5 minutes, which I am sure nobody wants. Also, this report can be accessed at any time within a day, and employees keep coming and going on business, so I can't store the records; I need to compute them every time the report is accessed.

Thanks to you all for helping me...
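Given that schema, a set-based query can replace the 500x31 single-row loop entirely: one GROUP BY pass computes every per-day total at once, and SQL 2005's PIVOT turns the days into columns. A sketch only, untested against the real table (EmpID_fk, Attendance_Date, and Attendance_Total are from the description above; the month bounds are assumed parameters):

-- One pass over Attendance instead of 500 x 31 separate queries.
DECLARE @DateFrom DATETIME, @DateTo DATETIME
SET @DateFrom = '2007-06-01'
SET @DateTo = DATEADD(month, 1, @DateFrom)

SELECT EmpID_fk, [1], [2], [3], /* ... */ [31]
FROM (
    SELECT EmpID_fk,
           DAY(Attendance_Date) AS d,   -- day-of-month becomes the pivot key
           Attendance_Total
    FROM Attendance
    WHERE Attendance_Date >= @DateFrom AND Attendance_Date < @DateTo
) AS src
PIVOT (SUM(Attendance_Total) FOR d IN ([1], [2], [3], /* ... */ [31])) AS p

Days with no rows come back as NULL; wrapping each day column in ISNULL([n], 0) displays them as 0, which also covers the "absent day must show 0" requirement raised later in this thread.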

|||

Hi patuary,

Thanks for your reply as well. I have tried your technique and I understand it, I bet :) Anyway, after applying it, there are the following two problems:

1. All records are being returned as rows, e.g. the records for Emp 1001 in a month are not in a single row; rather, a separate row is returned for each day.

2. This query only returns data for days on which attendance was marked. But if there was a weekend or the employee was absent, his attendance for that day is not computed, as there are no records. In my case, if he was absent or no record is found on a given day, his attendance must be shown as 0.

Thanks again for your time...

|||

Guys, please provide your valuable feedback...

|||

Is it important that you get a result with all the days, even if the sum is zero? Because that will make the SQL a bit more complicated. You can have the SQL report back all days that actually have a total (greater than zero), and then in your GUI you can render the rest of the days as empty. In that case, I think we can work out a solution for you. At least that's what we'll start with ;-) I'll see what I can do!

Also, what's the datatype of your Attendance_Date column?

|||

If you are using SQL 2005, there should be a new statement called PIVOT, although it is nowhere to be found in the T-SQL reference manual on MSDN. Maybe you are luckier than me ;-) Pivot is the term for when you shift the layout of a table so that you look on it from a different perspective. In this case, you want to pivot the table on the date so that each date represents a column rather than a row.

Now, this can be done with a function called Crosstable, which was developed by the legendary Rob Volk. The source code for this function can be found here. Note that you need to change the column "pivot" to "tpivot" or something, since "pivot" is a keyword in SQL 2005.

This is the modified version of Crosstable that will work in SQL 2005:

ALTER PROCEDURE crosstab
    @select varchar(8000),
    @sumfunc varchar(100),
    @pivot varchar(100),
    @table varchar(100)
AS
DECLARE @sql varchar(8000), @delim varchar(1)
SET NOCOUNT ON
SET ANSI_WARNINGS OFF

EXEC ('SELECT ' + @pivot + ' AS tpivot INTO ##pivot FROM ' + @table + ' WHERE 1=2')
EXEC ('INSERT INTO ##pivot SELECT DISTINCT ' + @pivot + ' FROM ' + @table + ' WHERE ' + @pivot + ' Is Not Null')

SELECT @sql = '', @sumfunc = stuff(@sumfunc, len(@sumfunc), 1, ' END)')

SELECT @delim = CASE Sign(CharIndex('char', data_type) + CharIndex('date', data_type))
                WHEN 0 THEN '' ELSE '''' END
FROM tempdb.information_schema.columns
WHERE table_name = '##pivot' AND column_name = 'tpivot'

SELECT @sql = @sql + '''' + convert(varchar(100), tpivot) + ''' = ' +
       stuff(@sumfunc, charindex('(', @sumfunc) + 1, 0,
             ' CASE ' + @pivot + ' WHEN ' + @delim + convert(varchar(100), tpivot) + @delim + ' THEN ')
       + ', '
FROM ##pivot

DROP TABLE ##pivot

SELECT @sql = left(@sql, len(@sql) - 1)
SELECT @select = stuff(@select, charindex(' FROM ', @select) + 1, 0, ', ' + @sql + ' ')

EXEC (@select)
SET ANSI_WARNINGS ON

Now, to demonstrate the power of this function I made a quick sample for you to push you in the right direction:

EXECUTE crosstab 'SELECT EmpId FROM Attendance GROUP BY EmpId', 'SUM(Attendance_total)', 'Attendance_date', 'Attendance'

This will give you a matrix with all the employees vertically, and horizontally you will have all unique dates, with the respective attendance total for each employee on that day. As I said earlier, this will not give you all the days of the month, unless there are data for each day. So you might need to do some logic in your GUI to render "empty" days correctly. Good luck!

|||

Hi Johram,

Once again, thanks for your time. I really appreciate all your help. Yes, you're right, we do need to handle all days in a month. But as you mentioned, this can be handled in my logic, so I'm less concerned about the days without any data.

Anyway, I'm aware of the PIVOT function. It's basically used to convert rows into columns. Indeed, the functionality I have in my stored procedure does the same job: I have a temporary table into which I insert 31 rows for each employee, so for 500 employees I insert 500x31 rows. Later I convert the rows into columns and display them. Although I do not use the PIVOT function, I did once give it a try, and the processing time was similar to what I have right now.

Still, I'm not ready to back out and will definitely give your solution a try. Let me see what I can get out of it. By the way, Johram, if you have dealt with any monthly or annual reports in the past, how much time do they usually take to display? Do you think I'm being overambitious in displaying such a report, or do such reports simply take their time?

Once again, thanks a lot for all the help you have provided...

|||

Sorry, I haven't done exactly this kind of report before. But it will depend on the amount of data you are trying to cover. Is it relevant to show ALL employees in a list/report? Maybe you should restrict it to region, or last name, or something. Try to do a selection out of the 500 if possible.

Although I haven't been able to compare this crosstable approach with your first query, I still think it might be faster. Try implementing it and see for yourself. Good luck!

|||

Hi Johram

Thanks a lot for your time and patience. I really appreciate your efforts and the help you have provided.

keep up the good work...