I am having an issue with a stored procedure I wrote. The stored procedure executes pretty fast if I call it once or twice. However, my application needs to call the same stored procedure constantly, and when it does, the CPU spikes to 100%. If I stop the calls, the CPU comes back down to 0%.
To test what could be causing the issue, I wrote a WHILE loop in SQL to call my stored procedure 1000 times, and I was able to narrow the issue down to one line:
Select numID from StudentInfo where FirstName = 'FirstName'
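For reference, the test loop I used looks roughly like this (the procedure name below is just a placeholder, not my actual procedure):
DECLARE @i INT;
SET @i = 0;
WHILE @i < 1000
BEGIN
    -- placeholder name for my stored procedure
    EXEC dbo.usp_GetStudentInfo;
    SET @i = @i + 1;
END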
Note:
The StudentInfo table has over 1.5 million records.
numID is the primary key of the table.
To address the issue, I tried adding a nonclustered index on the FirstName column, but I cannot see any significant difference. The CPU still runs at 95% - 100% even after adding the index.
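For reference, the index I added is roughly this (the index name here is only illustrative):
CREATE NONCLUSTERED INDEX IX_StudentInfo_FirstName
ON dbo.StudentInfo (FirstName);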
If I remove the above line of code, the remaining code in the stored procedure runs at 40% CPU utilisation.
How can I run the above query without spiking the CPU utilisation?
Thanks in Advance!!
Please post the procedure code.
|||Could you try
Select numID from StudentInfo WITH (NOLOCK)
where FirstName = 'FirstName'
and see if there is any difference?
|||
Check the execution plan for this query to see whether it is an index seek or a scan. I think this is only part of a join; in that case, post the whole script and tell us whether there is any bookmark lookup involved. The NOLOCK hint should be applied very carefully: it is nothing but the READ UNCOMMITTED isolation level for that statement, and it can produce dirty reads.
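One quick way to see the plan from a query window is something like this (just a sketch; the literal 'FirstName' is the value from the original post):
SET SHOWPLAN_XML ON;
GO
-- The statement is not executed; only the estimated plan XML is returned.
SELECT numID FROM StudentInfo WHERE FirstName = 'FirstName';
GO
SET SHOWPLAN_XML OFF;
GO
Look for Index Seek vs Index Scan operators, and for a Key/Bookmark Lookup back to the clustered index.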
Madhu
I would first check the query plan to make sure that the index is even being used. Interesting query - a WHERE clause on FirstName. At any rate, you need to dig into this index further; quite possibly you will have to change it a bit. Look at various system views and procs - sp_lock, sysprocesses, etc. - to see the degree of locking and waits.
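For example, something along these lines gives a quick view of current waits and blocking (a sketch only; the spid > 50 filter is just the usual way to skip system sessions):
SELECT spid, blocked, lastwaittype, waittime, cpu, physical_io, status
FROM master.dbo.sysprocesses
WHERE spid > 50
ORDER BY cpu DESC;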
www.texastoo.com/sqlblog
|||Hi All,
I agree with Lee. The execution plan really will help you.
Did you try to create an index that matches the search query?
Below are the steps for finding the top 10 missing indexes.
The following query returns the 10 missing indexes that would produce the highest anticipated cumulative improvement for user queries, in descending order.
SELECT TOP 10 *
FROM sys.dm_db_missing_index_group_stats
ORDER BY avg_total_user_cost * avg_user_impact * (user_seeks + user_scans) DESC
You can get the missing index details in the following way:
The following query determines which missing indexes comprise a particular missing index group, and displays their column details.
For the sake of this example, the missing index group handle is 24. (You will need to replace it with the handle values returned by the earlier query.)
SELECT migs.group_handle, mid.*
FROM sys.dm_db_missing_index_group_stats migs
INNER JOIN sys.dm_db_missing_index_groups mig
ON (migs.group_handle = mig.index_group_handle)
INNER JOIN sys.dm_db_missing_index_details mid
ON (mig.index_handle = mid.index_handle)
WHERE migs.group_handle = 24 -- put your handle value here
For details on this refer to the following articles:
http://msdn2.microsoft.com/en-us/library/ms345421.aspx
Using Missing Index Information to Write CREATE INDEX Statements
http://msdn2.microsoft.com/en-us/library/ms345405.aspx
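As a rough illustration only (the columns below are assumptions based on the query in the original post, not actual DMV output), the equality_columns, inequality_columns and included_columns returned above translate into a CREATE INDEX statement along these lines:
-- Illustrative: FirstName assumed as an equality column, numID as an included column
CREATE NONCLUSTERED INDEX IX_StudentInfo_FirstName_numID
ON dbo.StudentInfo (FirstName)
INCLUDE (numID);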
|||Check the execution plan. Also, could you try updating the statistics and running the Database Engine Tuning Advisor (DTA)?
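For example (table name taken from the original post; FULLSCAN can be slow on 1.5 million rows, so the sampled default may be enough):
-- Refresh statistics on the table used by the problem query
UPDATE STATISTICS dbo.StudentInfo WITH FULLSCAN;
-- Or refresh all statistics in the database with the default sampling
EXEC sp_updatestats;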
HTH
Vishal
|||Hi all, I am looking for an SP to meet this requirement:
Identify queries that are taking a long time to run on our server: what the query is, how long it took, what the user name is, and what client machine it ran from.
thanks|||SQL Server 2005 Profiler should help - select the appropriate columns and events to find the expensive queries.|||I can do it from Profiler,
but I need an SP / query to do the same, or to filter the results by the columns required.
Waiting for the reply.
|||Why not run the Profiler with the relevant options and save the trace to a table - then you can run queries on that table for the info you require.
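As a sketch of that approach (the file path and table name are only examples), once the trace has been saved to a file you can load it into a table and query it:
-- Load a saved trace file into a table
SELECT TextData, Duration, LoginName, HostName, StartTime
INTO dbo.TraceResults
FROM fn_trace_gettable('C:\Traces\LongRunning.trc', DEFAULT);
-- Longest-running statements first (Duration is in microseconds in SQL Server 2005)
SELECT TOP 20 TextData, Duration, LoginName, HostName, StartTime
FROM dbo.TraceResults
ORDER BY Duration DESC;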