Hi,
We have an application which fetches data from a table that has approximately 1 million records.
1. Nearly 25 users will be using this application concurrently.
2. Frequent updates are made to the records in the geographyrolecurriculum table.
3. This table has 1 clustered index and 4 nonclustered indexes on it.
Problem Statement:
1. The application runs smoothly for 15 - 20 days and after that all the screens throw timeout errors.
When I clear sys.syscacheobjects it works fine again and the screens load quickly.
Can you tell me why clearing syscacheobjects makes execution fast? Is this the correct way to solve the timeout issue, or is there an alternative?
2. Will the stored procs time out if tempdb is full?
Thanks,
Arunprasad
Hi,
1. Not sure if you're doing this, but you should NEVER manually make changes to the system tables. I am hoping that you are running something like DBCC FREEPROCCACHE to clear out the cache.
What I reckon is happening when you clear out the cache is you are basically forcing all the queries to generate new execution plans, and the new plans seem to be faster than the older ones.
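For reference, this is the supported way to flush the plan cache on demand (a one-line sketch; note it throws away every cached plan on the instance, so every query recompiles afterwards, which is why it should stay a diagnostic step rather than a fix):
Code Snippet
-- Flush all cached execution plans on the instance (SQL Server 2005).
-- Subsequent queries compile fresh plans, which is why the screens
-- temporarily speed up after the cache is cleared.
DBCC FREEPROCCACHE;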
What you need to do to ensure that the query optimiser has enough information to determine the correct query plan is the following (a combined sketch follows this list):
a. Maintain your indexes
DBCC INDEXDEFRAG
ALTER INDEX ….. REBUILD / DBCC DBREINDEX
b. Keep your statistics up-to-date
sp_createstats / CREATE STATISTICS
sp_updatestats / UPDATE STATISTICS
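A minimal sketch of both steps, assuming the geographyrolecurriculum table from your post and a regular maintenance window (schema and options are illustrative only):
Code Snippet
-- a. Rebuild all indexes on the table (SQL Server 2005 syntax;
--    DBCC INDEXDEFRAG / DBCC DBREINDEX are the older equivalents).
ALTER INDEX ALL ON dbo.geographyrolecurriculum REBUILD;

-- b. Refresh statistics on the same table with a full scan so the
--    optimiser has accurate row-count and density information.
UPDATE STATISTICS dbo.geographyrolecurriculum WITH FULLSCAN;

-- Or refresh every statistic in the current database in one pass.
EXEC sp_updatestats;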
2. Managing your tempdb is a task in itself. This is a very interesting read:
http://www.microsoft.com/technet/prodtechnol/sql/2005/workingwithtempdb.mspx
Basically, if your sps are creating temp tables, or you are using ORDER BY, GROUP BY, etc., you will be using your tempdb. There are issues around tempdb, but they can be managed.
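And to answer the second question directly: if tempdb fills up, the queries that need it will fail or time out. A quick way to see what is consuming it is the space-usage DMV (a sketch; in SQL Server 2005 this DMV reports on tempdb):
Code Snippet
-- Summarise free space and what is reserving pages in tempdb.
SELECT
    SUM(unallocated_extent_page_count)       AS free_pages,
    SUM(user_object_reserved_page_count)     AS user_object_pages,     -- temp tables, table variables
    SUM(internal_object_reserved_page_count) AS internal_object_pages, -- sorts, hashes, spools
    SUM(version_store_reserved_page_count)   AS version_store_pages
FROM sys.dm_db_file_space_usage;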
Hi xrayb,
Thanks for the suggestion. We did what you advised us to do, but the customer says it's a hectic process.
So can you suggest a workaround?
1. I have turned update statistics ON for all the databases (at database level) on my server. Will this have an effect on the table as well?
(e.g. UPDATE STATISTICS <tablename>)
2. Will the new execution plan always be better than the older one? This is happening very frequently in my application.
3. Do you have any other suggestions or findings on why this timeout happens so frequently?
4. I appreciate the effort you have put in to clear my doubts.
Thanks,
Arun
1. I have a couple of issues with turning the create / update stats feature on at DB level. They may not be issues in your environment, but I see them as:
a. You have no control over when it's run.
b. The update is triggered when the stats are already out of date, and I think this is based on a percentage (I’ve heard 10% and 20%). This is not a problem when you have 1,000 rows in your table, but if you have 50,000,000 then 10% is 5,000,000 rows of change, which is an awfully long time to wait for the stats to be updated. BTW, you can enable trace flag 205 to check how often this happens.
Also compare your estimated and actual graphical execution plans; this will also show whether the stats are current.
Updating stats should be run each time there's a substantial change to the data / indexes. Also, ALTER INDEX can be an online operation (although I would only run it during the working day under duress).
Use the DBCC SHOW_STATISTICS command to check individual tables and see what their current state is regarding stats.
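A sketch of that check, assuming your table name and one of its indexes (the index name here is a placeholder):
Code Snippet
-- Show the statistics header, density vector and histogram for one index.
-- "Updated" in the header tells you when the stats were last refreshed,
-- and "Rows" vs "Rows Sampled" tells you how they were gathered.
DBCC SHOW_STATISTICS ('dbo.geographyrolecurriculum', 'IX_geographyrolecurriculum_sample');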
2. The execution plan may be different, and hopefully it will be quicker and more efficient (but not always). You may need to do some research into why this is happening; again, like managing your tempdb, performance tuning is an art in itself. Have a look at the link below; it will give you enough information to query the DMVs and check how your query is being treated by the query optimiser (like the query plan).
http://www.microsoft.com/technet/prodtechnol/sql/2005/tsprfprb.mspx
I have slightly modified one of the queries; it returns a top-50 overview of which currently cached batches or procedures are using the most CPU and how many times they have been executed.
Code Snippet
-- Top 50 cached batches / procedures by total CPU time, with execution counts.
SELECT TOP 50
    SUM(qs.total_worker_time) AS total_cpu_time,
    SUM(qs.execution_count)   AS total_execution_count,
    COUNT(*)                  AS number_of_statements,
    qs.plan_handle,
    eqp.objectid,
    OBJECT_NAME(eqp.objectid) AS object_name
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS eqp
GROUP BY qs.plan_handle, eqp.objectid
ORDER BY SUM(qs.total_worker_time) DESC, SUM(qs.execution_count) DESC;
3. You basically need to identify which queries are performing badly. You can capture them with the above-mentioned script, with server-side traces, or with Profiler; all of these have options to return long-running queries. Once you have identified these queries you can start working on them individually.
From what you have described I reckon you have some poorly written queries, and as soon as there's a load on the server or the stats are slightly outdated they just collapse in on themselves. I would try and capture the offending query and examine it; you won't have to go through many before you realise if one is poorly written.
Have a look at the DMVs too.
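For example, this is one way to pull the slowest cached statements out of the DMVs, ordered by average elapsed time (a sketch along the same lines as the CPU query above):
Code Snippet
-- Top 20 cached statements by average elapsed time, with their text.
SELECT TOP 20
    qs.total_elapsed_time / qs.execution_count AS avg_elapsed_time,
    qs.execution_count,
    SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
        ((CASE qs.statement_end_offset
              WHEN -1 THEN DATALENGTH(st.text)
              ELSE qs.statement_end_offset
          END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY avg_elapsed_time DESC;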
From BOL:
Find the 10 missing indexes with the highest anticipated improvement for user queries
SELECT TOP 10 *
FROM sys.dm_db_missing_index_group_stats
ORDER BY avg_total_user_cost * avg_user_impact * (user_seeks + user_scans) DESC;
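If that returns anything interesting, the group stats can be joined back to the other missing-index DMVs to see which table and columns the optimiser is asking for (a sketch using the related SQL Server 2005 DMVs):
Code Snippet
-- Show which table and columns each missing-index suggestion refers to.
SELECT
    migs.avg_total_user_cost * migs.avg_user_impact
        * (migs.user_seeks + migs.user_scans) AS estimated_improvement,
    mid.statement AS table_name,
    mid.equality_columns,
    mid.inequality_columns,
    mid.included_columns
FROM sys.dm_db_missing_index_group_stats AS migs
JOIN sys.dm_db_missing_index_groups  AS mig ON migs.group_handle = mig.index_group_handle
JOIN sys.dm_db_missing_index_details AS mid ON mig.index_handle = mid.index_handle
ORDER BY estimated_improvement DESC;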