I found an SP scanning a table of 100k rows which lacked the proper
index, doing a join to several other tables. Statistics IO reported
it took 26000 logical reads (to do it twice, btw).
I then added the proper index. Reads dropped to 1500. Good, huh? I
should have declared victory and gone home.
But no, not me. I then dropped the index and reran. It went up
again, of course, but only to 6700 reads.
Um, ... anyone ever see anything like this? I'm at something of a
loss. I thought I'd run it several times in each mode, and in fact
was running it in a loop of 100 from query analyzer to get a decent
clock-time idea of execution (but got the statistics IO from single
executions).
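For anyone wanting to reproduce the measurement, the setup looks roughly like this. All object names here are made up for illustration; the actual SP, table, and index are not shown in the thread:

```sql
-- Hypothetical names; a sketch of the before/after comparison, not the actual SP.
SET STATISTICS IO ON;

EXEC dbo.MyProc @SomeParam = 1;     -- baseline scan: ~26000 logical reads reported

CREATE INDEX IX_BigTable_JoinCol ON dbo.BigTable (JoinCol);
EXEC dbo.MyProc @SomeParam = 1;     -- with the index: ~1500 logical reads

DROP INDEX dbo.BigTable.IX_BigTable_JoinCol;   -- SQL 2000-era DROP INDEX syntax
EXEC dbo.MyProc @SomeParam = 1;     -- index gone again: ~6700 reads, not 26000
```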
Josh

"JXStern" <JXSternChangeX2R@.gte.net> wrote in message
news:kduho09q2pojpejabtn73p13d8m3lq2jnu@.4ax.com...
>I found an SP scanning a table of 100k rows which lacked the proper
> index, doing a join to several other tables. Statistics IO reported
> it took 26000 logical reads (to do it twice, btw).
> I then added the proper index. Reads dropped to 1500. Good, huh? I
> should have declared victory and gone home.
> But no, not me. I then dropped the index and reran. It went up
> again, of course, but only to 6700 reads.
> Um, ... anyone ever see anything like this? I'm at something of a
> loss. I thought I'd run it several times in each mode, and in fact
> was running it in a loop of 100 from query analyzer to get a decent
> clock-time idea of execution (but got the statistics IO from single
> executions).
>
Was the index perhaps clustered? If so, the data would have been
reorganized by clustering and unclustering it, perhaps making the joins more
efficient.
Also you may have ended up with better statistics after adding and dropping
the index, which enabled a better query plan.
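One way to check the statistics angle is to inspect or rebuild them directly (hypothetical table and statistics names again):

```sql
-- Look at the statistics on the scanned column (name is made up;
-- auto-created stats usually have _WA_Sys_... names).
DBCC SHOW_STATISTICS ('dbo.BigTable', 'JoinCol');

-- Or rebuild statistics explicitly and re-run the query to rule this out.
UPDATE STATISTICS dbo.BigTable WITH FULLSCAN;
```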
David

On Wed, 3 Nov 2004 10:17:07 -0600, "David Browne" <davidbaxterbrowne
no potted meat@.hotmail.com> wrote:
>Was the index perhaps clustered? If so, the data would have been
>reorganized by clustering and unclustering it, perhaps making the joins more
>efficient.
Not clustered.
>Also you may have ended up with better statistics after adding and dropping
>the index, which enabled a better query plan.
But without the index, it has to scan.
It *appears* that the "logical reads" are not all that logical, and
are affected by what data is cached. I've seen this phenomenon again
today. It is upsetting. Unless there is some larger principle at
work here that I have not yet understood?
Josh

JXStern wrote:
> I found an SP scanning a table of 100k rows which lacked the proper
> index, doing a join to several other tables. Statistics IO reported
> it took 26000 logical reads (to do it twice, btw).
> I then added the proper index. Reads dropped to 1500. Good, huh? I
> should have declared victory and gone home.
> But no, not me. I then dropped the index and reran. It went up
> again, of course, but only to 6700 reads.
> Um, ... anyone ever see anything like this? I'm at something of a
> loss. I thought I'd run it several times in each mode, and in fact
> was running it in a loop of 100 from query analyzer to get a decent
> clock-time idea of execution (but got the statistics IO from single
> executions).
> Josh
What are these "reads"? Are you using STATISTICS IO for the information? If
so, how many logical vs physical? You might consider using Profiler to
see the reads, CPU, and duration. Don't be alarmed that the reads in
Profiler do not match that of STATISTICS IO. They are reading different
structures and the SQL Trace API will normally report reads that are not
tracked by STATISTICS IO.
--
David Gugick
Imceda Software
www.imceda.com

JxStern
Before starting to tune the query after re-creating an index, run the
following commands:
DBCC FREEPROCCACHE clears the procedure cache and causes ad hoc queries to
be recompiled.
If you want a stored procedure to be recompiled, you will need to use the
WITH RECOMPILE option.
If you want to clear the data cache, you will need to use DBCC
DROPCLEANBUFFERS.
DBCC FLUSHPROCINDB: used to clear out the stored procedure cache for a
specific database on a SQL Server, not the entire SQL Server. The database
ID number to be affected must be entered as part of the command.
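Put together, a cold-cache timing run would look something like this. The database id and procedure name are placeholders, and DBCC FLUSHPROCINDB is undocumented, so treat this as a sketch:

```sql
-- Clear cached plans and data before each timing run.
DBCC FREEPROCCACHE;        -- flush all cached plans (server-wide)
DBCC DROPCLEANBUFFERS;     -- flush clean pages from the data cache

-- Flush plans for just one database (undocumented; 7 is a made-up id).
-- Find the real id with: SELECT DB_ID('MyDatabase')
DBCC FLUSHPROCINDB (7);

-- Or force a fresh plan for a single procedure call:
EXEC dbo.MyProc @SomeParam = 1 WITH RECOMPILE;
```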
"JXStern" <JXSternChangeX2R@.gte.net> wrote in message
news:kduho09q2pojpejabtn73p13d8m3lq2jnu@.4ax.com...
> I found an SP scanning a table of 100k rows which lacked the proper
> index, doing a join to several other tables. Statistics IO reported
> it took 26000 logical reads (to do it twice, btw).
> I then added the proper index. Reads dropped to 1500. Good, huh? I
> should have declared victory and gone home.
> But no, not me. I then dropped the index and reran. It went up
> again, of course, but only to 6700 reads.
> Um, ... anyone ever see anything like this? I'm at something of a
> loss. I thought I'd run it several times in each mode, and in fact
> was running it in a loop of 100 from query analyzer to get a decent
> clock-time idea of execution (but got the statistics IO from single
> executions).
> Josh
>

On Thu, 4 Nov 2004 09:26:32 +0200, "Uri Dimant" <urid@.iscar.co.il>
wrote:
>JxStern
>Before starting to tune the query after re-creating an index run the
>following commands
>DBCC FREEPROCCACHE clears the procedure cache and causes ad hoc queries to
>be recompiled
>if you want a stored procedure to be compiled you will need to use the WITH
>RECOMPILE option
>if you want to clear the data cache you will need to use DBCC
>DROPCLEANBUFFERS
>DBCC FLUSHPROCINDB: Used to clear out the stored procedure cache for a
>specific database on a SQL Server, not the entire SQL Server. The database
>ID number to be affected must be entered as part of the command.
All good suggestions.
I did use sp_recompile, but it didn't seem to make any difference.
Which is additionally odd, isn't it? Could it have cached the plan
and the old index? But it didn't return the old (with index) value,
but an intermediate one.
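For reference, the recompile call would have been along these lines (procedure name is hypothetical):

```sql
-- Marks the procedure so its cached plan is rebuilt on next execution.
EXEC sp_recompile 'dbo.MyProc';
```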
J.

On Wed, 3 Nov 2004 19:47:13 -0500, "David Gugick"
<davidg-nospam@.imceda.com> wrote:
>What are these "reads"? Are you using STATISTICS IO for the information?
Yes, statistics io, and I'm looking at just the logical reads.
> If
>so, how many logical vs physical?
Lots of logical, very few physical.
>You might consider using Profiler to
>see the reads, CPU, and duration.
I've been running the profiler also, but didn't use it on these cases.
> Don't be alarmed that the reads in
>Profiler do not match that of STATISTICS IO. They are reading different
>structures and the SQL Trace API will normally report reads that are not
>tracked by STATISTICS IO.
I'm just comparing runs via statistics io at this point.
Interesting if profiler shows a different pattern, I'll try to post if
I get the chance to try it.
So, nobody is quite going with my theory that the logical reads aren't
purely logical?
J.