
Thursday, March 22, 2012

Cluster indexing time increasing each week

SQL 7.0
There's a table with 100M records (and growing) that's
needing to be reindexed every week. If I don't recreate
the index, the fragmentation causes performance of the
queries to get slower and slower. I have a job that runs
the following script, but it's taking a little longer
each week and I'm wondering if there's a way to speed it
up...
Any help appreciated.
Thx,
Don
job script:
CREATE UNIQUE CLUSTERED
INDEX [SD] ON [dbo].[DATA] ([Name], [Date])
WITH
FILLFACTOR = 90
,DROP_EXISTING
ON [PRIMARY]
table structure:
CREATE TABLE [dbo].[DATA] (
[Name] [varchar] (32) NOT NULL ,
[Date] [smalldatetime] NOT NULL ,
[TD] [smalldatetime] NULL ,
[DO] [decimal](18, 6) NOT NULL ,
[DH] [decimal](18, 6) NOT NULL ,
[DL] [decimal](18, 6) NOT NULL ,
[DC] [decimal](18, 6) NOT NULL ,
[SP] [decimal](18, 6) NULL ,
[SD] [smalldatetime] NULL ,
[DV] [int] NOT NULL ,
[DI] [int] NOT NULL ,
[Fg] [int] NULL
) ON [PRIMARY]
GO

Don wrote:
> SQL 7.0
> There's a table with 100M records (and growing) that's
> needing to be reindexed every week. If I don't recreate
> the index, the fragmentation causes performance of the
> queries to get slower and slower. I have a job that runs
> the following script, but it's taking a little longer
> each week and I'm wondering if there's a way to speed it
> up...
> Any help appreciated.
> Thx,
> Don
> job script:
> CREATE UNIQUE CLUSTERED
> INDEX [SD] ON [dbo].[DATA] ([Name], [Date])
> WITH
> FILLFACTOR = 90
> ,DROP_EXISTING
> ON [PRIMARY]
> table structure:
> CREATE TABLE [dbo].[DATA] (
> [Name] [varchar] (32) NOT NULL ,
> [Date] [smalldatetime] NOT NULL ,
> [TD] [smalldatetime] NULL ,
> [DO] [decimal](18, 6) NOT NULL ,
> [DH] [decimal](18, 6) NOT NULL ,
> [DL] [decimal](18, 6) NOT NULL ,
> [DC] [decimal](18, 6) NOT NULL ,
> [SP] [decimal](18, 6) NULL ,
> [SD] [smalldatetime] NULL ,
> [DV] [int] NOT NULL ,
> [DI] [int] NOT NULL ,
> [Fg] [int] NULL
> ) ON [PRIMARY]
> GO
Is there a reason you chose those columns for the clustered index? What
is the PK on the table? Are there other indexes? One reason for
fragmentation has to do with clustered index keys that change. Or it
could be caused by newly inserted keys that require new pages. You could
try using DBCC INDEXDEFRAG (oh, that's SQL 2000 and you are on 7). It is
an online operation and may keep the table from getting too fragmented.
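For reference, a minimal sketch of what this could look like on 7.0, where DBCC INDEXDEFRAG is not yet available but SHOWCONTIG and DBREINDEX are (table name taken from the post; fill factor matches the existing job):

```sql
-- Sketch for SQL Server 7.0: measure fragmentation, then rebuild in place.
-- Report scan density / logical fragmentation for the table's indexes.
DBCC SHOWCONTIG (OBJECT_ID('dbo.DATA'))

-- Rebuild every index on the table in one statement; with an empty
-- index name, DBREINDEX rebuilds all indexes at the given fill factor.
DBCC DBREINDEX ('dbo.DATA', '', 90)
```

Running SHOWCONTIG first lets you rebuild only when scan density has actually degraded, instead of unconditionally every week.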
I don't know if you have other indexes, but if you do, rebuilding a
clustered index can cause the non-clustered indexes to rebuild as well,
although using the DROP_EXISTING should deal with this somewhat. Another
thing to remember is that the clustered keys are a part of all
non-clustered indexes. A large clustered index key like yours can cause
the non-clustered index to bulk up.
I assume you are using a 90 Fill Factor because you have a lot of
inserts that occur over the course of a week. Leaving 10% of each page
free is a lot of available space and makes the table that much larger.
It does give you some room for growth before page splits start to occur,
but you might consider lowering the free space if your insert volume
reaches high levels. Are you really adding 10% to the table each week? If not,
consider leaving less space.
What types of queries are you running on this table? Do the clustered
keys get updated? Do you have other indexes? Have you considered using
the clustered index for something else? It would help to understand your
table and its use a little more.
--
David Gugick
Imceda Software
www.imceda.com|||I don't know what values you are putting into name and date columns. If date
values are subsequent then making date first key in clustered index would
decrease fragmentation.
--
Thank you,
Alex
"Don" <anonymous@.discussions.microsoft.com> wrote in message
news:09bf01c4f1c0$c086a180$a501280a@.phx.gbl...
> SQL 7.0
> There's a table with 100M records (and growing) that's
> needing to be reindexed every week. If I don't recreate
> the index, the fragmentation causes performance of the
> queries to get slower and slower. I have a job that runs
> the following script, but it's taking a little longer
> each week and I'm wondering if there's a way to speed it
> up...
> Any help appreciated.
> Thx,
> Don
> job script:
> CREATE UNIQUE CLUSTERED
> INDEX [SD] ON [dbo].[DATA] ([Name], [Date])
> WITH
> FILLFACTOR = 90
> ,DROP_EXISTING
> ON [PRIMARY]
> table structure:
> CREATE TABLE [dbo].[DATA] (
> [Name] [varchar] (32) NOT NULL ,
> [Date] [smalldatetime] NOT NULL ,
> [TD] [smalldatetime] NULL ,
> [DO] [decimal](18, 6) NOT NULL ,
> [DH] [decimal](18, 6) NOT NULL ,
> [DL] [decimal](18, 6) NOT NULL ,
> [DC] [decimal](18, 6) NOT NULL ,
> [SP] [decimal](18, 6) NULL ,
> [SD] [smalldatetime] NULL ,
> [DV] [int] NOT NULL ,
> [DI] [int] NOT NULL ,
> [Fg] [int] NULL
> ) ON [PRIMARY]
> GO
>|||"Alex" <alex_removethis_@.healthmetrx.com> wrote in message
news:10tjnve4gf28cd7@.corp.supernews.com...
> I don't know what values you are putting into name and date columns. If
date
> values are subsequent then making date first key in clustered index would
> decrease fragmentation.
subsequent = sequential, of course|||Thank you Mark, that's what I meant, sorry for confusion, English is not my
first language.
--
Thank you,
Alex
"Mark Wilden" <mark@.mwilden.com> wrote in message
news:NtWdnTDXGpYUeUTcRVn-qw@.sti.net...
> "Alex" <alex_removethis_@.healthmetrx.com> wrote in message
> news:10tjnve4gf28cd7@.corp.supernews.com...
> > I don't know what values you are putting into name and date columns. If
> date
> > values are subsequent then making date first key in clustered index
would
> > decrease fragmentation.
> subsequent = sequential, of course
>|||On Mon, 3 Jan 2005 10:19:23 -0800, "Don"
<anonymous@.discussions.microsoft.com> wrote:
>SQL 7.0
>There's a table with 100M records (and growing) that's
>needing to be reindexed every week. If I don't recreate
>the index, the fragmentation causes performance of the
>queries to get slower and slower. I have a job that runs
>the following script, but it's taking a little longer
>each week and I'm wondering if there's a way to speed it
>up...
Are these SELECT queries or INSERT, UPDATE, or DELETE, and whichever,
do they reference the clustered index?
If the file grows each week then yes, it is going to take a little
longer every week.
Maybe you should look into partitioned tables?
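On 7.0/2000 that means a partitioned view rather than the later built-in partitioned tables; a rough sketch, with hypothetical member-table names and only the key columns shown:

```sql
-- Rough sketch of a partitioned view (SQL 7.0/2000 style): one member
-- table per year, so only the current member needs weekly reindexing.
-- Member-table names are hypothetical; remaining columns elided.
CREATE TABLE dbo.DATA_2004 (
    [Name] varchar(32) NOT NULL,
    [Date] smalldatetime NOT NULL
        CHECK ([Date] >= '20040101' AND [Date] < '20050101'),
    -- ...other columns as in dbo.DATA...
    CONSTRAINT PK_DATA_2004 PRIMARY KEY CLUSTERED ([Name], [Date])
)
CREATE TABLE dbo.DATA_2005 (
    [Name] varchar(32) NOT NULL,
    [Date] smalldatetime NOT NULL
        CHECK ([Date] >= '20050101' AND [Date] < '20060101'),
    -- ...other columns as in dbo.DATA...
    CONSTRAINT PK_DATA_2005 PRIMARY KEY CLUSTERED ([Name], [Date])
)
GO
-- Queries with a Date predicate can be limited to the relevant member
-- table; historical members stay static and never need rebuilding.
CREATE VIEW dbo.DATA_All
AS
SELECT * FROM dbo.DATA_2004
UNION ALL
SELECT * FROM dbo.DATA_2005
```

Note that partition elimination for views of this shape is much better on SQL 2000 than on 7.0, so the payoff depends on the version actually in use.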
J.|||You should really consider moving non-current data to a DSS or Warehouse
structure. I suspect that it is not fragmentation that is your problem as
much as it is the statistics. As the base number of records increases, the
more new records must be inserted or modified before AUTOSTATS
kicks in. You might just try updating the statistics more frequently and
rely less on the index rebuild operations.
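The statistics refresh itself is cheap compared with a full rebuild; a minimal sketch for 7.0, using the table name from the post:

```sql
-- Sketch: refresh optimizer statistics without rebuilding the index
-- (both commands exist on SQL Server 7.0).
UPDATE STATISTICS dbo.DATA WITH SAMPLE 10 PERCENT

-- Or refresh statistics for every table in the current database:
EXEC sp_updatestats
```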
Sincerely,
Anthony Thomas
"Don" <anonymous@.discussions.microsoft.com> wrote in message
news:09bf01c4f1c0$c086a180$a501280a@.phx.gbl...
SQL 7.0
There's a table with 100M records (and growing) that's
needing to be reindexed every week. If I don't recreate
the index, the fragmentation causes performance of the
queries to get slower and slower. I have a job that runs
the following script, but it's taking a little longer
each week and I'm wondering if there's a way to speed it
up...
Any help appreciated.
Thx,
Don
job script:
CREATE UNIQUE CLUSTERED
INDEX [SD] ON [dbo].[DATA] ([Name], [Date])
WITH
FILLFACTOR = 90
,DROP_EXISTING
ON [PRIMARY]
table structure:
CREATE TABLE [dbo].[DATA] (
[Name] [varchar] (32) NOT NULL ,
[Date] [smalldatetime] NOT NULL ,
[TD] [smalldatetime] NULL ,
[DO] [decimal](18, 6) NOT NULL ,
[DH] [decimal](18, 6) NOT NULL ,
[DL] [decimal](18, 6) NOT NULL ,
[DC] [decimal](18, 6) NOT NULL ,
[SP] [decimal](18, 6) NULL ,
[SD] [smalldatetime] NULL ,
[DV] [int] NOT NULL ,
[DI] [int] NOT NULL ,
[Fg] [int] NULL
) ON [PRIMARY]
GO


Sunday, March 11, 2012

CLR Stored Procedure Not Visible

I am trying to do some time series analysis on data that I have. I've read that the best way to do this is to use CLR to write the manipulation code in .Net and then load the assembly into SQL rather than extracting the data, manipulating it and putting it back again.

I'm using crystal reports with visual studio 2003 to do my reporting. The problem I am facing is that the CLR stored procedures I create using SQLExpress are not visible from my SQL explorer built into Visual Studio 2003. I can create stored procedures using standard transact sql and these are visible, but the CLR stored procedures are not.

I've noticed when I browse the stored procedures in SQL Management Studio that each stored procedure's icon shows a padlock, as if they are locked.

Do I have to explicitly enable a security attribute on CLR stored procedures to make them visible?

Any help would be greatly appreciated.
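As a server-side sanity check, the SQL Server 2005 catalog views can confirm the CLR procedures really exist even when a client tool doesn't display them (VS 2003's Server Explorer predates SQL 2005's CLR objects, which may be all that's wrong here); a sketch:

```sql
-- List CLR stored procedures in the current database (SQL Server 2005).
-- CLR procedures have type 'PC'; ordinary T-SQL procedures are type 'P'.
SELECT name, type_desc, create_date
FROM sys.procedures
WHERE type = 'PC'
```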

Try posting this question on the CLR forums: .NET Framework inside SQL Server.

Thanks
Laurentiu

|||

Mic,

Did you find an answer to the padlock icon next to the CLR stored procedures?

I am using sql server 2005, and trying to access them in reporting services.

I must be missing something.

I tried setting permissions to no avail.

Thanks

Tim


CLR SP Execution

Hello,
I am currently working on an application which calls some CLR stored
procedures in SQL Server 2005. The first time one of the CLR stored
procedures (it doesn't matter which one) is called it takes a lot longer to
execute. After the first CLR stored procedure has been executed the CLR
stored procedures all execute in a reasonable amount of time.
I have been unable to find any articles explaining the process that occurs
when a CLR stored procedure is called. I am currently assuming the initial
delay is related to the CLR loading the DLL? Is the CLR loaded with SQL
Server or only when needed?
If anyone can point me in the direction of an article to aid my
understanding or can explain the process I would be grateful.
Thanks in advance.
Alistair Harrison|||How long of a delay do you experience? Do you get the delay with a trivial
proc like the one below? I get a sub-second response when I execute this
after a fresh SQL Server restart.
public partial class StoredProcedures
{
[Microsoft.SqlServer.Server.SqlProcedure]
public static void MyProc()
{
Microsoft.SqlServer.Server.SqlContext.Pipe.Send("Test message");
}
};
I'm no expert on SQL CLR internals but I can't think of any overhead beyond
the usual object-specific overhead.
Hope this helps.
Dan Guzman
SQL Server MVP
"Alistair Harrison" <AlistairHarrison@.discussions.microsoft.com> wrote in
message news:71A086F6-487F-46FD-B501-A7844F6936E5@.microsoft.com...
> Hello,
> I am currently working on an application which calls some CLR stored
> procedures in SQL Server 2005. The first time one of the CLR stored
> procedures (it doesn't matter which one) is called it takes a lot longer
> to
> execute. After the first CLR stored procedure has been executed the CLR
> stored procedures all execute in a reasonable amount of time.
> I have been unable to find any articles explaining the process that occurs
> when a CLR stored procedure is called. I am currently assuming the initial
> delay is related to the CLR loading the DLL? Is the CLR loaded with SQL
> Server or only when needed?
> If anyone can point me in the direction of an article to aid my
> understanding or can explain the process I would be grateful.
> Thanks in advance.
> Alistair Harrison|||Dan,
Thanks for your response.
The following are examples of execution times recorded by the Client
Statistics when the query is executed:
Initial execution of my original sp: 1600 to 1700 ms
Subsequent execution: <100 ms
Initial execution of your test sp: 700 to 800 ms
Subsequent execution: <50 ms
Another point I noticed was that if the test sp is executed first followed
by my original sp following a restart the original sp takes around 1200 ms to
execute. This is less than the 1600 to 1700 ms it seems to take when
executing the sp first but is still much greater than the subsequent
execution time.
It might be worth mentioning that my original clr sp executes a couple of
tsql sps using a context connection and then returns some xml as an output
parameter.
Thanks,
Alistair
"Dan Guzman" wrote:

> How long of a delay do you experience? Do you get the delay with a trivial
> proc like the one below? I get a sub-second response when I execute this
> after a fresh SQL Server restart.
> public partial class StoredProcedures
> {
> [Microsoft.SqlServer.Server.SqlProcedure]
> public static void MyProc()
> {
> Microsoft.SqlServer.Server.SqlContext.Pipe.Send("Test message");
> }
> };
> I'm no expert on SQL CLR internals but I can't think of any overhead beyond
> the usual object-specific overhead.
> --
> Hope this helps.
> Dan Guzman
> SQL Server MVP
> "Alistair Harrison" <AlistairHarrison@.discussions.microsoft.com> wrote in
> message news:71A086F6-487F-46FD-B501-A7844F6936E5@.microsoft.com...
>
>|||It's likely that subsequent executions are faster simply as a result of data
caching. I'd expect execution times to average about 1200ms if you run DBCC
DROPCLEANBUFFERS before each execution. For performance testing, I usually
execute DBCC DROPCLEANBUFFERS and DBCC FREEPROCCACHE before each test.
I don't think the sub-second delay the first time the proc is run is
anything to be concerned about.
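The cold-cache test described above might look like this as a script (proc name taken from the earlier sample):

```sql
-- Sketch of the cold-cache timing harness described above.
DBCC DROPCLEANBUFFERS   -- discard clean data pages from the buffer pool
DBCC FREEPROCCACHE      -- discard cached query plans
GO
DECLARE @start datetime
SET @start = GETDATE()
EXEC dbo.MyProc         -- the trivial CLR proc from the earlier post
SELECT DATEDIFF(ms, @start, GETDATE()) AS elapsed_ms
```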
Hope this helps.
Dan Guzman
SQL Server MVP
"Alistair Harrison" <AlistairHarrison@.discussions.microsoft.com> wrote in
message news:6A2A987D-E008-440F-867E-0DE4D6A3AA85@.microsoft.com...[vbcol=seagreen]
> Dan,
> Thanks for your response.
> The following are examples of execution times recorded by the Client
> Statistics when the query is executed:
> Initial execution of my original sp: 1600 to 1700 ms
> Subsequent execution: <100 ms
> Initial execution of your test sp: 700 to 800 ms
> Subsequent execution: <50 ms
> Another point I noticed was that if the test sp is executed first followed
> by my original sp following a restart the original sp takes around 1200 ms
> to
> execute. This is less than the 1600 to 1700 ms it seems to take when
> executing the sp first but is still much greater than the subsequent
> execution time.
> It might be worth mentioning that my original clr sp executes a couple of
> tsql sps using a context connection and then returns some xml as an output
> parameter.
> Thanks,
> Alistair
>
> "Dan Guzman" wrote:
>|||Dan
Thanks again for your response.
I have retested the two stored procedures running DROPCLEANBUFFERS and
FREEPROCCACHE before each execution. The execution times increase slightly
but are still consistently below 300ms.
I suppose, as the longer execution only seems to occur following a restart, it
is not too much of a worry, just intriguing as to why it happens.
Alistair
"Dan Guzman" wrote:

> It's likely that subsequent executions are faster simply a result of data
> caching. I'd expect execution times to average about 1200ms if you run DBCC
> DROPCLEANBUFFERS before each execution. For performance testing, I usually
> execute DBCC DROPCLEANBUFFERS and DBCC FREEPROCCACHE before each test.
> I don't think the sub-second delay the first time the proc is run is
> anything to be concerned about.
> --
> Hope this helps.
> Dan Guzman
> SQL Server MVP
> "Alistair Harrison" <AlistairHarrison@.discussions.microsoft.com> wrote in
> message news:6A2A987D-E008-440F-867E-0DE4D6A3AA85@.microsoft.com...
>
>|||Hello Alistair,
It might be worth remembering that the Assembly (and its dependent assemblies)
still needs to be fetched, verified, JITted and the app domain created before
the CLR code runs in many cases. That could be why you have a slower initial
startup.
Thank you,
Kent Tegels
DevelopMentor
http://staff.develop.com/ktegels/


Thursday, March 8, 2012

CLR Configuration option problem

Hi,
I developed an application using .NET Framework 2.0 and SQL Server
2005. Sometimes the following message is displayed:
"Execution of user code in the .NET Framework is disabled. Enable
'clr enabled' configuration option."
How do I rectify the error?
Regards
T.A.Krishna
Hi
"Krishna" wrote:

> Hi,
> I developed an application using .NET Framework 2.0 and SQL Server
> 2005. Sometimes the following message is displayed:
> "Execution of user code in the .NET Framework is disabled. Enable
> 'clr enabled' configuration option."
>
This error normally refers to the 'CLR Integration' setting, which you can
set by using the Surface Area Configuration (SAC) tool. If you choose to set
the option by feature, then under Database Engine you can see 'CLR
Integration', and the 'Enable CLR Integration' check box should be checked.

> How do I rectify the error?
> Regards
> T.A.Krishna
John
|||Krishna wrote:
> Hi,
> I developed an application using .NET Framework 2.0 and SQL Server
> 2005. Sometimes the following message is displayed:
> "Execution of user code in the .NET Framework is disabled. Enable
> 'clr enabled' configuration option."
CLR integration is disabled by default.
You must change the configuration option:
EXEC sp_configure 'clr enabled', 1;
RECONFIGURE;
Best regards,
Marcin Guzowski
http://guzowski.info
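For completeness, a minimal sketch of verifying the setting before and after the change; `sys.configurations` is the catalog view (SQL Server 2005 and later) that exposes the configured and in-use values:

```sql
-- Check whether CLR integration is enabled (value_in_use = 1 means enabled)
SELECT name, value, value_in_use
FROM sys.configurations
WHERE name = 'clr enabled';

-- Enable it and apply the change
EXEC sp_configure 'clr enabled', 1;
RECONFIGURE;
```

'clr enabled' is a dynamic option, so RECONFIGURE applies it without a server restart.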

Wednesday, March 7, 2012


closing SQL Connection

Hello to all,
I am finding myself in a confusing problem.

Every day the log file of my application increases by 10 GB.
Every time I find more than 100 connections open in SQL Server 2005 under my database.

I am using the SqlHelper class by Microsoft. I am not putting my code in a try-catch block. An example
of one of my frequently used functions is as follows:

protected Int64 GetMemberID()
{
    String SqlSelect = "SELECT * FROM MemberMaster WHERE MemberUserName = '" + Session["UserName"].ToString() + "'";
    SqlDataReader dr1 = SqlHelper.ExecuteReader(ConfigurationManager.ConnectionStrings["CowcallConnectionString2"].ToString(), CommandType.Text, SqlSelect);
    while (dr1.Read())
    {
        CurrentMemberID = Int64.Parse(dr1["MemberID"].ToString().Trim());
    }
    return CurrentMemberID;
}

Well, I suspect I am opening connections and not closing them. Moreover, I do not know how to close the connection in the above code when I am using the SqlHelper class.
Please give me your suggestions regarding my code, even if they do not solve the above two problems.

Hi munishbhatia,

Use CommandBehavior.CloseConnection so that closing the data reader also closes the underlying connection, and be sure to close the reader when you are finished with it (for example, wrap it in a using block or call dr1.Close() in a finally block).

Closing open sessions

Hi All
Is there a way/script that I can schedule to close open sessions at a
certain time? I do have some sessions that are left open and doing nothing on
my db. I would like to check for the open sessions that are not doing
anything and also to close all sessions at 3am M-F. Thank you in advance.
The sp_who procedure will return all open connections and their current
status. The KILL command can be used to terminate a process. However, just
because a process is idle at the moment does not mean that it is not in use
by an application.
There is a statement to set the database to single user mode and first close
all processes while rolling back open transactions:
alter database <dbname> set single_user with rollback immediate
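The single-user approach above can be sketched end to end; the database name `MyDb` here is just a placeholder:

```sql
-- Disconnect all other sessions, rolling back their open transactions
ALTER DATABASE MyDb SET SINGLE_USER WITH ROLLBACK IMMEDIATE;

-- ... perform maintenance while no one else can connect ...

-- Reopen the database to all users
ALTER DATABASE MyDb SET MULTI_USER;
```

Remember to set the database back to MULTI_USER afterwards, or it will stay restricted to a single connection.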
"MittyKom" <MittyKom@.discussions.microsoft.com> wrote in message
news:486EA53E-8CD9-498E-8E50-34FCD18C95B5@.microsoft.com...
> Hi All
> Is there a way/script that I can schedule to close open sessions at a
> certain time? I do have some sessions that are left open and doing nothing on
> my db. I would like to check for the open sessions that are not doing
> anything and also to close all sessions at 3am M-F. Thank you in advance.
>

Closing open sessions

Hi All
Is there a way/script that I can schedule to close open sessions at a
certain time? I do have some sessions that are left open and doing nothing on
my db. I would like to check for the open sessions that are not doing
anything and also to close all sessions at 3am M-F. Thank you in advance.
You can use something like this. Just schedule it in Job Scheduler to run at
3am, and change the > 10 to be the number of minutes that you want to
consider a connection to be idle.
-- Kill all spids that haven't had a batch for X number of minutes
--
DECLARE cur_KILLSPID CURSOR
READ_ONLY
FOR select spid from master..sysprocesses
where cmd = 'AWAITING COMMAND' and datediff(mi, last_batch, getdate()) > 10
DECLARE @spid smallint, @sql varchar(100)
OPEN cur_KILLSPID
FETCH NEXT FROM cur_KILLSPID INTO @spid
WHILE (@@fetch_status <> -1)
BEGIN
IF (@@fetch_status <> -2)
BEGIN
select @sql = 'KILL ' + cast(@spid as varchar(10))
exec (@sql)
DECLARE @message varchar(100)
SELECT @message = 'Killing SPID: ' + cast(@spid as varchar(10))
PRINT @message
END
FETCH NEXT FROM cur_KILLSPID INTO @spid
END
CLOSE cur_KILLSPID
DEALLOCATE cur_KILLSPID
GO
AndyP,
Sr. Database Administrator,
MCDBA 2003,
Sybase Certified Pro DBA (AA115, SD115, AA12, AP12)
"MittyKom" wrote:

> Hi All
> Is there a way/script that I can schedule to close open sessions at a
> certain time? I do have some sessions that are left open and doing nothing on
> my db. I would like to check for the open sessions that are not doing
> anything and also to close all sessions at 3am M-F. Thank you in advance.
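Since the surrounding threads mention SQL Server 2005: on 2005 and later the same idle-session check can be expressed against the sys.dm_exec_sessions DMV instead of master..sysprocesses. A minimal sketch (the 10-minute threshold is illustrative):

```sql
-- Idle user sessions: sleeping, with no request finished in the last 10 minutes
SELECT session_id, login_name, last_request_end_time
FROM sys.dm_exec_sessions
WHERE is_user_process = 1
  AND status = 'sleeping'
  AND DATEDIFF(mi, last_request_end_time, GETDATE()) > 10;
```

Each returned session_id can then be passed to KILL, just as in the cursor script above.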


Closest Time in SQL Table

Hi. I have 2 tables which have the following formats:
TABLE A
PLC_TIME TAGNAME STATUS LOCATION
-- -- -- --
TABLE B
PLC_TIME PCVUE_TIME TAGNAME STATUS LOCATION
-- -- --
-- --
Both tables are continuously inserted into by a piece of software, with the
PLC_TIME of Table B left null, i.e.
PLC_TIME is not populated on insert. For example,
TABLE_A
PLC_TIME TAGNAME STATUS LOCATION
-- -- -- --
07-05-2007 1:00:32pm TAG.A ON SITEA
07-05-2007 1:40:11pm TAG.A OFF SITEA
07-05-2007 1:10:31pm TAG.A ON SITEA
07-05-2007 2:10:11pm TAG.B NORMAL SITEB
TABLE_B
PLC_TIME PCVUE_TIME TAGNAME STATUS LOCATION
-- -- --
-- --
null 07-05-2007 1:00:39pm TAG.A ON SITEA
null 07-05-2007 1:41:22pm TAG.A OFF SITEA
null 07-05-2007 2:10:12pm TAG.B NORMAL SITEB
I'm required to update PLC_TIME of TABLE B with the PLC_TIME from
table A
where both have the same TAGNAME and STATUS, and the PLC_TIME
from table A must be
the CLOSEST to the PCVUE_TIME in table B. PLC_TIME is also always
earlier than PCVUE_TIME.
I have used the following SQL command in my VB program to update the
CLOSEST PLC_TIME of TABLE_B:
UPDATE TABLE_B SET PLC_TIME = (SELECT MAX(PLC_TIME) FROM TABLE_A WHERE
TAGNAME = TABLE_B.TAGNAME AND STATUS = TABLE_B.STATUS
AND PLC_TIME<TABLE_B.PCVUE_TIME AND
ABS(DATEDIFF(MINUTE,PLC_TIME,PCVUE_TIME))<31)
(Note: 31 is just an estimation of the maximum difference of PLC_TIME
and PCVUE_TIME in normal case.)
Since both TABLE_A and TABLE_B are updated randomly and the timing may
not be in order, i.e. Some earlier records may be inserted into
TABLE_A and TABLE_B
by the software, my VB program needs to "update TABLE_B set PLC_TIME"
constantly until the CLOSEST PLC_TIME is updated.
What I can think of is DATEDIFF(), but DATEDIFF has the limitation that
it only compares the specific MINUTE of
PCVUE_TIME and PLC_TIME if I use DATEDIFF(minute, PLC_TIME, PCVUE_TIME),
and this will have a problem for 1:59pm and 2:00pm, in which 2:00pm
will be treated
as earlier than 1:59pm. Also I may need to have multiple DATEDIFFs
which compare YEAR, MONTH, DAY, HOUR and so on... which is not
feasible.
What correct SQL command can I use to update TABLE_B until the CLOSEST
PLC_TIME is updated into TABLE_B?
You can try something like this:
UPDATE TABLE_B SET PLC_TIME = A.PLC_TIME
FROM TABLE_B B
INNER JOIN
(SELECT MAX(TABLE_A.PLC_TIME) PLC_TIME, TABLE_A.TAGNAME, TABLE_A.STATUS
FROM TABLE_A
INNER JOIN TABLE_B
ON TABLE_A.TAGNAME = TABLE_B.TAGNAME AND TABLE_A.STATUS = TABLE_B.STATUS
WHERE TABLE_A.PLC_TIME<TABLE_B.PCVUE_TIME
GROUP BY TABLE_A.TAGNAME, TABLE_A.STATUS) A
ON A.TAGNAME = B.TAGNAME AND A.STATUS = B.STATUS
"albertleng" <albertleng@.gmail.com> wrote in message
news:1180449709.930509.45380@.r19g2000prf.googlegro ups.com...
> Hi. I have 2 tables which have the following formats:
> TABLE A
> PLC_TIME TAGNAME STATUS LOCATION
> -- -- -- --
> TABLE B
> PLC_TIME PCVUE_TIME TAGNAME STATUS LOCATION
> -- -- --
> -- --
> Both tables will be inserted with records by a software continuously
> with the PLC_TIME of Table B null, ie
> PLC_TIME is not inserted with records. For eg,
> TABLE_A
> PLC_TIME TAGNAME STATUS LOCATION
> -- -- -- --
> 07-05-2007 1:00:32pm TAG.A ON SITEA
> 07-05-2007 1:40:11pm TAG.A OFF SITEA
> 07-05-2007 1:10:31pm TAG.A ON SITEA
> 07-05-2007 2:10:11pm TAG.B NORMAL SITEB
>
> TABLE_B
> PLC_TIME PCVUE_TIME TAGNAME STATUS LOCATION
> -- -- --
> -- --
> null 07-05-2007 1:00:39pm TAG.A ON SITEA
> null 07-05-2007 1:41:22pm TAG.A OFF SITEA
> null 07-05-2007 2:10:12pm TAG.B NORMAL SITEB
> I'm required to update PLC_TIME of TABLE B with the PLC_TIME from
> table A
> which both of them have the same TAGNAME and STATUS and the PLC_TIME
> from table A must be
> the CLOSEST to the PCVUE_TIME in table b. PLC_TIME is also always
> earlier than PCVUE_TIME.
> I have used the following SQL command in my VB program to update the
> CLOSEST PLC_TIME of TABLE_B:
> UPDATE TABLE_B SET PLC_TIME = (SELECT MAX(PLC_TIME) FROM TABLE_A WHERE
> TAGNAME = TABLE_B.TAGNAME AND STATUS = TABLE_B.STATUS
> AND PLC_TIME<TABLE_B.PCVUE_TIME AND
> ABS(DATEDIFF(MINUTE,PLC_TIME,PCVUE_TIME))<31)
> (Note: 31 is just an estimation of the maximum difference of PLC_TIME
> and PCVUE_TIME in normal case.)
> Since both TABLE_A and TABLE_B are updated randomly and the timing may
> not be in order, i.e. Some earlier records may be inserted into
> TABLE_A and TABLE_B
> by the software, my VB program needs to "update TABLE_B set PLC_TIME"
> constantly until the CLOSEST PLC_TIME is updated.
> What i can think of is DATEDIFF() but DATEDIFF has a limitation that
> it only compares the specific MINUTE of
> PCVUE_TIME and PLC_TIME if i use DATEDIFF(minute,PLC_TIME, PCVUE_TIME)
> and this will have a problem for 1:59pm and 2:00pm in which 2:00pm
> will be treated
> as earlier than 1:59pm. Also i may need to have multiple DATEDIFF
> which compares YEAR, MONTH, DAY, HOUR and so on... which is not
> feasible.
>
> What correct SQL command can i use to update TABLE_B until the CLOSEST
> PLC_TIME is updated into TABLE_B?
>
|||Hi
"Quentin Ran" wrote:

> You can try something like this:
> UPDATE TABLE_B SET PLC_TIME = A.PLC_TIME
> FROM TABLE_B B
> INNER JOIN
> (SELECT MAX(TABLE_A.PLC_TIME) PLC_TIME, TABLE_A.TAGNAME, TABLE_A.STATUS
> FROM TABLE_A
> INNER JOIN TABLE_B
> ON TABLE_A.TAGNAME = TABLE_B.TAGNAME AND TABLE_A.STATUS = TABLE_B.STATUS
> WHERE TABLE_A.PLC_TIME<TABLE_B.PCVUE_TIME
> GROUP BY TABLE_A.TAGNAME, TABLE_A.STATUS) A
> ON A.TAGNAME = B.TAGNAME AND A.STATUS = B.STATUS
>
You should also have
WHERE B.PLC_TIME IS NULL
to both the derived table and update statement to eliminate records already
updated
i.e.
UPDATE B
SET PLC_TIME = A.PLC_TIME
FROM TABLE_B B
JOIN (SELECT MAX(C.PLC_TIME) AS PLC_TIME, C.TAGNAME, C.STATUS
FROM TABLE_A C
JOIN TABLE_B D ON C.TAGNAME = D.TAGNAME AND C.STATUS = D.STATUS
AND C.PLC_TIME< D.PCVUE_TIME AND D.PLC_TIME IS NULL
GROUP BY C.TAGNAME, C.STATUS) A
ON A.TAGNAME = B.TAGNAME AND A.STATUS = B.STATUS
WHERE B.PLC_TIME IS NULL
John

Closest Time in SQL Table

Hi. I have two tables with the following formats:
TABLE_A: PLC_TIME, TAGNAME, STATUS, LOCATION
TABLE_B: PLC_TIME, PCVUE_TIME, TAGNAME, STATUS, LOCATION
Records are inserted into both tables continuously by a piece of software, with PLC_TIME in TABLE_B left null, i.e. PLC_TIME is not populated on insert. For example:
TABLE_A
PLC_TIME              TAGNAME  STATUS  LOCATION
07-05-2007 1:00:32pm  TAG.A    ON      SITEA
07-05-2007 1:40:11pm  TAG.A    OFF     SITEA
07-05-2007 1:10:31pm  TAG.A    ON      SITEA
07-05-2007 2:10:11pm  TAG.B    NORMAL  SITEB

TABLE_B
PLC_TIME  PCVUE_TIME            TAGNAME  STATUS  LOCATION
null      07-05-2007 1:00:39pm  TAG.A    ON      SITEA
null      07-05-2007 1:41:22pm  TAG.A    OFF     SITEA
null      07-05-2007 2:10:12pm  TAG.B    NORMAL  SITEB
I'm required to update PLC_TIME in TABLE_B with the PLC_TIME from TABLE_A, where both rows have the same TAGNAME and STATUS and the PLC_TIME from TABLE_A is the CLOSEST to the PCVUE_TIME in TABLE_B. PLC_TIME is also always earlier than PCVUE_TIME.
I have used the following SQL command in my VB program to update the CLOSEST PLC_TIME in TABLE_B:
UPDATE TABLE_B SET PLC_TIME =
  (SELECT MAX(PLC_TIME) FROM TABLE_A
   WHERE TAGNAME = TABLE_B.TAGNAME AND STATUS = TABLE_B.STATUS
     AND PLC_TIME < TABLE_B.PCVUE_TIME
     AND ABS(DATEDIFF(MINUTE, PLC_TIME, PCVUE_TIME)) < 31)
(Note: 31 is just an estimate of the maximum difference between PLC_TIME and PCVUE_TIME in the normal case.)
Since both TABLE_A and TABLE_B are updated at random times and records may not arrive in order (i.e. earlier records may be inserted into TABLE_A and TABLE_B later by the software), my VB program needs to run this "update TABLE_B set PLC_TIME" statement constantly until the CLOSEST PLC_TIME is filled in.
What I can think of is DATEDIFF(), but DATEDIFF has a limitation: if I use DATEDIFF(minute, PLC_TIME, PCVUE_TIME), it only compares down to the specific MINUTE of PCVUE_TIME and PLC_TIME, which causes a problem for 1:59pm and 2:00pm, where 2:00pm would be treated as earlier than 1:59pm. I might also need multiple DATEDIFF calls comparing YEAR, MONTH, DAY, HOUR and so on, which is not feasible.
What correct SQL command can I use to update TABLE_B until the CLOSEST PLC_TIME is updated into TABLE_B?

You can try something like this:
UPDATE B SET PLC_TIME = A.PLC_TIME
FROM TABLE_B B
INNER JOIN
  (SELECT MAX(TABLE_A.PLC_TIME) PLC_TIME, TABLE_A.TAGNAME, TABLE_A.STATUS
   FROM TABLE_A
   INNER JOIN TABLE_B
     ON TABLE_A.TAGNAME = TABLE_B.TAGNAME AND TABLE_A.STATUS = TABLE_B.STATUS
   WHERE TABLE_A.PLC_TIME < TABLE_B.PCVUE_TIME
   GROUP BY TABLE_A.TAGNAME, TABLE_A.STATUS) A
  ON A.TAGNAME = B.TAGNAME AND A.STATUS = B.STATUS
"albertleng" <albertleng@.gmail.com> wrote in message
news:1180449709.930509.45380@.r19g2000prf.googlegroups.com...
> [original post quoted in full - snipped]
|||Hi
"Quentin Ran" wrote:
> You can try something like this:
> UPDATE TABLE_B SET PLC_TIME = A.PLC_TIME
> FROM TABLE_B B
> INNER JOIN
> (SELECT MAX(TABLE_A.PLC_TIME) PLC_TIME, TABLE_A.TAGNAME, TABLE_A.STATUS
> FROM TABLE_A
> INNER JOIN TABLE_B
> ON TABLE_A.TAGNAME = TABLE_B.TAGNAME AND TABLE_A.STATUS = TABLE_B.STATUS
> WHERE TABLE_A.PLC_TIME<TABLE_B.PCVUE_TIME
> GROUP BY TABLE_A.TAGNAME, TABLE_A.STATUS) A
> ON A.TAGNAME = B.TAGNAME AND A.STATUS = B.STATUS
>
You should also add
WHERE B.PLC_TIME IS NULL
to both the derived table and the update statement, to skip records that have already been updated, i.e.:
UPDATE B
SET PLC_TIME = A.PLC_TIME
FROM TABLE_B B
JOIN (SELECT MAX(C.PLC_TIME) AS PLC_TIME, C.TAGNAME, C.STATUS
FROM TABLE_A C
JOIN TABLE_B D ON C.TAGNAME = D.TAGNAME AND C.STATUS = D.STATUS
AND C.PLC_TIME< D.PCVUE_TIME AND D.PLC_TIME IS NULL
GROUP BY C.TAGNAME, C.STATUS) A
ON A.TAGNAME = B.TAGNAME AND A.STATUS = B.STATUS
WHERE B.PLC_TIME IS NULL
John
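Putting the thread's advice together, here is a minimal runnable sketch of the closest-earlier-time update, using SQLite as a stand-in for SQL Server and the thread's sample rows converted to ISO-8601 strings (an assumption; the original columns are smalldatetime). Because complete datetime values are compared directly, the MAX(PLC_TIME) below PCVUE_TIME is by definition the closest earlier time, and the DATEDIFF granularity worry disappears:

```python
import sqlite3

# In-memory SQLite database standing in for the SQL Server tables.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE TABLE_A (PLC_TIME TEXT, TAGNAME TEXT, STATUS TEXT, LOCATION TEXT);
CREATE TABLE TABLE_B (PLC_TIME TEXT, PCVUE_TIME TEXT, TAGNAME TEXT,
                      STATUS TEXT, LOCATION TEXT);

-- The thread's sample rows, as ISO-8601 strings so that lexicographic
-- order matches chronological order.
INSERT INTO TABLE_A VALUES
  ('2007-05-07 13:00:32', 'TAG.A', 'ON',     'SITEA'),
  ('2007-05-07 13:40:11', 'TAG.A', 'OFF',    'SITEA'),
  ('2007-05-07 13:10:31', 'TAG.A', 'ON',     'SITEA'),
  ('2007-05-07 14:10:11', 'TAG.B', 'NORMAL', 'SITEB');
INSERT INTO TABLE_B VALUES
  (NULL, '2007-05-07 13:00:39', 'TAG.A', 'ON',     'SITEA'),
  (NULL, '2007-05-07 13:41:22', 'TAG.A', 'OFF',    'SITEA'),
  (NULL, '2007-05-07 14:10:12', 'TAG.B', 'NORMAL', 'SITEB');

-- The latest TABLE_A time still earlier than PCVUE_TIME is, by definition,
-- the closest earlier time; only still-unmatched (NULL) rows are touched.
UPDATE TABLE_B
SET PLC_TIME = (SELECT MAX(A.PLC_TIME) FROM TABLE_A A
                WHERE A.TAGNAME = TABLE_B.TAGNAME
                  AND A.STATUS  = TABLE_B.STATUS
                  AND A.PLC_TIME < TABLE_B.PCVUE_TIME)
WHERE PLC_TIME IS NULL;
""")

for row in cur.execute(
        "SELECT PLC_TIME, PCVUE_TIME, TAGNAME FROM TABLE_B ORDER BY PCVUE_TIME"):
    print(row)
```

As John pointed out, the WHERE PLC_TIME IS NULL filter is what keeps a repeatedly run job from rewriting rows that were already matched.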


Close to real time sync for data in relational and flat schemas

Hi all,
Can someone point me in the right direction?
I need to synchronize data updates in a relational schema with the data's flat (denormalized) equivalent in near-real-time mode.
What is the best way to achieve this if using triggers is not an option?
Are there any proven approaches or tools (incl. 3rd-party) that do this?
Is there a more appropriate microsoft.public.sqlserver.server sub-group for this topic?
Any good idea or suggestion is greatly appreciated,
Thank you,
VT
"VT" <VT@.discussions.microsoft.com> wrote in message
news:E782A78E-692C-4945-B7B5-6BA764AE3A0A@.microsoft.com...
> Hi all,
> Can someone point me to the right direction?
> I need to synchronize data updates in relational schema with data in it's
> flat (denormalized) equivalent in near real time mode.
> What is the best way to achieve this if using triggers is not an option ?
> Are there any proven approaches or tools (incl. 3rd party) which do that ?
> Is there more appropriate microsoft.public.sqlserver.server sub-group for
> this topic ?
> Any good idea or suggestion is greatly appreciated,
> Thank you,
> VT
>
A denormalized schema is simply the result of joins in a normalized schema. That's easily achieved just by using views.
I suspect you want something more than a denormalized schema, though. Are you talking about data warehousing? If so, there is a whole industry of products to support you, but I hesitate to suggest anything without knowing more about what you are trying to achieve.
Have you seen the new Change Data Capture feature in SQL Server 2008?
David Portas
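David's point can be illustrated with a small sketch (SQLite, with hypothetical customer/order tables invented purely for illustration): the flat shape is just a view over the normalized join, so it never needs a separate synchronization step:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE customer (cust_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE orders   (order_id INTEGER PRIMARY KEY, cust_id INTEGER,
                       amount REAL);
INSERT INTO customer VALUES (1, 'Acme'), (2, 'Globex');
INSERT INTO orders   VALUES (10, 1, 99.5), (11, 2, 12.0);

-- The "flat" schema is nothing more than the join, exposed as a view.
CREATE VIEW flat_orders AS
SELECT o.order_id, c.name, o.amount
FROM orders o JOIN customer c ON c.cust_id = o.cust_id;
""")

# An update to a base table is visible through the view immediately:
# there is no synchronization step at all.
cur.execute("UPDATE orders SET amount = 150.0 WHERE order_id = 10")
print(list(cur.execute("SELECT name, amount FROM flat_orders ORDER BY order_id")))
```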
|||"VT" <VT@.discussions.microsoft.com> wrote in message
news:E782A78E-692C-4945-B7B5-6BA764AE3A0A@.microsoft.com...
> Hi all,
> Can someone point me to the right direction?
> I need to synchronize data updates in relational schema with data in it's
> flat (denormalized) equivalent in near real time mode.
> What is the best way to achieve this if using triggers is not an option ?
> Are there any proven approaches or tools (incl. 3rd party) which do that ?
> Is there more appropriate microsoft.public.sqlserver.server sub-group for
> this topic ?
> Any good idea or suggestion is greatly appreciated,
> Thank you,
> VT
Like David pointed out, you can use views, indexed views, and several other
possible alternatives including third-party tools. Can you be more specific
about your situation and ultimate goal?
|||Hi all,
Thank you for your responses.
What I am trying to achieve is to search against the flat schema, which, as was rightfully pointed out, is the same as a join.
Profiling the search against the relational schema shows that the bulk of the time for each query goes into joining the tables. I think of the flat table as a substitute for that join.
Data warehousing solutions may not be applicable here, as they are not designed to work in real time; they are more like batch processes that run periodically.
Using a view is great, but the problem is that the join has outer joins, so such a view has NULLs and therefore cannot be indexed.
Thanks,
VT
"Mike C#" wrote:

> "VT" <VT@.discussions.microsoft.com> wrote in message
> news:E782A78E-692C-4945-B7B5-6BA764AE3A0A@.microsoft.com...
> Like David pointed out, you can use views, indexed views, and several other
> possible alternatives including third-party tools. Can you be more specific
> about your situation and ultimate goal?
>
>
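For VT's specific obstacle, the outer join, a common workaround is to split the flattening query into an inner-join part (the candidate for an indexed view in SQL Server, which disallows outer joins) plus an anti-join covering the unmatched rows. The sketch below (SQLite, with hypothetical parent/child tables) only demonstrates that the split is logically equivalent to the LEFT JOIN; whether the indexed-view route pays off depends on the workload:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE parent (pid INTEGER PRIMARY KEY, pname TEXT);
CREATE TABLE child  (cid INTEGER PRIMARY KEY, pid INTEGER, cname TEXT);
INSERT INTO parent VALUES (1, 'P1'), (2, 'P2'), (3, 'P3');
INSERT INTO child  VALUES (10, 1, 'C10'), (11, 1, 'C11'), (12, 2, 'C12');
""")

# The flat shape as described in the thread: a LEFT OUTER JOIN.
outer = list(cur.execute("""
    SELECT p.pid, p.pname, c.cname
    FROM parent p LEFT JOIN child c ON c.pid = p.pid
    ORDER BY p.pid, c.cname"""))

# Equivalent split: an inner join (indexable as a view in SQL Server)
# plus an anti-join for parents with no children.
split = list(cur.execute("""
    SELECT p.pid, p.pname, c.cname
    FROM parent p JOIN child c ON c.pid = p.pid
    UNION ALL
    SELECT p.pid, p.pname, NULL
    FROM parent p
    WHERE NOT EXISTS (SELECT 1 FROM child c WHERE c.pid = p.pid)
    ORDER BY 1, 3"""))

print(outer == split)  # the two row sets are identical
```

In SQL Server the inner-join SELECT would become the indexed view, while the anti-join branch can stay a plain view or be resolved at query time.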