Tuesday, March 27, 2012
Cluster question
If we cluster the two servers and set up SQL Server active/passive clustering, can we continue to use the 2nd node as an application server but as failover for SQL Server?
Yes, it is possible, but you would have to have enough resources to run all
the applications. BTW, I think you don't quite understand the
term active-active. That refers to a specific SQL 7.0 technology that
allowed each host node to run a separate instance. SQL 2000 has either
single-instance or multi-instance clustering, with the host nodes as complete
peers.
Geoff N. Hiten
Microsoft SQL Server MVP
Senior Database Administrator
Careerbuilder.com
I support the Professional Association for SQL Server
www.sqlpass.org
"bert" <anonymous@.discussions.microsoft.com> wrote in message
news:3C41330A-E708-40F3-B957-5AF11CC5B984@.microsoft.com...
> Our company is looking at purchasing 2 new servers. One will hold the
databases; the other will be purely an application server which, amongst other
things, will access the database. If we cluster the two servers and set up SQL
Server active/passive clustering, can we continue to use the 2nd node as an
application server but as failover for SQL Server?
Sunday, March 25, 2012
Cluster Mirroring
I've read a lot of the information about this topic in MSDN. My boss asked me to understand it and to get an example Cluster Mirroring application working. The problem is that I don't yet understand well what a cluster is and what the main idea behind it is.
If someone really understands it and can explain clearly what's behind it, I will be very grateful. Thanks a lot for the help. (Sorry for my English.)
Ok, basically a cluster is a two-headed monster. Chop one of its heads off and nothing will happen; it will still carry on chasing you.
In computer terms, the simplest cluster is a two-node cluster (the two-headed monster): two physical computers sharing the same name and IP address. If one of the computers making up the cluster fails (chop one head off), the cluster will carry on functioning (chasing you) on the other computer participating in the cluster.
The general idea behind it is to provide high-availability for your apps by eliminating a single point of failure.
|||
Thanks bobbins!
But I have another question: when you have a mirror database with a witness, you also provide high availability, because if something happens the mirror will continue chasing you. Why is the cluster so much better? Thanks a lot again
|||There are many discussions/articles on Clustering vs Mirroring vs Replication vs Log Shipping.
In short, I think clustering is still a better solution for high availability, PROVIDED that you do have a SAN.
We're using mirroring now until we can get a SAN in; then we'll move to clustering.
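For anyone weighing the two approaches, the mirroring setup itself is only a few statements; a minimal sketch with hypothetical host names and ports, assuming the mirror database has already been restored WITH NORECOVERY (run the endpoint creation on each instance):

CREATE ENDPOINT Mirroring
    STATE = STARTED
    AS TCP (LISTENER_PORT = 5022)
    FOR DATABASE_MIRRORING (ROLE = PARTNER);

-- On the mirror instance, point at the principal:
ALTER DATABASE MyDb SET PARTNER = 'TCP://principal.example.com:5022';
-- On the principal instance, point at the mirror:
ALTER DATABASE MyDb SET PARTNER = 'TCP://mirror.example.com:5022';
-- Optionally add a witness (its endpoint uses ROLE = WITNESS) for automatic failover:
ALTER DATABASE MyDb SET WITNESS = 'TCP://witness.example.com:5022';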
Cluster Log error message
I'm investigating an application outage issue on a 2-node W2K Adv Server
cluster. A Generic Application is running on the cluster in a remote site,
and connects to a UNIX-based Oracle db at my site. There are intermittent
MRxSMB 3019 Event IDs which I'm not too worried about. I just used Q138365
to adjust the autodisconnect value to -1.
But I'm finding this error message in the Cluster log:
Network Name <Network Name CLUSTER VIRTUAL NAME>: Failed to register DNS PTR
record X.X.X.X.in-addr.arpa. for host CLUSTER VIRTUAL NAME FQDN, status 9005
I can't find anything documented about this error, so I can't tell if this
is something to be concerned about. Is it nothing, a cluster service
problem, or a DNS registration problem?
Any thoughts would be greatly appreciated.
Thanks in advance.
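(For reference, the autodisconnect change described in Q138365 is typically applied with the command below; treat this as a sketch and verify against the KB article:)

net config server /autodisconnect:-1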
Hey, I had this error as well when I was setting up my cluster.

To fix it:
o Right-click the network connection for your heartbeat adapter, and then
click Properties.
o Click Internet Protocol (TCP/IP), and then click Properties.
o On the DNS tab, verify that no values are defined. Make sure that
"Register this connection's addresses in DNS" is cleared.
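Afterwards you can sanity-check the registration with a reverse lookup of the cluster IP (address hypothetical); once registration succeeds it should return the virtual name:

nslookup 192.168.1.10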
Hope this helps!!
"Chris" <Chris@.discussions.microsoft.com> wrote in message
news:932C28D9-F8C2-4F83-BE57-AA149AF27065@.microsoft.com...
> Hello to all,
> I'm investigating an application outage issue on a 2-node W2K Adv Server
> cluster. A Generic Application is running on the cluster in a remote
site,
> and connects to a UNIX-based Oracle db at my site. There are intermittent
> MRxSMB 3019 Event IDs which I'm not too worried about. I just used
Q138365
> to adjust the autodisconnect value to -1.
> But I'm finding this error message in the Cluster log:
> Network Name <Network Name CLUSTER VIRTUAL NAME>: Failed to register DNS
PTR
> record X.X.X.X.in-addr.arpa. for host CLUSTER VIRTUAL NAME FQDN, status
9005
> I can't find anything documented about this error, so I can't tell if this
> is something to be concerned about. Is it nothing, a cluster service
> problem, DNS registraion problem?
> Any thoughts would be greatly appreciated.
> Thanks in advance.
Cluster Issue
I have a Windows 2003 / SQL 2005 cluster consisting of two servers. On failover
from node2 to node1, the application user is unable to log in until SQL
Server is restarted. On failover all the resources come online, but the user
is unable to access the database, and if I look in SQL Server Management
Studio I am unable to see the properties of the DB. If I initiate a failure
of SQL Server in the cluster, I am still unable to access the DB, but if I
restart SQL Server in Management Studio the database comes online.
Also, when I do a failover to node2, it shows all resources are online, but
the application cannot access the DB, as if it were not there, though I can
access the Quorum and the shared DB drive.
Thanks in advance,
Bob Smith
Definitely a problem. Check the Application log on either node (Event
logging on a cluster is cluster-wide to each node) to see if SQL is
complaining about starting up.
Geoff N. Hiten
Senior SQL Infrastructure Consultant
Microsoft SQL Server MVP
"Bob Smith" <BobSmith@.discussions.microsoft.com> wrote in message
news:F586AE13-D88E-4893-A6A3-0465392C13F2@.microsoft.com...
> Hello,
> I have a Windows 2003 / SQL 2005 cluster consisting of two servers. On
> failover from node2 to node1, the application user is unable to log in until
> SQL Server is restarted. On failover all the resources come online, but the
> user is unable to access the database, and if I look in SQL Server Management
> Studio I am unable to see the properties of the DB. If I initiate a failure
> of SQL Server in the cluster, I am still unable to access the DB, but if I
> restart SQL Server in Management Studio the database comes online.
> Also, when I do a failover to node2, it shows all resources are online, but
> the application cannot access the DB, as if it were not there, though I can
> access the Quorum and the shared DB drive.
> Thanks in advance,
> Bob Smith
|||Hi Geoff,
Thanks for the reply. The only thing I can see is logwriter errors, which I
googled and found more questions than answers. Let me ask this: can the
Quorum and the physical disk be in different groups, or should they be in the
same one? Mine is set up so the Quorum is in the Cluster group and the physical
disk is in Group 0. By the way, I see these errors on both nodes. What is best
practice for the cluster group, and what resources should exist in it: Cluster
IP, Cluster Name, MSDTC?, Quorum?
Thanks again,
Bob Smith
"Geoff N. Hiten" wrote:
> Definitely a problem. Check the Application log on either node (Event
> logging on a cluster is cluster-wide to each node) to see if SQL is
> complaining about starting up.
> --
> Geoff N. Hiten
> Senior SQL Infrastructure Consultant
> Microsoft SQL Server MVP
>
> "Bob Smith" <BobSmith@.discussions.microsoft.com> wrote in message
> news:F586AE13-D88E-4893-A6A3-0465392C13F2@.microsoft.com...
>
|||Quorum must be in the cluster group. It is the reason the group exists.
The Quorum disk prevents split-brain problems. The Cluster group should
also have an IP Address and a Network Name Resource.
You can put the MSDTC resource in there too, but it can cause problems on
some very high volume systems.
All SQL resources should be in a different group. Note that disks are
physical resources. Logical OS partitions of the same disk cannot be split
into multiple groups.
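For reference, a quick way to review how groups and resources are currently laid out is the cluster.exe command line that ships with Windows Server 2003 (the SQL resource name below is hypothetical):

cluster group
cluster resource
cluster resource "SQL Server (MSSQLSERVER)" /prop

The first two commands list every group and resource with its state and owner node; /prop dumps a resource's properties.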
Geoff N. Hiten
Senior SQL Infrastructure Consultant
Microsoft SQL Server MVP
"Bob Smith" <BobSmith@.discussions.microsoft.com> wrote in message
news:1BD77D34-DC9D-4CB6-8325-9D32F50C9B58@.microsoft.com...
> Hi Geoff,
> Thanks for the reply. The only thing I can see is logwriter errors, which I
> googled and found more questions than answers. Let me ask this: can the
> Quorum and the physical disk be in different groups, or should they be in
> the same one? Mine is set up so the Quorum is in the Cluster group and the
> physical disk is in Group 0. By the way, I see these errors on both nodes.
> What is best practice for the cluster group, and what resources should
> exist in it: Cluster IP, Cluster Name, MSDTC?, Quorum?
> Thanks again,
> Bob Smith
> "Geoff N. Hiten" wrote:
|||Geoff,
Thanks, that is how it is set up: the Cluster group holds Cluster IP, Cluster
Name, Disk Q and MSDTC, and all other resources are in another group. I am still
confused as to why SQL says it is online but the user/app cannot connect,
yet by restarting SQL Server in Management Studio it comes online.
Thanks again,
Bob Smith
"Geoff N. Hiten" <SQLCraftsman@.gmail.com> wrote in message
news:%23VBACR5VIHA.4140@.TK2MSFTNGP04.phx.gbl...
> Quorum must be in the cluster group. It is the reason the group exists.
> The Quorum disk prevents split-brain problems. The Cluster group should
> also have an IP Address and a Network Name Resource.
> You can put the MSDTC resource in there too, but it can cause problems on
> some very high volume systems.
> All SQL resources should be in a different group. Note that disks are
> physical resources. Logical OS partitions of the same disk cannot be
> split into multiple groups.
> --
> Geoff N. Hiten
> Senior SQL Infrastructure Consultant
> Microsoft SQL Server MVP
>
>
> "Bob Smith" <BobSmith@.discussions.microsoft.com> wrote in message
> news:1BD77D34-DC9D-4CB6-8325-9D32F50C9B58@.microsoft.com...
>
|||Maybe an IP address conflict or a dependency error. Again, the application
log should say something.
Geoff N. Hiten
Senior SQL Infrastructure Consultant
Microsoft SQL Server MVP
"Bob Smith" <bob@.neconsulting.net> wrote in message
news:4E7A31CB-2900-4ED3-A7A9-1B0800B2128C@.microsoft.com...
> Geoff,
> Thanks, that is how it is set up: the Cluster group holds Cluster IP, Cluster
> Name, Disk Q and MSDTC, and all other resources are in another group. I am
> still confused as to why SQL says it is online but the user/app cannot
> connect, yet by restarting SQL Server in Management Studio it comes online.
> Thanks again,
> Bob Smith
> "Geoff N. Hiten" <SQLCraftsman@.gmail.com> wrote in message
> news:%23VBACR5VIHA.4140@.TK2MSFTNGP04.phx.gbl...
>
Thursday, March 22, 2012
Clustered Indexes / Non-Clustered Indexes
One of the ways to enhance database performance is to make sure that your
clustered and non-clustered indexes are using the correct fields.
The clustered indexes were not set on the primary key for several tables.
What is the best way to test dropping the non-clustered index (primary key) and
dropping the clustered index (non-primary key), then creating a clustered index
on the primary key and a non-clustered index on the non-primary key?
What is the easiest way to test the performance increase from dropping and
creating indexes that were set up on the incorrect fields?
Thank You,
Yes, you would need to drop/recreate. Profiler would be the easiest way to
look at the speed improvements.
On a side note, there may be times when you don't want clustering on the PK
(usually on a reporting server). For example, you may want to have the
clustering on a date field, as most reports are off of date ranges.
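For illustration, a minimal sketch of that layout (table and column names hypothetical): the primary key declared NONCLUSTERED, with the clustered index on the date column.

CREATE TABLE dbo.SalesReport (
    SaleID INT IDENTITY NOT NULL
        CONSTRAINT PK_SalesReport PRIMARY KEY NONCLUSTERED,
    SaleDate DATETIME NOT NULL,
    Amount MONEY NOT NULL
);
-- Cluster on the date so report-style range scans read contiguous pages.
CREATE CLUSTERED INDEX IX_SalesReport_SaleDate ON dbo.SalesReport (SaleDate);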
TIA,
ChrisR
"Joe K." wrote:
> I have an application whose database performance I'm trying to enhance.
> One of the ways to enhance database performance is to make sure that your
> clustered and non-clustered indexes are using the correct fields.
> The clustered indexes were not set on the primary key for several tables.
> What is the best way to test dropping the non-clustered index (primary key)
> and dropping the clustered index (non-primary key), then creating a clustered
> index on the primary key and a non-clustered index on the non-primary key?
> What is the easiest way to test the performance increase from dropping and
> creating indexes that were set up on the incorrect fields?
> Thank You,
>
>
>
Cluster Failover
We have a 3rd party vendor application that runs on IIS and connects to a
back-end SQL cluster. Sometimes a user will try to run a commission report
and the SQL server takes a big hit on the CPUs (dual 2.0 GHz Xeons), and I
will get an error in MOM 2005 that the "Server Performance Thresholds
SQLSERVR Process > 90% CPU for 15 minutes". Of course the end user will try
to run it two or three more times, taking up more resources.
If I do a failover to the other SQL server in the cluster, the CPUs will
drop down to their normal ranges and everything works fine.
My question is: during the 10-15 seconds it takes to perform the failover,
what happens to any data that is trying to write to the database?
Is it lost? And can this cause corruption in the database itself?
Thanks!
Failovers don't cause corruption. Rather, any active transactions are
rolled back when the backup node takes over from the primary node.
Tom
Thomas A. Moreau, BSc, PhD, MCSE, MCDBA, MCITP, MCTS
SQL Server MVP
Toronto, ON Canada
"Scopus69" <Scopus69@.nospam.postalias> wrote in message
news:DF8080ED-0DF2-42C7-B1DA-4D9375D0B2EA@.microsoft.com...
We have a 3rd party vendor application that runs on IIS and connects to a
back-end SQL cluster. Sometimes a user will try to run a commission report
and the SQL server takes a big hit on the CPUs (dual 2.0 GHz Xeons), and I
will get an error in MOM 2005 that the "Server Performance Thresholds
SQLSERVR Process > 90% CPU for 15 minutes". Of course the end user will try
to run it two or three more times, taking up more resources.
If I do a failover to the other SQL server in the cluster, the CPUs will
drop down to their normal ranges and everything works fine.
My question is: during the 10-15 seconds it takes to perform the failover,
what happens to any data that is trying to write to the database?
Is it lost? And can this cause corruption in the database itself?
Thanks!
|||When you say "rolled back", are the transactions basically held in cache
until the backup server has taken complete control? Is that correct?
One user received the following error during failover in their web session:
[DBNETLIB][ConnectionRead (recv()).]General network error. Check your
network documentation. in Microsoft OLE DB Provider for SQL Server
Is this a concern, or does it apply only to the period before the failover is complete?
Thanks!
"Tom Moreau" wrote:
> Failovers don't cause corruption. Rather, any active transactions are
> rolled back when the backup node takes over from the primary node.
> --
> Tom
> ----
> Thomas A. Moreau, BSc, PhD, MCSE, MCDBA, MCITP, MCTS
> SQL Server MVP
> Toronto, ON Canada
> ..
> "Scopus69" <Scopus69@.nospam.postalias> wrote in message
> news:DF8080ED-0DF2-42C7-B1DA-4D9375D0B2EA@.microsoft.com...
> We have a 3rd party vendor application that runs on IIS and connects to a
> back-end SQL cluster. Sometimes a user will try to run a commission report
> and the SQL server takes a big hit on the CPUs (dual 2.0 GHz Xeons), and I
> will get an error in MOM 2005 that the "Server Performance Thresholds
> SQLSERVR Process > 90% CPU for 15 minutes". Of course the end user will try
> to run it two or three more times, taking up more resources.
> If I do a failover to the other SQL server in the cluster, the CPUs will
> drop down to their normal ranges and everything works fine.
> My question is: during the 10-15 seconds it takes to perform the failover,
> what happens to any data that is trying to write to the database?
> Is it lost? And can this cause corruption in the database itself?
> Thanks!
>
>
|||Not exactly. Anything in cache evaporates. Transactions are written to the
transaction log. If a COMMIT record is not written, then when SQL Server
comes up, all of the work done thus far in that transaction is backed out.
This is true whether you are using a cluster or not.
The network errors go away once SQL Server has come back up again. Clean
apps will try to reconnect.
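(A minimal sketch of that kind of reconnect logic in C#, assuming System.Data.SqlClient; the retry count and delay are arbitrary:)

// Retry opening a connection while the failover completes.
static SqlConnection ConnectWithRetry(string connectionString)
{
    for (int attempt = 1; ; attempt++)
    {
        try
        {
            SqlConnection conn = new SqlConnection(connectionString);
            conn.Open();
            return conn;
        }
        catch (SqlException)
        {
            if (attempt == 5) throw;                       // give up after 5 tries
            System.Threading.Thread.Sleep(2000 * attempt); // back off while the virtual server moves
        }
    }
}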
Tom
Thomas A. Moreau, BSc, PhD, MCSE, MCDBA
SQL Server MVP
Toronto, ON Canada
"Scopus69" <Scopus69@.nospam.postalias> wrote in message
news:49B44171-B185-463B-833E-E4190AD923A9@.microsoft.com...
When you say "rolled back", are the transactions basically held in cache
until the backup server has taken complete control? Is that correct?
One user received the following error during failover in their web session:
[DBNETLIB][ConnectionRead (recv()).]General network error. Check your
network documentation. in Microsoft OLE DB Provider for SQL Server
Is this a concern, or does it apply only to the period before the failover is complete?
Thanks!
"Tom Moreau" wrote:
> Failovers don't cause corruption. Rather, any active transactions are
> rolled back when the backup node takes over from the primary node.
> --
> Tom
> ----
> Thomas A. Moreau, BSc, PhD, MCSE, MCDBA, MCITP, MCTS
> SQL Server MVP
> Toronto, ON Canada
> ..
> "Scopus69" <Scopus69@.nospam.postalias> wrote in message
> news:DF8080ED-0DF2-42C7-B1DA-4D9375D0B2EA@.microsoft.com...
> We have a 3rd party vendor application that runs on IIS and connects to a
> back-end SQL cluster. Sometimes a user will try to run a commission report
> and the SQL server takes a big hit on the CPUs (dual 2.0 GHz Xeons), and I
> will get an error in MOM 2005 that the "Server Performance Thresholds
> SQLSERVR Process > 90% CPU for 15 minutes". Of course the end user will try
> to run it two or three more times, taking up more resources.
> If I do a failover to the other SQL server in the cluster, the CPUs will
> drop down to their normal ranges and everything works fine.
> My question is: during the 10-15 seconds it takes to perform the failover,
> what happens to any data that is trying to write to the database?
> Is it lost? And can this cause corruption in the database itself?
> Thanks!
>
>
Sunday, March 11, 2012
CLR SP Execution
I am currently working on an application which calls some CLR stored
procedures in SQL Server 2005. The first time one of the CLR stored
procedures (it doesn't matter which one) is called it takes a lot longer to
execute. After the first CLR stored procedure has been executed the CLR
stored procedures all execute in a reasonable amount of time.
I have been unable to find any articles explaining the process that occurs
when a CLR stored procedure is called. I am currently assuming the initial
delay is related to the CLR loading the DLL? Is the CLR loaded with SQL
Server or only when needed?
If anyone can point me in the direction of an article to aid my
understanding or can explain the process I would be grateful.
Thanks in advance.
Alistair Harrison
How long of a delay do you experience? Do you get the delay with a trivial
proc like the one below? I get a sub-second response when I execute this
after a fresh SQL Server restart.
public partial class StoredProcedures
{
    [Microsoft.SqlServer.Server.SqlProcedure]
    public static void MyProc()
    {
        Microsoft.SqlServer.Server.SqlContext.Pipe.Send("Test message");
    }
};
I'm no expert on SQL CLR internals but I can't think of any overhead beyond
the usual object-specific overhead.
Hope this helps.
Dan Guzman
SQL Server MVP
"Alistair Harrison" <AlistairHarrison@.discussions.microsoft.com> wrote in
message news:71A086F6-487F-46FD-B501-A7844F6936E5@.microsoft.com...
> Hello,
> I am currently working on an application which calls some CLR stored
> procedures in SQL Server 2005. The first time one of the CLR stored
> procedures (it doesn't matter which one) is called it takes a lot longer
> to
> execute. After the first CLR stored procedure has been executed the CLR
> stored procedures all execute in a reasonable amount of time.
> I have been unable to find any articles explaining the process that occurs
> when a CLR stored procedure is called. I am currently assuming the initial
> delay is related to the CLR loading the DLL? Is the CLR loaded with SQL
> Server or only when needed?
> If anyone can point me in the direction of an article to aid my
> understanding or can explain the process I would be grateful.
> Thanks in advance.
> Alistair Harrison
|||Dan,
Thanks for your response.
The following are examples of execution times recorded by the Client
Statistics when the query is executed:
Initial execution of my original sp: 1600 to 1700 ms
Subsequent execution: <100 ms
Initial execution of your test sp: 700 to 800 ms
Subsequent execution: <50 ms
Another point I noticed was that if the test sp is executed first, followed
by my original sp, following a restart the original sp takes around 1200 ms to
execute.
execute. This is less than the 1600 to 1700 ms it seems to take when
executing the sp first but is still much greater than the subsequent
execution time.
It might be worth mentioning that my original clr sp executes a couple of
tsql sps using a context connection and then returns some xml as an output
parameter.
Thanks,
Alistair
"Dan Guzman" wrote:
> How long of a delay do you experience? Do you get the delay with a trivial
> proc like the one below? I get a sub-second response when I execute this
> after a fresh SQL Server restart.
> public partial class StoredProcedures
> {
> [Microsoft.SqlServer.Server.SqlProcedure]
> public static void MyProc()
> {
> Microsoft.SqlServer.Server.SqlContext.Pipe.Send("Test message");
> }
> };
> I'm no expert on SQL CLR internals but I can't think of any overhead beyond
> the usual object-specific overhead.
> --
> Hope this helps.
> Dan Guzman
> SQL Server MVP
> "Alistair Harrison" <AlistairHarrison@.discussions.microsoft.com> wrote in
> message news:71A086F6-487F-46FD-B501-A7844F6936E5@.microsoft.com...
>
|||It's likely that subsequent executions are faster simply as a result of data
caching. I'd expect execution times to average about 1200ms if you run DBCC
DROPCLEANBUFFERS before each execution. For performance testing, I usually
execute DBCC DROPCLEANBUFFERS and DBCC FREEPROCCACHE before each test.
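A minimal harness along those lines (proc name hypothetical):

CHECKPOINT;             -- flush dirty pages so DROPCLEANBUFFERS leaves a truly cold cache
DBCC DROPCLEANBUFFERS;  -- cold data cache
DBCC FREEPROCCACHE;     -- cold plan cache
SET STATISTICS TIME ON;
EXEC dbo.MyClrProc;
SET STATISTICS TIME OFF;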
I don't think the sub-second delay the first time the proc is run is
anything to be concerned about.
Hope this helps.
Dan Guzman
SQL Server MVP
"Alistair Harrison" <AlistairHarrison@.discussions.microsoft.com> wrote in
message news:6A2A987D-E008-440F-867E-0DE4D6A3AA85@.microsoft.com...
> Dan,
> Thanks for your response.
> The following are examples of execution times recorded by the Client
> Statistics when the query is executed:
> Initial execution of my original sp: 1600 to 1700 ms
> Subsequent execution: <100 ms
> Initial execution of your test sp: 700 to 800 ms
> Subsequent execution: <50 ms
> Another point I noticed was that if the test sp is executed first followed
> by my original sp following a restart the original sp takes around 1200 ms
> to
> execute. This is less than the 1600 to 1700 ms it seems to take when
> executing the sp first but is still much greater than the subsequent
> execution time.
> It might be worth mentioning that my original clr sp executes a couple of
> tsql sps using a context connection and then returns some xml as an output
> parameter.
> Thanks,
> Alistair
>
> "Dan Guzman" wrote:
>|||Dan
Thanks again for your response.
I have retested the two stored procedures running DROPCLEANBUFFERS and
FREEPROCCACHE before each execution. The execution times increase slightly
but are still consistently below 300ms.
I suppose that, as the longer execution only seems to occur following a restart,
it is not too much of a worry, just intriguing as to why it occurs.
Alistair
"Dan Guzman" wrote:
> It's likely that subsequent executions are faster simply as a result of data
> caching. I'd expect execution times to average about 1200ms if you run DBCC
> DROPCLEANBUFFERS before each execution. For performance testing, I usually
> execute DBCC DROPCLEANBUFFERS and DBCC FREEPROCCACHE before each test.
> I don't think the sub-second delay the first time the proc is run is
> anything to be concerned about.
> --
> Hope this helps.
> Dan Guzman
> SQL Server MVP
> "Alistair Harrison" <AlistairHarrison@.discussions.microsoft.com> wrote in
> message news:6A2A987D-E008-440F-867E-0DE4D6A3AA85@.microsoft.com...
>
>|||Hello Alistair,
It might be worth remembering that the Assembly (and its dependent assemblies)
still needs to be fetched, verified, JITted and the app domain created before
the CLR code runs in many cases. That could be why you have a slower initial
startup.
Thank you,
Kent Tegels
DevelopMentor
http://staff.develop.com/ktegels/
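If you want to watch the app-domain creation Kent describes, SQL Server 2005 exposes it through a DMV; a minimal check (run it before and after the first CLR call; the app domain for your database appears only once an assembly has loaded):

SELECT appdomain_name, state
FROM sys.dm_clr_appdomains;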
Thursday, March 8, 2012
clr integration....
Sir, here are the functions in my test application...
<Microsoft.SqlServer.Server.SqlFunction()> _
Public Shared Function WeekInYear(ByVal dt As DateTime) As Integer
    Return DatePart("ww", dt, FirstDayOfWeek.Monday, _
                    FirstWeekOfYear.FirstFourDays)
End Function

<Microsoft.SqlServer.Server.SqlFunction()> _
Public Shared Function HttpPost(ByVal uri As String, ByVal parameters As String) As String
    Try
        Dim urlRegEx As New System.Text.RegularExpressions.Regex("http://192.168.1.23:81/.*")
        Dim p As New System.Net.WebPermission(System.Net.NetworkAccess.Connect, urlRegEx)
        p.Assert()
        Dim req As System.Net.WebRequest = System.Net.WebRequest.Create(uri)
        req.ContentType = "application/x-www-form-urlencoded;charset=utf-8"
        req.Method = "POST"
        Try
            Dim bytes() As Byte = System.Text.Encoding.UTF8.GetBytes(parameters)
            req.ContentLength = bytes.Length
            Dim os As System.IO.Stream = req.GetRequestStream
            os.Write(bytes, 0, bytes.Length)
            'Push it out there
            os.Close()
        Catch ex As Exception
            Throw New Exception(ex.Message)
        End Try
        Dim resp As System.Net.WebResponse = req.GetResponse
        If resp Is Nothing Then
            Return Nothing
        End If
        Return "pass completed"
    Catch ex As Exception
        Return ex.Message
    Finally
    End Try
End Function
The first function works fine; the second function doesn't. It seems that whenever I use another namespace it requires a security permission.
Can you explain why I received this error at runtime?
Request for the permission of type 'System.Security.Permissions.SecurityPermission, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089' failed.
You have to give the assembly the appropriate permissions in SQL Server. By default only a few assemblies / namespaces are *trusted*. Try to give the assembly more access by registering it as UNSAFE or EXTERNAL_ACCESS.
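A minimal sketch of that registration (assembly name and path hypothetical; note that EXTERNAL_ACCESS also requires the database to be marked TRUSTWORTHY or the assembly to be signed with an appropriate key):

ALTER DATABASE MyDb SET TRUSTWORTHY ON;
CREATE ASSEMBLY HttpPostAsm
    FROM 'C:\assemblies\HttpPostAsm.dll'
    WITH PERMISSION_SET = EXTERNAL_ACCESS;
-- or, for an assembly that is already registered:
ALTER ASSEMBLY HttpPostAsm WITH PERMISSION_SET = EXTERNAL_ACCESS;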
HTH, Jens Suessmeyer.
http://www.sqlserver2005.de
|||As Jens says, the most likely issue is that you haven't created the assembly with a permission set that allows you to do web "stuff". For that to work you need to create the assembly with a permission set of EXTERNAL_ACCESS.
Niels
CLR Configuration option problem
I am developing an application using .NET Framework 2.0 and SQL Server
2005. Sometimes the following message is displayed:
Execution of user code in the .NET Framework is disabled. Enable "clr enabled"
configuration option.
How do I rectify the error?
Regards
T.A.Krishna
Hi
"Krishna" wrote:
> Hai,
> I am developing an application using .NET Framework 2.0 and SQL Server
> 2005. Sometimes the following message is displayed:
> Execution of user code in the .NET Framework is disabled. Enable "clr
> enabled" configuration option.
>
This error normally refers to the 'CLR Integration' setting, which you can
set by using the Surface Area Configuration (SAC) tool. If you choose to set
the option by feature, then under Database Engine you can see 'CLR
Integration', and the Enable CLR Integration check box should be checked.
> How do I rectify the error?
> Regards
> T.A.Krishna
John
|||Krishna wrote:
> Hai,
> I am developing an application using .NET Framework 2.0 and SQL Server
> 2005. Sometimes the following message is displayed:
> Execution of user code in the .NET Framework is disabled. Enable "clr
> enabled" configuration option.
CLR integration is disabled by default.
You must change the configuration option:
EXEC sp_configure 'clr enabled', 1;
RECONFIGURE;
Best regards,
Marcin Guzowski
http://guzowski.info
Wednesday, March 7, 2012
closing SQL Connection
Hello to all,
I am finding myself with a confusing problem.
Every day the log file of my application increases by 10 GB.
Every time, I find more than 100 connections open in SQL Server 2005 under my database.
I am using the SQLHelper class by Microsoft. I am not putting my code in a try-catch block. An example
of one of my frequently used functions is as follows:
protected Int64 GetMemberID()
{
    String SqlSelect = "SELECT * FROM MemberMaster WHERE MemberUserName= '" +
        Session["UserName"].ToString() + "'";
    SqlDataReader dr1 =
        SqlHelper.ExecuteReader(ConfigurationManager.ConnectionStrings["CowcallConnectionString2"].ToString(),
            CommandType.Text, SqlSelect);
    while (dr1.Read())
    {
        CurrentMemberID = Int64.Parse(dr1["MemberID"].ToString().Trim());
    }
    return CurrentMemberID;
}
Well, I suspect I am opening a connection and not closing it. Moreover, I do not know how to close the connection in the above code when I am using the SQLHelper class.
Please give me your suggestions regarding my code, even if they do not solve the above two problems.
Hi munishbhatia,
use the CommandBehavior to close the connection after use, like: SqlHelper.ExecuteReader(CommandBehavior.CloseConnection);
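A sketch of the OP's function along those lines, assuming the Data Access Application Block's SqlHelper (which ties the connection's lifetime to the reader when called with a connection string); a parameter also replaces the string concatenation, which was open to SQL injection:

protected Int64 GetMemberID()
{
    String sql = "SELECT MemberID FROM MemberMaster WHERE MemberUserName = @user";
    using (SqlDataReader dr = SqlHelper.ExecuteReader(
        ConfigurationManager.ConnectionStrings["CowcallConnectionString2"].ToString(),
        CommandType.Text, sql,
        new SqlParameter("@user", Session["UserName"].ToString())))
    {
        if (dr.Read())
            CurrentMemberID = Int64.Parse(dr["MemberID"].ToString().Trim());
    } // disposing the reader closes the underlying connection
    return CurrentMemberID;
}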
Closing all open connections/sessions
Is there a way to close all open connections/sessions to my db at a certain
time? I have an application that uses a web browser to access my database. At
the end of the day I will have loads of open connections/sessions to my db.
My plan was to run a job or script in the evening to close all
connections/sessions. I have been doing it by going to EM and killing all the
open connections/sessions one by one, but it is tiring, as at times I have 700
open connections/sessions and they keep accumulating every day. Thank you in
advance.
You should address the problem in the application itself and have it close
the unused connections, rather than kill them at the end of the day on the
server.
This posting is provided "AS IS" with no warranties, and confers no rights.
HTH,
~ Remus Rusanu
SQL Service Broker
http://msdn2.microsoft.com/en-us/library/ms166043(en-US,SQL.90).aspx
"MittyKom" <MittyKom@.discussions.microsoft.com> wrote in message
news:DAF29302-3EE5-4E92-AE0F-CA59A1049BF6@.microsoft.com...
> Hi all
> Is there a way to close all open connections/sessions to my db at a certain
> time? I have an application that uses a web browser to access my database. At
> the end of the day i will have loads of open connections/sessions to my db.
> My plan was to run a job or script in the evening to close all
> connections/sessions. I have been doing it by going to EM and killing all the
> open connections/sessions one by one, but it is tiring, as at times i have 700
> open connections/sessions and they keep accumulating every day. Thank you in
> advance.
|||Try setting the db to single_user, then back to multi_user. Search BOL for details.
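A sketch of that approach (database name hypothetical); WITH ROLLBACK IMMEDIATE disconnects everyone and rolls back their open transactions:

ALTER DATABASE MyDb SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
ALTER DATABASE MyDb SET MULTI_USER;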
James
"MittyKom" wrote:
> Hi all
> Is there a way to close all open connections/sessions to my db at a certain
> time? I have an application that uses a web browser to access my database. At
> the end of the day i will have loads of open connections/sessions to my db.
> My plan was to run a job or script in the evening to close all
> connections/sessions. I have been doing it by going to EM and killing all the
> open connections/sessions one by one, but it is tiring, as at times i have 700
> open connections/sessions and they keep accumulating every day. Thank you in
> advance.
Closing a conversation later on and Microsoft Service Broker Interface (sample)
In a messaging application, a client sends a message to the server. The server uses the Service Broker Interface and is CLR based. On the server, the message has a life cycle, going through multiple transitions until it is final (Pending, In Transit, Delivered, TimedOut, Undeliverable). As the server receives the message, it sends back to the client a reply status message of In Transit. It does so using the same message's conversation. It also saves, along with the message, its conversation handle. The dialog is not closed. After a while, the server may decide that the message has reached a final state (Delivered or TimedOut). I would like at that time to send a reply status message (Final State) back to the client and also close the conversation. The conversation handle has been saved by both client and server. Having the conversation handle available, how do I get back to the client later on?
Thanks,
I don't understand what you mean by 'get back to the client later on'. Do you want to send a message using the conversation handle which was saved earlier? That's simple: you only have to execute the SEND statement with the saved conversation handle (or call Conversation.Send(...) in the ServiceBrokerInterface sample). Note that you cannot send after the conversation has been closed at your endpoint.|||Yes, indeed, I would like to send a message using the conversation handle that was saved earlier. This will be done by a different thread. All I have saved is the conversation handle, and I am thinking of creating a Conversation object based on the handle only. Can it be done?
|||Yes, indeed, I would like to send a message using the conversation handle that was saved earlier. This will be done by a different thread. All I have saved is the conversation handle and I am thinking of creating a Conversation object based on the handle only. Can it be done?
Thanks,
|||I'll have to look it up to see whether the Conversation class has a public constructor. If not, the ServiceBrokerInterface library is free for you to modify :-)|||That's right :). I am thinking about it this way: (new Service(server service parameters)).GetConversation(mySavedConversationHandle). Thanks for following up.
Saturday, February 25, 2012
Clone a set of rows from two related tables to the same two related tables
I want to implement a "cloning" feature in my application and this
involves cloning related data too.
Having this structure (simplified but representative of my problem)
CREATE TABLE Enterprise (enteid INTEGER IDENTITY PRIMARY KEY, entename
VARCHAR(30) NOT NULL)
CREATE TABLE Department (deptid INTEGER IDENTITY PRIMARY KEY, deptname
VARCHAR(30) NOT NULL, enteid INTEGER NOT NULL REFERENCES Enterprise
(enteid))
CREATE TABLE Employee (employeeid INTEGER IDENTITY PRIMARY KEY, ssn
CHAR(10) NOT NULL, employeename VARCHAR(30) NOT NULL, deptid INTEGER
NOT NULL REFERENCES Department (deptid))
I want to:
Clone a complete Enterprise, having @.enteid as a parameter (the id of
the enterprise to clone ) :
1 - Copy one Enterprise row, obtaining a @.new_enteid
2 - Copy all Department rows associated to @.enteid (@.deptid's),
obtaining @.new_deptid's and associating these new rows to @.new_enteid
3 - Copy all Employee rows related to all @.deptid's and associating
these new rows to the corresponding @.new_deptid's
Is it possible to do this in one SQL statement?
Best Regards
Fabio Cavassini
http://fabioon.blogspot.com/
Fabio Cavassini wrote:
> I want to implement a "cloning" feature in my application and this
> involves cloning related data too.
> Having this structure (simplified but representative of my problem)
> CREATE TABLE Enterprise (enteid INTEGER IDENTITY PRIMARY KEY, entename
> VARCHAR(30) NOT NULL)
> CREATE TABLE Department (deptid INTEGER IDENTITY PRIMARY KEY, deptname
> VARCHAR(30) NOT NULL, enteid INTEGER NOT NULL REFERENCES Enterprise
> (enteid))
> CREATE TABLE Employee (employeeid INTEGER IDENTITY PRIMARY KEY, ssn
> CHAR(10) NOT NULL, employeename VARCHAR(30) NOT NULL, deptid INTEGER
> NOT NULL REFERENCES Department (deptid))
> I want to:
> Clone a complete Enterprise, having @.enteid as a parameter (the id of
> the enterprise to clone ) :
> 1 - Copy one Enterprise row, obtaining a @.new_enteid
> 2 - Copy all Department rows associated to @.enteid (@.deptid's),
> obtaining @.new_deptid's and associating these new rows to @.new_enteid
> 3 - Copy all Employee rows related to all @.deptid's and associating
> these new rows to the corresponding @.new_deptid's
> Is it possible to do this in one SQL statement?
> Best Regards
> Fabio Cavassini
> http://fabioon.blogspot.com/
Why would you ever want to duplicate data in a table? Don't you have
any other keys? Take two enterprises of the same name for example - how
will you tell them apart? Duplicating the data in the Employee table
would violate Second Normal Form.
The right answer is to design a better normalized data model. The wrong
answer is to wreck data integrity even further by using the
SCOPE_IDENTITY() function, which will allow you to fill your tables
with redundant data in exactly the way you have proposed.
David Portas, SQL Server MVP
Whenever possible please post enough code to reproduce your problem.
Including CREATE TABLE and INSERT statements usually helps.
State what version of SQL Server you are using and specify the content
of any error messages.
SQL Server Books Online:
http://msdn2.microsoft.com/library/ms130214(en-US,SQL.90).aspx
--
|||The example is only representative of the problem; in fact I extracted
the data model from an old post that you posted ;)
(http://groups.google.com/group/micr...618a7b00b?hl=en)
The real data model is the following:
My enterprise works with Service_Orders; these service orders are
populated with:
Sell_Articles: This is actually what my enterprise sells to its
customers
and
Buy_Articles: This is what my enterprise buys from its providers to get
the Sell Articles
This is the real data model:
CREATE TABLE OrderService (osid INTEGER IDENTITY PRIMARY KEY)
CREATE TABLE Sell_Article_Type (satid INTEGER IDENTITY PRIMARY KEY,
satname VARCHAR(30) NOT NULL)
CREATE TABLE Buy_Article_Type (batid INTEGER IDENTITY PRIMARY KEY,
batname VARCHAR(30) NOT NULL)
CREATE TABLE Sell_Article (said INTEGER IDENTITY PRIMARY KEY, satid
INTEGER NOT NULL REFERENCES Sell_Article_Type (satid), osid
INTEGER NOT NULL REFERENCES OrderService (osid), amount DECIMAL NOT NULL)
CREATE TABLE Buy_Article (baid INTEGER IDENTITY PRIMARY KEY, batid
INTEGER NOT NULL REFERENCES Buy_Article_Type (batid), said
INTEGER NOT NULL REFERENCES Sell_Article (said), amount DECIMAL NOT NULL)
Now, it's very common that most clients require the same (or very
similar) ServiceOrders, so that's why I need to implement a feature to
"clone" ServiceOrders. This clone will be used as a template for the
user to modify if needed.
What I need is the SQL statement that allows me to "clone" a service
order including all its relationships: Sell_Articles and Buy_Articles.
Hope you can help me
Regards
Fabio
|||Fabio Cavassini wrote:
> The example is only representative of the problem, in fact I extracted
> the data model from an old post that you posted ;)
> (http://groups.google.com/group/micr...618a7b00b?hl=en)
>
You did not. You changed it by taking out the alternate keys which
makes the model useless and your requirement highly suspect.
> The real data model is the following:
> My enterprise works with Service_Orders; these service orders are
> populated with:
> Sell_Articles: This is actually what my enterprise sells to its
> customers
> and
> Buy_Articles: This is what my enterprise buys from its providers to get
> the Sell Articles
> This is the real data model:
>
Again all the keys are missing. IDENTITY should not be the only key of
any table; that's what UNIQUE constraints are for. You need to fix this
first.
> Now, it's very common that most clients require the same (or very
> similar) ServiceOrders, so that's why I need to implement a feature to
> "clone" ServiceOrders. This clone will be used as a template for the
> user to modify if needed.
>
A database designer is supposed to eliminate redundancy - not design it
in. With proper design the idea of cloning data in a table would be A)
impossible because it would violate integrity constraints, B)
unnecessary. If you don't understand database design principles such as
normalization then I recommend you study before you try to implement
this. Alternatively, add the missing candidate keys to your DDL and
post again so we can help you better.
David Portas, SQL Server MVP
Whenever possible please post enough code to reproduce your problem.
Including CREATE TABLE and INSERT statements usually helps.
State what version of SQL Server you are using and specify the content
of any error messages.
SQL Server Books Online:
http://msdn2.microsoft.com/library/ms130214(en-US,SQL.90).aspx
--
|||>Again all the keys are missing. IDENTITY should not be the only key of
>any table; that's what UNIQUE constraints are for. You need to fix this
>first.
Can you tell me how you would change the model to add these keys?
---
CREATE TABLE OrderService (osid INTEGER IDENTITY PRIMARY KEY)
CREATE TABLE Sell_Article_Type (satid INTEGER IDENTITY PRIMARY KEY,
satname VARCHAR(30) NOT NULL)
CREATE TABLE Buy_Article_Type (batid INTEGER IDENTITY PRIMARY KEY,
batname VARCHAR(30) NOT NULL)
CREATE TABLE Sell_Article (said INTEGER IDENTITY PRIMARY KEY, satid
INTEGER NOT NULL REFERENCES Sell_Article_Type (satid), osid
INTEGER NOT NULL REFERENCES OrderService (osid), amount DECIMAL NOT NULL)
CREATE TABLE Buy_Article (baid INTEGER IDENTITY PRIMARY KEY, batid
INTEGER NOT NULL REFERENCES Buy_Article_Type (batid), said
INTEGER NOT NULL REFERENCES Sell_Article (said), amount DECIMAL NOT NULL)
---
>A database designer is supposed to eliminate redundancy - not design it
>in. With proper design the idea of cloning data in a table would be A)
>impossible because it would violate integrity constraints, B)
>unnecessary. If you don't understand database design principles such as
>normalization then I recommend you study before you try to implement
>this. Alternatively, add the missing candidate keys to your DDL and
>post again so we can help you better.
Well, I know normalization principles, but I think you missed some of
my description of the problem, or maybe I should never have used the word
"clone".
The idea is not to make exact clones of a set of Order Services; the
idea is to use existing Order Services as "templates" for new ones,
but these will be different, not the same.
The idea is as simple as Word templates: when you use a Word
template, the new document you create includes the template content
(redundancy) and you can modify it as you like. IMHO sometimes
redundancy is necessary to keep a system simple
Regards
Fabio
|||Fabio Cavassini wrote:
> Can you tell me how you would change the model to add these keys?
> ---
>
For example:
CREATE TABLE Sell_Article_Type (satid INTEGER IDENTITY PRIMARY KEY,
satname VARCHAR(30) NOT NULL UNIQUE)
I don't know what the purpose of the OrderService table is. Apparently
it contains no real data. I could understand if it was just used to
generate a value used as a unique key in another table but that appears
not to be the case here.
I don't know about Sell_Article or Buy_Article either. Apparently they
represent different amounts for each type. At a guess the logical
candidate key in Sell_Article may be (satid, osid)? Possibly you have
some entities missing. It's very hard to guess the meaning just from a
list of column names.
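For illustration, those guesses expressed in DDL, assuming (satid, osid) and (batid, said) really are the business keys (only the original poster can confirm that):
---
-- declare the guessed business keys alongside the IDENTITY surrogates
ALTER TABLE Sell_Article ADD CONSTRAINT UQ_Sell_Article UNIQUE (satid, osid)
ALTER TABLE Buy_Article ADD CONSTRAINT UQ_Buy_Article UNIQUE (batid, said)
---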
The IDENTITY is called a *surrogate* key. That means it stands in for
some other candidate key in a foreign key reference. In your logical
model you should always be able to substitute the business key (aka
natural key or logical key) for the surrogate in the same table. To
enforce entity integrity you need to declare the business key as well
for the obvious reason that IDENTITY does not prevent duplicates in a
table - it just enumerates them.
Now do a little exercise with your logical model. Remove the IDENTITY
columns and try to normalize the schema to BCNF. The results may be
enlightening. :-)
> The idea is not to make exact clones of a set of Order Services; the
> idea is to use existing Order Services as "templates" for new ones,
> but these will be different, not the same.
Even if this was desirable from a logical point of view (it clearly
isn't) it would be very impractical since it would prevent you from
ever declaring any unique keys. The problem is you are trying to
implement GUI functionality in the database. I suggest you persist the
real data when the user has something real to store - not before.
Creating a redundant row and then updating it isn't likely to scale well.
> IMHO sometimes
> redundancy is necessary to keep a system simple
If you seriously think that redundancy simplifies databases then just
trawl the archives of this group for the thousands of posts from those
desperately trying to eliminate redundancy from bad data models. Better
read some books on relational design too. I recommend Date's "Database
in Depth" BTW.
David Portas, SQL Server MVP
Whenever possible please post enough code to reproduce your problem.
Including CREATE TABLE and INSERT statements usually helps.
State what version of SQL Server you are using and specify the content
of any error messages.
SQL Server Books Online:
http://msdn2.microsoft.com/library/ms130214(en-US,SQL.90).aspx
--
|||To answer the question in your original post, if you look at my replies
in the link you posted you'll see two solutions for inserting the rows
to the referencing table. The solution with the alternate keys uses
single set-based INSERT statements. The solution without the keys
requires a WHILE loop, several statements and row-at-a-time INSERTs.
I'll leave you to decide which is "simpler". I know which I prefer...
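For illustration, a minimal sketch of the set-based version against the simplified Enterprise/Department/Employee tables from the start of this thread, assuming entename and (enteid, deptname) have been declared UNIQUE (they are not in the DDL as posted); the ' (copy)' suffix is just an arbitrary way to keep the new entename distinct:
---
DECLARE @enteid INT, @new_enteid INT
SET @enteid = 1 -- the enterprise to clone

-- 1. Clone the Enterprise row
INSERT INTO Enterprise (entename)
SELECT entename + ' (copy)' FROM Enterprise WHERE enteid = @enteid
SET @new_enteid = SCOPE_IDENTITY()

-- 2. Clone its Department rows under the new enterprise
INSERT INTO Department (deptname, enteid)
SELECT deptname, @new_enteid FROM Department WHERE enteid = @enteid

-- 3. Clone the Employee rows, mapping old departments to new ones
--    through the alternate key (enteid, deptname)
INSERT INTO Employee (ssn, employeename, deptid)
SELECT e.ssn, e.employeename, dnew.deptid
FROM Employee AS e
JOIN Department AS dold
  ON dold.deptid = e.deptid AND dold.enteid = @enteid
JOIN Department AS dnew
  ON dnew.deptname = dold.deptname AND dnew.enteid = @new_enteid
---
Note that step 3 duplicates ssn values, which is exactly the integrity problem described above; it is one more reason to handle the "template" idea in the application rather than the database.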
David Portas, SQL Server MVP
Whenever possible please post enough code to reproduce your problem.
Including CREATE TABLE and INSERT statements usually helps.
State what version of SQL Server you are using and specify the content
of any error messages.
SQL Server Books Online:
http://msdn2.microsoft.com/library/ms130214(en-US,SQL.90).aspx
--
Friday, February 24, 2012
client/server performance problems
We have 1000 users on a client/server CRM application that we've written in
house. Recently we started experiencing performance problems at the top of
each hour; it lasts for about 30 seconds. I have run several traces using
these events:
lock: acquired,
lock: deadlock,
rpc: completed,
sql: stmtcompleted
Each trace runs from 10 till the hour until 10 after the hour. I've dumped
them out to tables so I could query them for the top 200 records by cpu,
duration, reads and writes. I then query them by each minute of starttime to
find the worst queries. I have fixed a lot of bad queries over the past
couple of weeks but still users are complaining. We are running SQL Server
2000 SP3a on a clustered Win2K SP4. I don't know the hardware config for the
server, but I'm told it has a lot of CPU and, I think, 8GB of memory. I
am not a DBA, I am a developer, but I can't understand why I'm not solving the
problem. The queries that I've fixed are running much faster, but still the
problem persists.
Am I not using the right events in SQL Profiler? Is it possible that we're
really having some sort of network issue at that time? I would use Perfmon
but that doesn't tell me much.
Any help is appreciated,
Doug
My guess is you have a scheduled job that runs on the hour each hour. It
could also be related to checkpoints, but they rarely follow such an exact
cycle. You can see these with the SQL Server counter Checkpoint Pages/sec.
You should monitor for CPU, disks, etc. as well. See if these help:
http://www.microsoft.com/sql/techinf...perftuning.asp
Performance WP's
http://www.swynk.com/friends/vandenberg/perfmonitor.asp Perfmon counters
http://www.sql-server-performance.co...ance_audit.asp
Hardware Performance CheckList
http://www.sql-server-performance.co...mance_tips.asp
SQL 2000 Performance tuning tips
http://www.support.microsoft.com/?id=q224587 Troubleshooting App
Performance
http://msdn.microsoft.com/library/de...rfmon_24u1.asp
Disk Monitoring
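To test the scheduled-job guess, a minimal sketch against SQL Server 2000's msdb schema (in SQL Server 2005 the schedule columns moved to msdb.dbo.sysschedules):
---
-- list enabled jobs whose schedules repeat within the day
-- (freq_subday_type: 4 = minutes, 8 = hours)
SELECT j.name, s.freq_subday_type, s.freq_subday_interval, s.active_start_time
FROM msdb.dbo.sysjobs AS j
JOIN msdb.dbo.sysjobschedules AS s ON s.job_id = j.job_id
WHERE s.enabled = 1
  AND s.freq_subday_type IN (4, 8)
---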
Andrew J. Kelly SQL MVP
"Doug Stiers" <doug@.nospam.com> wrote in message
news:%23t7lrXYCFHA.3120@.TK2MSFTNGP12.phx.gbl...
> We have 1000 users on a client/server CRM application that we've written in
> house. Recently we started experiencing performance problems at the top of
> each hour; it lasts for about 30 seconds. I have run several traces using
> these events:
> lock: acquired,
> lock: deadlock,
> rpc: completed,
> sql: stmtcompleted
> Each trace runs from 10 till the hour until 10 after the hour. I've dumped
> them out to tables so I could query them for the top 200 records by cpu,
> duration, reads and writes. I then query them by each minute of starttime to
> find the worst queries. I have fixed a lot of bad queries over the past
> couple of weeks but still users are complaining. We are running SQL Server
> 2000 SP3a on a clustered Win2K SP4. I don't know the hardware config for the
> server, but I'm told it has a lot of CPU and, I think, 8GB of memory. I
> am not a DBA, I am a developer, but I can't understand why I'm not solving the
> problem. The queries that I've fixed are running much faster, but still the
> problem persists.
> Am I not using the right events in SQL Profiler? Is it possible that we're
> really having some sort of network issue at that time? I would use Perfmon
> but that doesn't tell me much.
> Any help is appreciated,
> Doug
>