Tuesday, March 27, 2012
Cluster resource SQL Server (InstanceName) failed to come offline
Recently I added some additional memory to a server that's part of an active/passive cluster (for a total of 8GB). Afterwards we started receiving an error message when trying to move the cluster over to the passive server:
Source: ClusSvc
Event ID: 1117
Description: Cluster resource SQL Server (InstanceName) failed to come offline
I did some searching online but the only thing I could find was a thread on Google Groups from October 2003 (link included below). I am wondering if anyone else has experienced this particular problem and, if so, how they solved it.
Google Groups Link:
http://groups.google.com/groups?hl=e...3DN%26tab%3Dwg
Thanks!
Brad Baker
Anything in the SQL Error log?
Did you re-configure SQL to use the additional RAM?
--
Mike Epprecht, Microsoft SQL Server MVP
Johannesburg, South Africa
Mobile: +27-82-552-0268
IM: mike@.epprecht.net
MVP Program: http://www.microsoft.com/mvp
Blog: http://www.msmvps.com/epprecht/
"Brad Baker" <bab5470@.no_sp@.m.rochester.rr..com> wrote in message
news:e78npbWfEHA.3964@.TK2MSFTNGP12.phx.gbl...
> Hi -
>
> Recently I added some additional memory to a sever thats part of an active
> passive cluster (for a total of 8GB). After which we started receiving an
> error message when trying to move the cluster over to the passive server:
>
> Source:ClusSvc
> Event ID: 1117
> Description: Cluster resouce SQL Server(InstanceName) failed to come
offline
>
> I did some searching online but the only thing I could find was a thread
on
> google news from October 2003 (link included below). I am wondering if
> anyone else has also experienced this particular problem and if so how
they
> may have solved it.
>
> Google Groups Link:
>
http://groups.google.com/groups?hl=e...3DN%26tab%3Dwg
>
> Thanks!
> Brad Baker
>
>
>
Anything in the SQL Error log?
> Other than the error mentioned in the event viewer I didn't see anything really pertinent.
Did you re-configure SQL to use the additional RAM?
> We enabled AWE and also configured SQL to use 7GB of RAM. The only thing that might be noteworthy is that the active server has 8GB of RAM versus the passive system which has 4GB of RAM. We've run an unbalanced cluster for quite some time though and only experienced this problem after adding the last 2GB.
Brad
"Mike Epprecht (SQL MVP)" <mike@.epprecht.net> wrote in message
news:%23AWVeqWfEHA.4092@.TK2MSFTNGP10.phx.gbl...
> Anything in the SQL Error log?
> Did you re-configure SQL to use the additional RAM?
> --
> --
> Mike Epprecht, Microsoft SQL Server MVP
> Johannesburg, South Africa
> Mobile: +27-82-552-0268
> IM: mike@.epprecht.net
> MVP Program: http://www.microsoft.com/mvp
> Blog: http://www.msmvps.com/epprecht/
> "Brad Baker" <bab5470@.no_sp@.m.rochester.rr..com> wrote in message
> news:e78npbWfEHA.3964@.TK2MSFTNGP12.phx.gbl...
> offline
> on
> they
> http://groups.google.com/groups?hl=e...3DN%26tab%3Dwg
>
The error message indicates that the SQL Server resource failed to come offline. Instead of moving the group, try taking just the SQL Server resource offline and see what results/errors you get. If it goes offline, I would then take all resources in the SQL group offline, move the group to the other node, and bring one resource online at a time -- starting with the resources that are not dependent on any other resources, i.e. the disks first, followed by the SQL IP resource, the SQL network name, and then SQL Server. Also, I would uncheck the "restart" property on the resources while troubleshooting this issue (set it back to the default after the issue is resolved).
NOTE: Since AWE is enabled, max server memory is set to 7GB, and the passive node has only 4GB, when you move the SQL group to the passive node the SQL Server instance will acquire almost all of the available memory and leave only up to 128MB of memory free. For more information, please refer to the SQL Server BOL topic "Managing AWE Memory".
Best Regards,
Uttam Parui
Microsoft Corporation
This posting is provided "AS IS" with no warranties, and confers no rights.
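As a concrete illustration of the AWE note above, here is a minimal T-SQL sketch (not from the original thread) of how the instance's memory cap might be lowered so it can still start comfortably on the 4GB passive node. The 3072 MB value is purely an illustrative assumption for a 4GB node, not a figure recommended above:

    EXEC sp_configure 'show advanced options', 1
    RECONFIGURE
    GO
    -- AWE is already enabled in this setup; shown only for completeness (takes effect after a restart).
    EXEC sp_configure 'awe enabled', 1
    -- Illustrative cap sized for the smaller 4GB node so the OS keeps some headroom.
    EXEC sp_configure 'max server memory (MB)', 3072
    RECONFIGURE
    GO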
Sunday, March 25, 2012
Cluster memory issue
I'm currently running 2 SQL Server instances as an active/active cluster.
Both servers are identical, running Windows 2000 Advanced Server and SQL Server 2000 Enterprise Edition with exactly the same settings on both servers: 8GB of memory, both servers are new builds, Windows sees all the memory, boot.ini has the /3GB and /PAE switches, and both SQL instances are configured for AWE.
Instance 1 running as default virtual server on node 1
Max Server memory 7406
SQL runs using 7.23GB of memory.
Instance 2 running as named instance of virtual server on
node 2
Max Server memory 7406
SQL runs using only 3.73 GB at most.
How can I get node 2 to also use 7.23GB? I'm getting excessive paging, so it would use more memory if it could get it.
First, you must use Performance monitor to get true memory settings. Task
manager will not report correct settings. If you are paging, you need to
back down on the max memory setting. I would recommend 6.5 GB for an 8GB
system as a starting point. You can adjust upwards until the system begins
to page then back down slightly. The server needs some memory for the OS
and some for non-sql apps (like the login console). Also, you really should
run these systems at about 3.5GB each so in case one fails over, you can
hold both on the same server. Right now, you may not be able to
successfully fail over since neither system has enough free memory to start
the other instance.
Geoff N. Hiten
Microsoft SQL Server MVP
Senior Database Administrator
Careerbuilder.com
I support the Professional Association for SQL Server
www.sqlpass.org
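As a rough sketch of the first point above (use Performance Monitor, not Task Manager), the same Memory Manager counters can also be read from inside SQL Server 2000 via master.dbo.sysperfinfo. The LIKE filters are deliberately loose so the query does not depend on the exact counter-name spacing, and the 6656 MB cap is only the 6.5GB starting point suggested above (it assumes 'show advanced options' is already enabled):

    -- How much memory the instance actually has vs. wants.
    SELECT counter_name, cntr_value / 1024 AS value_mb
    FROM master.dbo.sysperfinfo
    WHERE object_name LIKE '%Memory Manager%'
      AND (counter_name LIKE 'Total Server Memory%'
           OR counter_name LIKE 'Target Server Memory%')

    -- If paging is observed, back the cap down and adjust from there.
    EXEC sp_configure 'max server memory (MB)', 6656
    RECONFIGURE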
"Ray" <anonymous@.discussions.microsoft.com> wrote in message
news:27e1401c46401$a942d1d0$a501280a@.phx.gbl...
> I'm currently running 2 SQl Instance as an active/active
> cluster.
> Both servers are identical running windows 2000 Advance
> server.
> SQL Server Enterprise 2000 with settings exactly the same
> on both servers,
> 8GB of memory, both servers are new builds, Windows sees
> all the memory, boot.ini as /3gb /pae switches and SQL's
> are awe configured.
> Instance 1 running as default virtual server on node 1
> Max Server memory 7406
> SQL runs using 7.23GB of memory.
> Instance 2 running as named instance of virtual server on
> node 2
> Max Server memory 7406
> SQL runs using only 3.73 GB as max
> How can I get node 2 to also use 7.23GB.
> But I'm getting excessive paging so would use more memory
> if it could get it.
>
I agree with Geoff. The main reason that you are using SQL Server failover clustering is that high availability is your highest priority, so you do not want to compromise that. In this scenario, it is recommended that each instance of SQL Server on either node use up to 3.5GB of memory.
Here is an example from "Microsoft SQL Server 2000 High Availability" (MS Press, ISBN 0-7356-1920-4) that will make this easier to understand.
Consider this example: you have a two-node cluster with three SQL Server instances. Instance 1, which currently resides on Node 1, has 7 GB of memory configured using AWE. Node 2 houses Instance 2, which has 5 GB of memory allocated with AWE, and one instance that is not using any advanced options and is currently using 1.5 GB of memory. Each node individually has a total of 8 GB of physical memory. A problem occurs on Node 1, causing a failover. The instance tries to restart on Node 2, but it cannot. You now have a failed set of SQL Server resources that cannot come online, causing an availability problem. Why? Well, to use AWE, you need to guarantee that the memory you told SQL Server to use will be there. A failover is basically a stop and start on another server. In this case, you were already using 6.5 of the available 8 GB of memory, and 6.5 GB plus another 7 GB does not fit in 8 GB; it does not matter how big your page file is. Your instance will not grab the memory for AWE. It might, however, start up, but it will only grab the amount of memory that it can, up to 2 GB. Now you are risking memory starvation of the operating system, and this can affect all instances.
So how do you prevent this? By planning. These are the types of scenarios you need to play out in your head when proposing the number of instances on a cluster. Your management might see a two-node cluster and say, "Hey, why is this second server doing nothing? Get something on there!", leaving you in the state of the previous example. The way you can balance memory in this case would be to give the two instances that need a fixed amount of memory 3 GB each and let the other one be dynamic (or set it to 1 GB or 1.5 GB to allow room for the operating system). If you cannot live with this, you will need another cluster or set of servers to handle the workload. The recommendation was basically to halve the memory, which might lead some to believe that you are wasting resources, but again, is performance or availability your goal? By definition, if you are reading this book and implementing things like a cluster, you are probably saying that availability is your highest priority. If you never had to fail over and you left the instances at 7 GB, 5 GB, and dynamic, things would work great. But once a failover happened, all bets would be off.
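To make the "halve the memory" arithmetic concrete, here is a hedged sketch of how the three instances in the example might be capped so that everything still fits on one 8 GB node after a failover. The 3072/3072/1536 MB split is only an illustration, not a figure from the book, and it assumes 'show advanced options' has already been enabled:

    -- Run against Instance 1 and against Instance 2: fixed 3 GB each.
    EXEC sp_configure 'max server memory (MB)', 3072
    RECONFIGURE
    GO
    -- Run against the third instance: capped at 1.5 GB.
    -- Worst case on one node: 3072 + 3072 + 1536 = 7680 MB, leaving roughly 512 MB for the OS.
    EXEC sp_configure 'max server memory (MB)', 1536
    RECONFIGURE
    GO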
Additional Information
=======================
-- In your case, you do not need to use /3GB. AWE and /PAE should be sufficient.
-- When starting with both the /PAE and the /3GB switches, the system may not start
http://support.microsoft.com/?kbid=817566
HOW TO: Configure memory for more than 2 GB in SQL Server
http://support.microsoft.com/?kbid=274750
Microsoft Whitepaper - SQL Server 2000 Failover Clustering
http://www.microsoft.com/SQL/techinf...vercluster.asp
Microsoft SQL Server 2000 High Availability Series
http://www.microsoft.com/technet/pro...y/sqlhalp.mspx
Microsoft Webcasts
Introduction to Microsoft SQL Server 2000 Clustering
http://support.microsoft.com/default...lurb051001.asp
Microsoft SQL Server 2000 Virtual Server: Things You Should Know
http://support.microsoft.com/default...lurb032602.asp
Microsoft SQL Server 2000 Virtual Server Basic Setup, Maintenance, and Service Pack http://support.microsoft.com/default...lurb061002.asp
Microsoft SQL Server 2000 Failover Clustering Disaster Recovery Procedures
http://support.microsoft.com/default...lurb101802.asp
Troubleshooting SQL 2000 Virtual Server and Service Pack Setups for Failover Clustering
http://support.microsoft.com/default...lurb020703.asp
Q243218 INF: Installation Order for SQL 2000 Enterprise Edition
http://support.microsoft.com/?kbid=243218
Q260758 - INF: Frequently Asked Questions - SQL Server 2000 - Failover Clustering
http://support.microsoft.com/?kbid=260758
Best Regards,
Uttam Parui
Microsoft Corporation
This posting is provided "AS IS" with no warranties, and confers no rights.
Thursday, March 22, 2012
cluster hardware - CPU, memory
1. In MSCS and SQL Server 2-node clustering on Windows 2000 Advanced Server, are there any restrictions on the hardware? That is, do the 2 nodes necessarily have to have the same number of CPUs and the same amount of memory?
2. When we have a 2-node SQL Server cluster, can we install another non-clustered instance on the same server?
3. Is there a link which describes the difference between a 1 SQL instance cluster and a 2 SQL instance cluster?
1). Certified cluster configurations must be symmetrical; that is, all nodes identical. It is possible to run asymmetrical clusters, but they are often more trouble and less reliable than a stand-alone SQL instance.
2). Yes.
3). The Microsoft SQL Server 2000 Resource Kit has some good information on
clustering, including using more than 2 nodes and multi-instance support.
BOL also has some basic information.
Geoff N. Hiten
Microsoft SQL Server MVP
"Pari" <Pari@.discussions.microsoft.com> wrote in message
news:2E64C5E8-A0BB-41AB-8174-FF26137A1F17@.microsoft.com...
> 1. In MSCS and SQL Server 2-node clustering on windows 2000 advance
> server,
> are there any restrictions on the hardware. That is, do the 2 nodes
> necessarily should have the same number of CPU and memory?
> 2. when we have 2 node with sql server in cluster, can we install another
> non-clustered instance on the same server?
> 3. is there a link which describes the difference between 1 sql instance
> cluster and 2 sql instance cluster?
sqlsql
Tuesday, March 20, 2012
Cluster Active-Active - Parallelism
Cluster active/active environment.
Memory: How can I determine how much memory I need to set? If I leave it dynamic and one node goes down, I receive an error message about allocating memory.
CPU: Processor - Parallelism - Use all available processors. There are 2 processors on each server. In this case can I set this option ON? If one node goes down, will SQL Server try to use 2 or 4 processors?
thanks in advance
Set max server memory on both servers so that a failover of either server does not result in a combined total greater than the physical memory on each box. Also, with CPU, if you have configured SQL to use all available CPUs then all available CPUs will be utilized, i.e. SQL will only use the number of processors actually present on the server.
HTH
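A short sketch of the above, with illustrative numbers that are assumptions rather than details from this thread (the thread only states that each node has 2 processors):

    EXEC sp_configure 'show advanced options', 1
    RECONFIGURE
    GO
    -- Cap each instance so that after a failover the two caps together still fit in one node's RAM
    -- (3584 MB here assumes roughly 8 GB per node; substitute your own figure).
    EXEC sp_configure 'max server memory (MB)', 3584
    -- 0 = "use all available processors"; after a failover the instance still only sees the
    -- 2 CPUs physically present in the surviving node, so it will not try to use 4.
    EXEC sp_configure 'max degree of parallelism', 0
    RECONFIGURE
    GO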
"Shima" wrote:
> Cluster active-active enviroment.
> Memory: How can I determine how much memory I need to set ? If I leave
> dynamically and one node falls down I receive message error about allocate
> memory.
> CPU: Processor - Parallelism - Use all available processors. There are 2
> processors on each server. In this case I can set this option ON ? If the one
> node falls down SQLserver will try to use 2 or 4 processors ?
> thanks in advance
>
It is helpful if the resources used by the instances together are not greater than what a single node can provide.
"Olu Adedeji" wrote:
[vbcol=seagreen]
> set max server memory on both servers so that fail over of either server does
> not result in total memory greater than total server memory on each box.
> Also with cpu if you have configured to use all available cpus then all
> available cpus will be utilized. ie. sql will use only the number of
> processors on the server.
>
> HTH
>
> "Shima" wrote:
Cluster Active-Active - Parallelism
Memory: How can I determine how much memory I need to set ? If I leave
dynamically and one node falls down I receive message error about allocate
memory.
CPU: Processor - Parallelism - Use all available processors. There are 2
processors on each server. In this case I can set this option ON ? If the one
node falls down SQLserver will try to use 2 or 4 processors ?
thanks in advanceset max server memory on both servers so that fail over of either server does
not result in total memory greater than total server memory on each box.
Also with cpu if you have configured to use all available cpus then all
available cpus will be utilized. ie. sql will use only the number of
processors on the server.
HTH
"Shima" wrote:
> Cluster active-active enviroment.
> Memory: How can I determine how much memory I need to set ? If I leave
> dynamically and one node falls down I receive message error about allocate
> memory.
> CPU: Processor - Parallelism - Use all available processors. There are 2
> processors on each server. In this case I can set this option ON ? If the one
> node falls down SQLserver will try to use 2 or 4 processors ?
> thanks in advance
>|||It is useful that resurces used by instances, does not be greater than one
node...
I
"Olu Adedeji" wrote:
> set max server memory on both servers so that fail over of either server does
> not result in total memory greater than total server memory on each box.
> Also with cpu if you have configured to use all available cpus then all
> available cpus will be utilized. ie. sql will use only the number of
> processors on the server.
>
> HTH
>
> "Shima" wrote:
> > Cluster active-active enviroment.
> > Memory: How can I determine how much memory I need to set ? If I leave
> > dynamically and one node falls down I receive message error about allocate
> > memory.
> > CPU: Processor - Parallelism - Use all available processors. There are 2
> > processors on each server. In this case I can set this option ON ? If the one
> > node falls down SQLserver will try to use 2 or 4 processors ?
> > thanks in advance
> >
Cluster Active-Active - Parallelism
Memory: How can I determine how much memory I need to set ? If I leave
dynamically and one node falls down I receive message error about allocate
memory.
CPU: Processor - Parallelism - Use all available processors. There are 2
processors on each server. In this case I can set this option ON ? If the on
e
node falls down SQLserver will try to use 2 or 4 processors ?
thanks in advanceset max server memory on both servers so that fail over of either server doe
s
not result in total memory greater than total server memory on each box.
Also with cpu if you have configured to use all available cpus then all
available cpus will be utilized. ie. sql will use only the number of
processors on the server.
HTH
"Shima" wrote:
> Cluster active-active enviroment.
> Memory: How can I determine how much memory I need to set ? If I leave
> dynamically and one node falls down I receive message error about allocate
> memory.
> CPU: Processor - Parallelism - Use all available processors. There are 2
> processors on each server. In this case I can set this option ON ? If the
one
> node falls down SQLserver will try to use 2 or 4 processors ?
> thanks in advance
>|||It is useful that resurces used by instances, does not be greater than one
node...
I
"Olu Adedeji" wrote:
[vbcol=seagreen]
> set max server memory on both servers so that fail over of either server d
oes
> not result in total memory greater than total server memory on each box.
> Also with cpu if you have configured to use all available cpus then all
> available cpus will be utilized. ie. sql will use only the number of
> processors on the server.
>
> HTH
>
> "Shima" wrote:
>sqlsql
Thursday, March 8, 2012
CLR out of memory
I've got a CLR function that runs fine on my workstation running SQL Server Express; however, it throws OOM exceptions on the full SQL Server 2005.
The function is very simple: it takes an XML file and a stylesheet,
transforms the XML using the stylesheet, and returns the result as a
string. On my workstation, with 2 gb RAM (1gb allocated to SQL Server),
I hardly ever have any problem with it. On the server, with 4gb, the
same function running on the same data terminates with the following
message:
.NET Framework execution was aborted by escalation policy because of
out of memory.
System.Threading.ThreadAbortException: Thread was being aborted.
(...stack follows...)
This happens when input XML data size exceeds ~30-50 mb or so.
I checked performance counters and memory clerks. The problem seems to
be that CLR never grabs enough memory even when it should be available.
I ran the same request on the same data, first on my workstation, then
on the server. On the workstation, once the function was started, CLR
memory utilization went up to about 108 megs, and the function
completed normally. On the server, the CLR memory utilization only went
up to about 75 megs, then the function was terminated.
I checked command line parameters - the -g switch was not used at
either the server or the workstation. Just in case, I added the default
-g256 at the server, to no effect.
So, my question is - what could cause the CLR to not use available
memory? There's something wrong with the server configuration, but I
can't figure out what. Any help would be greatly appreciated!
Below is the function in question.
using System.Data.SqlTypes;
using System.Text;
using System.Xml;
using System.Xml.XPath;
using System.Xml.Xsl;
using Microsoft.SqlServer.Server;

// Containing class added here only so the snippet compiles; the name follows the
// default Visual Studio SQL Server project template and was not part of the original post.
public partial class UserDefinedFunctions
{
    [SqlFunction(Name = "_clrApplyStylesheet", DataAccess = DataAccessKind.Read)]
    public static SqlString _clrApplyStylesheet(SqlXml XmlData, SqlXml XmlStylesheet)
    {
        // Load the stylesheet and the input document from the SqlXml readers.
        XPathDocument stylesheet = new XPathDocument(XmlStylesheet.CreateReader());
        XPathDocument xmlData = new XPathDocument(XmlData.CreateReader());

        // Transform into an in-memory string.
        StringBuilder sBuilder = new StringBuilder();
        XmlWriter xWriter = XmlWriter.Create(sBuilder);

        XslCompiledTransform xTransform = new XslCompiledTransform();
        xTransform.Load(stylesheet);
        xTransform.Transform(xmlData, xWriter);
        xWriter.Flush();   // make sure any buffered output reaches the StringBuilder

        return sBuilder.ToString();
    }
}

Hi Abe,
I mentioned this problem briefly here:
http://blogs.msdn.com/sqlclr/archive/2006/03/24/560154.aspx
The issue you are seeing is not because of a lack of physical memory but because of a lack of virtual address space (VAS). If you have 1 GB allocated to SQL on your workstation, then the SQL buffer pool will reserve 1 GB of VAS for its use, leaving 1 GB for all the allocations made outside the buffer pool, including CLR. However, as you found and contrary to what you expected, on your server the buffer pool is able to use much more memory (depending on whether you are using AWE or /3GB), leaving only the default 256 MB of VAS for everything else.
Unfortunately, I think your only real options would be to use the -g flag to
reserve more memory for allocations outside the buffer pool or, if possible,
use 64-bit hardware.
Steven
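For anyone hitting the same symptom, the memory clerks mentioned above can be inspected directly on SQL Server 2005. This is only a sketch of that check using the sys.dm_os_memory_clerks DMV; the LIKE filter is deliberately loose so it catches the SQLCLR clerk regardless of its exact type name:

    -- Memory held by the CLR-related clerks, which lives outside the buffer pool
    -- (i.e. in the VAS region the -g startup switch controls).
    SELECT type,
           SUM(single_pages_kb + multi_pages_kb + virtual_memory_committed_kb) AS total_kb
    FROM sys.dm_os_memory_clerks
    WHERE type LIKE '%CLR%'
    GROUP BY type
    ORDER BY total_kb DESC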
"Abe" <revres_lqs@.yahoo.com> wrote in message
news:1152829807.975377.50710@.35g2000cwc.googlegroups.com...
> I've got a CLR function that runs fine on my workstation running SQL
> Server Express; however, it throws OOM exceptions on the full SQL
> Server 2005.
> The function is very simple: it takes an XML file and a stylesheet,
> transforms the XML using the stylesheet, and returns the result as a
> string. On my workstation, with 2 gb RAM (1gb allocated to SQL Server),
> I hardly ever have any problem with it. On the server, with 4gb, the
> same function running on the same data terminates with the following
> message:
> .NET Framework execution was aborted by escalation policy because of
> out of memory.
> System.Threading.ThreadAbortException: Thread was being aborted.
> (...stack follows...)
> This happens when input XML data size exceeds ~30-50 mb or so.
> I checked performance counters and memory clerks. The problem seems to
> be that CLR never grabs enough memory even when it should be available.
> I ran the same request on the same data, first on my workstation, then
> on the server. On the workstation, once the function was started, CLR
> memory utilization went up to about 108 megs, and the function
> completed normally. On the server, the CLR memory utilization only went
> up to about 75 megs, then the function was terminated.
> I checked command line parameters - the -g switch was not used at
> either the server or the workstation. Just in case, I added the default
> -g256 at the server, to no effect.
> So, my question is - what could cause the CLR to not use available
> memory? There's something wrong with the server configuration, but I
> can't figure out what. Any help would be greatly appreciated!
> Below is the function in question.
> [SqlFunction(Name="_clrApplyStylesheet",
> DataAccess=DataAccessKind.Read)]
> public static SqlString _clrApplyStylesheet(SqlXml XmlData,
> SqlXml XmlStylesheet)
> {
> XPathDocument stylesheet, xmlData;
> XslCompiledTransform xTransform;
> System.Text.StringBuilder sBuilder;
> XmlWriter xWriter;
> stylesheet = new
> XPathDocument(XmlStylesheet.CreateReader());
> xmlData = new XPathDocument(XmlData.CreateReader());
>
> sBuilder = new System.Text.StringBuilder();
> xWriter = XmlWriter.Create(sBuilder);
> xTransform = new XslCompiledTransform();
> xTransform.Load(stylesheet);
> xTransform.Transform(xmlData, xWriter);
> return sBuilder.ToString();
> }
Hi Steven,
Thank you so much - I increased memory allocation with the -g switch,
and it worked!
I wonder why it's such an obscure issue - your article (which I read
even before posting but wasn't sure if it was applicable) seems to be
almost the only one relevant to the issue.
So, when you do use the -g switch: can the Buffer Pool reclaim the
memory from "memtoleave" when it's not required, or are you actually
decreasing the memory available to Buffer Pool at all times? It's a
shared server, and I don't want to slow down everyone just so that my
code could work.
Abe
CLR memory usage
Hi All -
We have some CLR sprocs and TVFs we run in a batch job that recently have been hitting the out-of-memory issue. I want to increase the amount of memory allocated to the CLR using the -g startup switch, but I want to make an intelligent decision on how much to allocate. What are some of the best ways that you have found to estimate how much to give the CLR?
Thanks,
Cameron
The first step in this process is to look at your CLR sprocs and TVFs and examine how they allocate memory. Are they doing things like filling DataSets with potentially large amounts of data? Are they using readers, but constructing lots of objects that grow in proportion to the query size against the database? If so, there may be a fundamental design problem that needs to be addressed. If that isn't the case, then looking at how memory is used in your CLR sprocs and TVFs should begin to help you understand what new memory limits to apply. Only by understanding where, when, and how much memory is allocated for your worst-case workload can you begin to understand how to change the limits.
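One rough way to measure that worst-case demand, sketched here under the assumption of SQL Server 2005's sys.dm_os_memory_clerks DMV, is to sample the CLR clerks from a second session every few seconds while the batch job runs, then size -g from the peak you observe plus some headroom:

    -- Run in a separate session while the batch job executes; stop it manually when the job finishes.
    WHILE 1 = 1
    BEGIN
        SELECT GETDATE() AS sampled_at,
               SUM(single_pages_kb + multi_pages_kb + virtual_memory_committed_kb) AS clr_kb
        FROM sys.dm_os_memory_clerks
        WHERE type LIKE '%CLR%'

        WAITFOR DELAY '00:00:10'   -- sample every 10 seconds
    END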