Monday, March 26, 2012
link server reset connection
I have recently begun getting a sp_reset_connection on one linked
server. Both are SQL 2000 build 818 (SP3).
The login time between each reset is about 10 minutes. In the past the
login time was at least 24 to 48 hours.
I have rebooted both servers and it's still happening. Any ideas?
>>> On Sun, Nov 19, 2006 at 6:49 PM, in message
<1163987375.312919.163700@.e3g2000cwe.googlegroups.com>,
<bic1ster@.gmail.com> wrote:
> I have recently begun getting a sp_reset_connection on one linked
> server. Both are SQL 2000 build 818 (SP3).
> The login time between each reset is about 10 minutes. In the past
> the
> login time was at least 24 to 48 hours.
> I have rebooted both servers and it's still happening. Any ideas?
Did anything change on the servers or the network (such as a new
firewall between the hosts)?
|||I will check. I did look at the Compaq Insight NIC application and
they don't have any packet receive errors, etc. I also looked at the event
logs and nothing peculiar was in them. Maybe it's time for some
perfmon.
We did have a switch blade failure last month. I don't remember if
these were on that blade.
|||It's an undocumented stored procedure used internally by SQL
Server. It shows up when using connection pooling,
and it is called to reset the connection options, settings,
etc. before reusing the connection in the connection pool so
that the settings don't persist to another client
connection. In and of itself, just seeing it execute every
10 minutes doesn't necessarily indicate any problems. Why do
you have concerns? If the issue is that it's every 10 mins
now instead of 24 to 48 hours, you can run a trace and see
what the connection is executing to see what's going on with
whatever application or process is involved. It may be nothing though.
-Sue
On 19 Nov 2006 17:49:35 -0800, bic1ster@.gmail.com wrote:
>I have recently begun getting a sp_reset_connection on one linked
>server. Both are SQL 2000 build 818 (SP3).
>The login time between each reset is about 10 minutes. In the past the
>login time was at least 24 to 48 hours.
>I have rebooted both servers and it's still happening. Any ideas?
|||> connection. In and of itself, just seeing it execute every
> 10 minutes doesn't necessarily indicate any problems. Why do
> you have concerns? If the issue is that it's every 10 mins
It wasn't happening two weeks ago, as I monitor who is connected to the
server. OK, I will look at Profiler for a while and see what happens.
I think the app was updated recently too. Maybe something else to
check with the developer.
Monday, March 19, 2012
Line Chart with a "Single Category Group"
Hi Folks
I have a line chart showing some data vs time. The chart is okay when
we have multiple periods (say Q1 2006 and Q2 2006). However, the chart
is empty when only period (the category group) is being displayed.
This leaves me with two questions:
1. is this because line charts are supposed to show trends over
multiple category groups and thus are Not applicable for a single
category group?
2. is there a fix for this?
Any help would be greatly appreciated :)
Thnx a ton.
|||Oops, I meant the chart is empty when only a single period (say only
Q1 2006) is selected (i.e. is to be displayed).
> I have a line chart showing some data vs time. The chart is okay when
> we have multiple periods (say Q1 2006 and Q2 2006). However, the chart
> is empty when only period (the category group) is being displayed.
|||Alright, the problem has been solved :D
Turns out I need to edit the data field, and enable the "Show Markers"
checkbox :)
Profundus wrote:
> Hi Folks
> I have a line chart showing some data vs time. The chart is okay when
> we have multiple periods (say Q1 2006 and Q2 2006). However, the chart
> is empty when only period (the category group) is being displayed.
> This leaves me with two questions...,
> 1. is this because line charts are supposed to show trends over
> multiple category groups and thus are Not applicable for a single
> category group?
> 2. is there a fix for this?
> Any help would be greatly appreciated :)
> Thnx a ton.
Line Chart question
I have a line chart with three series groups where it is possible that one
series will be missing data within the time range.
Currently the line is drawn from the last point to the next valid point. Is
it possible to have the report drawn with the line missing (or not drawn) for
that specific point for that specific series?
Thanks,
Kevin
|||Hi
Set the border property in RS2005 to None; try something like this:
=iif(Fields!<FieldName>.Value Is Nothing Or Fields!<FieldName>.Value = "", "None", "Solid")
Cheers
Shai
On Dec 12, 4:08 am, Kevinst <Kevi...@.discussions.microsoft.com> wrote:
> I have a line chart with three series group where it is possible that one
> series will be missing data within the time range.
> Currently the line is drawn from the last point to the next valid point. Is
> it possible to have the report drawn with the line missing (or not drawn) for
> that specific point for that specific series?
> Thanks,
> Kevin
|||Thanks for the info...
I changed the report to an x-y scatter and modified the sql query, but your
bit of code helped with another issue...
Thanks,
Kevin
"shaikat.das@.gmail.com" wrote:
> Hi
> Set the border property in RS2005 to none , try something like this
> =iif( Fields!<FieldName>.value=nothing or Fields!
> <FieldName>.value='' , "None",
> "Solid")
> Cheers
> Shai
> On Dec 12, 4:08 am, Kevinst <Kevi...@.discussions.microsoft.com> wrote:
> > I have a line chart with three series group where it is possible that one
> > series will be missing data within the time range.
> >
> > Currently the line is drawn from the last point to the next valid point. Is
> > it possible to have the report drawn with the line missing (or not drawn) for
> > that specific point for that specific series?
> >
> > Thanks,
> > Kevin
>
Monday, March 12, 2012
Limiting Transaction Time?
Please be gentle, I'm an application developer, not a dba. I just
dabble as a DBA for our shop.
Our application is distributed and works through web services to
access information stored in SQL Server 2005. Each request to the web
service is a contained transaction; in other words, transactions do
NOT cross web service calls.
Today something unknown and yet unidentified went wrong in the code
and a transaction was open indefinitely (over an hour until I noticed
it by accident and killed the offending process).
Question: Is there a setting that I'm not seeing for the server or
database that will limit how long a transaction can hold locks for?
For example, in our scenario no transaction should be longer than a
few seconds, so if I could set something in sql server that would
rollback and kill any transaction that lasted for more than 1 minute,
I'd be able to limit the impact of this problem.
Thanks in advance for any help you can give . . .
|||For this case, I don't think it's locks you should be concerned about.
Before killing a connection, I would look in the master..sysprocesses
table or run sp_who2 and find out what application it is. For that spid, run
DBCC INPUTBUFFER(spid) and see what is being, or was, executed. Is
the connection in a sleeping state? Chances are you are not getting a
commit or rollback from the application, or the service stopped and
somehow could not initiate a rollback. You could write a query against the
master..sysprocesses table that checks for connections with open_tran > 0,
in a sleeping state, and with last_batch older than a specified time,
then run a KILL statement on each. You can achieve this via a cursor
over the query result. If you are using some connection mechanism that
leaves your connection in an open_tran state of 1 (I've seen some Java
drivers do this) then try open_tran > 1. However, I don't advocate
doing this. The best thing to do is let it happen again and troubleshoot the
application. The connection will eventually be closed; see
Orphaned Sessions in BOL.
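A minimal sketch of the watchdog Ken describes, assuming a one-minute idle threshold (the threshold and the spid > 50 filter are illustrative, and, as Ken says, killing connections should be a last resort):

-- Kill sleeping connections that have held an open transaction idle for > 1 minute
DECLARE @spid int, @cmd varchar(20)
DECLARE victims CURSOR LOCAL FAST_FORWARD FOR
    SELECT spid
    FROM master..sysprocesses
    WHERE open_tran > 0
      AND status = 'sleeping'
      AND last_batch < DATEADD(minute, -1, GETDATE())
      AND spid > 50  -- skip system spids
OPEN victims
FETCH NEXT FROM victims INTO @spid
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @cmd = 'KILL ' + CAST(@spid AS varchar(10))
    EXEC (@cmd)  -- rolls back the victim's open transaction
    FETCH NEXT FROM victims INTO @spid
END
CLOSE victims
DEALLOCATE victims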
|||I agree with Ken that you should perform root cause analysis. Consider
adding SET XACT_ABORT ON to stored procedures that begin explicit
transactions. This setting will rollback the transaction immediately and
cancel the batch in the event of most errors, including command timeouts.
If you start transactions from application code, execute SET XACT_ABORT ON
on the connection after opening to help ensure pooled connections
immediately roll back transactions in the event of an error. Also check to
ensure a commit or rollback is issued on all code paths.
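A minimal sketch of the stored procedure pattern described above; the object names are placeholders:

-- Hypothetical procedure: with XACT_ABORT ON, most errors (including
-- command timeouts) roll back the transaction and abort the batch
CREATE PROCEDURE usp_DoWork
AS
SET XACT_ABORT ON
BEGIN TRAN
    UPDATE dbo.SomeTable SET SomeCol = SomeCol + 1 WHERE SomeKey = 42
    INSERT INTO dbo.AuditLog (SomeKey, ChangedAt) VALUES (42, GETDATE())
COMMIT TRAN
GO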
Performance problems (e.g. scans) can often lead to
concurrency (blocking) problems, which in turn cause other queries and
transactions to run longer and then lead to even more problems. The best
way to break the cycle is by optimizing queries and adding indexes for
efficient data access. Start by analyzing the execution plan in SSMS for
those queries that are the longest running and most frequently executed.
See http://support.microsoft.com/kb/224453 for some tips. Although the
article was written for SQL 7 and 2000, much applies to SQL 2005 too.
If you need to gather SQL trace data from a busy server, run a server-side
trace with a filter (e.g. RPC and T-SQL batch completed events with duration
greater than 1000000 microseconds) instead of running the Profiler tool
directly against a busy production server. You can develop the desired
trace using Profiler on a dev box and then script that trace so that you can
run on the prod server as a server-side trace.
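A minimal sketch of scripting such a server-side trace, assuming SQL 2005 (where trace Duration is reported in microseconds); the file path is a placeholder:

-- Capture SQL:BatchCompleted events that ran longer than 1 second
DECLARE @TraceID int, @maxsize bigint, @on bit, @dur bigint
SELECT @maxsize = 50, @on = 1, @dur = 1000000
EXEC sp_trace_create @TraceID OUTPUT, 2, N'C:\Traces\longbatches', @maxsize
-- event 12 = SQL:BatchCompleted; columns: 1 = TextData, 12 = SPID, 13 = Duration
EXEC sp_trace_setevent @TraceID, 12, 1, @on
EXEC sp_trace_setevent @TraceID, 12, 12, @on
EXEC sp_trace_setevent @TraceID, 12, 13, @on
-- keep only events with Duration >= 1,000,000 microseconds
EXEC sp_trace_setfilter @TraceID, 13, 0, 4, @dur
EXEC sp_trace_setstatus @TraceID, 1  -- start the trace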
Hope this helps.
Dan Guzman
SQL Server MVP
"karlag92" <karlag92@.hotmail.com> wrote in message
news:1186013382.486135.138640@.r34g2000hsd.googlegroups.com...
> Please be gentle, I'm an application developer, not a dba. I just
> dabble as a DBA for our shop.
> Our application is distributed and works through web services to
> access information stored in SQL Server 2005. Each request to the web
> service is a contained transaction, in other words, transactions do
> NOT cross calls web service calls.
> Today something unknown and yet unidentified went wrong in the code
> and a transaction was open indefinitely (over an hour until I noticed
> it by accident and killed the offending process).
> Question: Is there a setting that I'm not seeing for the server or
> database that will limit how long a transaction can hold locks for?
> For example, in our scenario no transaction should be longer than a
> few seconds, so if I could set something in sql server that would
> rollback and kill any transaction that lasted for more than 1 minute,
> I'd be able to limit the impact of this problem.
> Thanks in advance for any help you can give . . .
>
|||On Aug 2, 6:51 am, "Dan Guzman" <guzma...@.nospam-online.sbcglobal.net>
wrote:
> I agree with Ken that you should perform root cause analysis. Consider
> adding SET XACT_ABORT ON to stored procedures that begin explicit
> transactions. This setting will rollback the transaction immediately and
> cancel the batch in the event of most errors, including command timeouts.
> If you start transactions from application code, execute SET XACT_ABORT ON
> on the connection after opening to help ensure pooled connections
> immediately rollback connections in the event of an error. Also check to
> ensure a commit or rollback is issued on all code paths.
> Problems with performance (e.g. scans) can often lead to performance and
> concurrency (blocking) problems, which in turn cause other queries and
> transactions to run longer and then lead to even more problems. The best
> way to break the cycle is by optimizing queries and adding indexes for
> efficient data access. Start by analyzing the execution plan in SSMS for
> those queries that are the longest running and most frequently executed.
> See http://support.microsoft.com/kb/224453 for some tips. Although the
> article was written for SQL 7 and 2000, much applies to SQL 2005 too.
> If you need to gather SQL trace data from a busy server, run a server-side
> trace with a filter (e.g. RPC and T-SQL batch completed events with duration
> greater than 1000000 microseconds) instead of running the Profiler tool
> directly against a busy production server. You can develop the desired
> trace using Profiler on a dev box and then script that trace so that you can
> run on the prod server as a server-side trace.
> --
> Hope this helps.
> Dan Guzman
> SQL Server MVP
> "karlag92" <karla...@.hotmail.com> wrote in message
> news:1186013382.486135.138640@.r34g2000hsd.googlegroups.com...
Thanks for the input guys, but I still need a simple way to configure
sql server to protect itself. Maybe what Ken suggests would work, but
it seems like a lot of work to accomplish something so basic.
I do intend to do root cause analysis and find the problem in the code
that causes this. However, I do not have even a part time DBA to
monitor and watch for these things to happen. If I could kill the
transaction automatically from the DB side when it runs too long, this
would help stop it from becoming a major problem but not deprive me of
what I need to fix the problem.
Transactions are managed in the client code in the web service. I
have no doubt that something broke there and will chase that down.
I just need a simple way to have SQL Server not allow a problem to
fester overly long . . . It would reduce the impact and severity of
the problem dramatically.
|||I think your question expressed as a need is valid.
There is probably good reason(s) why Microsoft doesn't allow you to
enforce this policy or rule on sql server.
I invite the SQL MVPs to comment on why that is.
|||> There is probably good reason(s) why Microsoft doesn't allow you to
> enforce this policy or rule on sql server.
IMHO, this is something that should be implemented in the client API (like
CommandTimeout) rather than on the server side. I would be very concerned
about specifying a global transaction timeout on the server. As a server
policy, the timeout would need to be accompanied by more restrictive
criteria, such as host or application name.
If this feature is important to you or karlag92, consider submitting a
product enhancement request via Connect Feedback
(http://connect.microsoft.com/SQLServer).
Hope this helps.
Dan Guzman
SQL Server MVP
"Ken" <kshapley@.sbcglobal.net> wrote in message
news:1186168209.808571.152240@.d55g2000hsg.googlegroups.com...
>I think your question expressed as a need is valid.
> There is probably good reason(s) why Microsoft doesn't allow you to
> enforce this policy or rule on sql server.
> I invite the SQL MVPs to comment on why that is.
>
Friday, February 24, 2012
Limitation on Conn.Execute in classic .ASP & ADO with SQL Server
I have a project (using classic ASP & SQL Server) which adds one execute sql statement at a time to a temporary array, and then I join that array with a chr(30) (record separator), to a string variable called strSQL. I then run the following line of code:
conn.execute(strSQL)
I was wondering if there was any limitation to how large the strSQL variable can be? Reason I ask is because thru log writes I can see all of my sql execute lines exist in the variable strSQL prior to running the "conn.execute(strSQL)" command; however, not all of the lines run at the time of execution. Remember, this bug only occurs whenever I have, say, over 600 sql lines to execute.
My understanding is that there was no limitation on the size of the string strSQL; however, in the interest of getting the bug fixed quickly enough, I decided to just run a loop for each sql statement and run "conn.execute(strSQL)" every 50 times. This, in turn, has solved the problem and I do save all of my data; however, my original bug still exists.
Does anyone know why I have to split the sql commands and run conn.execute every 50 times instead of just being able to do it once?
Please let me know. Thanks in advance.
|||It's probably your data provider that is limiting you. Which one are you using, OleDb for SQL Server (SQLOLEDB)? The batch separator for that provider is a semicolon (;) as far as I know. Maybe you'd have better luck with that? I am doubting it but it would be worth a try.
Terri|||I am using ADO thru an ASP page to connect to SQL Server. I have tried numerous separators (comma, vertical bar) with still no luck. That's why I ended up going with the chr(30).
Doing the loop and conn.execute every 50 times through the loop has seemed to definitely fix the bug. I am still just curious as to why I couldn't execute the entire command immediately.
What is strange is that I can use ADO and write a general Visual Basic 6.0 application and execute the sql string just fine. Considering ADO is used in both the VB6.0 and .ASP applications you would think it would work?
Oh well, thanks for the thought. Have a good one.|||But which PROVIDER are you using? Are you using the same provider in your VB program that you are in your ASP application (ie, what does your connection string look like for both)? Is the same MDAC version on both machines?
Terri|||Terri,
Yes, MDAC 2.8 is on the web server using the .ASP & ADO connection. However, the VB Project I've created to test out this large sql string is only on my local machine. It does not use a web server or anything like that.|||Sorry, forgot to mention that the execute statement is the same in both.
Conn.Execute sqlstr
Where "sqlstr" is representing the hundreds and hundreds of insert statements I need to execute.|||But which PROVIDER are you using? What does the Conn.ConnectionString look like?
Terri|||the connection string looks like:
"DRIVER=SQL Server;SERVER=ServerName;User ID=UserID;PASSWORD=PASSWORD;DATABASE=Database"
I hope this is what you are referring to. Sorry for the confusion.|||You're using the ODBC provider. Use the OleDB provider instead and see if it makes a difference, with semicolons (;) separating your commands.
Your connection string would look like this:
"Provider=sqloledb;Data Source=ServerName;Initial Catalog=Database;User Id=UserID;Password=PASSWORD;"
See http://www.connectionstrings.com for more help on connection strings.
Terri
limit years from cube?
I have 10 years in my time dimension but I only want to display 5 of those
years in a certain cube. How would I limit the number of years from a
dimension?
|||Easiest way would be to use a standard SQL view for your dim in AS.
Ray Higdon MCSE, MCDBA, CCNA
--
"JJ" <swrothfuss@.hotmail.com> wrote in message
news:%23$avJRhBEHA.1544@.TK2MSFTNGP09.phx.gbl...
> I have 10 years in my time dimension but I only want to display 5 of those
> years in certain cube. How would I limit the number of years from a
> dimension?
>
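A minimal sketch of the view approach Ray suggests, with hypothetical table and column names:

-- Hypothetical time-dimension view keeping only the most recent 5 years
CREATE VIEW dbo.vDimTime_Last5Years
AS
SELECT TimeKey, [Date], [Month], [Quarter], [Year]  -- placeholder columns
FROM dbo.DimTime                                    -- placeholder dimension table
WHERE [Year] >= YEAR(GETDATE()) - 4

Point the cube's time dimension at the view instead of the base table, then reprocess the dimension.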
limit the resources of a job
hi
we develop a stored procedure that runs a dll; the dll consumes all the
processor, and takes a long time (2 days) processing data.
is it possible to limit the amount of cpu used by a single stored procedure?
i know that using the windows system resource manager it's possible to limit
the amount of cpu used by all of sql server, but is it possible for only a
stored procedure?
thanks
|||SQL Server can't control what your external process does. You said it
is the DLL and not the proc that consumes your resources.
Two days is an extremely long time to execute a proc. Why not invoke
your code from outside SQL Server using .NET or VB or something else?
That way you may be able to add some code to monitor and control what
happens during processing.
David Portas, SQL Server MVP
Whenever possible please post enough code to reproduce your problem.
Including CREATE TABLE and INSERT statements usually helps.
State what version of SQL Server you are using and specify the content
of any error messages.
SQL Server Books Online:
http://msdn2.microsoft.com/library/ms130214(en-US,SQL.90).aspx
--|||Alternatively, you may try to relax the grip on the processor (e.g. by not
using tight loops) in the code of the DLL if that doesn't lead to
unacceptable performance.
Linchi
"hongo32" wrote:
> hi
> we develop a stored procedure that run a dll, the dll consumes all the
> processor, and take a long of time (2 days) processing data.
> its posible limit the amount of cpu uses by a single stored procedure?
> i know that using the windows system resource manager its posible limit
> the amount of cpu uses by all sql server, but it's possible for only a
> stored procedure?
> thanks
>
Monday, February 20, 2012
Limit the Amount of Time for each job
Is there a way to place at the beginning of a SQL Server job step the maximum
amount of time you would like this step to execute? If the job step executes
longer than this amount, kill the step and go to the next step.
Please help me with this issue.
Thanks,
|||maximum amount of what?
"Joe K." wrote:
> Is there away to place at the beginning of a SQL Server job step the maximum
> of amount you would like this step to execute. If the job steps executes
> greater than this amount kill the step and go to the next step.
> Please help me with this issue.
> Thanks,
>|||Fix the step
"Joe K." <JoeK@.discussions.microsoft.com> wrote in message
news:6C8530C7-174C-4E05-88D0-90CA2AD4EF52@.microsoft.com...
> Is there away to place at the beginning of a SQL Server job step the
maximum
> of amount you would like this step to execute. If the job steps executes
> greater than this amount kill the step and go to the next step.
> Please help me with this issue.
> Thanks,
>
Limit server resources per query
Hello,
I've got a huge query that takes a fair amount of time to
run, and ideally this query will be run in the middle of
the night, so I wont have any issues with any customer
facing applications...
However in testing, I need to develop this report in the
daytime, and dont have the liberty of having a development
server. I was curious if in a sql statement, I could
specify that I'd rather have a query take longer, than
prevent other applications from being able to process data
in a timely fashion. (I get timeouts etc in the other apps)
As it sits this query takes about 4 minutes on a quite
fast sql server, and I dont mind it so much, but it seems
in that 4 minutes, other services are really hurting.
Thanks in advance,
Weston Weems
There is no option such as the one you describe but you can add MAXDOP hints
to the sql statements that will limit the number of processors used by the
query. So if you have 4 procs you can set it to 2 and leave 2 for the other
users. It may take longer but should be more respectful of the other users.
Andrew J. Kelly SQL MVP
"Weston Weems" <anonymous@.discussions.microsoft.com> wrote in message
news:0ae401c51848$8b06fc90$a501280a@.phx.gbl...
> Hello,
> I've got a huge query that takes a fair amount of time to
> run, and ideally this query will be run in the middle of
> the night, so I wont have any issues with any customer
> facing applications...
> However in testing, I need to develop this report in the
> daytime, and dont have the liberty of having a development
> server. I was curious if in a sql statement, I could
> specify that I'd rather have a query take longer, than
> prevent other applications from being able to process data
> in a timely fashion. (I get timeouts etc in the other apps)
> As it sits this query takes about 4 minutes on a quite
> fast sql server, and I dont mind it so much, but it seems
> in that 4 minutes, other services are really hurting.
> Thanks in advance,
> Weston Weems
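A minimal sketch of the MAXDOP hint Andrew mentions; the query itself is a placeholder:

-- Cap this one statement at 2 CPUs, leaving the rest for other users
SELECT c.CustomerID, SUM(o.Amount) AS Total  -- placeholder tables and columns
FROM dbo.Orders AS o
JOIN dbo.Customers AS c ON c.CustomerID = o.CustomerID
GROUP BY c.CustomerID
OPTION (MAXDOP 2)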
Limit query response time to 5 mins
Right now we have queries running from 20 ms to 200 secs or even more. We
know that from our application perspective, we do not expect our customers
to be around if the response time is greater than say 5 mins. Is there a way
that we can control this at the db level or server level from the SQL side
vs the application layer ?
Also right now, if I have to profile and say for example run a query that
runs for 5 mins.. If i cancel the query in 2 mins, the profiler reports a
batch completed event for 2 mins. Is there any way to find out that that
query was a cancelled query and did not run to completion?
Using SQL 2K
|||Hassan,
Yes, SQL Server Profiler is your friend here, but why would you want to cancel
the query? Do you really want the users to cancel queries?
"Hassan" <Hassan@.hotmail.com> wrote in message
news:%23VoNa4hfGHA.5104@.TK2MSFTNGP04.phx.gbl...
> Right now we have queries running from 20 ms to 200 secs or even more.. We
> know that from our application perspective, we do not expect our customers
> to be around if the response time is greater than say 5 mins. Is there a
> way that we can control this at the db level or server level from the SQL
> side vs the application layer ?
> Also right now, if I have to profile and say for example run a query that
> runs for 5 mins.. If i cancel the query in 2 mins, the profiler reports a
> batch completed event for 2 mins. Is there any way to find out that that
> query was a cancelled query and did not run to completion ?
> Using SQL 2K
>
|||I am just looking for a way to do so; I haven't really narrowed down how we
would use it eventually.
"Uri Dimant" <urid@.iscar.co.il> wrote in message
news:%23wjjDoifGHA.4568@.TK2MSFTNGP03.phx.gbl...
> Hassan
> Yes, SQL Server Profiler is your friend here, but why would want to cancel
> the query? Do you really want the users cancel queries?
>
> "Hassan" <Hassan@.hotmail.com> wrote in message
> news:%23VoNa4hfGHA.5104@.TK2MSFTNGP04.phx.gbl...
>
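Two sketches that may help here, offered with caveats rather than as definitive answers. For the server-side limit: SQL 2000 has no per-query elapsed-time cap, but the query governor cost limit option can refuse to start queries whose estimated (not actual) cost exceeds a threshold in seconds:

-- Server-wide: refuse to run queries the optimizer estimates at over 300 seconds
-- NOTE: this gates on estimated cost, not actual elapsed time
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'query governor cost limit', 300
RECONFIGURE

For the second question, Profiler's Attention event (in the Errors and Warnings category) fires when a client cancels a query or a timeout occurs; tracing it alongside the completed events and matching on SPID shows which "completed" batches were actually cancelled.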