
Monday, March 12, 2012

Limiting Transaction Time?

Please be gentle, I'm an application developer, not a DBA. I just
dabble as a DBA for our shop.
Our application is distributed and works through web services to
access information stored in SQL Server 2005. Each request to the web
service is a contained transaction; in other words, transactions do
NOT cross web service calls.
Today something unknown and as yet unidentified went wrong in the code
and a transaction was left open indefinitely (over an hour until I noticed
it by accident and killed the offending process).
Question: Is there a setting that I'm not seeing for the server or
database that will limit how long a transaction can hold locks?
For example, in our scenario no transaction should take longer than a
few seconds, so if I could set something in SQL Server that would
roll back and kill any transaction that lasted more than 1 minute,
I'd be able to limit the impact of this problem.
Thanks in advance for any help you can give . . .
For this case, I don't think it's locks you should be concerned about.
Before killing a connection, I would look in the master..sysprocesses
table or run sp_who2 and find out which application it is. For that spid,
run DBCC INPUTBUFFER(spid) and see what is being, or was, executed. Is
the connection in a sleeping state? Chances are you are not getting a
commit or rollback from the application, or the service stopped and
somehow could not initiate a rollback. You could write a query against the
master..sysprocesses table that checks for connections with open_tran
> 0, in a sleeping state, and with last_batch older than a specified
time, then run a KILL statement on each. You can achieve this via a cursor
over the query result. If you are using some connection mechanism that
leaves your connection in an open_tran state of 1 (I've seen some Java
drivers do this), then try open_tran > 1. However, I don't advocate
doing this. The best thing to do is let it happen again and troubleshoot
the application. The connection will eventually be closed; see
Orphaned Sessions in BOL.
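For illustration, here is a sketch of the kind of check described above, against SQL 2005's master..sysprocesses. The 10-minute threshold is an arbitrary example, not a recommendation:

```sql
-- Find sleeping connections that have held an open transaction
-- for more than 10 minutes (threshold is an arbitrary example).
DECLARE @spid int, @sql varchar(100);
DECLARE offenders CURSOR FOR
    SELECT spid
    FROM master..sysprocesses
    WHERE open_tran > 0
      AND status = 'sleeping'
      AND last_batch < DATEADD(MINUTE, -10, GETDATE());
OPEN offenders;
FETCH NEXT FROM offenders INTO @spid;
WHILE @@FETCH_STATUS = 0
BEGIN
    -- KILL does not accept a variable directly, so build dynamic SQL.
    SET @sql = 'KILL ' + CAST(@spid AS varchar(10));
    EXEC(@sql);
    FETCH NEXT FROM offenders INTO @spid;
END
CLOSE offenders;
DEALLOCATE offenders;
```

Again, as noted above, running something like this automatically destroys the evidence you need for root cause analysis, so treat it as a stopgap at most.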
|||I agree with Ken that you should perform root cause analysis. Consider
adding SET XACT_ABORT ON to stored procedures that begin explicit
transactions. This setting rolls back the transaction immediately and
cancels the batch in the event of most errors, including command timeouts.
If you start transactions from application code, execute SET XACT_ABORT ON
on the connection after opening it, to help ensure transactions on pooled
connections are rolled back immediately in the event of an error. Also
check to ensure a commit or rollback is issued on all code paths.
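A minimal sketch of that pattern follows; the procedure and table names are made up for illustration:

```sql
CREATE PROCEDURE dbo.usp_TransferExample  -- hypothetical procedure
AS
BEGIN
    -- Most run-time errors (including a client command timeout) now
    -- roll back the transaction and abort the batch, instead of
    -- leaving an open transaction holding locks.
    SET XACT_ABORT ON;

    BEGIN TRANSACTION;
        UPDATE dbo.Accounts SET Balance = Balance - 100 WHERE AccountID = 1;
        UPDATE dbo.Accounts SET Balance = Balance + 100 WHERE AccountID = 2;
    COMMIT TRANSACTION;
END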
Performance problems (e.g. scans) often lead to concurrency (blocking)
problems, which in turn cause other queries and transactions to run
longer and lead to still more problems. The best way to break the cycle
is to optimize queries and add indexes for efficient data access. Start
by analyzing the execution plans in SSMS for the queries that are the
longest running and most frequently executed.
See http://support.microsoft.com/kb/224453 for some tips. Although the
article was written for SQL 7 and 2000, much of it applies to SQL 2005 too.
If you need to gather SQL trace data from a busy server, run a server-side
trace with a filter (e.g. RPC and T-SQL batch completed events with duration
greater than 1,000,000 microseconds) instead of running the Profiler tool
directly against the production server. You can develop the desired trace
using Profiler on a dev box, script it, and then run the script on the
prod server as a server-side trace.
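Such a scripted server-side trace might look roughly like the sketch below. The event and column IDs (10 = RPC:Completed, 12 = SQL:BatchCompleted, 1 = TextData, 13 = Duration) and the filter operator codes are taken from the SQL 2005 trace documentation; verify them against Books Online before use, and note the output path is a placeholder:

```sql
DECLARE @TraceID int, @maxsize bigint, @duration bigint;
SET @maxsize = 50;            -- max trace file size in MB
SET @duration = 1000000;      -- 1 second, in microseconds on SQL 2005

-- Create the trace (file path is a hypothetical example)
EXEC sp_trace_create @TraceID OUTPUT, 0, N'C:\Traces\LongBatches', @maxsize;

-- Capture TextData (1) and Duration (13) for RPC:Completed (10)
-- and SQL:BatchCompleted (12)
EXEC sp_trace_setevent @TraceID, 10, 1, 1;
EXEC sp_trace_setevent @TraceID, 10, 13, 1;
EXEC sp_trace_setevent @TraceID, 12, 1, 1;
EXEC sp_trace_setevent @TraceID, 12, 13, 1;

-- Filter: Duration (column 13), AND (0), greater than (2) @duration
EXEC sp_trace_setfilter @TraceID, 13, 0, 2, @duration;

-- Start the trace
EXEC sp_trace_setstatus @TraceID, 1;
```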
Hope this helps.
Dan Guzman
SQL Server MVP
"karlag92" <karlag92@.hotmail.com> wrote in message
news:1186013382.486135.138640@.r34g2000hsd.googlegr oups.com...
|||On Aug 2, 6:51 am, "Dan Guzman" <guzma...@.nospam-online.sbcglobal.net>
wrote:
Thanks for the input guys, but I still need a simple way to configure
SQL Server to protect itself. Maybe what Ken suggests would work, but
it seems like a lot of work to accomplish something so basic.
I do intend to do root cause analysis and find the problem in the code
that causes this. However, I do not have even a part time DBA to
monitor and watch for these things to happen. If I could kill the
transaction automatically from the DB side when it runs too long, this
would help stop it from becoming a major problem but not deprive me of
what I need to fix the problem.
Transactions are managed in the client code in the web service. I
have no doubt that something broke there and will chase that down.
I just need a simple way to have SQL Server not allow a problem to
fester overly long . . . It would reduce the impact and severity of
the problem dramatically.
|||I think your question expressed as a need is valid.
There is probably a good reason (or reasons) why Microsoft doesn't allow
you to enforce this policy or rule on SQL Server.
I invite the SQL MVPs to comment on why that is.
|||> There is probably a good reason (or reasons) why Microsoft doesn't allow
> you to enforce this policy or rule on SQL Server.
IMHO, this is something that should be implemented in the client API (like
CommandTimeout) rather than on the server side. I would be very concerned
about specifying a global transaction timeout on the server. As a server
policy, the timeout would need to be accompanied by more restrictive
criteria, such as host or application name.
If this feature is important to you or karlag92, consider submitting a
product enhancement request via Connect Feedback
(http://connect.microsoft.com/SQLServer).
Hope this helps.
Dan Guzman
SQL Server MVP
"Ken" <kshapley@.sbcglobal.net> wrote in message
news:1186168209.808571.152240@.d55g2000hsg.googlegr oups.com...
>I think your question expressed as a need is valid.
> There is probably good reason(s) why Microsoft doesn't allow you to
> enforce this policy or rule on sql server.
> I invite the SQL MVPs to comment on why that is.
>
