The problem is a third-party MS Access '97 application that is no longer supported, so a new system is currently being built in SQL Server. The Access database has a UI for small amounts of input and viewing, but most of the work is done through a Wavelink application. The Wavelink application allows users on the shop floor, via wireless devices, to pick and count product, which then updates Access. There have always been record-locking issues that caused some delays at the wireless devices, but nothing dramatic. Until now. The wireless network and wireless devices have been upgraded, and record locking (I presume that's what's causing the problems) is now causing large productivity delays requiring constant intervention.
The Wavelink application is also being updated from VB6 to VB.Net to work with the new SQL Server application. One of the requests from the client is that the Wavelink upgrade somehow fix the problem with the Access database.
My idea was to link the Access database to SQL Server so that the wireless devices go through SQL Server to Access, hopefully taking the load off Access. Is this possible? Would SQL Server prevent the record locks and provide a better multi-user environment, or is that something inherent in the Jet OLE DB engine that linking is not going to circumvent?
Regards
Linking to Access from SQL will not remove transaction locks from the scanners. There is no magic bean there.
My suggestion would be more along the lines of creating a purgatory (temp) table that replicates to the live data, either through a scheduled DTS package or some clever stored procedure.
Whichever option you choose, you must ensure the hardware devices will not interfere with the replication.
If it were me, but it's not, I would create the mirrored purgatory table to house the live scans, then, once the scans are complete, use a stored procedure to find duplicate records via a JOIN against the stored data table and increment/decrement the values as needed.
As a side note, having two tables that replicate will remove the locks from the stored data table so that updates/inserts are controlled by the purgatory table. I'd dabble with an INSERT trigger on the purgatory table to check for dupes and apply the changes to the real table.
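Something along these lines is what I'm picturing (only a rough sketch; the Scan_Purgatory/Inventory table and column names are invented for illustration, not taken from your schema):

CREATE TRIGGER trg_ScanPurgatory_Insert
ON dbo.Scan_Purgatory
AFTER INSERT
AS
BEGIN
    -- Flag any incoming scan that duplicates a product already sitting in
    -- purgatory (IsDuplicate is assumed to default to 0), so it can be
    -- reviewed instead of being applied twice.
    UPDATE p
    SET p.IsDuplicate = 1
    FROM dbo.Scan_Purgatory AS p
    JOIN inserted AS i
      ON i.ScanID = p.ScanID
    WHERE EXISTS (SELECT *
                  FROM dbo.Scan_Purgatory AS prior
                  WHERE prior.ProductID = p.ProductID
                    AND prior.ScanID < p.ScanID)

    -- Roll the remaining quantities into the real table in one pass, so the
    -- scanners only ever write to the purgatory table.
    UPDATE inv
    SET inv.QtyOnHand = inv.QtyOnHand + s.QtyChange
    FROM dbo.Inventory AS inv
    JOIN (SELECT i.ProductID, SUM(i.QtyChange) AS QtyChange
          FROM inserted AS i
          JOIN dbo.Scan_Purgatory AS p
            ON p.ScanID = i.ScanID
          WHERE p.IsDuplicate = 0
          GROUP BY i.ProductID) AS s
      ON s.ProductID = inv.ProductID
END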
Just my twist on it,
Adamus
|||Using SQL Server as a backend data server 'could' help.
One of the problems is the way that Access manages locks. When a row of data needs to be updated, Access locks an entire data page (2 KB in Access '97), which may contain quite a few rows other than the one being updated. That then blocks other users from the data in adjacent rows. As you've probably experienced, that often leads to corruption in the lock file, and you must get everyone out of the Access application and delete the lock file. An unpleasant and not too quick process...
SQL Server will lock only the single row being updated, allowing other users to manage the data in adjacent rows. Also, when using SQL Server, if the 'process' is issuing a single UPDATE command, there is no reason to wrap the UPDATE command in a TRANSACTION. With SQL Server, even a single command is an implicit TRANSACTION.
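For example (a minimal sketch; the Inventory table and column names here are hypothetical):

DECLARE @ProductID int, @QtyPicked int
SELECT @ProductID = 1234, @QtyPicked = 2   -- values the scanner would supply

-- This single statement is its own implicit transaction in SQL Server: it
-- locks only the row(s) it touches and either commits the change or, on
-- failure, leaves the data exactly as it was. No BEGIN TRANSACTION needed.
UPDATE dbo.Inventory
SET QtyOnHand = QtyOnHand - @QtyPicked
WHERE ProductID = @ProductID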
|||The scanners will still lock the table.
Arnie Rowland wrote:
Using SQL Server as a backend data server 'could' help.
Do you mean it locks the table?
Arnie Rowland wrote:
SQL Server will lock only the single row being updated, allowing other users to manage the data in adjacent rows.
There is 'always' a reason to use transactions, especially with an UPDATE/DELETE, unless you enjoy rolling back entire tables. It is good practice and a standard in most analyst positions to use transactions to avoid the "oops, I've updated every record" scenario.
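For example, on any multi-row change I would do something like this (sketch only; the table, WHERE clause, and row-count threshold are hypothetical):

DECLARE @err int, @rows int

BEGIN TRANSACTION

UPDATE dbo.Inventory
SET QtyOnHand = 0
WHERE WarehouseID = 3          -- intended to reset one warehouse only

-- Capture both values immediately; they refer to the UPDATE above.
SELECT @err = @@ERROR, @rows = @@ROWCOUNT

-- If it errored, or hit far more rows than expected (the classic
-- "oops, I've updated every record"), back the whole thing out.
IF @err <> 0 OR @rows > 1000
    ROLLBACK TRANSACTION
ELSE
    COMMIT TRANSACTION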
Arnie Rowland wrote:
Also, when using SQL Server, if the 'process' is issuing a single UPDATE command, there is no reason to wrap the UPDATE command in a TRANSACTION.
This is where the problem occurs with multiple hardware devices attempting to write to the same table, and it is the root of this thread. The implicit transaction is locking the table.
Arnie Rowland wrote:
With SQL Server, even a single command is an implicit TRANSACTION.
Just my twist on it,
Adamus
|||The scanners will still lock the table.
Adamus Turner wrote:
Arnie Rowland wrote:
Using SQL Server as a backend data server 'could' help.
Probably Not.
Do you mean it locks the table?
Arnie Rowland wrote:
SQL Server will lock only the single row being updated, allowing other users to manage the data in adjacent rows.
No, I think that I wrote 'lock only the single row'.
There is 'always' a reason to use transactions, especially with an UPDATE/DELETE, unless you enjoy rolling back entire tables. It is good practice and a standard in most analyst positions to use transactions to avoid the "oops, I've updated every record" scenario.
Arnie Rowland wrote:
Also, when using SQL Server, if the 'process' is issuing a single UPDATE command, there is no reason to wrap the UPDATE command in a TRANSACTION.
Access has a problem in that it could fail in the middle of an operation and leave the table in an 'unknown' state. That is not the case with SQL Server. And actually, it is not the 'standard' to use a TRANSACTION with SQL Server for single-action queries.
This is where the problem occurs with multiple hardware devices attempting to write to the same table, and it is the root of this thread. The implicit transaction is locking the table.
Arnie Rowland wrote:
With SQL Server, even a single command is an implicit TRANSACTION.
With the current Access implementation, a PAGE LOCK is locking many rows and blocking other users. With SQL Server, even with an implicit TRANSACTION, the minimal amount of data is being locked.
Just my twist on it,
Adamus
|||Do you mean it locks the table?
Arnie Rowland wrote:
The scanners will still lock the table.
Adamus Turner wrote:
Arnie Rowland wrote:
Using SQL Server as a backend data server 'could' help.
Probably Not.
The implicit transactions will collide and cause errors. It is not a probability, it's a certainty.
Arnie Rowland wrote:
SQL Server will lock only the single row being updated, allowing other users to manage the data in adjacent rows.
No, I think that I wrote 'lock only the single row'.
The query optimizer will decide whether a row lock or table lock is necessary. In most cases, on larger queries, the table will be locked.
There is 'always' a reason to use transactions, especially with an UPDATE/DELETE, unless you enjoy rolling back entire tables. It is good practice and a standard in most analyst positions to use transactions to avoid the "oops, I've updated every record" scenario.
Arnie Rowland wrote:
Also, when using SQL Server, if the 'process' is issuing a single UPDATE command, there is no reason to wrap the UPDATE command in a TRANSACTION.
Access has a problem in that it could fail in the middle of an operation and leave the table in an 'unknown' state. That is not the case with SQL Server. And actually, it is not the 'standard' to use a TRANSACTION with SQL Server for single-action queries.
MS is aware of this and has provided Compact and Repair functionality to remedy this issue. The problem, on a broad scale, is that companies tend to stretch Access to mimic SQL Server and try to use it as a large application. This is not the intent of the design. Putting an Access database on a network drive and allowing multiple users to access it concurrently is asking for trouble. The resolution is in application development and SQL Server rollover.
Also, if you're defining a single-action query as 1 row affected, I'll agree with you. No transaction required. However, with 2+ records affected, transactions are an ethical imperative.
This is where the problem occurs with multiple hardware devices attempting to write to the same table, and it is the root of this thread. The implicit transaction is locking the table.
Arnie Rowland wrote:
With SQL Server, even a single command is an implicit TRANSACTION.
With the current Access implementation, a PAGE LOCK is locking many rows and blocking other users. With SQL Server, even with an implicit TRANSACTION, the minimal amount of data is being locked.
Again, the query optimizer determines this. And yes, on a very, very, very small scale this is accurate. The point is, the hardware devices used to commit these transactions 'collide' without fail. Changing the backend source will have no effect on the resolution.
Just my twist on it,
Adamus
|||Adam,
Based upon our history, I'm not going to continue to sustain this 'argument'. This is my last entry.
-Arnie
There is 'always' a reason to use transactions, especially with an UPDATE/DELETE, unless you enjoy rolling back entire tables. It is good practice and a standard in most analyst positions to use transactions to avoid the "oops, I've updated every record" scenario.
Adamus Turner wrote:
Do you mean it locks the table?
Arnie Rowland wrote:
The scanners will still lock the table.
Adamus Turner wrote:
Arnie Rowland wrote:
Using SQL Server as a backend data server 'could' help.
Probably Not.
The implicit transactions will collide and cause errors. It is not a probability, it's a certainty.
You still miss some of the finer details of locking and locking behaviour. The OP indicated that updates are made to Product records, using barcode scanners, and most likely updating a single row for one product at a time, possibly touching more than one table. With the scenario as presented, I just don't see a lot of transaction 'collisions'.
Arnie Rowland wrote:
SQL Server will lock only the single row being updated, allowing other users to manage the data in adjacent rows.
No, I think that I wrote 'lock only the single row'.
The query optimizer will decide whether a row lock or table lock is necessary. In most cases, on larger queries, the table will be locked.
And if it is single-row updates, the QP will lock a single row. The entire table will only be locked if the QP determines that maintaining multiple individual row locks has a higher cost than a table lock. But it would also issue a page lock, an extent lock, or an index lock before escalating to a table lock.
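You can see this for yourself from Query Analyzer (sketch only; the Inventory table is hypothetical, and sp_lock is simply the stock way to look at the locks a connection holds):

DECLARE @me int
SET @me = @@SPID

BEGIN TRANSACTION

-- A keyed, single-row update.
UPDATE dbo.Inventory
SET QtyOnHand = QtyOnHand - 1
WHERE ProductID = 1234

-- Show the locks this connection is holding: expect KEY/RID and intent
-- page locks, not a table (TAB) lock.
EXEC sp_lock @me

ROLLBACK TRANSACTION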
Arnie Rowland wrote:
Also, when using SQL Server, if the 'process' is issuing a single UPDATE command, there is no reason to wrap the UPDATE command in a TRANSACTION.
Access has a problem in that it could fail in the middle of an operation and leave the table in an 'unknown' state. That is not the case with SQL Server. And actually, it is not the 'standard' to use a TRANSACTION with SQL Server for single-action queries.
MS is aware of this and has provided Compact and Repair functionality to remedy this issue. The problem, on a broad scale, is that companies tend to stretch Access to mimic SQL Server and try to use it as a large application. This is not the intent of the design. Putting an Access database on a network drive and allowing multiple users to access it concurrently is asking for trouble. The resolution is in application development and SQL Server rollover.
Also, if you're defining a single-action query as 1 row affected, I'll agree with you. No transaction required. However, with 2+ records affected, transactions are an ethical imperative.
If you have ever had to deal with a situation where an Access query failed 'mid-stream', you will quickly recall that the 'Compact and Repair' tool is of no use to return the data to its previous state, and that you now have an inconsistent data state in the table. Single-action transactions are often a necessary evil with Access. I've had to 'rescue' and migrate Access databases that have exceeded every size/space limit placed on Access. I've had successful Access applications with as many as 50+ users on an Access mdb database. But that took a deep understanding of Access and careful planning and execution.
A SQL TRANSACTION (including an implicit transaction) is an 'all or nothing' operation: one row affected, or thousands. It is dependable, unlike Access. With SQL Server, it is NOT necessary to issue an EXPLICIT TRANSACTION for a single action.
This is where the problem occurs with multiple hardware devices attempting to write to the same table, and it is the root of this thread. The implicit transaction is locking the table.
Arnie Rowland wrote:
With SQL Server, even a single command is an implicit TRANSACTION.
With the current Access implementation, a PAGE LOCK is locking many rows and blocking other users. With SQL Server, even with an implicit TRANSACTION, the minimal amount of data is being locked.
Again, the query optimizer determines this. And yes, on a very, very, very small scale this is accurate. The point is, the hardware devices used to commit these transactions 'collide' without fail. Changing the backend source will have no effect on the resolution.
Adam, clearly you have worked with Access a bit, and yet have a lot to learn about SQL Server (as do I!).
I don't think that 'the hardware devices' have anything directly to do with transaction collisions; that is a matter of software design, database design, and serendipity. We can work to control the first two to the best of our skills; the last one is beyond our control ...
I fail to understand your need to pick apart and contest my original comments to the OP. Did I step on your toes somewhere?
Just my twist on it,
Adamus
|||Arnie,
It is not my intention to argue against the masses.
At one of many previous contracts, I had to redesign an Access environment into an ASP.net/SQL Server 2000 environment for a logistics company, basically nuking and paving the existing inventory system and designing it from scratch.
Part of their inventory management system used barcode scanners. Most of the pricier scanners use bulk imports to communicate with the database (not one item at a time). They scan a lot of inventory, hit a button, and the software does the rest.
This works great with 1 person. It works great with 2 people in tandem. It does not work at all concurrently. You do not need any in-depth database knowledge to anticipate the potential heartaches. Have 3+ people try to fill a water glass at the same time. Will you argue no collision will occur?
I apologize if you feel attacked; however, your suggestion isn't relevant or effective.
The problem is that more than one scanner is attempting to bulk insert concurrently. Changing the back-end source will not change anything.
Adamus
|||One correction:
Moving from an application db (Access) to a server db (SQL Server) does improve query time, in effect reducing the potential for collisions, but this is simply coincidental and not a resolution.
Best of luck,
Adamus
|||Thanks Adamus and Arnie, very interesting discussion. The question was, or should have been, would SQL Server handle the updates to the Access db in the event the table/rows were locked, i.e. would SQL Server keep trying until the timeout was reached or the lock was released on the Access table? From your discussion, it seems this would not be the case.
What I've decided to do is pull the data from the Access db into similar tables in SQL Server. The scanners would then connect only to the SQL Server tables. As each scanner will only access a single row, and the row is exclusive to a scanner, table locking should not be a problem here.
The only record locking that may occur is when the data is pushed back to Access, but I think this will be more of a timing issue with some processes that seem to run in the Access app that I will have to work around.
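In case it helps anyone else later, what I'm planning looks roughly like this (just a sketch; the linked server name, mdb path, and table/column names are placeholders rather than the real ones):

-- One-off setup: expose the Access mdb to SQL Server as a linked server.
EXEC sp_addlinkedserver
    @server     = 'ACCESS_WMS',
    @srvproduct = 'Access',
    @provider   = 'Microsoft.Jet.OLEDB.4.0',
    @datasrc    = 'C:\Data\Warehouse.mdb'

-- Pull the product rows into the SQL Server table the scanners will work against.
INSERT INTO dbo.ScanWork (ProductID, Location, QtyOnHand)
SELECT ProductID, Location, QtyOnHand
FROM ACCESS_WMS...Products

-- Scanners read and update only their own row in dbo.ScanWork while counting.

-- Later, push the counted quantities back to Access in one pass,
-- timed around the jobs that run inside the Access app.
UPDATE a
SET a.QtyOnHand = w.QtyOnHand
FROM ACCESS_WMS...Products AS a
JOIN dbo.ScanWork AS w
  ON w.ProductID = a.ProductID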
Cheers gentlemen, thanks for your input.