I have some code in a transaction block that reads in data and then updates a field in the same table with a different query.
It looks something like this:
prc.services = supplierService.getServicesByCategory(event);
supplierService.updateServiceWeights(event, rc, prc);
The problem is that this is throwing a deadlock under extreme load. I thought of putting a lock around the code, but I am not sure that is the right answer. It has been a long time since I have seen these sorts of errors, so any guidance or recommendations would be appreciated.
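For anyone picturing the SQL side, the pattern inside the transaction is roughly the following (the table and column names here are my guesses, not the actual schema):

```sql
BEGIN TRANSACTION;

-- the read takes shared locks on the rows it touches
SELECT id, serviceName, weight
FROM serviceWeight              -- hypothetical table name
WHERE categoryID = @categoryID;

-- the update then needs exclusive locks on the same rows;
-- under load, two sessions that have both read and are both
-- trying to update can end up waiting on each other,
-- which is the classic select-then-update deadlock
UPDATE serviceWeight
SET weight = weight - 1
WHERE categoryID = @categoryID;

COMMIT TRANSACTION;
```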
When we have seen these types of issues, the problem has been resolved with changes to the SQL.
In what way? It is a very basic select statement followed by an update statement.
The problem is I can't optimise this query any more than it already is.
First, I assume you aren't doing something like select * from…
Are you using a view?
What I have experienced with transactions, locks, and inevitably deadlocks is that they halt all affected parts of the system until the action is complete. When you add an active load, you end up building a linear queue of locks instead of utilizing parallel threads. We resolved this by scoping our variables, which stopped the cross-exchange of variables and keeps requests running in parallel threads, and by logging to data files and doing manual rollbacks instead of system rollbacks on failure. Here is a brief example of what I was doing:
insert into table1 values (1,2,3,4,5);

select top 1 itemID
from table1
order by itemID desc;

insert into table1 values (#item1.item1ID#,2,3,4,5);
I am now doing:

declare @itemID int;
insert into table1 values (1,2,3,4,5);
set @itemID = scope_identity();
insert into table1 values (@itemID,2,3,4,5);
<cfif name.StatusCode eq 1>Completed<cfelse>Failed</cfif>
This isolates the inserts and allows them to actually run on top of each other, and with logging in MSSQL it will do complete rollbacks of individual transactions while allowing the others to run in parallel. The downside is that you may have gaps in your IDs when you are doing multithreaded processing, because one transaction will complete while another that started earlier rolls back, and your auto-increment keeps rolling.
Don't get me wrong, I know what causes them; I just haven't had to deal with them for a long time. In my case, I am doing an update at the very moment another request is doing a select. I didn't want to put a lock around this because it would hold the request up longer than I would like.
Out of the 3500 queries in the application, this is the only one causing issues, but it is also not what I would consider a normal query. It is basically changing the order right after it is loaded. In other words, we may have a query returned to us like this:
id 1, value Test 1, weight 1
id 2, value Test 2, weight 2
id 3, value Test 3, weight 3
id 4, value Test 4, weight 4

After it is retrieved, it is then modified with an update to look like this:

id 1, value Test 1, weight 4
id 2, value Test 2, weight 1
id 3, value Test 3, weight 2
id 4, value Test 4, weight 3
And to make it worse, this is a relationship table that pulls back around 300 records. So it isn't that many, and it is very quick, just not quick enough when it is being called at the same time I am updating the table as above.
So I knew that a lock would do the trick; I just didn't want to go there if I didn't have to.
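For what it's worth, instead of a cflock around the CFML, SQL Server can do the serialization itself with a locking hint on the read, so a second transaction blocks briefly rather than deadlocking. A sketch, with a hypothetical table name and the rotation logic inferred from the example data above:

```sql
DECLARE @parentID int = 42;     -- hypothetical key for this set of rows

BEGIN TRANSACTION;

-- UPDLOCK takes update locks at read time instead of shared locks,
-- so a concurrent transaction waits on this select rather than
-- deadlocking after both have already read; HOLDLOCK keeps the
-- locks until commit
SELECT id, value, weight
FROM relationshipTable WITH (UPDLOCK, HOLDLOCK)
WHERE parentID = @parentID
ORDER BY weight;

-- rotate the weights (1 wraps around to the highest value)
UPDATE relationshipTable
SET weight = CASE
                 WHEN weight = 1
                     THEN (SELECT MAX(weight) FROM relationshipTable
                           WHERE parentID = @parentID)
                 ELSE weight - 1
             END
WHERE parentID = @parentID;

COMMIT TRANSACTION;
```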
Have you thought about doing your select into an @table variable in SQL, doing your update against the @table, then updating your real table, and finally selecting from the @table? That would avoid the lock, unless you have multiple people updating the same information in the same table, in which case your only option is a lock, but then the information would always be incorrect. I'm not sure I understand the application completely, but I would be happy to help.
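Here is a rough sketch of that @table idea, using the example data above (the table and column names are placeholders, and the rotation logic is inferred from the sample weights):

```sql
DECLARE @parentID int = 42;     -- hypothetical key for this set of rows
DECLARE @t TABLE (id int, value varchar(50), weight int);

-- snapshot the rows into the table variable; no row locks are
-- held once this statement completes
INSERT INTO @t (id, value, weight)
SELECT id, value, weight
FROM relationshipTable
WHERE parentID = @parentID;

-- rotate the weights in the snapshot (1 wraps to the highest value)
UPDATE @t
SET weight = CASE WHEN weight = 1
                  THEN (SELECT MAX(weight) FROM @t)
                  ELSE weight - 1 END;

-- write the new weights back in one short statement
UPDATE r
SET r.weight = t.weight
FROM relationshipTable AS r
JOIN @t AS t ON t.id = r.id;

-- hand the already-reordered snapshot back to the caller
SELECT id, value, weight FROM @t ORDER BY weight;
```

The point is that the long-held read and the write never overlap on the real table: the base table is only touched by two short single statements.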
Yeah, unfortunately this is a weight-based query; in other words, each time it is hit it has to change, so a lock is the only way to go then.
Is this like a rotational banner where you have x widgets and can only show y, so you want to change the sort order so they all stay balanced?
I thought long and hard before I did what I came up with, and the only solution (and I know there has to be a better way) was to load it up and store it in the session. That way a user can paginate through and the order is not changed until they move away from this area and come back again.
It also means other users don't disturb the order while someone is browsing.
The strange thing is that I designed this system over 8 years ago but never tested it under load. Now that I have the application back in my hands, I decided to give it a load test and of course started getting these deadlocks. I should try the original app and see what happens, but I am almost certain it will have the same issue.
What if you run the query, load it into the session, and use QoQ (query of queries) to manipulate it for each session?
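A minimal sketch of that, reusing the service call from the first post; the column names and the QoQ usage are my assumptions:

```cfml
<!--- load once per session instead of on every request --->
<cfif not structKeyExists(session, "services")>
    <cfset session.services = supplierService.getServicesByCategory(event)>
</cfif>

<!--- query of queries: re-sort the in-memory copy, so there is
      no database call and therefore no lock contention --->
<cfquery name="prc.services" dbtype="query">
    SELECT id, value, weight
    FROM session.services
    ORDER BY weight
</cfquery>
```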
Steve, that is what I am already doing.