Two simultaneous raids: serious issues
shershenator opened this issue · 16 comments
Today our guild ran two simultaneous raids for the first time, and afterwards it turned out that the addon sometimes didn't update DKP. It happened when we killed bosses at the same time (two bosses died within about a second of each other), and the DKP table also didn't update while we were distributing loot. Sorting this out was a nightmare for us.
Also, both I (as Master Looter of raid 1) and the ML of raid 2 sometimes saw a message box asking whether the DKP table should be overwritten with outdated data from the other ML. After a few seconds the message disappeared.
Not sure if this is a bug or if we did something wrong.
It was neither a bug nor your fault. The addon was originally designed for my own guild, which had no plans for a multi-raid configuration, so it assumed only a single person conducting broadcasts. I did not anticipate so many people using it. But since this has become a growing issue for larger guilds, a new broadcast system is being made that will eliminate that possibility.
Could you please let us know when we can expect the update? I'm asking because if it's not coming any time soon, we will move our two raids to different days.
I'm doing my best to get it out soon, hopefully this week. It's a considerably intricate system and I want to get it right.
> I'm doing my best to get it out soon, hopefully this week. It's a considerably intricate system and I want to get it right.
Thank you very much!
Just spitballing here, but what about having someone check for the seed? If they are up to date with the seed, update the table and then wipe the seed; everyone else's table stays as-is when they log in, until they get a broadcast from the person with the full table. Alternatively, allow them to convert too; if they are up to date with the seed, the DB should be in a consistent state anyway.
The system is getting close to done. I'm hoping to have it out by the end of the weekend. Right now the biggest obstacle is figuring out a way to migrate everyone's current tables to the new system while keeping it dummy-proof, since only one person can migrate it. If multiple people do, it could corrupt all their past data. Once I get that straightened out, it'll be out.
The system I'm writing uses an indexing scheme. When queried by an officer, you tell them the highest index you currently have for each officer, and then they send you what is missing. So if your highest index for OfficerA is 30, and the querying officer has entries up to index 40 for that officer, they'll send you the other 10. Then you clean up by cycling through them all to find any indexes missing in between. The system is 100% written at the moment and working beautifully.

However, I'm trying to work out the best way to handle broadcasting to someone who is new when indexes have already been archived (i.e. indexes 1 through 20 might be missing due to table trimming to keep the file size manageable). As well as the best way to migrate tables to the new system. It's going to require that only one officer execute the migration and then push to everyone else. But as I've learned, no matter how well I convey instructions to someone, they like to just close the box without reading. So I'm working out a better way to essentially dummy-proof it.
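In rough Lua terms, the exchange works something like this (the table layout and names like `dkpLog` are simplified for illustration, not the actual addon code):

```lua
-- Each officer's broadcast entries are stored by sequential index.
local dkpLog = {
  OfficerA = { [38] = "entry38", [39] = "entry39", [40] = "entry40" },
  OfficerB = { [11] = "entry11", [12] = "entry12" },
}

-- Reply to a query: report the highest index held for each officer.
local function HighestIndexes(log)
  local highest = {}
  for officer, entries in pairs(log) do
    local max = 0
    for index in pairs(entries) do
      if index > max then max = index end
    end
    highest[officer] = max
  end
  return highest
end

-- Given the querier's highest indexes, collect anything they are missing.
local function CollectMissing(log, theirHighest)
  local missing = {}
  for officer, entries in pairs(log) do
    local theirs = theirHighest[officer] or 0
    for index, entry in pairs(entries) do
      if index > theirs then
        missing[#missing + 1] = { officer = officer, index = index, entry = entry }
      end
    end
  end
  return missing -- would be serialized and broadcast back to the querier
end
```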
> As well as the best way to migrate tables to the new system. It's going to require that only one officer execute the migration and then push to everyone else. But as I've learned, no matter how well I convey instructions to someone, they like to just close the box without reading. So I'm working out a better way to essentially dummy-proof it.
Kind of what I was saying was, for the migration, couldn't you do the following (rough sketch after the list)?
- Have it check the GM public note for the "seed", to see if you are up to date
- If you are up to date, perform the migration (wipe the seed from the note)
  - Check for other clients online
  - Negotiate who does the update (based on rank, then some arbitrary number like most /played?)
  - Set a variable in the GM public note saying you are migrated
- If you are not up to date, block until you get a broadcast with the correct data and then negotiate who performs the update
  - Check for other clients online
  - See if they can perform the update
  - Receive the new table
  - If there is an index instead, query for the missing indexes and dump your database
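A rough sketch of that negotiation in Lua, where `GetGMPublicNote` and `GetOnlineOfficers` are hypothetical stand-ins for whatever the addon actually exposes:

```lua
-- Seed check: the GM's public note is assumed to carry something
-- like "seed:2019-11-03" until the migration has been performed.
local function UpToDateWithSeed(myTableVersion)
  local note = GetGMPublicNote()
  local seedVersion = note and note:match("^seed:(.+)$")
  return seedVersion ~= nil and myTableVersion == seedVersion
end

-- Pick exactly one migrator among the online, up-to-date officers:
-- highest rank wins, ties broken by an arbitrary number like /played.
-- (rank and played are assumed to be numbers where larger is "better".)
local function ElectMigrator(candidates)
  table.sort(candidates, function(a, b)
    if a.rank ~= b.rank then return a.rank > b.rank end
    return a.played > b.played
  end)
  return candidates[1]
end

local candidates = {}
for _, officer in ipairs(GetOnlineOfficers()) do
  if officer.upToDate then
    candidates[#candidates + 1] = officer
  end
end

local migrator = ElectMigrator(candidates)
if migrator and migrator.name == UnitName("player") then
  -- perform the migration, wipe the seed from the note,
  -- and mark the migration as done in the GM public note
end
```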
Personally, I would have done a git-style (hash) index instead, as it would allow you to be "out of date" but still functional, and you could also then perform the migration on multiple clients and exchange tables afterwards to compare (and reconcile if needed). This would have improved your data integrity as well, since some people might have some DKP entries and some might not (depending on whether your table is in an inconsistent state).
You would have an overall hash; if everyone agrees on the overall hash, you don't need to communicate, but if you are behind on the overall hash, you need to negotiate and see which entries are missing. You could also compute weekly, monthly, and yearly hashes (just on the fly) to help narrow down which one is missing.
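Something like this, as a sketch (`sha1` is assumed to come from a library or a pure-Lua implementation, since WoW's Lua has no built-in hashing):

```lua
-- Sketch of the overall / per-period hash narrowing idea.
-- Entries are assumed to look like { date = epochSeconds, hash = entryHash }.

-- Bucket entry hashes by period ("%Y" yearly, "%Y-%m" monthly, "%Y-%W" weekly)
-- and hash each bucket, so peers only compare a handful of values.
local function PeriodHashes(entries, periodFormat)
  local buckets = {}
  for _, entry in ipairs(entries) do
    local key = date(periodFormat, entry.date) -- WoW exposes os.date as date()
    buckets[key] = (buckets[key] or "") .. entry.hash
  end
  for key, concatenated in pairs(buckets) do
    buckets[key] = sha1(concatenated)
  end
  return buckets
end

-- Compare my buckets against a peer's; any mismatched bucket gets
-- drilled into with the next finer period, down to the raw entries.
local function MismatchedBuckets(mine, theirs)
  local mismatched = {}
  for key, hash in pairs(mine) do
    if theirs[key] ~= hash then
      mismatched[#mismatched + 1] = key
    end
  end
  return mismatched
end
```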
Do you store the current index anywhere?
FYI, here is a good write-up on the git commit hash: https://blog.thoughtram.io/git/2014/11/18/the-anatomy-of-a-git-commit.html
You could do something similar:
```
hash(
  player,
  zone,
  encounter,
  item,
  date,
  time
)
```
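Concretely, that could look something like this in Lua (`sha1` again assumed to come from a library):

```lua
-- Commit-style hash over one DKP entry's relevant fields.
-- Fields are assumed to be plain strings or numbers.
local function EntryHash(entry)
  return sha1(table.concat({
    entry.player,
    entry.zone,
    entry.encounter,
    entry.item,
    entry.date,
    entry.time,
  }, "\n"))
end
```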
I wouldn't worry too much about this though; I think for its purposes, the index you are using will definitely be good.
I'm not personally familiar with how git creates a hash or how you'd be able to identify any sort of sequence from those hashes. But that migration was essentially the plan. If you've got more information on that style of indexing, I'm all ears.
> I'm not personally familiar with how git creates a hash or how you'd be able to identify any sort of sequence from those hashes.
It is a sha1 hash of relevant commit data.
I don't think you would need a sequence with your system; you would just need to tie the relevant commit data to a hash. You could even do "squash" hashes (a hash of hashes) to limit your data transfer.
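A squash hash could be as simple as (`sha1` assumed, as above):

```lua
-- "Squash" hash: one hash over a whole run of entry hashes, so peers
-- can compare a single value instead of transferring every entry hash.
local function SquashHash(entryHashes)
  return sha1(table.concat(entryHashes, ""))
end
```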
I think it is a bit too late to change it now though, if you have already done the work.
Ah. Yeah, the only issue with hashing would be that it would be incredibly difficult to determine what data is missing so it can be requested. Using an index that iterates sequentially, I can simply run through, see that I have 14 and 16, so 15 is missing, and submit a request for that index. All data is serialized and hashed prior to broadcasting for quicker transfers.
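That gap scan is straightforward with sequential indexes; roughly (table layout assumed, as in the earlier sketch):

```lua
-- Walk one officer's entries from the lowest to the highest index held
-- and collect any gaps, so they can be requested from another officer.
-- Starting at the lowest index skips anything trimmed from the archive.
local function FindMissingIndexes(entries)
  local lowest, highest
  for index in pairs(entries) do
    if not lowest or index < lowest then lowest = index end
    if not highest or index > highest then highest = index end
  end
  local missing = {}
  if lowest then
    for index = lowest, highest do
      if entries[index] == nil then
        missing[#missing + 1] = index
      end
    end
  end
  return missing
end

-- e.g. FindMissingIndexes({ [14] = "a", [16] = "b" }) --> { 15 }
```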
Also, all tables are still usable if indexes are missing. I feel it would be very rare for an officer to make changes when absolutely no other officers are online to receive them. And as long as someone has the entries, they can broadcast them. As long as another officer logs in to get those entries, the data propagates naturally.