Monolith DKP


Enhancement: Data Pull

vjay82 opened this issue · 29 comments

commented

It would be really nice if the addon pulled its data instead of just displaying that it is outdated. If it sent a message like "does anybody have newer data?", synchronising would be a lot faster than pushing EVERYTHING every time. Also, people could update themselves instead of having to ask around.

We have multiple raids, and the non-core players move freely between them. As Monolith is built now, we have to meet in-game and manually synchronise several times a week. On the other hand, there are always people online. The data could move from one player to another if the addon were able to update itself.

commented

I can suggest my own solution, which is used for our multi-raid configuration.

Major changes:

  1. Data is bound to the character, not the account.
  2. Every log entry (history/loot) is marked with the author's name and a per-author index.
  3. There is no "seed" anymore (no officer notes are used).
  4. Every table is consistent and can be updated at any time.
  5. There is no "undo" action (delete log entry); instead, when deleting, a new log entry is added.

There is a new "meta" table, which contains info about every eligible player and their indexes known to the addon instance: player name plus the lowest history index and the latest index available.

Basic algorithm:

  1. When an authorized player (one allowed to maintain the DKP table) comes online, the addon broadcasts its own meta to everyone.
  2. After an authenticity check, every player compares their own meta table with the received meta and calculates a diff, which can result in:
    2.a) The tables are equal; nothing needs to be done.
    2.b) The receiver's table is newer; if the receiver is also an officer, the receiver sends its own meta back to the sender.
    2.c) The receiver's table is older, so a sync is needed:
    2.c.1) If the receiver's current indexes are within the sender's current index range (i.e. we are missing only the latest history), a partial sync is requested, from the receiver's current index to the sender's current index.
    2.c.2) If the receiver's current indexes are out of the sender's bounds, a full sync is requested. A full sync is never done if we have any newer part of the table.
    2.d) The receiver's table is in a mixed state (partially newer/older): a combination of 2.c.1 and 2.b.
  3. When any type of sync is done (the local table is updated), step 1 is performed once more.

On a partial sync request, the part of the local history log with the requested author and indexes is sent back; each action is applied to the local DKP table (adds or removes DKP).
On a full sync request, the whole table is sent back.

When history table is purged of old entries, corresponding meta lower-indexes are updated.
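As an illustration only, here is a minimal Lua sketch of answering such a partial sync request, assuming a flat history table where each entry carries an ["index"] field of the form "Author-N" (the helper name CollectHistoryRange is hypothetical):

-- Hypothetical helper: collect the history entries written by `author`
-- whose per-author indexes fall in [fromIdx, toIdx], to answer a partial sync request.
local function CollectHistoryRange(history, author, fromIdx, toIdx)
	local result = {}
	for _, entry in ipairs(history) do
		-- entry["index"] is assumed to look like { "OfficerA-7" }
		local owner, num = string.match(entry.index[1], "^(.+)%-(%d+)$")
		num = tonumber(num)
		if owner == author and num and num >= fromIdx and num <= toIdx then
			table.insert(result, entry)
		end
	end
	return result
end

-- Usage sketch: whisper the collected entries back to the requester, e.g.
-- CollectHistoryRange(MonDKP_DKPHistory, "OfficerA", 16, 25)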

commented

Do you have an example of that in use? It's a bit above my current understanding with regards to meta tables.

commented

By "meta table" I meant extra information for the history log: just a table named meta with specific content, i.e.

meta = {
  custodian = {
    lower = 1,
    current = 10
  },
  roeshambo = {
    lower = 5,
    current = 7
  }
}

While the DKP history is

MonDKP_DKPHistory = {
	{
		["players"] = "Зетмарк,Ытьыть,Дс,Серпантин,Спидрагос,Варглэйв,Мистоган,Мяу,",
		["dkp"] = 5,
		["date"] = 1569869365,
		["reason"] = "Other - 5 DKP weekly",
		["index"] = { "custodian-1" }
	}, 
	{
		["players"] = "Зетмарк,Ытьыть,Дс,Серпантин,Спидрагос,Варглэйв,Мистоган,Мяу,",
		["dkp"] = 5,
		["date"] = 1569869365,
		["reason"] = "Other - 5 DKP weekly",
		["index"] = { "custodian-6" }
	},
	{
		["players"] = "Ризониус,",
		["dkp"] = -65,
		["date"] = 1569561133,
		["reason"] = "Other - Шапка Т2 Ониксия",
		["index"] = { "roeshambo-6" }
	},
	....
}
commented

I'm familiar with meta tables. I'm simply not 100% on how the addition of a meta table would nullify the need for a public note identifying what is and isn't up to date.

commented

@Roeshambo those are not Lua-language metatables.

Since every character maintains its own index sequence (ordinal or time-based), every officer's DKP table is considered up-to-date.
If officer1 adds some DKP while officer2 is offline (or vice versa), then when both are online at the same time they exchange the missing parts of the info, based on the last known indexes.

commented

Gotcha. My intended plan was to have a meta table containing a value for every entry made, holding a timestamp and the name of whoever executed the entry, then simply compare those meta table entries and send the parent-table entry if the timestamp was older. Even if it's not necessarily "up-to-date", it would be updated again when someone with an even newer timestamp came online. The only problem I see with that is when the tables get to over 1000 entries each: that would result in a considerably heavy load as far as addon memory consumption goes. Unless I'm misinformed about how much memory the saved variables actually consume.
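Purely for illustration, a sketch of that per-entry timestamp idea (the field names here are hypothetical, not the addon's actual format):

meta = {
	{ who = "OfficerA", when = 1569869365 },	-- one { who, when } pair per parent-table entry
	{ who = "OfficerB", when = 1569869366 },
}

-- Compare a received meta entry against the local one; if the local copy is
-- missing or older, the corresponding parent-table entry should be (re)sent.
local function EntryIsOutdated(localEntry, remoteEntry)
	return localEntry == nil or localEntry.when < remoteEntry.when
end

With thousands of entries, this meta table itself becomes another sizeable saved variable, which is the memory concern mentioned above.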

commented

I've been trying to find a way to accomplish this without accidentally causing data to be lost. It also would have to be strictly officers that broadcast the data, as literally anyone could go into their DKP tables, change the seed to appear "up to date" and then log in to have it broadcast. Just a lot of variables to consider.

commented

Could you check on the receiver's side whether it is an officer who is sending/answering? I think this is something you had to do anyway, as somebody could modify the addon, remove the officer check and broadcast faulty data.

commented

Yes, the addon as it is now checks and only accepts broadcasts if it's an officer sending them.

commented

Every addon instance could query for new data on login, on /dkp and on a repeating timer. You can still show the green/red indicator to let users know whether they are seeing up-to-date information.

Then I guess you are pretty much set up to be able to do this, as long as every data entry has a timestamp, no? (The base settings need to be timestamped, too.)
One additional thing to consider: if a user truncates the history, the addon should consolidate everything into a single first entry and mark it as such, letting somebody who is behind this first entry and asking for new data know to throw everything away before that point.

Probably things are more complex behind the scenes but ... it would be a really, really good feature.

commented

The only real concern with automatic broadcasts like that would come later on. Each table can grow into the thousands of entries, which takes several minutes to broadcast. Now stack that with multiple people requesting broadcasts; it can backlog for several minutes. That means the broadcasting officer (who wouldn't even know he is broadcasting) wouldn't be able to zone or log out until it completes, or it could corrupt those players' tables. There's a lot going on with it in the background, but I haven't found a reliable way to do it automatically without risking integrity.

commented

Compound that with the fact that there's no real way to have officers negotiate in the background to determine who sends the data, so every officer would broadcast simultaneously. And I'm not versed well enough in how the communication actually functions to know whether that might result in duplicate data.

commented

If everybody only asks for data with newer timestamps, an officer would begin sending it entry by entry. If that sending process gets interrupted, the receiver asks again whether somebody has newer data for the now-newest entry, and the process begins anew. If everybody queries for the newest data all the time, somebody needing everything from the beginning would be a rare case, no?

If this is made a 2-step process you could distribute the load:

  1. step: Query whether somebody has newer data. Ideally several officers answer.
  2. step: Randomly pick one of the answering officers and ask him to start sending updates. The officer sends the entries (from old to new) in private communication.

(3. step: go back to step 1 until nobody answers with a yes)

If this is still too much load you could add a maximum at step 2, e.g. every officer only answers with at most 50 entries, and add a random delay between steps 2 and 3. That way tables update slowly while people are playing the game.
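Just to make the idea concrete, a rough sketch of step 2 with the cap and the random delay (SendUpdateRequest and the list of answering officers are assumptions; C_Timer.After is the stock WoW timer API):

-- Hypothetical sketch: pick one of the answering officers at random and ask him
-- for at most MAX_BATCH entries, after a small random delay to spread the load.
local MAX_BATCH = 50

local function RequestNextBatch(answeringOfficers, lastKnownIndex)
	if #answeringOfficers == 0 then return end	-- nobody has newer data: we are done
	local officer = answeringOfficers[math.random(#answeringOfficers)]
	C_Timer.After(math.random(1, 5), function()
		-- SendUpdateRequest is hypothetical; it would whisper the chosen officer,
		-- asking for up to MAX_BATCH entries newer than lastKnownIndex.
		SendUpdateRequest(officer, lastKnownIndex, MAX_BATCH)
	end)
end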

commented

If you had a lot of players, wouldn't the size of the meta tables become a problem?

commented

The meta tables would only need to contain data on those authorized to create entries (officers). It wouldn't be more than, essentially, a count of how many entries each has made individually. At least that's how I'm reading it. E.g.: if my table says I have 32 entries (which would always be right, because it only increments when I create an adjustment), and you log in and yours says you only have up to 16 from me, you simply need the other 16 broadcast to you.

commented

@vjay82 the meta table contains 1 entry per authorized player with 2 numbers: the current index and the lowest known index.

@Roeshambo I suggest independent indexes for every authorized player.
The lower-bound value is needed to get a clue about the depth of the full history log, in case it was purged while a player was offline and missed a lot of data, so that he cannot get only a partial update.
Example: PlayerA had entries 1-16 and was offline for a month, then comes back online. OfficerA created a lot of entries and purged some; now OfficerA has entries 62-100. PlayerA cannot get a partial sync from that info, since entries 17-61 are missing => a full sync is required.

For performance reasons, I suggest that an authorized player broadcasts his meta when he logs in, rather than every player requesting a meta update.

Further communication is done via the whisper channel. Only new DKP changes are broadcast.

In the proposed algorithm you do not broadcast MonDKP_DKPTable during a partial sync, only the part of MonDKP_DKPHistory/MonDKP_Loot with the corresponding indexes. Received entries are applied locally to MonDKP_DKPTable.

I can write down the whole sync sequence with an example for all possible cases to get on the same page quickly.

commented

I had a slightly separate plan for the DKPTable. That would include a "lastupdated" field for each entry. When an officer logs in, they broadcast a name/timestamp pair for each entry. Receiving players compare timestamps and return a request for whichever entries are out of date to be sent back. Unless you have a method that draws a lesser load? An example would be great.

commented

What happens with the lastupdated field in this case:
Let's assume PlayerA has 10 DKP. OfficerA changes PlayerA's DKP (from 10 to 20) while no other officer is online, then goes offline. Next, OfficerB changes PlayerA's DKP (from 10 to 5), and then OfficerA comes back online.

My algorithm will sync them in 2 private passes, which results in PlayerA having 15 DKP and 2 new history entries (1 per officer).

Writing down an example right away.

commented

@custodian The way I'm currently seeing it in my head is that an officer starts at a current index of 0. When he creates a new entry, it tags that entry with OfficerA-(current_index+1) (so the first entry would be OfficerA-1), then increments the meta table for that officer to an index of 1. And so on. From there it would work like:

  1. PlayerA logs in and requests meta tables.
  2. OfficerA sends meta tables indicating they are up to index 25, which PlayerA then compares.
  3. PlayerA determines his meta tables have OfficerA at an index of 15 and responds with that information.
  4. OfficerA broadcasts all entries that were missing (OfficerA-16 through OfficerA-25) and any entries that may have been missing in 1-15 as well.

Is this relatively accurate? What is the "lower" value denoting?
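If I'm reading the scheme right, the index generation could look roughly like this (a sketch only, with hypothetical names):

-- Hypothetical sketch: generate the next "OfficerA-N" style index for my own entries
-- and bump my own meta counter at the same time.
local function NextLocalIndex(meta, myName)
	meta[myName] = meta[myName] or { lower = 1, current = 0 }
	meta[myName].current = meta[myName].current + 1
	return myName .. "-" .. meta[myName].current
end

-- The first call returns "OfficerA-1" and leaves meta["OfficerA"] = { lower = 1, current = 1 }.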

commented

Let's call authorized players officers.

Here is the overall algorithm itself. It demonstrates the DKP history sync, but the same can be done with the loot history.

When an officer logs in, he broadcasts his meta table ( MetaData Message ).
When an officer adjusts DKP, the created entry is broadcast ( DataEntry Message ).
This step is optional: when a player logs in, he randomly selects an officer and requests his meta ( MetaRequest Message ).
An officer can broadcast his meta at any time (via a button) to force-launch the sync process for all players, if needed. Zero cost if all data is up to date.

  • On MetaData Message received (see the sketch after this list):
  Validate the sender.
  Compare the local meta with the received meta, per officer entry, to detect diffs:
  If local currentindex > remote lowerindex and local currentindex < remote currentindex then
    we are missing some history; stash the officer and the range [local currentindex + 1, remote currentindex]
  else if local currentindex < remote lowerindex then
    we are missing a lot of history and cannot partially sync; stash the officer and a full-sync flag
  else if local currentindex > remote currentindex then
    we have newer info; stash a newer-info flag
  else
    we are equal (local currentindex = remote currentindex); do nothing with this entry.
  end

  If we have any partial sync queued, request the data (it can cover multiple officers' indexes in a single request) from the sender ( SyncRequest Message ) and exit.
  If we have any newer info, send the local meta to the sender and exit.

  If we have no newer info and the full-sync flag is set, request a full sync ( FullSyncRequest Message ) from the sender and exit.
  • On DataEntry Message received:
  Validate the sender.
  Compare the received index with the local meta. It must be sequential.
  If the index is not sequential, drop the message, request the meta from the sender ( MetaRequest Message ) and exit with failure.
  If the index is sequential, apply the data to the local history and adjust the local DKPTable.
  Update the local meta table with the new index.
  • On MetaRequest Message received:
  Send the local meta table to the sender ( MetaData Message ).
  • On SyncRequest Message received:
  Grab the required entries from the history table based on the requested indexes and send the data back to the sender ( SyncResponse Message ).
  • On SyncResponse Message received:
  Validate the sender.
  Iterate through the items: call OnDataEntryReceived for every received entry; exit on failure.
  Since we synced (and the local meta was updated), send the local meta back to the sender ( MetaData Message ).
  • On FullSyncRequest Message received:
  Send the whole local Meta, DKPTable and DKPHistory tables to the sender ( FullSyncResponse Message ).
  • On FullSyncResponse Message received:
  Validate the sender.
  Replace the local tables with the received tables.
  Since we synced (and the local meta was updated), send the local meta back to the sender ( MetaData Message ).
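A minimal Lua sketch of that MetaData diff step, assuming meta entries of the form meta[officer] = { lower = ..., current = ... } (RequestSync, SendMeta and RequestFullSync are hypothetical helpers):

-- Sketch of the per-officer comparison described in "On MetaData Message received".
local function OnMetaDataReceived(localMeta, remoteMeta, sender)
	local partialRequests, needFullSync, haveNewer = {}, false, false
	for officer, remote in pairs(remoteMeta) do
		local localCurrent = localMeta[officer] and localMeta[officer].current or 0
		if localCurrent < remote.current then
			if localCurrent >= remote.lower then
				-- only the latest entries are missing: queue a partial sync range
				partialRequests[officer] = { from = localCurrent + 1, to = remote.current }
			else
				-- the entries we need were purged on the remote side
				needFullSync = true
			end
		elseif localCurrent > remote.current then
			haveNewer = true
		end
	end
	if next(partialRequests) then
		RequestSync(sender, partialRequests)	-- SyncRequest Message
	elseif haveNewer then
		SendMeta(sender)			-- MetaData Message back to the sender
	elseif needFullSync then
		RequestFullSync(sender)			-- FullSyncRequest Message
	end
end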

Manual actions that could be done:

  • AdjustDKP:
  Get the next local index for myself.
  Create a DKP entry with the required info and the generated index.
  Call OnDataEntryReceived with this entry.
  Broadcast the entry to the guild ( DataEntry Message ).
  • PurgeDKPHistory:
  Iterate through the history backwards (from oldest to newest): get the entry and its index.
  If index == lowest index, skip the entry.
  If index ~= lowest index, remove the entry and increment the lowest index.
  • DeleteHistoryEntry:
  Get the next local index for myself.
  Get the history entry and invert its DKP value.
  Rename the existing entry's index field to "deleted-index" (or whatever, to use it later during history browsing to hide the deleted entry).
  Set the entry's index to the generated index.
  Broadcast the entry to the guild ( DataEntry Message ).

When showing the history, such entries could be hidden by matching:

	{
		["players"] = "PlayerA,PlayerB,",
		["dkp"] = -5,
		["date"] = 1569869365,
		["reason"] = "",
		["index"] = { "OfficerA-2" }
		["deleted-index"] = { "OfficerA-1" }
	}

and

	{
		["players"] = "PlayerA,PlayerB,",
		["dkp"] = 5,
		["date"] = 1569869365,
		["reason"] = "",
		["index"] = { "OfficerA-1" }
	}
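A small sketch of that matching step when rendering the history (BuildVisibleHistory is a hypothetical helper; it assumes tombstone entries carry a ["deleted-index"] field as shown above):

-- Hide both the deleted entry and its tombstone when showing the history.
local function BuildVisibleHistory(history)
	local deleted = {}
	for _, entry in ipairs(history) do
		if entry["deleted-index"] then
			deleted[entry["deleted-index"][1]] = true	-- remember which original index was deleted
		end
	end
	local visible = {}
	for _, entry in ipairs(history) do
		local isTombstone = entry["deleted-index"] ~= nil
		if not isTombstone and not deleted[entry.index[1]] then
			table.insert(visible, entry)
		end
	end
	return visible
end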

Example:

Let's start with an empty DKP table.
OfficerA, OfficerB and PlayerA are online.
State of the empty tables for all 3 players:

Meta = {}
DKPTable = {}
DKPHistory = {}

OfficerA adds 5 DKP to PlayerA.
The local current index is missing, thus it counts as OfficerA-0; the next index is OfficerA-1.
Create the DKP entry:

	{
		["players"] = "PlayerA",
		["dkp"] = 5,
		["date"] = 1569869365,
		["reason"] = "",
		["index"] = { "OfficerA-1" }
	}

Apply it locally.
OfficerA tables:

Meta = {
	["OfficerA"] = {
		["lower"] = 1
		["current"] = 1
	}
}
DKPTable = {
	{
		["previous_dkp"] = 0,
		["player"] = "PlayerA",
		["dkp"] = 5,
		["class"] = "MAGE",
		["lifetime_gained"] = 5,
		["lifetime_spent"] = 0,
	}
}
MonDKP_DKPHistory = {
	{
		["players"] = "PlayerA",
		["dkp"] = 5,
		["date"] = 1569869365,
		["reason"] = "",
		["index"] = { "OfficerA-1" }
	}
}

After the action is done, this DKP entry is broadcast.
PlayerA and OfficerB receive the update:

	{
		["players"] = "PlayerA",
		["dkp"] = 5,
		["date"] = 1569869365,
		["reason"] = "",
		["index"] = { "OfficerA-1" }
	}

Incoming messages are processed the same way by every player.
All incoming updates are verified for authenticity (i.e. coming from officers or a whitelist).
The received message has index OfficerA-1.

PlayerA looks up the local meta table.
There is no entry for OfficerA, which means the previous index is 0.
The received index is 1. It's sequential, so everything is good.
Apply the received message to the local tables.
PlayerA and OfficerB:

Meta = {
	["OfficerA"] = {
		["lower"] = 1
		["current"] = 1
	}
}
DKPTable = {
	{
		["previous_dkp"] = 0,
		["player"] = "PlayerA",
		["dkp"] = 5,
		["class"] = "MAGE",
		["lifetime_gained"] = 5,
		["lifetime_spent"] = 0,
	}
}
MonDKP_DKPHistory = {
	{
		["players"] = "PlayerA",
		["dkp"] = 5,
		["date"] = 1569869365,
		["reason"] = "",
		["index"] = { "OfficerA-1" }
	}
}

OfficerA goes offline, PlayerA goes offline.

OfficerB adjusts DKP for PlayerA:

	{
		["players"] = "PlayerA",
		["dkp"] = -5,
		["date"] = 1569869365,
		["reason"] = "",
		["index"] = { "OfficerB-1" }
	}

OfficerB tables:

Meta = {
	["OfficerA"] = {
		["lower"] = 1
		["current"] = 1
	},
	["OfficerB"] = {
		["lower"] = 1
		["current"] = 1
	}
	
}
DKPTable = {
	{
		["previous_dkp"] = 0,
		["player"] = "PlayerA",
		["dkp"] = 0,
		["class"] = "MAGE",
		["lifetime_gained"] = 5,
		["lifetime_spent"] = 0,
	}
}
MonDKP_DKPHistory = {
	{
		["players"] = "PlayerA",
		["dkp"] = -5,
		["date"] = 1569869366,
		["reason"] = "",
		["index"] = { "OfficerB-1" }
	},
	{
		["players"] = "PlayerA",
		["dkp"] = 5,
		["date"] = 1569869365,
		["reason"] = "",
		["index"] = { "OfficerA-1" }
	}
}

OfficerB goes offline.
OfficerA goes online and adjusts PlayerA's DKP:

	{
		["players"] = "PlayerA",
		["dkp"] = 20,
		["date"] = 1569869367,
		["reason"] = "",
		["index"] = { "OfficerA-2" }
	}

OfficerA tables:

Meta = {
	["OfficerA"] = {
		["lower"] = 1
		["current"] = 2
	}
}
DKPTable = {
	{
		["previous_dkp"] = 0,
		["player"] = "PlayerA",
		["dkp"] = 25,
		["class"] = "MAGE",
		["lifetime_gained"] = 5,
		["lifetime_spent"] = 0,
	}
}
MonDKP_DKPHistory = {
	{
		["players"] = "PlayerA",
		["dkp"] = 20,
		["date"] = 1569869367,
		["reason"] = "",
		["index"] = { "OfficerA-2" }
	},
	{
		["players"] = "PlayerA",
		["dkp"] = 5,
		["date"] = 1569869365,
		["reason"] = "",
		["index"] = { "OfficerA-1" }
	}
}

OfficerB goes online, broadcasts his own meta and the sync starts.
OfficerA receives the meta data from OfficerB and compares
local data:

Meta = {
	["OfficerA"] = {
		["lower"] = 1
		["current"] = 2
	}
}

versus received data:

Meta = {
	["OfficerA"] = {
		["lower"] = 1
		["current"] = 1
	},
	["OfficerB"] = {
		["lower"] = 1
		["current"] = 1
	}
	
}

Which results in (from OfficerA's perspective):
OfficerB has newer data:

{
	["OfficerB"] = {
		["from"] = 1,
		["to"] = 1
	}
}

I have newer data:

{
	["OfficerA"] = {
		["from"] = 2,
		["to"] = 2,
	}
}

OfficerA requests an update from OfficerB:

{
	["OfficerB"] = {
		["from"] = 1,
		["to"] = 1
	}
}

OfficerB, in OnSyncRequestReceived, finds the required data and sends it back:

	{
		["players"] = "PlayerA",
		["dkp"] = -5,
		["date"] = 1569869366,
		["reason"] = "",
		["index"] = { "OfficerB-1" }
	}

OfficerA, in OnSyncResponseReceived, validates and applies the data.
OfficerA tables:

Meta = {
	["OfficerA"] = {
		["lower"] = 1
		["current"] = 2
	},
	["OfficerB"] = {
		["lower"] = 1
		["current"] = 1
	}
}
DKPTable = {
	{
		["previous_dkp"] = 0,
		["player"] = "PlayerA",
		["dkp"] = 20,
		["class"] = "MAGE",
		["lifetime_gained"] = 5,
		["lifetime_spent"] = 0,
	}
}
MonDKP_DKPHistory = {
	{
		["players"] = "PlayerA",
		["dkp"] = 20,
		["date"] = 1569869367,
		["reason"] = "",
		["index"] = { "OfficerA-2" }
	},
	{
		["players"] = "PlayerA",
		["dkp"] = -5,
		["date"] = 1569869366,
		["reason"] = "",
		["index"] = { "OfficerB-1" }
	},
	{
		["players"] = "PlayerA",
		["dkp"] = 5,
		["date"] = 1569869365,
		["reason"] = "",
		["index"] = { "OfficerA-1" }
	}
}

Since the meta changed, the meta is sent back to the sender (OfficerB).

OfficerB, in OnMetaDataReceived, compares and finds that OfficerA has newer data.
OfficerB requests an update from OfficerA:

{
	["OfficerA"] = {
		["from"] = 2,
		["to"] = 2,
	}
}

OfficerA, in OnSyncRequestReceived, finds the history entries:

{
	["players"] = "PlayerA",
	["dkp"] = 20,
	["date"] = 1569869367,
	["reason"] = "",
	["index"] = { "OfficerA-2" }
}

OfficerB, in OnSyncResponseReceived, validates and applies the data.
OfficerB's and OfficerA's tables are now equal, with all logs and up-to-date data.

PlayerA goes online and requests data automatically (via the optional step), or waits for a meta broadcast / entry data.

commented

How do you anticipate syncing the DKPTable? You could technically apply DKPHistory entries to the table and remain correct until data is purged, at which point it would be incorrect. You would also use a separate meta table for each of DKPHistory and LootHistory, correct? I.e.:

MonDKP_DKPHistory = {
	{
		["players"] = "PlayerA",
		["dkp"] = 20,
		["date"] = 1569869367,
		["reason"] = "",
		["index"] = { "OfficerA-2" }
	},
	{
		["players"] = "PlayerA",
		["dkp"] = -5,
		["date"] = 1569869366,
		["reason"] = "",
		["index"] = { "OfficerB-1" }
	},
        ["meta"] = {
	        ["OfficerA"] = {
		        ["lower"] = 1,
		        ["current"] = 1,
	        },
	        ["OfficerB"] = {
	        	["lower"] = 1,
	        	["current"] = 1,
        	}
        }
}
commented

Second question: the meta table on login would be broadcast to the entire guild. I assume the response with the required data is done via whisper. If this occurs for 40+ people at the same time, is there a possibility this could crash the sending officer? (In the event 20 entries are sent to 40 different people at the exact same time, plus the possibility of a full sync being required for some.) And if that's the case, would it be more efficient to simply broadcast every entry the officer has and only apply the ones whose index is missing, on a per-user basis?

commented

I'd prefer to have a separate MonDKP_Meta table for all the required info instead of squashing the actual data and the meta into a single table, but that's not a big deal.

MonDKP_Meta = {
       ["loot"] = {
	        ["OfficerA"] = { ... },
	        ["OfficerB"] = { ... },
       },
       ["history"] = {
	        ["OfficerA"] = { ... },
	        ["OfficerB"] = { ... },
        }
}

On the second question:
Yes, all further communication is done via the whisper channel. 40+ people will require a sync at the same time only if they were all offline when the previous changes were made to the DKP table. The officer would not crash, since AceComm uses ChatThrottleLib by default, which can delay delivery/processing. Overall, the amount of data processed is so small that there will be no noticeable impact.
The optional user sync step (when a logged-in user privately communicates with a randomly selected online officer for the meta) will also reduce a possible spike load.
Broadcasting every entry may not be a good idea, since (in this particular algorithm) it may cause a full sync if non-sequential data is received by a user.
Incoming sync requests from non-officers can be queued internally by the officer's addon and processed one by one on a timer (100-500 ms), so there would be no spike load on the officer's addon.
There are plenty of possible optimizations here and there.
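For instance, a minimal sketch of such an officer-side queue using WoW's C_Timer.NewTicker (the queue and AnswerSyncRequest are hypothetical):

-- Incoming sync requests are queued and answered one at a time on a ticker,
-- instead of all at once, to avoid a spike load on the officer's addon.
local syncQueue = {}

local function EnqueueSyncRequest(sender, request)
	table.insert(syncQueue, { sender = sender, request = request })
end

-- Answer one queued request every 0.2 s (within the 100-500 ms suggested above).
C_Timer.NewTicker(0.2, function()
	local item = table.remove(syncQueue, 1)
	if item then
		-- AnswerSyncRequest is hypothetical; it would whisper the requested
		-- history entries back to item.sender.
		AnswerSyncRequest(item.sender, item.request)
	end
end)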

You can also speed up your communication by dropping the double compression/decompression check in favour of a better algorithm, and by using the solid LibDeflate library for compression.
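As a hedged example, sending a payload through AceSerializer, LibDeflate and AceComm could look roughly like this (the "MonolithDKP" prefix is just a placeholder):

-- Serialize -> compress -> encode -> send, assuming the libraries are available via LibStub.
local AceSerializer = LibStub("AceSerializer-3.0")
local AceComm = LibStub("AceComm-3.0")
local LibDeflate = LibStub("LibDeflate")

local function SendTable(prefix, tbl, distribution, target)
	local serialized = AceSerializer:Serialize(tbl)
	local compressed = LibDeflate:CompressDeflate(serialized)
	local encoded = LibDeflate:EncodeForWoWAddonChannel(compressed)
	AceComm:SendCommMessage(prefix, encoded, distribution, target)
end

-- Example: SendTable("MonolithDKP", MonDKP_DKPHistory[1], "WHISPER", "OfficerA")
-- The receiving side reverses the steps: DecodeForWoWAddonChannel -> DecompressDeflate -> Deserialize.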

I use such an algorithm with one of my addons in an 800-player guild (average online during a sync is 270-300), with no pitfalls so far. I do use the optional player step with one of the officers.

Another thing is the conversion process from the old format to the new one.
This should be done only once, by one person with a button (and some focus).
Iterate through the old data, create a DataEntry for each record and apply it to the newly created table.
Then compare the resulting dkp/lifetime_gained/lifetime_spent with the old data, and generate an extra DataEntry to fix a possible gap (if there was any history purge before).
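One possible shape for that one-time conversion, strictly as a sketch (table names other than MonDKP_DKPHistory, plus ApplyDataEntry, are assumptions):

-- Replay the old history, oldest first, as indexed DataEntries owned by the converter.
local function ConvertOldHistory(oldHistory, converterName)
	local newHistory = {}
	local meta = { [converterName] = { lower = 1, current = 0 } }
	for i = #oldHistory, 1, -1 do			-- old history is assumed newest-first
		local old = oldHistory[i]
		meta[converterName].current = meta[converterName].current + 1
		local entry = {
			players = old.players,
			dkp = old.dkp,
			date = old.date,
			reason = old.reason,
			index = { converterName .. "-" .. meta[converterName].current },
		}
		table.insert(newHistory, 1, entry)	-- keep the newest-first ordering
		-- ApplyDataEntry (hypothetical) would also adjust the rebuilt DKPTable here.
	end
	return newHistory, meta
end

-- Afterwards, compare the rebuilt dkp/lifetime values with the old DKPTable and
-- generate one extra correcting DataEntry per player if a purge left a gap.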

commented

How do you anticipate syncing the DKPtable? You could technically apply DKPHistory entries to the table and remain correct, until data is purged.
That's why you need the lower index. If your local current index is within the remote lower and current indexes (or higher), then you have already applied all the necessary DataEntries, and a purge of the history will not affect your DKPTable.
You are the source only for your own DKP changes (the ones made by you). If you lost track of remote changes (a lot has been done and purged), then you send your few local changes to the remote side first (where they are applied), and then you full-sync from the remote side.
The only issue which must be solved manually is a huge (and by huge I mean really huge) desync: OfficerA made 2500+ new entries and purged some history while every other officer was offline, and OfficerB did the same. The next morning both come online.

Although you could add a special case for that and force-sync them via a button, I'd prefer a manual resolution for such a problem.

commented

Thanks a ton for the write-up. I think I've got the basic gist of it. Wrapping my head around writing it without accidentally causing an infinite loop should be fun.

commented

@Roeshambo any thoughts/progress on sync?
I can implement it for current version and make a pull request for tests. Should be done by the weekend.

commented

@custodian I've been incredibly busy the past few days with work and such. I began the planning stage last night and got the basic steps written out for each phase of the hand-off, but haven't gotten around to actually writing the code. If you already know exactly how you see it working, the assist would be hugely appreciated.

commented

@Roeshambo okay, I'm on it then, based on "Purge DKP List Button" commit ( 1e6194c )

commented

Just got done writing up this system. Hoping to have bugs weeded out and have it out by the end of the weekend.