ProtocolLib


SynchronizedCollection causes heavy TPS issues

Yive opened this issue · 5 comments

commented
  • This issue is not solved in a development build

Describe the bug
The server will suffer heavy TPS loss if there is enough pressure on the processed/skipped packets collections in the Netty injector.

To Reproduce
Steps to reproduce the behavior:

  1. Set up an empty Paper server
  2. Install ProtocolLib (b600), Spark and Stress (from pop4959)
  3. Load 30-50 accounts onto the server
  4. Start a 30-second spark profile
  5. Run a 30-second entity test via the stress command; I used 500 pigs with a range of 10
  6. View spark report

Spark Report
https://spark.lucko.me/oeCBaJrwBU

Version Info
https://pastebin.com/yFdYFcGK

Additional context
This can be reproduced without a proxy; I only used one to offload ViaVersion and ViaBackwards, because the player stress tester I used only goes up to 1.15.2. This report was generated while reproducing the issue. On production instances, TPS can drop as low as 0.8 with MSPT over 5000ms.

commented

That's not the synchronized collection, but rather the performance of the underlying HashSet. There is currently nothing we can do to improve this; we're just limited by Java here. What I can do is look into removing the collection (although there has to be a reason for it to exist) and/or use an external library that provides faster & better collection implementations (e.g. FastUtil).
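
For illustration, a minimal sketch of the kind of change discussed here (not ProtocolLib's actual injector code; the class and field names are made up): keep the Collections.synchronizedSet wrapper, but swap the JDK HashSet backing it for FastUtil's ObjectOpenHashSet.

```java
import it.unimi.dsi.fastutil.objects.ObjectOpenHashSet;

import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

public class PacketBookkeeping {
    // Current approach: synchronized wrapper around a plain JDK HashSet.
    private final Set<Object> processedPackets =
            Collections.synchronizedSet(new HashSet<>());

    // Variant under discussion: same wrapper, FastUtil backing set.
    private final Set<Object> processedPacketsFast =
            Collections.synchronizedSet(new ObjectOpenHashSet<>());

    public void markProcessed(Object packet) {
        processedPacketsFast.add(packet);
    }

    public boolean consumeProcessed(Object packet) {
        // remove() still hashes the packet, regardless of the backing set.
        return processedPacketsFast.remove(packet);
    }
}
```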

commented

I ended up redoing the test twice; it turns out the issue comes from the processed packets ArrayList. Making use of FastUtil for the backing collections doesn't really seem to improve things, unfortunately.

First test with just the backing collections changed to an ObjectOpenHashSet and an ObjectArrayList: https://spark.lucko.me/3yfMmCJ1gc

Second test used the same backing collections, but I synchronized them with ObjectSets#synchronize and ObjectLists#synchronize from FastUtil instead, to see which collection was the bigger issue: https://spark.lucko.me/iuURht7FTl
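
For reference, the second test setup would look roughly like this; the field names are made up for the example and this is not the actual patch.

```java
import it.unimi.dsi.fastutil.objects.ObjectArrayList;
import it.unimi.dsi.fastutil.objects.ObjectList;
import it.unimi.dsi.fastutil.objects.ObjectLists;
import it.unimi.dsi.fastutil.objects.ObjectOpenHashSet;
import it.unimi.dsi.fastutil.objects.ObjectSet;
import it.unimi.dsi.fastutil.objects.ObjectSets;

public class FastUtilWrappers {
    // FastUtil's own synchronized wrappers instead of java.util.Collections.
    private final ObjectSet<Object> skippedPackets =
            ObjectSets.synchronize(new ObjectOpenHashSet<>());

    private final ObjectList<Object> processedPackets =
            ObjectLists.synchronize(new ObjectArrayList<>());
}
```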

commented

Yea, the issue should get fixed by removing the List, but that causes issues with outgoing packets, iirc. I need to take another look at that and see if we can remove the list in any way.

The issue is not fully gone, because the hashCode method (which HashSets rely on) is not overridden for most packets and falls back to the native System#identityHashCode method, which is slow.
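
A quick self-contained demo of that point: a packet class that doesn't override hashCode gets the default Object#hashCode, which is the same value System#identityHashCode returns, so every HashSet operation on such packets ends up in that native call. (DummyPacket is just a stand-in here, not a real packet class.)

```java
public class IdentityHashDemo {
    // No hashCode()/equals() override, like most NMS packet classes.
    static final class DummyPacket {
    }

    public static void main(String[] args) {
        DummyPacket packet = new DummyPacket();
        // Prints "true": the default hash is the identity hash,
        // so HashSet lookups on such objects always hit the native call.
        System.out.println(packet.hashCode() == System.identityHashCode(packet));
    }
}
```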

commented

So I removed the "processed packets" set and removed some calls to Set/Map#remove where they aren't needed, to reduce hash computation. Can you retry with https://github.com/derklaro/ProtocolLib/suites/8566640620/artifacts/383086738 (warning: this is a zip file; the plugin file is inside) and check whether everything still works & is faster?
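
As a rough illustration of the "skip unneeded remove calls" part (hypothetical names, not the actual ProtocolLib change): guard the remove with a cheap emptiness check so the packet is only hashed when the set can actually contain it.

```java
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

public class SkippedPacketTracker {
    private final Set<Object> skippedPackets =
            Collections.synchronizedSet(new HashSet<>());

    public void markSkipped(Object packet) {
        skippedPackets.add(packet);
    }

    public boolean consumeSkipped(Object packet) {
        // Only call remove() (which hashes the packet) when the set is
        // non-empty; in the common case where nothing was skipped this
        // avoids the hash lookup entirely.
        if (skippedPackets.isEmpty()) {
            return false;
        }
        return skippedPackets.remove(packet);
    }
}
```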

commented

Looks like the issue has been resolved with that build. I ran eight tests, and all of them completed without any TPS issues or any noticeable bugs with listeners.

Paper:

  1. 500 pigs, 50 players, no listeners:
    • TPS loss was comparable to ProtocolLib 4.x
  2. 2000 pigs, 50 players, no listeners:
    • same as above
  3. 500 pigs, 50 players, LibsDisguises installed:
    • same as above
  4. 2000 pigs, 50 players, LibsDisguises installed:
    • same as above

Pufferfish:

  1. 500 pigs, 50 players, no listeners:
    • TPS loss was comparable to ProtocolLib 4.x
  2. 2000 pigs, 50 players, no listeners:
    • same as above
  3. 500 pigs, 50 players, LibsDisguises installed:
    • same as above
  4. 2000 pigs, 50 players, LibsDisguises installed:
    • same as above

Note: The async entity tracker was enabled in Pufferfish, since I'm sure a decent number of servers out there use that setting when running Pufferfish or forks of it.