[1.16.5] Network set in TransmitterNetworkRegistry constantly growing
KR33PYK1NG opened this issue ยท 9 comments
Issue description:
Basically, the title.
private final Set<DynamicNetwork<?, ?, ?>> networks = new ObjectOpenHashSet<>();
On my public server this network collection grows larger over time, peaking at 50-100k objects.
This eventually leads to TransmitterNetworkRegistry's onTick method taking a really long time to complete.
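To illustrate why a growing set hurts here, the following is a minimal, hypothetical sketch of a registry whose per-tick cost scales with set size. The names (NetworkRegistrySketch, Network, onTick) mirror the discussion above but are not Mekanism's actual code.

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical, simplified stand-in for a transmitter network registry.
class NetworkRegistrySketch {
    static final class Network {
        boolean dirty;
        void update() { dirty = false; }
    }

    final Set<Network> networks = new HashSet<>();

    // Called once per server tick: cost is O(|networks|), so a set that
    // leaks up to 50-100k entries makes every tick proportionally slower.
    void onTick() {
        for (Network n : networks) {
            n.update();
        }
    }
}
```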
While investigating, I found that a network is not always removed from this set even after all of its transmitters were unloaded, so I believe this is some sort of leak.
Is this behaviour intended? Is there some sort of cleanup over time?
Odds are this is fixed by 10.3.2, due to our extra unload checks for when a chunk becomes inaccessible but isn't actually unloaded yet.
I'm using the latest version of Mekanism (10.0.21.448)
These mods are installed alongside Mekanism: itemfilters, jei, metalbarrels, mekanism, silents_mechanisms, bettercaves, cookingforblockheads, placebo, yungsapi, ftbguilibrary, engineerstools, pneumaticcraft, randompatches, storagetech, mekanismgenerators, mininggadgets, immersivepetroleum, refinedstorage, zerocore, industrialforegoing, titanium, ftbquests, immersiveengineering, silentlib, mekanismadditions, chickenchunks, gaiadimension, jaopca, pamhc2foodcore, camera, fastbench, performant, lostcities, elevatorid, worldedit, fastfurnace, mekanismtools, cfm, aiimprovements, engineersdecor, bigreactors, trashcans, byg, codechickenlib, bettermineshafts, openloader
Today I also observed behaviour I hadn't seen before: the network count continues to increase even with no players online.
This leads me to think there is some sort of autonomous duplication bug (different object instances describing the same network are created over time).
I will look into this further.
Performant causes many issues unless you disable a bunch of its options.
I suspect you're referring to Performant's load balancing (tick skipping) - it is disabled.
Also, how exactly are you measuring this number?
Trivially: by calling the size method of the network set.
So, I was able to reproduce this issue locally and get to the bottom of it.
Mekanism relies entirely on the onChunkUnloaded callback to clean up its networks.
If the callback isn't executed in time, the affected network hangs in memory until the server stops.
Performant, on the other hand, does a deliciously cruel thing: it delays onChunkUnloaded execution if the tick is considered laggy from Performant's POV.
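A hypothetical sketch of this failure mode, assuming cleanup happens only in the unload callback (the names addNetwork, onChunkUnloaded, and networksByChunk are illustrative, not Mekanism's API):

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Illustrative model: the global set shrinks ONLY via onChunkUnloaded.
class ChunkUnloadLeakSketch {
    final Set<String> networks = new HashSet<>();
    final Map<Long, String> networksByChunk = new HashMap<>();

    void addNetwork(long chunkPos, String network) {
        networks.add(network);
        networksByChunk.put(chunkPos, network);
    }

    // The only cleanup path. If another mod defers this callback and it
    // never runs (e.g. the chunk is reloaded first, or the server stops),
    // the stale entry stays in `networks` forever.
    void onChunkUnloaded(long chunkPos) {
        String network = networksByChunk.remove(chunkPos);
        if (network != null) {
            networks.remove(network);
        }
    }
}
```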
Hmm, good to know, and thanks for looking into this and debugging it. I am not sure how much we will be able to do about Performant delaying onChunkUnloaded and causing us not to remove the network, but given how large a memory leak this sounds like, I will certainly see at some point whether there is some way to mitigate the issue from our end.
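One conceivable mitigation, sketched purely as an assumption (this is not what Mekanism actually implemented): periodically sweep the set and drop networks whose chunks are no longer loaded, rather than trusting onChunkUnloaded alone. The chunk-loaded check is injected as a predicate here so the sketch is self-contained; in a mod it would be a world query.

```java
import java.util.HashSet;
import java.util.Set;
import java.util.function.LongPredicate;

// Hypothetical periodic-sweep fallback for delayed unload callbacks.
class NetworkSweepSketch {
    record Network(long chunkPos) {}

    final Set<Network> networks = new HashSet<>();

    // `isChunkLoaded` stands in for the actual world's loaded-chunk query.
    // Run occasionally (not every tick) to bound the cost of the scan.
    void sweep(LongPredicate isChunkLoaded) {
        networks.removeIf(n -> !isChunkLoaded.test(n.chunkPos()));
    }
}
```

The trade-off is paying an O(n) scan on some interval in exchange for bounding how long a stale network can survive a missed callback.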