
[1.16.5] Network set in TransmitterNetworkRegistry constantly growing

KR33PYK1NG opened this issue · 9 comments

commented

Issue description:

Basically, the title.
private final Set<DynamicNetwork<?, ?, ?>> networks = new ObjectOpenHashSet<>();
On my public server this network collection grows larger over time, peaking at 50-100k objects.
This eventually leads to TransmitterNetworkRegistry's onTick method taking really long to complete.
While investigating, I found that a network is not always removed from this set even after all of its transmitters have been unloaded, so I believe this is some sort of leak.
Is this behaviour intended, or is there some sort of cleanup over time?
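
For context, here is a minimal sketch (with hypothetical names, not Mekanism's actual code) of why an ever-growing set makes that per-tick pass expensive: onTick walks the entire set every server tick, so its cost scales linearly with every leaked entry.

```java
// Hypothetical, heavily simplified sketch of the pattern described above;
// the real TransmitterNetworkRegistry is far more involved.
import it.unimi.dsi.fastutil.objects.ObjectOpenHashSet;
import java.util.Set;

final class RegistrySketch {
    interface Network { void tick(); }

    // Mirrors the field quoted above: every live network is tracked here.
    private final Set<Network> networks = new ObjectOpenHashSet<>();

    void register(Network network)   { networks.add(network); }
    void unregister(Network network) { networks.remove(network); } // must run on unload

    // Runs once per server tick: with 50-100k leaked entries, this loop
    // alone makes the tick take a long time, as described above.
    void onTick() {
        for (Network network : networks) {
            network.tick();
        }
    }
}
```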

commented

Odds are this is fixed by 10.3.2, due to our extra unload checks for when a chunk becomes inaccessible but isn't actually unloaded yet.

commented

What Mekanism version?
Are you using Performant or similar mods?

commented

I'm using the latest version of Mekanism (10.0.21.448)
These mods are installed alongside Mekanism: itemfilters, jei, metalbarrels, mekanism, silents_mechanisms, bettercaves, cookingforblockheads, placebo, yungsapi, ftbguilibrary, engineerstools, pneumaticcraft, randompatches, storagetech, mekanismgenerators, mininggadgets, immersivepetroleum, refinedstorage, zerocore, industrialforegoing, titanium, ftbquests, immersiveengineering, silentlib, mekanismadditions, chickenchunks, gaiadimension, jaopca, pamhc2foodcore, camera, fastbench, performant, lostcities, elevatorid, worldedit, fastfurnace, mekanismtools, cfm, aiimprovements, engineersdecor, bigreactors, trashcans, byg, codechickenlib, bettermineshafts, openloader

commented

Today I also observed a behaviour I haven't seen before: the network count continues to increase even without any players online.
This leads me to think there is some sort of autonomous duplication bug (different object instances describing the same network are created as time goes on).
I will look into this further.

commented

Performant causes many issues unless you disable a bunch of its options.

Also, how exactly are you measuring this number?

commented

> Performant causes many issues unless you disable a bunch of its options.

I suspect you're referring to Performant's load balancing (tick skipping) - it is disabled.

> Also, how exactly are you measuring this number?

Trivially: by calling the size method of the network set.
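
For anyone wanting to reproduce that measurement, here is a hedged, diagnostic-only sketch of one way to read the private field reflectively, assuming the field is still named networks as in the snippet quoted in the report; the registry instance and its package path vary between Mekanism versions.

```java
// Diagnostic-only sketch: reflectively read the private "networks" field and
// report its size. Class/field names follow the snippet quoted in the issue;
// they are assumptions, not a stable API.
import java.lang.reflect.Field;
import java.util.Set;

final class NetworkCountProbe {
    static int countNetworks(Object registryInstance) throws ReflectiveOperationException {
        Field field = registryInstance.getClass().getDeclaredField("networks");
        field.setAccessible(true); // the field is private, so bypass access checks
        return ((Set<?>) field.get(registryInstance)).size();
    }
}
```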

commented

So, I was able to reproduce this issue locally and get to the bottom of it.
Mekanism relies entirely on the onChunkUnloaded callback to clean up its networks.
If the callback isn't executed in time, the affected network hangs around in memory until the server stops.
Performant, on the other hand, does a deliciously cruel thing: it delays onChunkUnloaded execution whenever the unload is considered laggy from Performant's point of view.
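
To illustrate the failure mode, here is a minimal sketch with illustrative names (not Mekanism's or Performant's actual code): when removal happens only inside the unload callback, deferring that callback means nothing else ever takes the network out of the set.

```java
import java.util.HashSet;
import java.util.Set;

final class UnloadLeakSketch {
    private final Set<String> networks = new HashSet<>();

    void addNetwork(String network) {
        networks.add(network);
    }

    // The ONLY removal path. If a wrapper defers this call past server
    // shutdown (or drops it entirely), the entry stays in 'networks' forever.
    void onChunkUnloaded(String network) {
        networks.remove(network);
    }

    int size() {
        return networks.size();
    }
}
```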

commented

Hmm, good to know about, and thanks for looking into this and debugging it. I am not sure how much we will be able to do about them delaying onChunkUnloaded and thereby keeping us from removing the network, but given how large a memory leak this sounds like it causes, I will certainly see at some point if there is some way to mitigate the issue from our end.
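
One conceivable mitigation of that kind (purely speculative, all names hypothetical, not something Mekanism is confirmed to have shipped) would be a periodic sweep that re-validates tracked networks instead of trusting the unload callback alone:

```java
import java.util.Iterator;
import java.util.Set;

final class RegistrySweepSketch {
    interface Network {
        boolean hasLoadedTransmitters(); // false once every backing chunk is gone
    }

    private final Set<Network> networks;
    private int tickCounter;

    RegistrySweepSketch(Set<Network> networks) {
        this.networks = networks;
    }

    // Call once per server tick; every 600 ticks (~30 seconds) prune networks
    // whose transmitters are all unloaded, even if onChunkUnloaded never fired.
    void onTick() {
        if (++tickCounter % 600 != 0) {
            return;
        }
        for (Iterator<Network> it = networks.iterator(); it.hasNext(); ) {
            if (!it.next().hasLoadedTransmitters()) {
                it.remove();
            }
        }
    }
}
```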

commented

As mentioned in my issue above, load balancing of unload handling is always disabled by default, and the config tells users not to enable it. So unless the OP enabled it and managed to get to low TPS, Performant is probably not the source of the issue.