
Memory leak with WelcomeTask
IllusionTheDev opened this issue · 4 comments
Observed Behavior
WelcomeTask holds a hard reference to the Bukkit Player. If the configured delay happens to be high and the player leaves, we still hold that reference.
Checking whether the player is still online inside the task doesn't help: the reference is still held until the task actually runs.
Fix suggestion: hold the player's UUID, call Bukkit.getPlayer(playerId), and check the result for null.
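A minimal sketch of that suggestion (illustrative names and message, not GriefPrevention's actual code), assuming the task is scheduled as a plain Runnable:

```java
import java.util.UUID;

import org.bukkit.Bukkit;
import org.bukkit.entity.Player;

// Sketch only: store the UUID at construction time so the task never
// holds a hard reference to the Player instance itself.
public class WelcomeTask implements Runnable {
    private final UUID playerId;

    public WelcomeTask(Player player) {
        this.playerId = player.getUniqueId();
    }

    @Override
    public void run() {
        // Resolve the current online instance; null if they logged off.
        Player player = Bukkit.getPlayer(playerId);
        if (player == null) {
            return;
        }
        player.sendMessage("Welcome!"); // placeholder for the real welcome logic
    }
}
```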
Expected Behavior
It won't leak
Reproduction steps
- Set the delay to a really high value
- Join and leave the server a dozen times
- Create a heapdump
- Notice that there are a dozen hard references to the CraftPlayer despite no players being online.
Stack trace or error log
N/A
Server version
N/A
GriefPrevention version
N/A
Configuration
N/A
Plugin list
N/A
Running without GriefPrevention
- I attempted running the server without GriefPrevention installed.
- The problem does not occur when GriefPrevention is removed from the server.
Running with only GriefPrevention
- I attempted running only GriefPrevention on the server.
- The issue still occurs when GriefPrevention is the only plugin running.
Running on a fresh, clean server installation
- I attempted testing for the issue on a new server.
- The issue still occurs on a new server.
Using unmodified client
- I attempted testing for the issue with the vanilla client.
- The issue still occurs when using the vanilla client.
We appreciate you taking the time to fill out a bug report!
- I searched for similar issues before submitting this bug report.
Hmm, I guess this could be an issue for very high volume servers that use a very large delay. The other alternative is to hold a map of the tasks and cancel them on quit.
Curious about design patterns though, which would be better design/simplicity-wise, as it makes sense to pass a Player object to that task.
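A hedged sketch of that map-based alternative (names are illustrative, and the listener would still need to be registered with the plugin manager):

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

import org.bukkit.entity.Player;
import org.bukkit.event.EventHandler;
import org.bukkit.event.Listener;
import org.bukkit.event.player.PlayerQuitEvent;
import org.bukkit.plugin.Plugin;
import org.bukkit.scheduler.BukkitTask;

// Sketch only: track each player's pending welcome task and cancel it
// (dropping its Player reference) when the player quits.
public class WelcomeTaskTracker implements Listener {
    private final Map<UUID, BukkitTask> pending = new ConcurrentHashMap<>();
    private final Plugin plugin;

    public WelcomeTaskTracker(Plugin plugin) {
        this.plugin = plugin;
    }

    public void schedule(Player player, Runnable task, long delayTicks) {
        UUID id = player.getUniqueId();
        pending.put(id, plugin.getServer().getScheduler().runTaskLater(plugin, () -> {
            pending.remove(id); // task is running; no longer pending
            task.run();
        }, delayTicks));
    }

    @EventHandler
    public void onQuit(PlayerQuitEvent event) {
        BukkitTask task = pending.remove(event.getPlayer().getUniqueId());
        if (task != null) {
            task.cancel();
        }
    }
}
```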
> Hmm, I guess this could be an issue for very high volume servers that use a very large delay. The other alternative is to hold a map of the tasks and cancel them on quit.
Some servers don't like this feature and deliberately set a high delay. One server I'm aware of sets it to ~43 years to avoid any problems.
> Curious about design patterns though, which would be better design/simplicity-wise, as it makes sense to pass a Player object to that task.
The "easiest" approach would be to hold a weak reference to the player. The more standard approach is just to hold a UUID and call getPlayer, you could also make a proxid Player wrapper with that approach which is a bit safer but might result in a lot of repeating code.
Not saying it shouldn't be addressed, but the described scenario requires a different account per join on a normal server, which is significantly more involved than what's presented for reproduction steps. It should only happen on the first join unless you're running some particular setup where you don't keep player data and every join is a "first" join. GP isn't really aimed at hub servers, which is the only use case I can think of for a setup like that.
I suppose for offline-mode servers it's a lot more of a threat.
> Curious about design patterns though, which would be better design/simplicity-wise, as it makes sense to pass a Player object to that task.
From a correctness perspective, we should be re-obtaining the player instance anyway to ensure we have the actual online copy if they log out and back in. Currently we rely on the Bukkit implementation correcting the internal Player accessed, which may or may not happen (usually not, if memory serves). Using a UUID is effectively identical to re-obtaining the current instance via Player#getPlayer, and would leak a lot less memory per task.
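To illustrate the point (staleReference is a hypothetical field holding the originally passed Player), re-obtaining looks like this, since Player inherits getPlayer() from OfflinePlayer:

```java
// Fragment only: resolve the live instance instead of trusting the old one.
Player current = staleReference.getPlayer(); // null if no longer online
if (current == null) {
    return;
}
// use `current` from here on, never staleReference
```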
We should also offer a way to actually disable the task. I'm assuming -1 doesn't work? I haven't looked at this area of the code since I made the draft splitting it out into a module. I know I added an explicit enable/disable setting to that.
> we should be re-obtaining the player instance anyway to ensure we have the actual online copy if they log out and back in.
Ah, true
> We should also offer a way to actually disable the task. I'm assuming -1 doesn't work?
Yeah, looks like I had a draft noting that this should be disableable. I don't see why a negative number wouldn't work; if the code didn't handle it, it'd error every time it attempted to schedule the task, so I'd assume it's handled.
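For illustration only (the actual config plumbing may differ), a guard like the following would make a negative delay act as a disable switch; delaySeconds and plugin are hypothetical names:

```java
// Hypothetical guard: treat a negative configured delay as "disabled".
if (delaySeconds < 0) {
    return; // never schedule the welcome task
}
// Bukkit schedules in ticks, 20 per second.
plugin.getServer().getScheduler().runTaskLater(plugin, task, delaySeconds * 20L);
```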