Memory leak with TickingTask's cachedPlayersMap
AnttiMK opened this issue · 5 comments
Confirmation
- I have read the FAQ.
- I have tested the latest development build of Holographic Displays and the bug is still present.
- I have updated Spigot to the latest release for my particular Minecraft version.
- I made sure the bug hasn't already been reported on the issue tracker.
Description
The cachedPlayerMap in TickingTask appears to be leaking a significant amount of memory, especially on larger servers with many players online concurrently. In the screenshot below, the map can be observed to hold about 3000 entries (over 500 MB of memory), even though fewer than 10 players are online (this is on our hub server).
This seems to be caused by the CachedPlayer object holding a strong reference to the Player, which is also the entry's key in the map. From WeakHashMap's javadoc:
Implementation note: The value objects in a WeakHashMap are held by ordinary strong references. Thus care should be taken to ensure that value objects do not strongly refer to their own keys, either directly or indirectly, since that will prevent the keys from being discarded.
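For illustration, here is a minimal, self-contained demo of that WeakHashMap behavior. The CachedValue class and the rest of this snippet are hypothetical stand-ins, not the plugin's actual code; they only show why entries are never discarded when a value strongly references its own key.

```java
import java.util.Map;
import java.util.WeakHashMap;

// Hypothetical value object that keeps a strong reference to the same
// object used as its key in the map.
final class CachedValue {
    final Object key; // strong reference back to the map key

    CachedValue(Object key) {
        this.key = key;
    }
}

public class WeakHashMapLeakDemo {
    public static void main(String[] args) throws InterruptedException {
        Map<Object, CachedValue> map = new WeakHashMap<>();

        // Populate the map; no strong references to the keys are kept
        // anywhere except inside the values themselves.
        for (int i = 0; i < 10_000; i++) {
            Object key = new Object();
            map.put(key, new CachedValue(key));
        }

        // Request garbage collection and give it a moment to run.
        System.gc();
        Thread.sleep(1000);

        // Because each value strongly references its own key, the keys stay
        // reachable and the WeakHashMap never drops the entries.
        System.out.println("Entries still in map: " + map.size()); // ~10000
    }
}
```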
How to reproduce
- Install Holographic Displays build 186 or newer
- Let players join the server and wait
- Take a heap dump or attach a debugger/profiler, and observe the map growing in size
Server version
This server is running Purpur version git-Purpur-"66e5b11" (MC: 1.17.1) (Implementing API version 1.17.1-R0.1-SNAPSHOT) (Git: 66e5b11 on ver/1.17.1)
Holographic Displays version
HolographicDisplays version 3.0.0-SNAPSHOT-b189
Installed plugins that allow players to join with multiple Minecraft versions
ViaVersion, ViaBackwards, ViaRewind
Additional information
I can confirm this issue. The player field in the CachedPlayer class should probably also be stored in a WeakReference, as sketched below.
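A rough sketch of that suggestion, assuming a CachedPlayer class that currently holds the Bukkit Player directly (field and method names here are illustrative, not the plugin's actual source):

```java
import java.lang.ref.WeakReference;
import org.bukkit.entity.Player;

// Hypothetical sketch of the suggested fix: hold the Player through a
// WeakReference so the value no longer pins the WeakHashMap key in memory.
final class CachedPlayer {
    private final WeakReference<Player> playerRef;

    CachedPlayer(Player player) {
        this.playerRef = new WeakReference<>(player);
    }

    // Returns null once the Player has been garbage-collected,
    // e.g. after the player disconnects and nothing else references them.
    Player getPlayer() {
        return playerRef.get();
    }
}
```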
Should be fixed by b89e626. Can you please confirm with the latest dev build? I don't think there are other places causing memory leaks, but I want to be sure.
Heap dump looked fine after about 10 hours of uptime, so the issue seems to be fixed :) Thanks!