Entity behavior creating an issue
Monster-Zer0 opened this issue · 6 comments
Expected Behavior
Optimized entity behavior, so that entity ticking does not dominate the server tick.
Actual Behavior
Entity behavior is becoming an issue; too much of each server tick is being spent on it.
Reproduction Steps
This is the All Of Fabric 3 NA server. We have a fairly large population and a world that has been running since 1.16.1 without a wipe. We have a custom tool (made by notsteven of indrev) that shows how many entities are in each chunk, so that we can manage if people are overcrowding, but we have found that the entity count isn't excessive. When running a spark profile (linked below), we notice that over time entity methods are causing an issue.
I am posting this as it may be an issue with Lithium, or an opportunity for the Lithium project to observe a public server under moderate load (to help with any future improvements).
https://spark.lucko.me/#8xfy3WvJlj
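For reference, a snapshot like the one above can be captured with spark's in-game profiler commands (assuming a spark build from the 1.16 era; exact syntax may vary between versions):

```
/spark profiler          # start sampling the server thread
/spark profiler --stop   # stop sampling and upload, printing a spark.lucko.me link
/spark tps               # quick TPS / tick-duration readout
```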
Server:
Ryzen 5 3600X (6 cores / 12 threads) @ 4.4GHz
32GB DDR4 ECC 2666MHz RAM
2 NVMe 500GB with Enterprise Class RAID (LSI card)
in a datacenter
Ubuntu 20.04
FAPI: fabric-api-0.21.0+build.407-1.16
LOADER: 0.9.3+build.207
MC 1.16.2
Notes:
- running Voyager (https://github.com/modmuss50/Voyager) to fix a Java 11 bug in MC
- running Lithium 0.5.5, re-jarred to allow use on 1.16.2 (sketch of that tweak below)
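For context on the re-jar: Lithium 0.5.5 declares a strict Minecraft version in its `fabric.mod.json`, so Fabric Loader refuses to load it on 1.16.2. The common workaround (unsupported; shown only as a sketch, and the exact version strings in the real file may differ) is to unzip the jar, relax the `depends` entry, and rezip:

```json
{
  "depends": {
    "fabricloader": ">=0.9.0",
    "minecraft": ">=1.16.1"
  }
}
```

All other fields of the real `fabric.mod.json` are omitted here; only the `minecraft` version predicate needs to change.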
Can you provide a rough estimate of the number of players your server had online when this profiler snapshot was taken? Additionally, do you know how many loaded chunks there were?
I see a few small hotspots here, but nothing points to an issue in Lithium; it looks like just vanilla slowness.
About 14 players were online at the time of the profile. NotSteven made a little mod to get the number of entities and loaded chunks, but we will have to run the profiler and capture those numbers at the same point in time to give better data.
Here are the server specs.
Specs:
Ryzen 5 3600X (6 cores / 12 threads) @ 4.4GHz, threads allocated as:
- 1 for MC
- 1 for watchdog
- 6 for bluemap (patreons with $10 tier and up)
- 4 spare
32GB DDR4 ECC 2666MHz RAM
2 NVMe 500GB with Enterprise Class RAID (LSI card)
In a datacenter, not in a basement, which means:
- redundant power supply
- redundant 1Gbps internet
- enterprise-grade firewall
- enterprise-grade hardware
Setup:
- Ubuntu Focal (20.04)
- XFS for the world folder, to increase speed and decrease I/O
- no write-through cache
- Aikar's JVM flags, adjusted to this server's specs (example of the shape below)
- Amazon Corretto Java 11
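For reference, that launch line would look roughly like this. Aikar's flag set is public, but the heap size (10G of the 32GB) and jar name here are my assumptions, not the server's actual values:

```sh
java -Xms10G -Xmx10G \
  -XX:+UseG1GC -XX:+ParallelRefProcEnabled -XX:MaxGCPauseMillis=200 \
  -XX:+UnlockExperimentalVMOptions -XX:+DisableExplicitGC -XX:+AlwaysPreTouch \
  -XX:G1NewSizePercent=30 -XX:G1MaxNewSizePercent=40 -XX:G1HeapRegionSize=8M \
  -XX:G1ReservePercent=20 -XX:G1HeapWastePercent=5 -XX:G1MixedGCCountTarget=4 \
  -XX:InitiatingHeapOccupancyPercent=15 -XX:G1MixedGCLiveThresholdPercent=90 \
  -XX:G1RSetUpdatingPauseTimePercent=5 -XX:SurvivorRatio=32 \
  -XX:+PerfDisableSharedMem -XX:MaxTenuringThreshold=1 \
  -jar fabric-server-launch.jar nogui
```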
I am copying this from Discord, just for continuity's sake. I am not blaming Lithium at all for this, but I want to rule it out before I go on a witch hunt.
These are the top 20 chunks by entity count (a rough sketch of how such a count can be produced follows the list):
[80, 94]: 61 entities
[41, 32]: 47 entities
[1, -50]: 43 entities
[58, 608]: 30 entities
[-225, -291]: 29 entities
[53, -20]: 29 entities
[56, 622]: 23 entities
[53, 616]: 21 entities
[59, 608]: 21 entities
[43, 31]: 21 entities
[53, -25]: 20 entities
[4, -48]: 20 entities
[41, 31]: 19 entities
[52, -24]: 18 entities
[52, -25]: 17 entities
[54, -20]: 16 entities
[-5, 79]: 15 entities
[58, 619]: 15 entities
[-34, 457]: 13 entities
[-226, -291]: 12 entities
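For anyone wanting to reproduce numbers like these without NotSteven's mod, here is a minimal sketch in the style of a Fabric server mod, assuming Yarn mappings for 1.16.x (this is not the actual tool):

```java
import net.minecraft.entity.Entity;
import net.minecraft.server.MinecraftServer;
import net.minecraft.server.world.ServerWorld;
import net.minecraft.util.math.ChunkPos;

import java.util.HashMap;
import java.util.Map;

public final class ChunkEntityCensus {
    /** Count loaded entities per chunk across every dimension on the server. */
    public static Map<ChunkPos, Integer> count(MinecraftServer server) {
        Map<ChunkPos, Integer> counts = new HashMap<>();
        for (ServerWorld world : server.getWorlds()) {
            for (Entity entity : world.iterateEntities()) {
                ChunkPos pos = new ChunkPos(entity.getBlockPos());
                counts.merge(pos, 1, Integer::sum);
            }
        }
        return counts;
    }

    /** Print the N busiest chunks, matching the "[x, z]: n entities" format above. */
    public static void printTop(MinecraftServer server, int n) {
        count(server).entrySet().stream()
                .sorted((a, b) -> b.getValue() - a.getValue())
                .limit(n)
                .forEach(e -> System.out.printf("[%d, %d]: %d entities%n",
                        e.getKey().x, e.getKey().z, e.getValue()));
    }
}
```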
Often we drop down to 8 TPS, and sometimes we just hover around 13. It appears that once about 8 people are on the server, TPS starts to go down, and it can tank as low as 3 TPS with 18 people, a player count we can normally handle, no problem.
Can you please strip your server of any JVM arguments (other than -Xmx) and use a build of Java from AdoptOpenJDK's website? This will help narrow down a number of variables, and I'm curious to know if it helps any, since these numbers are much, much worse than on a similar machine I use daily.
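For illustration, the stripped-down baseline being asked for would look something like this (heap size and jar name are placeholders, not known values for this server):

```sh
java -Xmx10G -jar fabric-server-launch.jar nogui
```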
We are starting to lean toward chunk loaders being the issue: more chunk loaders means more chunks loaded, and more chunks loaded means more entities. We are working with the dev of the mod that provides the chunk loaders (D4rk of Kibe) on mitigations, such as limiting players to one chunk loader each or adding a timeout period for the chunk loader (a hypothetical sketch of such a limit follows). I think this should help a ton. I appreciate all the input and help on this one, but I think we can call this closed for now.
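To illustrate the kind of limit being discussed (this is a hypothetical sketch, not Kibe's actual code or config), a one-loader-per-player cap could be enforced like this:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

public final class ChunkLoaderLimiter {
    // Hypothetical policy: at most one active chunk loader per player.
    private static final int MAX_LOADERS_PER_PLAYER = 1;

    private final Map<UUID, Integer> activeLoaders = new HashMap<>();

    /** Returns true if the player may place another chunk loader. */
    public boolean tryPlace(UUID owner) {
        int current = activeLoaders.getOrDefault(owner, 0);
        if (current >= MAX_LOADERS_PER_PLAYER) {
            return false; // over the per-player cap
        }
        activeLoaders.put(owner, current + 1);
        return true;
    }

    /** Call when a player's chunk loader is broken or times out. */
    public void onRemoved(UUID owner) {
        // Decrement, dropping the entry entirely when it reaches zero.
        activeLoaders.computeIfPresent(owner, (id, n) -> n > 1 ? n - 1 : null);
    }
}
```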