Softlock on Server Thread, seemingly due to Config changes.
NevadaActual opened this issue · 2 comments
Mod version: 4.6.1
Forge version: 35.1.37
Minecraft version: 1.16.4
Modpack in use: Permafrost, which no longer has performant installed.
This is a continuation of #395, after more information was collected on our end. For context, the modpack a couple of others and I are working on, Permafrost, has been having some extremely strange crashes and soft crashes alike, where no logs are left behind because the integrated server locks up completely. The only fix we have found so far is to keep a single config file unchanged. Specifically, the line that somehow plays a role in overall stability when spawning Apotheosis boss mobs is this:
A list of type overrides for the affix loot system. Format is <item>|<type>. Types are SWORD, RANGED, PICKAXE, SHOVEL, AXE, SHIELD [default: [minecraft:stick|SWORD]]
S:"Equipment Type Overrides" <
minecraft:stick|SWORD
>
Adding an extra line for a different item, or removing the existing minecraft:stick line, seems to cause Apotheosis bosses to crash the integrated server, forcing a task kill on the client. Even with this specific config for the deadly module, we still get fatal errors in the output log (message.txt), although the game continues to run without stability issues. Is the culprit likely a datapack issue? Is there existing documentation we could use to fix the problem ourselves, short of using process of elimination to find which mob or item is causing the issue?
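For reference, this is roughly the kind of change that triggers the lockup for us. The minecraft:bow|RANGED entry is only an illustrative example of "a different item" following the <item>|<type> format above, not the exact override from our pack:

S:"Equipment Type Overrides" <
    minecraft:stick|SWORD
    minecraft:bow|RANGED
>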
I don't really think there is anything I can do from my end.
You triggered this by using the boss summoner item (which runs entirely on the main thread), and a softlock cannot be caused by non-threaded code, especially code running on the main thread like that.
Nor has this issue been encountered in any other instance (even instances heavily modified by datapacks).
Apologies, I completely forgot I made this issue! We eventually figured out that an odd mod conflict was causing the problem, and we have worked around it. If we find out the specifics behind it, and it turns out to be something that could affect setups other than ours, I'll open a new issue. Thank you for your help; it is much appreciated.