Subsequent world joins cause "Chunk build failed" exception
SKProCH opened this issue · 10 comments
Your GTNH Discord Username
skproch
Mod Version
1.0.0-alpha28
Java Version
Java 21
Graphics Card Vendor
NVidia
Bug Report
Sometimes when I try to enter the world a second time I get the following exception.
It doesn't happen every time, but it can happen in different scenarios:
- If we first play on a multiplayer server, and after that try to join singleplayer
- If we first play on singleplayer, and after that try to join singleplayer again (same map)
- If we first play on a multiplayer server, and after that try to join the same multiplayer server
Here is the crash log: crash-2024-02-18_18.02.21-client.txt
Mod List or GTNH Pack Version
GTNH 2.5.1
Final Checklist
- I have searched the issues and haven't found a similar issue.
- I have read the known incompatibilities and this is not related to one of those.
- I am running an officially released version. (Or, if I've compiled it myself I plan to fix the issue)
- This issue is not related to a feature that is disabled by default - Shaders, MCPF, etc. [They'll be enabled when they're ready for testing]
I can get you a test build that'll log anytime the stack depth is over 8... 8 is already pretty high imho....
hmm, Capturing Tessellator should be cleaning up after itself... and I haven't seen any obvious place with 16 nested tessellations.... very odd.
Unless something isn't cleaning up.... interesting
Caused by: java.lang.IllegalStateException: Stack overflow size 17 reached
at com.gtnewhorizons.angelica.glsm.stacks.Vector3dStack.push(Vector3dStack.java:21)
at com.gtnewhorizons.angelica.client.renderer.CapturingTessellator.storeTranslation(CapturingTessellator.java:144)
at com.gtnewhorizons.angelica.glsm.TessellatorManager.startCapturing(TessellatorManager.java:45)
at me.jellysquid.mods.sodium.client.render.pipeline.BlockRenderer.renderModel(BlockRenderer.java:84)
at me.jellysquid.mods.sodium.client.render.chunk.tasks.ChunkRenderRebuildTask.performMainBuild(ChunkRenderRebuildTask.java:275)
at me.jellysquid.mods.sodium.client.render.chunk.tasks.ChunkRenderRebuildTask.lambda$performBuild$0(ChunkRenderRebuildTask.java:204)
at java.util.concurrent.CompletableFuture$AsyncRun.run(Unknown Source)
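For reference, a minimal sketch of what such a depth-logging guard could look like, assuming the hard cap of 16 nested captures that the trace implies; the class and constant names below are illustrative and are not Angelica's actual Vector3dStack code:

```java
// Illustrative sketch only -- not Angelica's Vector3dStack implementation.
import java.util.ArrayDeque;
import java.util.Deque;

final class DepthGuardedStack<T> {
    private static final int WARN_DEPTH = 8;  // soft threshold the proposed test build would log at
    private static final int MAX_DEPTH = 16;  // assumed hard cap; one push past it gives "size 17 reached"

    private final Deque<T> stack = new ArrayDeque<>();

    void push(T value) {
        if (stack.size() >= MAX_DEPTH) {
            throw new IllegalStateException("Stack overflow size " + (stack.size() + 1) + " reached");
        }
        if (stack.size() >= WARN_DEPTH) {
            // Capturing a stack trace here would show exactly which caller keeps pushing
            // without a matching pop, i.e. which capture "isn't cleaning up after itself".
            new Throwable("Tessellation stack depth " + (stack.size() + 1)
                + " on " + Thread.currentThread().getName()).printStackTrace();
        }
        stack.push(value);
    }

    T pop() {
        return stack.pop();
    }
}
```

One way the depth could creep up is if startCapturing pushes a translation but an aborted or failed chunk rebuild never reaches the matching pop; the depth would then grow by one per failed build until the cap is hit.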
Loading into the world and then crashing looks like this: I see some entities being drawn, after which the screen fills with a solid color and the crash follows, caught by NotEnoughCrashes. Also, relaunching the game lets me join worlds fine... for a while.
I can get you a test build that'll log anytime the stack depth is over 8... 8 is already pretty high imho....
Yep, let's go. I can use it to gather info for you.
Same crash. Before version 28 it crashed randomly when loading new chunks; now it happens even on join, with already generated chunks.
crash-2024-02-19_01.08.09-client.txt
Copied messages from Discord:
Here it is: https://mclo.gs/skzBZw6
I logged into the same SP world twice, then logged into a server (and DIDN'T wait until the world fully rendered), and after that the second server login throws the Chunk build failed error.
Yep, just reproduced it by simply logging into a server, leaving before all chunks rendered; the next login then throws the error.
Here is the log: https://mclo.gs/oKg43zm
Here is the crashlog: https://mclo.gs/4jUb0Rh
@Prooty You're using a -pre version of Carpenter's Blocks that is not yet ready for use. Dream shouldn't have included it in the dev/pre tag.
Looking at the full log, it looks like the shutdown (and thus cleanup) process got interrupted by a ForgeRelocation exception:
Caused by: java.lang.NullPointerException: Cannot read field "field_72995_K" because "w" is null
at mrtjp.relocation.MovementManager2$.getWorldStructs(movement.scala:32) ~[MovementManager2$.class:?]
at mrtjp.relocation.MovementManager2$.isMoving(movement.scala:105) ~[MovementManager2$.class:?]
at mrtjp.relocation.ASMHacks$.getRenderType(hacks.scala:36) ~[ASMHacks$.class:?]
at mrtjp.relocation.ASMHacks.getRenderType(hacks.scala) ~[ASMHacks.class:?]
I'd suggest giving https://github.com/GTNewHorizons/ForgeRelocation/releases/tag/0.1.2 a try as this should work around the issue.
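For illustration, the kind of null guard that would avoid this NPE, written as a Java sketch under the assumption that field_72995_K is the obfuscated World.isRemote field; this is not the actual Scala change in the 0.1.2 release:

```java
// Illustrative only; ForgeRelocation is Scala and this is not the 0.1.2 patch itself.
import net.minecraft.world.World;

final class NullSafeWorldCheck {
    // field_72995_K is assumed to be World.isRemote in the obfuscated mappings; reading it
    // on a null world is exactly what produced the NullPointerException above.
    static boolean isClientWorld(World world) {
        // Treat a missing world (e.g. mid-shutdown) as "no moving structures" instead of dereferencing it.
        return world != null && world.isRemote;
    }
}
```

With a check like this at the call site, a getRenderType-style hook could fall back to vanilla rendering instead of touching a world that has already been torn down.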