Server crashes periodically
deB4SH opened this issue · 5 comments
Hi all,
I'm currently running this modpack in my Kubernetes cluster using the itzg/docker-minecraft-server image, version java17-jdk.
Sadly the server is running into an issue which I can't trace back to any specific source.
The world itself is pretty much empty, with only some buildings for the first chapter,
so I don't think this is caused by large NBT tags or the other problems that "often" occur with modpacks.
After a while the server crashes with a netty exception.
Has anyone else observed this issue and knows how to resolve it?
The container is configured with the following parameters:
image: itzg/minecraft-server:java17-jdk
Environment:
- name: VERSION
value: 1.18.2
- name: TYPE
value: FABRIC
- name: FABRIC_LAUNCHER_VERSION
value: 0.11.2
- name: FABRIC_LOADER_VERSION
value: 0.14.19
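For context, these settings would sit inside a Deployment's container spec roughly as follows (a minimal sketch only; the container name is a placeholder, and resource limits, volume mounts, and the rest of the manifest are omitted):

```yaml
# Hypothetical excerpt of a Kubernetes Deployment container spec;
# only the fields mentioned in this issue are shown.
containers:
  - name: minecraft                 # placeholder name
    image: itzg/minecraft-server:java17-jdk
    env:
      - name: VERSION
        value: "1.18.2"
      - name: TYPE
        value: FABRIC
      - name: FABRIC_LAUNCHER_VERSION
        value: "0.11.2"
      - name: FABRIC_LOADER_VERSION
        value: "0.14.19"
    ports:
      - containerPort: 25565        # default Minecraft server port
```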
Log showing the error after a fresh start:
[12:41:07] [Worker-Main-2/INFO]: Preparing spawn area: 62%
[12:41:08] [Worker-Main-9/INFO]: Preparing spawn area: 90%
[12:41:08] [Server thread/INFO]: Time elapsed: 6573 ms
[12:41:08] [Server thread/INFO]: [MemoryLeakFix] Attempting to ForceLoad All Mixins and clear cache
[12:41:08] [Server thread/WARN]: @Redirect conflict. Skipping lithium.mixins.json:alloc.entity_tracker.EntityTrackerMixin from mod lithium->@Redirect::useFasterCollection()Ljava/util/Set; with priority 1000, already redirected by krypton.mixins.json:shared.network.microopt.TacsTrackedEntityMixin from mod krypton->@Redirect::construct$useFastutil()Ljava/util/Set; with priority 1000
[12:41:09] [Server thread/INFO]: [MemoryLeakFix] Done ForceLoad and clearing SpongePowered cache
[12:41:09] [Server thread/INFO]: Done (7.653s)! For help, type "help"
[12:41:09] [Server thread/INFO]: Starting remote control listener
[12:41:09] [Server thread/INFO]: Thread RCON Listener started
[12:41:09] [Server thread/INFO]: RCON running on 0.0.0.0:25575
[12:41:09] [Server thread/INFO]: Starting backup cleaning thread
[12:41:09] [Server thread/INFO]: Using default implementation for ThreadExecutor
[12:41:09] [Server thread/INFO]: Initialized Scheduler Signaller of type: class net.creeperhost.ftbbackups.org.quartz.core.SchedulerSignalerImpl
[12:41:09] [Server thread/INFO]: Quartz Scheduler v.2.0.2 created.
[12:41:09] [Server thread/INFO]: RAMJobStore initialized.
[12:41:09] [Server thread/INFO]: Scheduler meta-data: Quartz Scheduler (v2.0.2) 'ftbbackups2' with instanceId 'NON_CLUSTERED'
Scheduler class: 'net.creeperhost.ftbbackups.org.quartz.core.QuartzScheduler' - running locally.
NOT STARTED.
Currently in standby mode.
Number of jobs executed: 0
Using thread pool 'net.creeperhost.ftbbackups.org.quartz.simpl.SimpleThreadPool' - with 1 threads.
Using job-store 'net.creeperhost.ftbbackups.org.quartz.simpl.RAMJobStore' - which does not support persistence. and is not clustered.
[12:41:09] [Server thread/INFO]: Quartz scheduler 'ftbbackups2' initialized from an externally provided properties instance.
[12:41:09] [Server thread/INFO]: Quartz scheduler version: 2.0.2
[12:41:09] [Server thread/INFO]: Scheduler ftbbackups2_$_NON_CLUSTERED started.
[12:41:09] [Server thread/INFO]: Loading quests from /data/config/ftbquests/quests
[12:41:09] [Server thread/INFO]: Loaded 1 chapter groups, 8 chapters, 596 quests, 0 reward tables
[12:41:09] [Server thread/INFO]: Loading quest progression data from /data/config/ftbquests/quests
[12:41:09] [Server thread/INFO]: Registered thread Server thread
[12:41:09] [Server thread/INFO]: Encoded Weapon Attribute registry size (with package overhead): 92657 bytes (in 10 string chunks with the size of 10000)
[12:41:17] [Server thread/INFO]: Loaded data from grave data file
[14:37:47] [Netty Epoll Server IO #11/ERROR]: Exception occurred in netty pipeline
io.netty.handler.codec.DecoderException: java.io.IOException: Packet 1/0 (class_2937) was larger than I expected, found 16 bytes extra whilst reading packet 0
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:477) ~[netty-all-4.1.68.Final.jar:?]
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:276) ~[netty-all-4.1.68.Final.jar:?]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-all-4.1.68.Final.jar:?]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-all-4.1.68.Final.jar:?]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-all-4.1.68.Final.jar:?]
at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:324) [netty-all-4.1.68.Final.jar:?]
at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:311) [netty-all-4.1.68.Final.jar:?]
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:432) [netty-all-4.1.68.Final.jar:?]
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:276) [netty-all-4.1.68.Final.jar:?]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-all-4.1.68.Final.jar:?]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-all-4.1.68.Final.jar:?]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-all-4.1.68.Final.jar:?]
at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) [netty-all-4.1.68.Final.jar:?]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-all-4.1.68.Final.jar:?]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-all-4.1.68.Final.jar:?]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-all-4.1.68.Final.jar:?]
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) [netty-all-4.1.68.Final.jar:?]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-all-4.1.68.Final.jar:?]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-all-4.1.68.Final.jar:?]
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) [netty-all-4.1.68.Final.jar:?]
at io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:795) [netty-all-4.1.68.Final.jar:?]
at io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:480) [netty-all-4.1.68.Final.jar:?]
at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:378) [netty-all-4.1.68.Final.jar:?]
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986) [netty-all-4.1.68.Final.jar:?]
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-all-4.1.68.Final.jar:?]
at java.lang.Thread.run(Thread.java:833) [?:?]
Caused by: java.io.IOException: Packet 1/0 (class_2937) was larger than I expected, found 16 bytes extra whilst reading packet 0
at net.minecraft.class_2543.decode(class_2543.java:47) ~[server-intermediary.jar:?]
at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:507) ~[netty-all-4.1.68.Final.jar:?]
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:446) ~[netty-all-4.1.68.Final.jar:?]
... 25 more
2023-06-05T14:40:08.799+0200 WARN mc-server-runner Minecraft server failed. Inspect logs above for errors that indicate cause. DO NOT report this line as an error. {"exitCode": -1}
2023-06-05T14:40:08.806+0200 INFO mc-server-runner Done
Stream closed EOF for minecraft-astral-space-reborn/minecraft-fff5b55b6-cxpgv (main)
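The DecoderException above comes from the packet decoder's sanity check: after a packet handler finishes reading its fields, any bytes left over in the frame mean client and server disagree on the packet layout (typically a protocol or mod mismatch), so the frame is rejected. A minimal sketch of that leftover-bytes check (hypothetical helper names, not the actual decompiled code):

```python
import io

def decode_frame(frame: bytes, read_packet) -> object:
    """Sketch of a framed packet decoder's sanity check.

    After the handler has consumed the fields it expects, any remaining
    bytes indicate the two sides disagree on the packet format, so the
    decoder raises instead of silently ignoring the extra data.
    """
    buf = io.BytesIO(frame)
    packet = read_packet(buf)           # handler consumes what it expects
    leftover = len(frame) - buf.tell()  # bytes the handler did not consume
    if leftover > 0:
        raise IOError(
            f"Packet was larger than I expected, "
            f"found {leftover} bytes extra whilst reading packet"
        )
    return packet

# A toy handler that reads only 4 bytes of a 20-byte frame leaves
# 16 bytes unconsumed -- the same count reported in the log above.
try:
    decode_frame(b"\x00" * 20, lambda buf: buf.read(4))
except IOError as exc:
    print(exc)
```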
Best regards
The only thing I know of that has caused this so far is the Create Toolbox. However, that should have been fixed by the latest modpack update. What version of the pack are you using?
Also, judging by the thumbs-up, was this resolved?
Not sure if this is a specific incompatibility with the Docker image you're using, then. I don't know why you're hosting your Minecraft server in a Kubernetes cluster, and I can't say whether the following would work for you, but there's a Docker image available specifically for Create: Astral (made by an IRL friend of mine): https://github.com/maxi0604/create-astral-container
@Erdragh I'm using the latest version, 2.0.4c.
This resolved the IOException, but the server is still crashing from time to time, now without any error printed to stdout.
Log after a fresh start (the ftbbackups2.json lines below come from changing the configuration to daily backups):
[19:27:34] [Worker-Main-8/INFO]: Preparing spawn area: 69%
[19:27:35] [Server thread/INFO]: Time elapsed: 6136 ms
[19:27:35] [Server thread/INFO]: [MemoryLeakFix] Attempting to ForceLoad All Mixins and clear cache
[19:27:35] [Server thread/INFO]: [MemoryLeakFix] Done ForceLoad and clearing SpongePowered cache
[19:27:35] [Server thread/INFO]: Done (6.811s)! For help, type "help"
[19:27:35] [Server thread/INFO]: Starting remote control listener
[19:27:35] [Server thread/INFO]: Thread RCON Listener started
[19:27:35] [Server thread/INFO]: RCON running on 0.0.0.0:25575
[19:27:35] [Server thread/INFO]: Starting backup cleaning thread
[19:27:35] [Server thread/INFO]: Using default implementation for ThreadExecutor
[19:27:35] [Server thread/INFO]: Initialized Scheduler Signaller of type: class net.creeperhost.ftbbackups.org.quartz.core.SchedulerSignalerImpl
[19:27:35] [Server thread/INFO]: Quartz Scheduler v.2.0.2 created.
[19:27:35] [Server thread/INFO]: RAMJobStore initialized.
[19:27:35] [Server thread/INFO]: Scheduler meta-data: Quartz Scheduler (v2.0.2) 'ftbbackups2' with instanceId 'NON_CLUSTERED'
Scheduler class: 'net.creeperhost.ftbbackups.org.quartz.core.QuartzScheduler' - running locally.
NOT STARTED.
Currently in standby mode.
Number of jobs executed: 0
Using thread pool 'net.creeperhost.ftbbackups.org.quartz.simpl.SimpleThreadPool' - with 1 threads.
Using job-store 'net.creeperhost.ftbbackups.org.quartz.simpl.RAMJobStore' - which does not support persistence. and is not clustered.
[19:27:35] [Server thread/INFO]: Quartz scheduler 'ftbbackups2' initialized from an externally provided properties instance.
[19:27:35] [Server thread/INFO]: Quartz scheduler version: 2.0.2
[19:27:35] [Server thread/INFO]: Scheduler ftbbackups2_$_NON_CLUSTERED started.
[19:27:35] [Server thread/INFO]: Loading quests from /data/config/ftbquests/quests
[19:27:35] [Server thread/INFO]: Loaded 1 chapter groups, 8 chapters, 596 quests, 0 reward tables
[19:27:35] [Server thread/INFO]: Loading quest progression data from /data/config/ftbquests/quests
[19:27:35] [Server thread/INFO]: Registered thread Server thread
[19:27:36] [Server thread/INFO]: Encoded Weapon Attribute registry size (with package overhead): 92657 bytes (in 10 string chunks with the size of 10000)
[19:27:43] [Server thread/INFO]: Loaded data from grave data file
[19:37:45] [Server thread/WARN]: Can't keep up! Is the server overloaded? Running 2722ms or 54 ticks behind
[20:42:15] [FTB Backups Config Watcher 0/INFO]: Config at /data/config/ftbbackups2.json has changed, reloaded!
[20:42:15] [FTB Backups Config Watcher 0/INFO]: Config at /data/config/ftbbackups2.json has changed, reloaded!
2023-06-05T21:15:13.997+0200 WARN mc-server-runner Minecraft server failed. Inspect logs above for errors that indicate cause. DO NOT report this line as an error. {"exitCode": -1}
2023-06-05T21:15:13.998+0200 INFO mc-server-runner Done
Stream closed EOF for minecraft-astral-space-reborn/minecraft-fff5b55b6-mqpcj (main)
EDIT: I currently suspect, without any real evidence, that itzg's mc-server-runner may be misinterpreting something and shutting down the Java process even though nothing is actually wrong.
The server kept running overnight without any restart. I guess some mod in this pack produces an exit code of -1, which makes mc-server-runner think that the server itself crashed. Closing this issue now. Thanks for the help @Erdragh
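The "Minecraft server failed" lines at the end of each log come from mc-server-runner, the wrapper process in the itzg image: it launches the JVM, waits for it, and reports any non-zero exit status as a failure. From the outside it cannot distinguish a mod deliberately calling System.exit(-1) from a genuine crash. A rough Python sketch of that wrapper behaviour (illustration only; the real mc-server-runner is a separate Go program):

```python
import subprocess

def run_server(cmd: list[str]) -> int:
    """Illustrative wrapper: run a child process and report its exit status.

    Like mc-server-runner, this only sees the numeric exit code, so an
    intentional exit(-1) from inside the JVM is indistinguishable from
    an actual crash.
    """
    proc = subprocess.run(cmd)
    if proc.returncode != 0:
        print(f'Minecraft server failed. {{"exitCode": {proc.returncode}}}')
    else:
        print("Done")
    return proc.returncode

# exit(-1) in a child process shows up as 255 at the OS level on POSIX
run_server(["sh", "-c", "exit 255"])
```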