spark


The server enters a "crashed state" after running the profiler

Ammorack opened this issue · 0 comments


As the title suggests, the server stops after running the profiler. We start the profiler and wait, and the server eventually stops with an error.
 
 
Console Output

[00:48:03 INFO]: [⚡] Initializing a new profiler, please wait...
[00:48:03 INFO]: [⚡] Profiler now active! (async)
[00:48:03 INFO]: [⚡] Use '/spark profiler --stop' to stop profiling and upload the results.
[00:49:06 WARN]: me.lucko.spark.common.sampler.async.JfrParsingException: Error parsing JFR data from profiler output
[00:49:06 WARN]:        at spark-1.9.42-bukkit.jar//me.lucko.spark.common.sampler.async.AsyncProfilerJob.aggregate(AsyncProfilerJob.java:215)
[00:49:06 WARN]:        at spark-1.9.42-bukkit.jar//me.lucko.spark.common.sampler.async.AsyncSampler.rotateProfilerJob(AsyncSampler.java:119)
[00:49:06 WARN]:        at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539)
[00:49:06 WARN]:        at java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305)
[00:49:06 WARN]:        at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305)
[00:49:06 WARN]:        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
[00:49:06 WARN]:        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
[00:49:06 WARN]: Caused by: java.lang.NullPointerException
[00:49:06 WARN]:        at spark-1.9.42-bukkit.jar//me.lucko.spark.common.sampler.async.AsyncProfilerJob.lambda$aggregate$0(AsyncProfilerJob.java:198)
[00:49:06 WARN]:        at spark-1.9.42-bukkit.jar//me.lucko.spark.common.sampler.async.AsyncProfilerJob.readSegments(AsyncProfilerJob.java:247)
[00:49:06 WARN]:        at spark-1.9.42-bukkit.jar//me.lucko.spark.common.sampler.async.AsyncProfilerJob.aggregate(AsyncProfilerJob.java:205)
[00:49:06 WARN]:        ... 7 more
[00:49:15 INFO]: [STATS-AUTOSAVE] Saving 0 cached data
>....JVMDUMP039I Processing dump event "abort", detail "" at 2022/11/11 00:49:18 - please wait.
JVMDUMP032I JVM requested System dump using '/home/container/core.20221111.004918.47.0001.dmp' in response to an event
JVMDUMP010I System dump written to /home/container/core.20221111.004918.47.0001.dmp
JVMDUMP032I JVM requested Java dump using '/home/container/javacore.20221111.004918.47.0002.txt' in response to an event
[00:49:21 WARN]: org.sqlite.SQLiteException: [SQLITE_IOERR_WRITE]  I/O error in the VFS layer while trying to write to a file on disk (disk I/O error)
[00:49:21 WARN]:        at org.sqlite.core.DB.newSQLException(DB.java:1012)
[00:49:21 WARN]:        at org.sqlite.core.DB.newSQLException(DB.java:1024)
[00:49:21 WARN]:        at org.sqlite.core.DB.throwex(DB.java:989)
[00:49:21 WARN]:        at org.sqlite.core.NativeDB._exec_utf8(Native Method)
[00:49:21 WARN]:        at org.sqlite.core.NativeDB._exec(NativeDB.java:94)
[00:49:21 WARN]:        at org.sqlite.jdbc3.JDBC3Statement.executeUpdate(JDBC3Statement.java:102)
[00:49:21 WARN]:        at CoreProtect-20.4.jar//net.coreprotect.database.Database.commitTransaction(Database.java:70)
[00:49:21 WARN]:        at CoreProtect-20.4.jar//net.coreprotect.consumer.process.Process.processConsumer(Process.java:96)
[00:49:21 WARN]:        at CoreProtect-20.4.jar//net.coreprotect.consumer.Consumer.run(Consumer.java:133)
>....JVMDUMP010I Java dump written to /home/container/javacore.20221111.004918.47.0002.txt
JVMDUMP032I JVM requested Snap dump using '/home/container/Snap.20221111.004918.47.0003.trc' in response to an event
JVMDUMP010I Snap dump written to /home/container/Snap.20221111.004918.47.0003.trc
JVMDUMP032I JVM requested JIT dump using '/home/container/jitdump.20221111.004918.47.0004.dmp' in response to an event
JVMDUMP051I JIT dump occurred in 'Async-profiler Sampler' thread 0x0000000001E07E00
JVMDUMP010I JIT dump written to /home/container/jitdump.20221111.004918.47.0004.dmp
JVMDUMP013I Processed dump event "abort", detail "".
container@pterodactyl~ Server marked as offline...
[Pterodactyl Daemon]: ---------- Detected server process in a crashed state! ----------
[Pterodactyl Daemon]: Exit code: 1
[Pterodactyl Daemon]: Out of memory: false
[Pterodactyl Daemon]: Checking server disk space usage, this could take a few seconds...
[Pterodactyl Daemon]: Updating process configuration files...
[Pterodactyl Daemon]: Ensuring file permissions are set correctly, this could take a few seconds...
container@pterodactyl~ Server marked as starting...
[Pterodactyl Daemon]: Pulling Docker container image, this could take a few minutes to complete...
[Pterodactyl Daemon]: Finished pulling Docker container image
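For context on the first warning in the log: the NullPointerException is thrown from a lambda while spark aggregates the profiler's JFR output. The sketch below is a hypothetical illustration of that general failure mode (a stream lambda dereferencing a missing lookup result), not spark's actual code; the Sample record, thread-name map, and aggregate method are all invented for this example.

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class JfrAggregateSketch {
    // Hypothetical stand-in for one profiler sample read from JFR output.
    record Sample(long threadId, String frame) {}

    // Groups samples by "thread / frame" and counts them. Without the filter,
    // a sample whose threadId has no metadata entry would make the lookup
    // return null, and the lambda would throw a NullPointerException while
    // building the key -- the same shape of failure as the trace above.
    public static Map<String, Long> aggregate(List<Sample> samples,
                                              Map<Long, String> threadNames) {
        return samples.stream()
                // guard: skip samples whose thread metadata is missing
                .filter(s -> threadNames.get(s.threadId()) != null)
                .collect(Collectors.groupingBy(
                        s -> threadNames.get(s.threadId()) + " / " + s.frame(),
                        Collectors.counting()));
    }

    public static void main(String[] args) {
        Map<Long, String> names = Map.of(1L, "Server thread");
        List<Sample> samples = List.of(
                new Sample(1L, "tick"),
                new Sample(99L, "unknown")); // no metadata for id 99 -> dropped by the guard
        System.out.println(aggregate(samples, names)); // prints {Server thread / tick=1}
    }
}
```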

  
  
Start-up flags

STARTUP /home/container: java -Xmx10240M -Xms512M -XX:+UseG1GC -XX:+ParallelRefProcEnabled -XX:MaxGCPauseMillis=200 -XX:+UnlockExperimentalVMOptions -XX:+DisableExplicitGC -XX:+AlwaysPreTouch -XX:G1HeapWastePercent=5 -XX:G1MixedGCCountTarget=4 -XX:G1MixedGCLiveThresholdPercent=90 -XX:G1RSetUpdatingPauseTimePercent=5 -XX:SurvivorRatio=32 -XX:+PerfDisableSharedMem -XX:MaxTenuringThreshold=1 -XX:G1NewSizePercent=30 -XX:G1MaxNewSizePercent=40 -XX:G1HeapRegionSize=8M -XX:G1ReservePercent=20 -XX:InitiatingHeapOccupancyPercent=15 -Dusing.aikars.flags=https://mcflags.emc.gs -Daikars.new.flags=true -jar ${SERVER_JARFILE} 

 
  
Information about PLUGIN and SERVER

→ Server version: v1_17_R1 - 1.17.1 - Paper
→ spark version: 1.9.42
→ Server jar (Spigot fork we use): NFT-Worlds-1.17.1-R0.1-SNAPSHOT.jar

 
 
Contact me and I will provide extra information if needed.

Best regards,
Ammorack