spark

CPU Utilization issue

kmecpp opened this issue · 1 comment

kmecpp commented

With Pterodactyl you can allocate a certain number of CPU cores to each server. For some reason this confuses the spark CPU monitor, which consistently calculates system CPU load against the number of cores allocated to the Docker container the server is running in, not the real number of cores available to the host system.

This leads to a very high reported utilization (usually a consistent 100%, even though the server wasn't that active when I took the screenshots), while the actual server utilization is much lower.

[screenshot: spark showing ~100% system CPU usage]
Running on a 4-core, 8-thread system, so the load average numbers should top out around 4 or 8.

[screenshot: actual system utilization, much lower]
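A likely explanation (an assumption, not stated in the report): since JDK 10 the HotSpot JVM is container-aware by default on Linux, so inside a CPU-limited Docker container it sizes its view of the machine to the container's quota. A one-liner can confirm what the server's JVM actually sees:

```java
public class CoreCheck {
    public static void main(String[] args) {
        // Inside a container with a CPU limit (such as a Pterodactyl
        // allocation), a container-aware JVM reports the quota here,
        // not the host's core count. On the 4-core/8-thread host above
        // this prints 8 when run directly on the host, but a smaller
        // number inside a limited container.
        System.out.println(Runtime.getRuntime().availableProcessors());
    }
}
```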

commented

spark gets this info from the JVM's OperatingSystemMXBean.

If the values are incorrect, then that's probably a JVM issue - not much I can do to fix it, sorry!
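For reference, a minimal sketch of reading these values through the platform MXBean (not spark's actual code; the cast to the com.sun.management subinterface assumes a HotSpot-style JVM):

```java
import java.lang.management.ManagementFactory;

public class CpuProbe {
    public static void main(String[] args) {
        // On HotSpot JVMs the platform bean implements the richer
        // com.sun.management.OperatingSystemMXBean subinterface.
        com.sun.management.OperatingSystemMXBean os =
                (com.sun.management.OperatingSystemMXBean)
                        ManagementFactory.getOperatingSystemMXBean();

        // Inside a container, a container-aware JVM scopes these values
        // to the container's CPU quota rather than the host's cores.
        System.out.println("Available processors: " + os.getAvailableProcessors());

        // Whole-system "recent CPU usage" in [0.0, 1.0]; deprecated in
        // JDK 14+ in favor of getCpuLoad().
        System.out.println("System CPU load:      " + os.getSystemCpuLoad());

        // 1-minute load average, or -1 if unavailable on the platform.
        System.out.println("Load average (1m):    " + os.getSystemLoadAverage());
    }
}
```

If the behavior reported above is down to container awareness, launching the JVM with `-XX:-UseContainerSupport` (JDK 10+, Linux) makes it read host-wide figures again, which would be one way to test the theory.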