Too many open files while doing fullrender
cupang-afk opened this issue · 2 comments
I got an error while doing a fullrender to pregenerate a world with a 15k radius.
What I did:
- Ran the command:

  ```
  map fullrender overworld
  ```

- Waited a while, then got:

  ```
  java.nio.file.FileSystemException: ./world/entities/r.-7.3.mca: Too many open files
  ```
More similar log output:

```
[ERROR] ChunkLoadTask Failed to load entity data for task: GenericDataLoadTask{class: io.papermc.paper.chunk.system.scheduling.ChunkLoadTask$EntityDataLoadTask, world: world, chunk: (-193,122), hashcode: 2067725570, priority: COMPLETING, type: ENTITY_DATA}, entity data will be lost
java.nio.file.FileSystemException: ./world/entities/r.-7.3.mca: Too many open files
```
Additional info:
- Server version: git-Purpur-1912 (MC: 1.19.3)
- Using PufferPanel as the server panel
- At the time of writing, I'm retrying the fullrender directly from the Linux console (`java -jar purpur.jar`) with all plugins disabled
- Kernel-wide limit:

  ```
  $ cat /proc/sys/fs/file-max
  9223372036854775807
  ```

- Per-process limits:

  ```
  $ ulimit -n
  1024
  $ ulimit -s
  8192
  $ ulimit -S
  unlimited
  $ ulimit -H
  unlimited
  ```
Suggestion: saving rendered tiles in a database server (e.g. MySQL/MariaDB) instead of flat files would be better.
The image IO executor throttles renders so that no more than 100 save tasks are queued at once. Additionally, only one file is open at a time for saving images: the image IO executor is single-threaded (per world), and the files it writes aren't opened until the actual write (when the task gets polled from the queue and run).
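A hypothetical sketch of that pattern (not squaremap's actual code; class and method names are made up): a single-threaded per-world executor whose producers block once 100 saves are pending, and which only opens the output file inside the task body:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

// Illustrative sketch of a throttled, single-threaded tile saver.
final class ThrottledTileSaver implements AutoCloseable {
    private static final int MAX_PENDING = 100;

    private final ExecutorService worker = Executors.newSingleThreadExecutor();
    private final Semaphore pending = new Semaphore(MAX_PENDING);

    void submit(final Path file, final byte[] imageBytes) throws InterruptedException {
        this.pending.acquire(); // blocks the render thread once 100 tasks are queued
        this.worker.execute(() -> {
            try {
                // The file is only opened here, when the task actually runs, so
                // at most one tile file is open at a time per world.
                Files.write(file, imageBytes);
            } catch (final IOException e) {
                e.printStackTrace();
            } finally {
                this.pending.release();
            }
        });
    }

    @Override
    public void close() throws InterruptedException {
        this.worker.shutdown();
        this.worker.awaitTermination(30, TimeUnit.SECONDS);
    }
}
```

Submitting from the render thread then blocks naturally once the disk falls behind, instead of letting the queue (and the open-file count) grow without bound.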
So, regarding your suggestion of using a database: that's a duplicate of #1, and since the image files aren't the issue here, I doubt it would actually help.
Assuming squaremap is the only plugin installed, I see two likely causes:
- The server's chunk system is keeping too many files open. The errors pointing to region files support this. I haven't seen this before, but it's definitely possible. In this case, the only thing we can do on the squaremap side (short of a major restructure of rendering, which isn't out of the question long-term but is out of scope as a fix for this issue) is to add an option for manually throttling renders to a speed the chunk system can manage. From the end-user perspective, you could of course raise the open file limit.
- The internal web server could be responsible for opening up to `8 * Runtime.getRuntime().availableProcessors()` files at a time. On systems with many cores, this could bring you pretty close to the default limit of 1024 open files when the web server is getting hammered. But that doesn't really fit with this only happening during full renders. You could test whether this is the issue by disabling the internal web server and seeing if it makes a difference; if so, I may need to override the Undertow defaults there and/or add a config option. But for users where this is a problem, an external web server would likely be a better idea anyway (as opposed to raising the limit or nerfing the internal server).
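For a sense of scale, that 8-per-core figure can be checked on the affected machine with a quick sketch (the 1024 in the output is the soft limit from the `ulimit -n` output above, not something the JVM reports):

```java
// Estimate how many files the internal web server's worker pool alone could
// hold open at once, using the 8-per-core default mentioned above.
public final class WorkerFdEstimate {
    public static void main(final String[] args) {
        final int cores = Runtime.getRuntime().availableProcessors();
        final int maxFromWorkers = 8 * cores;
        System.out.println(cores + " cores -> up to " + maxFromWorkers
                + " files held open by web server workers (soft fd limit here: 1024)");
    }
}
```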
For further debugging, you can try `lsof -p <pid>` inside the container with the server's PID to see what files it actually has open (ideally right when an exception occurs).
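If `lsof` is awkward to run inside the container, the JVM can also report its own descriptor usage. A sketch, assuming a HotSpot-based JDK on a Unix-like OS (where the platform MXBean implements the JDK-specific `com.sun.management.UnixOperatingSystemMXBean`):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.OperatingSystemMXBean;

// Prints the JVM's current and maximum file descriptor counts, when available.
public final class FdUsage {
    public static void main(final String[] args) {
        final OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
        // JDK-specific interface; present on HotSpot builds for Unix-like systems.
        if (os instanceof com.sun.management.UnixOperatingSystemMXBean unixOs) {
            System.out.println("open fds: " + unixOs.getOpenFileDescriptorCount()
                    + " / max: " + unixOs.getMaxFileDescriptorCount());
        } else {
            System.out.println("fd counts unavailable on this JVM/OS");
        }
    }
}
```

Something like this could be run from a plugin or a small agent, and polled during a full render to watch the count climb toward the limit.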
Update
When I tried the fullrender without the server panel (PufferPanel), it worked without throwing "Too many open files" anymore. I think it's related to the panel's behaviour, or to me actively checking the online map while rendering (without the panel, I just leave it running on its own). Anyway, you can close this issue if this is not a problem.