Lootr (Fabric)

[1.21.1] I/O bottlenecks when saving SavedData

ArcadeArchie opened this issue · 7 comments

commented

Every time the world gets saved and it gets to saving the Lootr data, it causes such a lag spike that players get timed out.
https://spark.lucko.me/LWWOywafuo

commented

also related to AllTheMods/ATM-10#1474

commented

So digging into this further, it appears to be an issue with SavedData in general, rather than Lootr specifically, but because Lootr probably has a larger quantity of SavedData, it pops up more prominently.

I was inclined to think it was some sort of I/O bottleneck. It's also relevant to note that NeoForge patches the SavedData::save method to use an atomic write system, although I doubt this has any impact, as it ends up calling the exact same methods as Minecraft's default implementation.
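For reference, an "atomic write" in this context generally means writing to a temporary file and then moving it over the target, so a crash can't leave a half-written file behind. A minimal sketch of that pattern in plain java.nio (my assumption about the approach, not NeoForge's actual patch):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public final class AtomicWriteSketch {
    /**
     * Writes the bytes to a sibling temp file, then atomically moves it
     * over the target so readers never observe a partially written file.
     */
    public static void writeAtomically(Path target, byte[] data) throws IOException {
        Path temp = target.resolveSibling(target.getFileName() + ".tmp");
        Files.write(temp, data);
        // ATOMIC_MOVE can throw AtomicMoveNotSupportedException on some
        // filesystems; a real implementation would need a fallback.
        Files.move(temp, target,
                StandardCopyOption.REPLACE_EXISTING,
                StandardCopyOption.ATOMIC_MOVE);
    }
}
```

Either way, it's the same number of filesystem operations (or slightly more), which is why I wouldn't expect it to help with an I/O-bound save.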

As you can see from this image, 99% of the tick time is spent saving, but only 63% of that is attributable to Lootr; the rest appears to be vanilla Minecraft's SavedData.

It's worth noting that similar issues have been reported since 1.20.1 on Paper and Spigot servers: Watchdog crash on a Spigot server from March. (I had a second report, but I realize now it's a duplicate of the first one.)

The slowdown is demonstrably coming from the I/O operations themselves rather than the serialization, meaning I don't think there's much I can do about this from Lootr's end, especially without knowing more about the I/O characteristics of this person's server.

EDIT: I've commented on the linked issue. I'd like to find out exactly how much total Lootr data there is versus the overall size of the data folder, as, in theory, each file should be kilobytes or less.
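If anyone wants to measure this on their own world, here's a quick standalone sketch that sums Lootr's share of the data folder. It assumes the world's data directory is at world/data and that Lootr's files are identifiable by "lootr" in their paths; both are assumptions that may not match your setup or Lootr version:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.concurrent.atomic.AtomicLong;

public final class DataFolderStats {
    public static void main(String[] args) throws IOException {
        // Hypothetical default location; pass your actual data folder as an argument.
        Path dataDir = Path.of(args.length > 0 ? args[0] : "world/data");

        AtomicLong totalFiles = new AtomicLong(), totalBytes = new AtomicLong();
        AtomicLong lootrFiles = new AtomicLong(), lootrBytes = new AtomicLong();

        try (var paths = Files.walk(dataDir)) {
            paths.filter(Files::isRegularFile).forEach(p -> {
                long size;
                try {
                    size = Files.size(p);
                } catch (IOException e) {
                    return; // skip files we can't stat
                }
                totalFiles.incrementAndGet();
                totalBytes.addAndGet(size);
                // Assumption: Lootr data is recognizable by name; adjust as needed.
                if (p.toString().toLowerCase().contains("lootr")) {
                    lootrFiles.incrementAndGet();
                    lootrBytes.addAndGet(size);
                }
            });
        }
        System.out.printf("Lootr: %d files (%d KiB) of %d files (%d KiB) total%n",
                lootrFiles.get(), lootrBytes.get() / 1024,
                totalFiles.get(), totalBytes.get() / 1024);
    }
}
```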

commented

I can send you a backup of the world save for testing if you want, @noobanidus

commented

That would be appreciated!

commented

I wanted to chip in with my experience; I've been having the same problem.
Here's a spark report filtered for only ticks over 2500ms: https://spark.lucko.me/32IQ4buqSI

This server uses managed hosting (Bluehost), and I can see that most of the Lootr data files are only a few hundred bytes each.

I acknowledge this is a general issue with saving, but in the meantime, is there any way to reduce the amount of Lootr saved data? For example, if I were to enable loot refresh, would that help the problem or make it worse?

commented

> I acknowledge this is a general issue with saving, but in the meantime, is there any way to reduce the amount of Lootr saved data? For example, if I were to enable loot refresh, would that help the problem or make it worse?

Loot refresh wouldn't change anything in this instance, as it doesn't change the number of data files.

How often is your server being restarted?

There may be a large number of Lootr data files, but they should only be loaded from disk (and thus become eligible to be saved back to disk) when the relevant chest is opened and its contents modified. Depending on what version you're running, I can possibly offer a version of Lootr that unloads saved data, which should reduce the number of files it's trying to save.
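For anyone curious what "unloading" would mean here: vanilla's DimensionDataStorage keeps every SavedData it has ever loaded cached in memory and, as far as I can tell, never evicts it, so everything that was ever touched stays a candidate for the save pass. A rough sketch of the eviction idea, with entirely hypothetical names (this is not Lootr's actual code):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-in for a SavedData entry with a dirty flag,
// mirroring the vanilla pattern.
final class DataEntry {
    boolean dirty;
    void save() { /* serialize to disk here */ dirty = false; }
}

final class UnloadingDataCache {
    private final Map<String, DataEntry> cache = new HashMap<>();

    /** Saves dirty entries, then drops everything that is clean, so the
     *  next save pass only touches data that has been loaded again. */
    void saveAndUnload() {
        for (DataEntry entry : cache.values()) {
            if (entry.dirty) {
                entry.save();
            }
        }
        // Evicted entries are reloaded from disk on demand the next
        // time the relevant chest is opened.
        cache.values().removeIf(entry -> !entry.dirty);
    }
}
```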

That said, even a large number of loaded chests shouldn't result in them all being saved, as that requires the data to be marked as dirty, which generally only happens when:

  • A chest is opened for the first time
  • A player manually marks a chest as unopened
  • A container is refreshed

It is possible that something weird is causing the files to be marked dirty without any actual changes being made.
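For context, an instance of SavedData is only rewritten during the save pass if its dirty flag is set, so the usual defense against spurious saves is to guard setDirty() behind a real state change. A minimal sketch of that pattern with hypothetical container data (not Lootr's actual code):

```java
// Hypothetical example; vanilla's SavedData exposes setDirty()/isDirty(),
// and only dirty instances get written back to disk.
final class ChestDataSketch {
    private boolean opened;
    private boolean dirty;

    void markOpened() {
        // Guard: only flag for saving when the state actually changes;
        // otherwise every interaction re-queues the file for disk I/O.
        if (!opened) {
            opened = true;
            setDirty();
        }
    }

    private void setDirty() { dirty = true; }
    boolean isDirty() { return dirty; }
}
```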

You could try changing the start_refresh_while_ticking and perform_refresh_while_ticking configuration values to false to see if that makes any difference. I'll do some testing tomorrow to see if there are any unnecessary marks.