Dynmap-Forge/Fabric


Database increases in size constantly?

VL4DST3R opened this issue · 3 comments

commented

Issue Description: I'm not sure if this is an actual issue, but I wanted to double-check anyway. I've been using Dynmap for multiple Multiverse dimensions on my server for many years, and I've noticed my database slowly increasing in size, even after erasing chunks from some maps, running /dynmap purgemap, and letting the map regenerate. Is this known/working as intended, or is there a way to trim obsolete or unused map tiles if they do get left behind? I expected Dynmap to simply replace outdated tiles and free the space used by deleted chunks' tiles.

Should I use another type of database instead? (See the configuration.txt pastebin below.)

  • Dynmap Version: core=3.1-beta5-431, plugin=3.1-beta5-431
  • Server Version: paper-1.16.4-339
  • Pastebin of Configuration.txt: https://pastebin.com/HAnwwzM3
  • Server Host (if applicable): Self-hosted.
  • Pastebin of crashlogs or other relevant logs: n/a
  • Other Relevant Data/Screenshots: (screenshot attached)
  • Steps to Replicate: Keep regenerating the map over a long playtime.

[✔] I have looked at all other issues and this is not a duplicate
[✔] I have been able to replicate this

commented

Not clear on your reference to 'database': your configuration.txt appears to be using the file system. If you're talking about the space used by the maps in the file system: purgemap (for a single map) or purge (for a world) is how you clean up a map or world whose data you no longer want, but rerunning the render will likely regenerate it.

Since MC chunks never 'go away' (they change, or new ones are added), the regenerated map will be no smaller than the previously rendered map, and chunks never become 'obsolete'. Now, if you are using something like WorldBorder to trim and discard chunks, then rerunning a fullrender (or running a render on the discarded area) will discard the empty tiles (or replace them with very small blank ones), but otherwise nothing will happen, since discarding MCA files or trimming chunks is not 'normal' behavior, and there is literally no way for Dynmap to know about it.

commented

Crap, yeah, I noticed that after posting. I updated the plugin a few weeks ago, and it apparently reset my configs to defaults; I only caught this after posting here. Assume my config was like this:

# Map storage scheme: only uncomment one 'type' value
#  filetree: classic and default scheme: tree of files, with all map data under the directory indicated by 'tilespath' setting
#  sqlite: single SQLite database file (this can get VERY BIG), located at 'dbfile' setting (default is file dynmap.db in data directory)
#  mysql: MySQL database, at hostname:port in database, accessed via userid with password
storage:
  # Filetree storage (standard tree of image files for maps)
  # type: filetree
  # SQLite db for map storage (uses dbfile as storage location)
  type: sqlite
  #dbfile: dynmap.db
  # MySQL DB for map storage (at 'hostname':'port' with flags "flags" in database 'database' using user 'userid' password 'password' and table prefix 'prefix')
  #type: mysql
  #hostname: localhost
  #port: 3306
  #database: dynmap
  #userid: dynmap
  #password: dynmap
  #prefix: ""
  #flags: "?allowReconnect=true"

Essentially, I had changed the map render storage from filetree to an SQLite database.

but rerunning the render will likely replace it. Since MC chunks never 'go away'

Exactly, and I expected the size to remain about the same, assuming no new chunks are generated. Even so, I noticed the database growing at quite a noticeable rate, far beyond what I would expect for the maps I had active.

After posting the question, I went ahead and (after configuring the plugin again) wiped the old database and regenerated it in its entirety. Granted, I also switched from jpeg to the webp image format to further save space, and after about 36 hours of rendering everything, the new database sits at about 17GB. This is presumably for the exact same set of chunks, whereas before it was almost 42GB. Even accounting for webp's better compression, this size difference is insane.

So I guess my original question still stands: is there some form of leak happening here?
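For what it's worth, part of the gap may be SQLite's storage model rather than a leak: deleting rows (e.g. after a purgemap) marks pages as free for reuse but never shrinks the file on disk; only a VACUUM rebuilds the database and returns the space to the OS. A minimal sketch of this behavior (a toy table, not Dynmap's actual schema):

```python
import os
import sqlite3
import tempfile

# Toy database: SQLite files do not shrink when rows are deleted --
# freed pages are kept on an internal freelist for reuse. VACUUM
# rebuilds the file and returns the space to the OS.
path = os.path.join(tempfile.mkdtemp(), "tiles.db")
con = sqlite3.connect(path)
con.execute("CREATE TABLE tiles (id INTEGER PRIMARY KEY, image BLOB)")
con.executemany(
    "INSERT INTO tiles (image) VALUES (?)",
    [(b"\x00" * 4096,) for _ in range(2000)],  # ~8 MB of fake tile data
)
con.commit()
size_full = os.path.getsize(path)

con.execute("DELETE FROM tiles")  # simulate purging a map
con.commit()
size_after_delete = os.path.getsize(path)  # file size is unchanged

con.execute("VACUUM")  # rebuild the file, dropping the free pages
size_after_vacuum = os.path.getsize(path)
con.close()

print(size_full, size_after_delete, size_after_vacuum)
```

If the storage really is SQLite, running VACUUM on dynmap.db (with the server stopped, since VACUUM rewrites the whole file) would show whether the extra gigabytes are just accumulated free pages.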

commented

Please check your mysqld config for these binlog settings:

  log_bin = /var/log/mysql/mysql-bin.log
  binlog_expire_logs_seconds = 259200
  max_binlog_size = 100M
Binlogs take a lot of space on my server: about 150G of binlogs for a 20G database.
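For anyone on MySQL storage hitting the same thing, the existing binlogs can be inspected and purged from the client without a restart (standard MySQL 8 statements; the three-day retention is just an example matching the config above):

```sql
SHOW BINARY LOGS;                                  -- list binlog files and their sizes
PURGE BINARY LOGS BEFORE NOW() - INTERVAL 3 DAY;   -- drop binlogs older than 3 days now
SET PERSIST binlog_expire_logs_seconds = 259200;   -- auto-expire future binlogs after 3 days
```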