Compress all tile data and uncompress it so you can transfer the files
tristanzwart opened this issue · 11 comments
Feature Description: I want a command to compress all tile data so you can easily transfer or download it via FTP/SFTP etc., plus the option to uncompress it afterwards.
- Additional context: Just add a command like /dynmap compress and /dynmap decompress, followed by the file type - zip, for instance.
Can't you do that yourself prior to moving servers? That sounds more feasible, because the user can check the total usable disk space and see whether a zip can be created.
No, because it is on a hosting site and I can't compress it via FTP, and I also don't have direct access to the files. Or is there a different plugin to do this with?
Using (S)FTP you should be able to download, compress, decompress and upload when switching servers, or create a script that can do so. I don't know the dev's view on this, though. @mikeprimm, what do you think of this?
Well, I use WinSCP and tried it, but I couldn't get it to work, and otherwise it takes very long for all of those small files because I can't compress them before downloading.
This feature request is well outside the scope of what Dynmap is meant to do. If you don't want to deal with individual image tiles, use the SQLite storage instead.
Compression on image files is largely pointless - they are already compressed, so you'll only gain a few percent, at best. The main problem is transferring lots of separate files over a network - most transfer tools kinda suck at that, as the latency of the round-trip communication for each file, plus writing and closing each file before sending the next one, makes the performance terrible. The best tool for this process is 'rsync' (which can generally be used on anything that has SSH), as it is specifically designed for bulk transfers of files (small and large), and supports restarting and incremental updates. It can also compress files in flight (that is, compress while reading, send the compressed bytes, and uncompress while writing), but as I said, with already-compressed image files, this is likely pointless.
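For reference, a minimal rsync invocation along those lines might look like the following (the paths and host are placeholders; rsync uses SSH for the remote side by default, -a preserves the directory tree, and the optional -z in-flight compression is probably not worth enabling for already-compressed tiles):
rsync -av /my/path/to/dynmap/tiles/ user@newhost:/path/to/dynmap/tiles/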
Thanks. The main reason for the compression was that it becomes one file, so it is faster to transfer. You mentioned rsync; I will look into that. Thanks for the help.
Right - if you have the space AND a stable network connection, building the megablob and sending that will work. The rsync solution avoids either of those needing to be the case - logically, it kind of 'streams a tarball' to the remote end on the fly (including the option to compress in flight), so it's kind of the best of both worlds (no temporary double-space requirement on both ends). If the network between the two isn't rock solid, the blob approach can kinda suck if you get 70% of the way through the uber-transfer and it craps out - which is another nice thing about rsync :)
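As an aside, the literal 'stream a tarball on the fly' idea can also be done with a plain tar pipe over SSH when rsync isn't available - a sketch with placeholder paths and host, where nothing is written to a temporary archive on either side:
tar -cf - -C /my/path/to/dynmap files | ssh user@newhost "tar -xf - -C /destination/path"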
If you want it to be one file (it seems, based on your comments, that you have some form of shell access), tar up the directory. No need to gzip it:
tar -cf /some/storage/volume/with/free/space/dynmap.tar /my/path/to/dynmap/files
If it's all tar'd up, it'll be a single file, removing the massive overhead of transferring many little files.
And, as mikeprimm said above, compression won't help much here, so there's no need to have the tar command gzip things too.
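On the destination server, the archive can then be unpacked with tar as well. A sketch, assuming GNU tar (which strips the leading '/' when creating the archive, so extracting with -C / restores the original /my/path/to/dynmap/files layout; adjust the paths to your setup):
tar -xf /some/storage/volume/with/free/space/dynmap.tar -C /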
Since I know this topic is out of the scope of Dynmap, I don't want to add any more noise than needed here.
But I don't believe the "streaming a tarball" part is correct. From the documentation I can find, and from what I know, rsync processes each file individually (based on the checksum or timestamp+size parameters you pass in) - specifically, https://serverfault.com/a/18142/215461
You do have a point about the space requirement and a single blob being interrupted, though.
One thing you could do is use the -P flag when running rsync, so that if it does get interrupted, the command can be run again and it will resume where it left off. But that doesn't address the space needed on the origin side of this. https://linux.die.net/man/1/rsync
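For example, something along these lines (host and paths are placeholders; -P is shorthand for --partial --progress, so interrupted transfers keep their partial files and can be resumed on the next run):
rsync -aP /my/path/to/dynmap/files/ user@newhost:/path/to/dynmap/files/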
Hopefully this all works out in the end for @tristanzwart though!
I'm simplifying how it works - I know very well how rsync functions. The point is that it avoids the serialized stack-up of file system latencies for opening and reading, added to the network latencies of starting and acknowledging transferred data, further added to the remote file system latencies of opening, writing and closing the files. This stack-up is why classic network copies using local-file-system-oriented tools suck so badly....