Dynmap over proxy_pass on a remote VPS
sidboy55555 opened this issue · 9 comments
Issue Description: When using Dynmap behind an nginx proxy_pass, after a while I get error code 524 (a timeout). Previously I worked around this by increasing the max-sessions setting in Dynmap, but raising it every time isn't a viable fix. Any idea how to fix this?
- Dynmap Version: core=3.6-beta-2-894, plugin=3.6-beta-2-894
- Server Version: Paper version git-Paper-550 (MC: 1.19.4)
- Pastebin of Configuration.txt: https://pastebin.com/acWKnzZw
- Server Host (if applicable): Self hosted
- **Nginx virtualhost:** https://pastebin.com/XJ4SYDkn
- Nginx logs: https://pastebin.com/BJVpZzH0
- How to replicate: use the same config (but set max-sessions back to 100 or so), put nginx on a remote VPS, let it proxy_pass to Dynmap, and have 30 or so people use Dynmap at the same time.
After adding `proxy_read_timeout 600s;` it works again, but Dynmap is extremely slow.
Try adding `proxy_http_version 1.1;` in the location block (a five-minute Google search of your error).
This didn't fix anything; the errors are still occurring and it still ends in a 524 timeout.
I had done Google searches before even thinking about opening this issue.
```nginx
server {
    listen 0.0.0.0:443 ssl;
    listen [::]:443 ssl;

    server_tokens off;
    server_name _;

    add_header Strict-Transport-Security max-age=31536000;
    # add_header X-Frame-Options SAMEORIGIN;
    add_header X-Content-Type-Options nosniff;

    include /etc/nginx/snippets/snakeoil.conf;

    location / {
        proxy_pass http://192.168.178.40:8123;
        proxy_http_version 1.1;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host $host;
    }
}
```
This is the configuration I have used multiple times, and it has always worked fine.
Now I get an error code 504... at least the 524 is gone!
EDIT: And now there is also an `upstream timed out (110: Connection timed out) while reading response header from upstream` error in the logs.
Maybe your VPS's storage backend is really slow? I don't really know, as I've never faced this issue.
@sidboy55555 This is not specifically a Dynmap issue. One user can open ~6 connections to your nginx server (the exact limit depends on the browser), which results in the same number of connections to your Dynmap setup.
Looking at worst-case numbers (Edge, with an 8-connection limit), all you need is ~13 users to clog up Dynmap with a 100-connection limit, excluding all other factors.
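The arithmetic behind that worst case can be sketched quickly (the per-browser limit of 8 is the worst-case figure mentioned above, not anything Dynmap itself defines):

```python
import math

# Hypothetical numbers from the discussion above
max_sessions = 100        # Dynmap's max-sessions limit
conns_per_browser = 8     # assumed worst-case per-browser connection limit (e.g. Edge)

# Smallest number of users whose combined connections exceed the limit
users_to_saturate = math.floor(max_sessions / conns_per_browser) + 1
print(users_to_saturate)  # 13
```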
But connection limit is not the only issue you could be facing.
Underlying storage has a massive impact on performance and perceived speed or sluggishness.
Spinning rust is extremely unfavorable in this situation: each tile image is very small, but there are many of them for a single page load, and even more when navigating the map. HDDs typically top out at around 200 IOPS for 4k random reads, which is enough for at most a single user session to feel reasonably fast.
The situation is a lot better with SSDs and NVMe drives, as they have practically no seek time, which greatly speeds things up.
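As a rough back-of-the-envelope illustration of why an HDD struggles here (the tile count per page load is an assumption for the sake of the example, not a measured Dynmap figure):

```python
# Assume every tile is a cold 4k random read from disk
hdd_iops = 200        # typical HDD random-read IOPS at 4k
tiles_per_load = 100  # assumed number of tiles fetched for one map view

seconds_per_load = tiles_per_load / hdd_iops
print(f"{seconds_per_load:.1f}s of disk time per page load")   # 0.5s

# With 30 concurrent users all missing cache, requests queue up:
print(f"{30 * seconds_per_load:.0f}s of queued disk time")     # 15s
```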
Things I could recommend:
- The config suggests you are using MySQL; ensure it has enough memory allocated. innodb_buffer_pool_size should be larger than the total data stored in MySQL. Additionally, you can run major/MySQLTuner-perl for configuration recommendations based on your current instance size.
- Check storage device utilization; on Linux: `iostat -x --human 1 [device]`, e.g. `iostat -x --human 1 nvme1n1`
- Check the difference in response times by sending a request directly to Dynmap's port and then through nginx.
- Ensure you have enough memory available, and disable swapping if it is enabled; note that without enough memory and with swap disabled, you can encounter hard system freezes or random processes being killed.
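For the response-time comparison, a minimal sketch of timing a single request looks like this; in practice you would point it at your Dynmap port and at your nginx URL and compare the two numbers (the throwaway local server here is only a stand-in so the snippet is self-contained):

```python
import http.server
import threading
import time
import urllib.request

# Stand-in server; replace `url` with e.g. http://192.168.178.40:8123/
# (direct) and your nginx virtualhost URL (proxied) to compare.
server = http.server.HTTPServer(
    ("127.0.0.1", 0), http.server.SimpleHTTPRequestHandler
)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_port}/"

start = time.perf_counter()
with urllib.request.urlopen(url) as resp:
    status = resp.status
    resp.read()
elapsed = time.perf_counter() - start

print(f"{url} answered {status} in {elapsed * 1000:.1f} ms")
server.shutdown()
```

The same measurement can be done from the shell with `curl -o /dev/null -s -w '%{time_total}\n' <url>`.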
I'm able to pull ~2700 random map images per second on my setup (using ddosify with random coordinate requests and cache busting for Cloudflare) before hitting the connection limit and performance brick wall.
Network: Cloudflare -> Cloudflare tunnel -> Traefik -> Docker network -> Dynmap port
Minecraft/Dynmap:
- Paper - git-Paper-170 (Xmx265G Xms64G)
- MC 1.20.1
- Dynmap storage: MySQL (~170GB data stored, 32GB memory used)
- Dynmap max-sessions: 30
- Generic SATA 3Gb/s SSDs through HBA
Ddosify 60-second test with increasing load from 0 to 3260 requests/s:
At the end of the test, it starts failing with 502s and later connection errors.
Edit:
It might also be worth trying to force nginx to close proxy connections as soon as possible, either by setting `keepalive_requests 0;` in your nginx configuration, or by passing `proxy_set_header Connection close;` in combination with `proxy_http_version 1.0;`.
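Applied to the virtualhost posted earlier, that suggestion might look like the following (an untested sketch; keep your existing ssl, header, and include directives):

```nginx
server {
    listen 0.0.0.0:443 ssl;
    server_name _;

    # Close client keep-alive connections after every request
    keepalive_requests 0;

    location / {
        proxy_pass http://192.168.178.40:8123;
        # Force HTTP/1.0 toward the upstream and ask it to close
        # the connection after each response
        proxy_http_version 1.0;
        proxy_set_header Connection close;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host $host;
    }
}
```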
I'm sorry I forgot to reply; Hexide's solution did work.