CC: Tweaked

Limiting HTTP

SquidDev opened this issue · 9 comments

commented

ComputerCraft added a very rudimentary limiting system to the filesystem (a fixed number of handles per computer), but no similar system was added to the HTTP API. It would be nice to add some more fine-grained control to the HTTP API, to prevent abuse by malicious or badly implemented programs.

One nice thing about HTTP is that, as it is asynchronous, we can delay requests instead of outright rejecting them. I suspect this will be required as many installers/updaters will download multiple files in parallel.

Configuration options could be server-wide and per-computer. While the latter is more important (normally it will be a few computers playing up), it may be useful to enforce some global constraints too. If so, we will have to establish some level of scheduling, prioritising requests from computers which don't access the network very often, etc.
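
To illustrate the "delay instead of reject" idea, here is a minimal sketch of a per-computer queue, assuming a hypothetical ComputerRequestQueue class (not CC: Tweaked's actual code): requests beyond the concurrency cap are held back and started once an earlier one completes.

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Hypothetical per-computer limiter: requests beyond the cap are queued
// rather than rejected, and started later when an earlier request finishes.
public class ComputerRequestQueue {
    private final int maxConcurrent;
    private final Queue<Runnable> pending = new ArrayDeque<>();
    private int active = 0;

    public ComputerRequestQueue(int maxConcurrent) {
        this.maxConcurrent = maxConcurrent;
    }

    // Called when a computer issues a request: either start it now or queue it.
    public synchronized void submit(Runnable startRequest) {
        if (active < maxConcurrent) {
            active++;
            startRequest.run();
        } else {
            pending.add(startRequest);
        }
    }

    // Called when a request completes or fails: start the next queued request, if any.
    public synchronized void complete() {
        Runnable next = pending.poll();
        if (next != null) {
            next.run();
        } else {
            active--;
        }
    }
}
```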

Request limits

These limits are designed to control individual requests. They are not going to prevent abuse of the HTTP API, but should stop people doing insanely stupid things with it.

HTTP Requests

  • Max requests: The maximum number of requests a computer can be making at one time. Additional requests can be pushed to a queue.
  • Max download size: The maximum size an incoming response can be before the connection is terminated (see the sketch after this list).
  • Max upload size: The maximum size an outgoing request can be.
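
For the download limit specifically, enforcement amounts to counting bytes as they arrive and terminating the connection once the cap is exceeded. A rough sketch using plain java.io (the real implementation would sit in the mod's HTTP stack; the class and method names here are made up):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

// Sketch of enforcing a maximum download size: read the response body in
// chunks and abort as soon as the configured limit is exceeded.
public class SizeLimitedDownload {
    public static byte[] readAtMost(InputStream body, long maxBytes) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buffer = new byte[8192];
        long total = 0;
        int read;
        while ((read = body.read(buffer)) != -1) {
            total += read;
            if (total > maxBytes) {
                // Terminate the download rather than buffering an unbounded response.
                throw new IOException("Response exceeds maximum download size (" + maxBytes + " bytes)");
            }
            out.write(buffer, 0, read);
        }
        return out.toByteArray();
    }
}
```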

Websockets

  • Max websockets: The maximum number of open websockets a computer can have. Unlike HTTP requests, it is probably safe to error if this limit is reached (a small sketch follows this list).
  • Max packet size: The maximum size a websocket message can be (incoming and outgoing).
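
To show the contrast with HTTP requests, a hypothetical counter for websockets could simply refuse to open a new connection once the cap is hit, rather than queueing it:

```java
// Hypothetical check for the websocket cap: hitting the limit errors
// immediately instead of queueing the connection attempt.
public class WebsocketLimiter {
    private final int maxWebsockets;
    private int open = 0;

    public WebsocketLimiter(int maxWebsockets) {
        this.maxWebsockets = maxWebsockets;
    }

    public synchronized void onOpen() {
        if (open >= maxWebsockets) {
            throw new IllegalStateException("Too many open websockets (limit " + maxWebsockets + ")");
        }
        open++;
    }

    public synchronized void onClose() {
        open--;
    }
}
```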

It's also worth noting that many of these options could be configured on a per-domain/per-IP-range level. For instance, you could allow downloading larger files from GitHub, or limit the upload size to Pastebin.
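
One possible shape for such per-domain overrides is an ordered rule list where the first matching host wins and a catch-all supplies the global defaults. Everything below (field names, matching logic, fallback values) is illustrative rather than a concrete proposal:

```java
import java.util.List;

// Sketch of per-host overrides: rules are checked in order and the first
// matching host pattern supplies the limits; unmatched hosts get the defaults.
public class HostRules {
    public record Rule(String hostSuffix, long maxDownload, long maxUpload) {}

    private final List<Rule> rules;

    public HostRules(List<Rule> rules) {
        this.rules = rules;
    }

    public Rule limitsFor(String host) {
        for (Rule rule : rules) {
            if (host.equals(rule.hostSuffix()) || host.endsWith("." + rule.hostSuffix())) {
                return rule;
            }
        }
        // Fall back to the global defaults when no rule matches.
        return new Rule("*", 16L << 20, 4L << 20);
    }
}
```

For instance, a ("github.com", 64 MiB, 4 MiB) rule followed by a ("pastebin.com", 16 MiB, 512 KiB) rule would allow larger downloads from GitHub while capping uploads to Pastebin; the numbers are only examples.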

Rate limiting

While the above has some utility, it doesn't prevent people from making lots of requests in a short period of time. We should also investigate ways of limiting the amount of bandwidth a computer may consume.
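
The simplest version of this would be a per-computer byte budget over a fixed time window, with anything over the budget delayed (or failed) until the window rolls over. A sketch of the idea, not a concrete design:

```java
// Sketch of a fixed-window bandwidth budget: each computer may transfer at
// most bytesPerWindow bytes per window; further transfers wait (or fail)
// until the window rolls over.
public class BandwidthBudget {
    private final long bytesPerWindow;
    private final long windowMillis;
    private long windowStart = System.currentTimeMillis();
    private long used = 0;

    public BandwidthBudget(long bytesPerWindow, long windowMillis) {
        this.bytesPerWindow = bytesPerWindow;
        this.windowMillis = windowMillis;
    }

    // Returns true if the transfer may proceed now, false if it should be delayed.
    public synchronized boolean tryConsume(long bytes) {
        long now = System.currentTimeMillis();
        if (now - windowStart >= windowMillis) {
            windowStart = now;
            used = 0;
        }
        if (used + bytes > bytesPerWindow) return false;
        used += bytes;
        return true;
    }
}
```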


If people have ideas, suggestions or if there's anything else we could look into limiting, do comment!

commented

An obvious first question:
What would each of those limits be by default, and would the limits be enabled at all by default?

commented

I will probably enable them all by default, but set them sufficiently large that it really shouldn't make a difference. For example, the maximumFilesOpen config option is currently set to 128, which prevents insanely broken code but isn't going to stop most programs from running.

I'd assume something like this is going to be "good enough":

  • Max requests: 16. Honestly, this could be much lower and it won't break anything.
  • Max download: 16MiB. This is absurdly large, but there you go.
  • Max upload: 4MiB
  • Max websockets: 4. This is what CCTweaks used and I'm fine with it.
  • Max packet size: 64KiB. This is the current limit as imposed by Netty, though we could go much lower.

It's worth noting that the rate limiting options could be as high or low as we like, as they won't actually stop any programs from running, just degrade their performance a little. If people have any changes to the above list, I'd like to hear them!

We're also going to have to have a think about how the configuration of this will look: Forge's default config format isn't well suited to this sort of stuff, but I'm a little reluctant to move to a different format as I'm still a little hung up on keeping compatibility with CC. A separate file would be an option, though that has all sorts of other issues. Thoughts are welcome!
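
For reference, here are the proposed defaults from the list above gathered in one place, written out as plain constants because the eventual config format is exactly what's still undecided:

```java
// The proposed default limits, as values only; this is not a config schema.
public final class HttpLimits {
    public static final int MAX_REQUESTS = 16;
    public static final long MAX_DOWNLOAD = 16L * 1024 * 1024; // 16 MiB
    public static final long MAX_UPLOAD = 4L * 1024 * 1024;    // 4 MiB
    public static final int MAX_WEBSOCKETS = 4;
    public static final int MAX_PACKET_SIZE = 64 * 1024;       // 64 KiB

    private HttpLimits() {}
}
```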

commented

I think for the basic options, keep the same config format, but allow specifying an additional config file to load for the more advanced options.

Would that be too difficult to implement?

commented

We need to limit the request rate and bandwidth usage, at least globally. When a global limit is reached, I would advise prioritizing the 'good' computers. For example, the computer making the most requests/using the most bandwidth throws a Lua exception, and a console log reports it. That way the player and server owners are aware of the issue.
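
A sketch of how that suggestion might look (purely illustrative; the bookkeeping map and where the Lua error is raised are assumptions): when a global limit is exceeded, find the heaviest consumer, fail its request so it surfaces as a Lua error on that computer, and log which computer it was.

```java
import java.util.Map;

// Illustrative helper: given per-computer usage counts, pick the computer
// consuming the most so its request can be failed and the offender logged.
public class GlobalLimitPolicy {
    // Returns the ID of the heaviest consumer, or -1 if the map is empty.
    public static int worstOffender(Map<Integer, Long> bytesByComputer) {
        int worst = -1;
        long most = -1;
        for (Map.Entry<Integer, Long> entry : bytesByComputer.entrySet()) {
            if (entry.getValue() > most) {
                most = entry.getValue();
                worst = entry.getKey();
            }
        }
        return worst;
    }
}
```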

commented

I think it's better to rate limit rather than size limit, at least for websockets but also for HTTP, since large files can just be spread across multiple requests without too much effort. Presumably what you're trying to limit is the maximum number of concurrent connections and the throughput/rate; from those you can work out the maximum bandwidth a user can occupy per day, for example.

Edit: I realize I very much repeated the original post, and that's because I skimmed it.

The download limits imposed here look good. But I'm of the opinion that rate limiting is far more useful in controlling how much bandwidth is used. I would also suggest the option to allow bursting, for example 20 Mbps for up to 16 seconds, and then 500 Kbps thereafter. This burst bandwidth should perhaps be applied on a per-computer basis, but also, if we can have owners of computers, a per-owner basis.
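
A burst allowance like that maps naturally onto a token bucket whose capacity is the burst rate times the burst duration and which refills at the sustained rate. The sketch below uses the example numbers from this comment, not agreed defaults:

```java
// Sketch of the burst idea as a token bucket: the bucket holds enough tokens
// for the full burst and refills at the sustained rate, so a computer can
// burst briefly and is then throttled back down.
public class BurstTokenBucket {
    private final double capacityBytes;      // burst rate * burst duration
    private final double refillBytesPerSec;  // sustained rate
    private double tokens;
    private long lastRefill = System.nanoTime();

    public BurstTokenBucket(double burstBytesPerSec, double burstSeconds, double sustainedBytesPerSec) {
        this.capacityBytes = burstBytesPerSec * burstSeconds;
        this.refillBytesPerSec = sustainedBytesPerSec;
        this.tokens = capacityBytes;
    }

    // Returns true if `bytes` may be transferred now; otherwise the caller should wait.
    public synchronized boolean tryConsume(long bytes) {
        long now = System.nanoTime();
        tokens = Math.min(capacityBytes, tokens + (now - lastRefill) / 1e9 * refillBytesPerSec);
        lastRefill = now;
        if (tokens < bytes) return false;
        tokens -= bytes;
        return true;
    }
}
```

With the figures above that would be roughly new BurstTokenBucket(2_500_000, 16, 62_500), i.e. 20 Mbps of burst for 16 seconds and 500 Kbps sustained.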

commented


Some people have pointed out that you can pump some serious numbers using HTTP and a decent host. This would be good to fix in the next release.

commented

Stopping this from being abused entirely is basically impossible. Don't forget about the older versions of ComputerCraft on 1.7 etc. that have no updates anymore, regardless of what the limits are. Even if the limits seem a bit high, limiting it down to under 100 Mbps/50 Mbps per server is 10x/20x better than having no limit and being able to push 1 Gbps out of each server hosting provider, or many providers :D

commented

Don't forget about the older versions of ComputerCraft on 1.7

Well sure, but that's an issue with every bug. I can only make sure the latest release is as bug-free as possible :p.

commented

Stopping this from being abused entirely is basically impossible. Don't forget about the older versions of ComputerCraft on 1.7 etc. that have no updates anymore, regardless of what the limits are. Even if the limits seem a bit high, limiting it down to under 100 Mbps/50 Mbps per server is 10x/20x better than having no limit and being able to push 1 Gbps out of each server hosting provider, or many providers :D

That issue with out-of-development builds of CC would have to be fixed in other server-side ways, i.e. plugins, mods or network filtering.