LuckPerms

Inconsistent application of server/world contexts when using velocity/bungeecord proxies

GrahamJenkins opened this issue · 2 comments

commented

Description

This has come up both on Discord and in other issues (#3038, #3410, #2336), and the general consensus seems to be that the behaviour could be better, but people rely on the grandfathered functionality too heavily to justify a breaking change.

Issue: when running a BungeeCord/Velocity proxy (Velocity tested in my case), LuckPerms Velocity (LPV) interprets the server context as the proxy the user is connected to, and the world context as the backend server the user is connected to (through the proxy).
In comparison, when LuckPerms (LP) runs on a backend server (which has no knowledge of the proxy), it interprets the server context as its server name, as defined in the config file, and the world context as the dimension name.
(Let's disregard the dimension-type context for now.)
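One way to observe the mismatch directly is LuckPerms' verbose viewer, which records the contexts of each permission check. A minimal sketch (buildserver is this report's example name, not a default):

```
# On the backend server: record some checks, then upload them to the viewer
/lp verbose on
# ... trigger any permission check in game ...
/lp verbose paste
# -> checks appear with contexts like: server=buildserver world=minecraft:overworld

# The same procedure on the proxy shows contexts like:
#    server=<LPV's configured server name> world=buildserver
```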

Additionally, the documentation (https://luckperms.net/wiki/Context) specifically describes the server and world contexts with examples that reflect the backend (LP) implementation.

The proxy's interpretation of the server/world contexts is therefore ambiguous, inconsistent with the documentation, and extremely confusing for many users.

(Maintainers: pardon this semi-rant and skim down to the Proposal section below.)
Why this matters / how it affects the end user (expanded for the sake of people stumbling onto this bug)
Administrators of multi-server environments create standard player/donor/staff tracks, with prefixes and permissions attached to each group. Additionally, users of proxy software (LPV) often run chat/tablist/scoreboard/... plugins on the proxy that apply settings based on group membership. When groups are created without a context, everything works fine: user A in group B receives the appropriate group B permissions on both the backend and the proxy.

But what happens if you want to add a player to a group only on a specific server? I'll use my own example: a builders group (creative permissions and a custom prefix) that has special permissions on a building server. What's the first guess? Per the docs, the user's permissions would look like:
user: ABC

  • group: default
  • group: builders (context: [server:buildserver])
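
For concreteness, a sketch of the commands that would produce that state (ABC and buildserver are this report's example names):

```
# create the group, then grant it to the user only in the buildserver server context
/lp creategroup builders
/lp user ABC parent add builders server=buildserver
```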

This is logical. But the prefix is wrong! Going back to the issue above: LPV (and the chat/tab-list plugins running on the proxy) reads server=proxyname and discards this group, while the backend server (LP) reads its context as server:buildserver and properly applies the server-specific permissions.

Workaround (for admins running into this bug)
The simplest solution here is to create a "proxy group" as follows:
group: buildproxy

  • group: builders (context: [server:buildserver])
  • group: builders (context: [world:buildserver])

What this does: the first entry (server:) matches on the backend server, so permissions are applied properly, and the second entry (world:) applies the same group for any proxy plugins. This isn't the most elegant workaround, but it does work.
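
As a sketch, the workaround translates to commands like these (assuming the proxy's registered name for the backend is also buildserver; see the comment below about the two values potentially differing):

```
# parent group that players are actually added to
/lp creategroup buildproxy

# matched by the backend: server= compares against LP's configured server name
/lp group buildproxy parent add builders server=buildserver

# matched by the proxy: world= compares against the proxy's name for that backend
/lp group buildproxy parent add builders world=buildserver

# players then need only the one parent
/lp user ABC parent add buildproxy
```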

Proposal
Implement a potentially breaking change:

  1. LPV: the server context always reflects the backend server the user is currently connected to; identifying the proxy itself would rely on the proxy context instead.
  2. LPV: the world context is deprecated. To my understanding, the proxy has no way of receiving this information, and it is irrelevant there.
  3. LP: the proxy context is deprecated (if applicable); LP on a backend server likely has no knowledge of the proxy in front of it.

I previously imagined a solution that replaced the server context with something strictly explicit, but that would create significant friction, and I realize the ambiguity almost exclusively applies to the proxy implementation of LuckPerms.

My apologies for a long-winded bug report; my hope is that this can conclusively aid any server admins who have run into this issue. (Case in point: I received a DM from another user on Discord within 24 hours of asking about it, so I am definitely not alone.)

Finally, thank you for all the hard work and well-designed software. MySQL/MariaDB syncing is smooth and rapid, and overall permission management is powerful. I hope this comes off as constructive and helpful for end users, rather than as an entitled beggar demanding changes of an open source project. You guys are great!

Reproduction Steps

  1. Install LuckPerms on a backend server (Fabric in my case)
  2. Install LuckPerms on a proxy server (Velocity in my case)
  3. Configure the server name for both, including a shared database (see the config sketch after this list)
  4. Install a proxy plugin that uses LuckPerms (e.g. HuskChat, Velocitab)
  5. Use a mod/plugin on the backend that supports permissions
  6. Create a group containing:
     • something the proxy plugin requires, commonly a prefix
     • a permission on the backend
  7. Apply the group to a user with a server OR world context
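
A minimal config sketch for step 3, using the Bukkit config.yml key names (the Velocity/Fabric .conf files use the same option names in HOCON syntax; the credentials here are placeholders):

```yaml
# name this instance compares the server context against
server: buildserver

# point every instance at the same database
storage-method: mysql
data:
  address: db.example.com:3306
  database: luckperms
  username: lp
  password: changeme
```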

Using server:, the backend permissions will be applied; using world:, the prefix will be applied.

Expected Behaviour

I expect server: to reflect "the player's current server" (name) per the documentation, not the name of the proxy. I also expect world: to reflect the current world, not the backend server's name.

Server Details

Backend: paper-1.20.1.82 AND fabric-1.20.1.0.14.21 behind Velocity-3.2.0 proxy

LuckPerms Version

LuckPerms-Velocity-5.4.98, LuckPerms-Bukkit-5.4.98, LuckPerms-Fabric-5.4.88

Logs and Configs

No response

Extra Details

No response

commented

It's worth noting that the value of what is currently the "world" context on the proxy may not always be the same as the value of the "server" context on the backend server: "world" on the proxy uses the proxy's name for that server, while "server" on the backend uses the first option ("server") in the LuckPerms config.

commented

I am closing this as a duplicate of #3410, since that's what it is.

For clarity: we are well aware of the problem, and it unfortunately is an issue that cannot be solved without introducing risk. The real problem that underlies this (and prevents it from being solved, along with other storage-side improvements) is that there is no storage versioning. Why this is a problem is fairly straightforward: especially in the context of a centralized database, if one instance of LP were updated and the layout of the storage method changed while another instance of LP was kept outdated, the older one would simply not understand the new storage:

  • The administrator would need to be aware of that change and update all running LuckPerms instances on all servers at the same time, first starting one server/proxy to perform the migration, then the rest. That brings undesirable problems of its own unrelated to LuckPerms, such as downtime; shutting down the entire network for non-standard maintenance is not a thought that puts you to sleep at night, and if something goes wrong, that's a whole can of worms.
  • As discouraged as they are, many people do run auto-updaters for plugins: restart one server, and the rest of the network no longer understands the storage layout, with errors popping up on every console.
  • Even if auto-updaters are not involved and the migration is successful, to err is human: it only takes one instance not being updated for errors to show up, whether by not updating an existing one, or by introducing the storage-changing LP version on a new server where everything would be fresh and updated.

If storage had been versioned since day one, the risks would be lower: the plugin could simply refuse to load if it encountered a storage version that didn't match the expected one. That would still mean downtime, but at least it would be managed, and the plugin wouldn't error. Without versioning, it is no light task.
On a local system using H2/SQLite/YAML/etc. those risks are far lower, although they still exist; I've seen people symlink their YAML storage to a single folder.

A lot of thought has been given to this problem many times, which is why the current stance is to favour the current system.