PneumaticCraft: Repressurized

Improve requester round robin

johalun opened this issue · 10 comments

commented

Describe the bug

The current implementation of requester round robin has a bug where it won't round-robin all requesters if there is an even number of them for that specific item. We need to debug it, figure out what's going on, and fix the implementation.

How to reproduce the bug

N/A

Expected behavior

N/A

Additional details

No response

Which Minecraft version are you using?

1.19

Which version of PneumaticCraft: Repressurized are you using?

4.3.8

Crash log

No response

commented

Here's what I've found so far. With 4 requester frames, the manager adds 4 tasks, but only the first task is executed. This is with the provider being stocked with 1 item every 3 seconds, while the logistics widget runs every second (using a stock logistics drone).

[25Sep2023 11:18:46.697] [Server thread/INFO] [pneumaticcraft/]: doLogistics
[25Sep2023 11:18:46.697] [Server thread/INFO] [pneumaticcraft/]: Adding task 114812273
[25Sep2023 11:18:46.697] [Server thread/INFO] [pneumaticcraft/]: tryProvide for prio 3, req 0
[25Sep2023 11:18:46.697] [Server thread/INFO] [pneumaticcraft/]: Move requester 505037139 to end of list. List size is 4
[25Sep2023 11:18:46.697] [Server thread/INFO] [pneumaticcraft/]: Adding task 1171728927
[25Sep2023 11:18:46.697] [Server thread/INFO] [pneumaticcraft/]: tryProvide for prio 3, req 1
[25Sep2023 11:18:46.697] [Server thread/INFO] [pneumaticcraft/]: Move requester 409378969 to end of list. List size is 4
[25Sep2023 11:18:46.697] [Server thread/INFO] [pneumaticcraft/]: Adding task 1138571147
[25Sep2023 11:18:46.697] [Server thread/INFO] [pneumaticcraft/]: tryProvide for prio 3, req 2
[25Sep2023 11:18:46.697] [Server thread/INFO] [pneumaticcraft/]: Move requester 505037139 to end of list. List size is 4
[25Sep2023 11:18:46.697] [Server thread/INFO] [pneumaticcraft/]: Adding task 1965230811
[25Sep2023 11:18:46.697] [Server thread/INFO] [pneumaticcraft/]: tryProvide for prio 3, req 3
[25Sep2023 11:18:46.697] [Server thread/INFO] [pneumaticcraft/]: Move requester 505037139 to end of list. List size is 4
[25Sep2023 11:18:46.697] [Server thread/INFO] [pneumaticcraft/]: executing task 114812273

We can also see that as we iterate over requester indexes 2 and 3, we just keep shuffling the same requester around, so there are clearly issues with this solution: the same requester gets multiple tasks while another gets none.

Throwing out some preliminary thoughts here.

Getting fair round-robin scheduling might not be so straightforward, since it depends on two things: how we create tasks from the list of requesters, and how many of those tasks will end up being executed, which is something we don't know at the time we create them.

Is it only ever one task that gets executed per invocation of the logistics widget, or does it depend on availability in the provider? I assume it will try to fulfill as much as it can during one execution of the logistics widget and that the number of executed tasks will vary.
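
For illustration only, here's a rough sketch (a hypothetical helper, not the mod's code) of what I mean: the requester is rotated to the back of its queue only once a delivery is confirmed by whoever executed the task, so fairness doesn't depend on guessing how many tasks will actually run per cycle.

import java.util.ArrayDeque;
import java.util.Deque;

class RoundRobinRequesters<T> {
    private final Deque<T> queue = new ArrayDeque<>();

    void add(T requester) {
        queue.addLast(requester);
    }

    // Peek at the next requester without committing to a rotation yet.
    T next() {
        return queue.peekFirst();
    }

    // Called back once the task for this requester has actually been executed.
    void markExecuted(T requester) {
        if (queue.remove(requester)) {
            queue.addLast(requester); // rotate only on confirmed delivery
        }
    }
}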

commented

I also noticed in this code https://github.com/TeamPneumatic/pnc-repressurized/blob/1.19.2/src/main/java/me/desht/pneumaticcraft/common/drone/LogisticsManager.java#L110-L120

that for each slot containing the requested item, we add a task. I'm not sure of the reason for this, but if for example the requester requests 64 items and the provider has 5 stacks of 64 of that item, we'll add 5 tasks but only the first will get executed.
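
As a rough sketch only (a hypothetical helper, assuming the provider's inventory is exposed as an IItemHandler; in practice the amount would still need clamping to the item's max stack size), summing the matching slots first and emitting a single capped task would avoid the extra tasks:

private static void addSingleTask(IItemHandler providerInv, ItemStack requestedStack, int requestedAmount,
                                  AbstractLogisticsFrameEntity provider, AbstractLogisticsFrameEntity requester,
                                  PriorityQueue<LogisticsTask> tasks) {
    // Sum what the provider can offer across all matching slots...
    int available = 0;
    for (int slot = 0; slot < providerInv.getSlots(); slot++) {
        ItemStack stack = providerInv.getStackInSlot(slot);
        if (ItemStack.isSameItemSameTags(stack, requestedStack)) {
            available += stack.getCount();
        }
    }
    // ...and emit one task capped at the requested amount, instead of one task per slot.
    int toDeliver = Math.min(available, requestedAmount);
    if (toDeliver > 0) {
        ItemStack delivery = requestedStack.copy();
        delivery.setCount(toDeliver);
        tasks.add(new LogisticsTask(provider, requester, delivery));
    }
}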

commented

This is something that crossed my mind when doing the previous implementation. It's not perfect, but it's better: we only rotate once per getTasks() call and priority class. The proper solution would probably be to actually keep track of which tasks were executed, call back to the logistics manager afterwards, and then re-order only the requesters whose tasks were executed.

@@ -55,8 +60,11 @@ public class LogisticsManager {
         ItemStack item = holdingStack instanceof ItemStack ? (ItemStack) holdingStack : ItemStack.EMPTY;
         FluidStack fluid = holdingStack instanceof FluidStack ? (FluidStack) holdingStack : FluidStack.EMPTY;
         PriorityQueue<LogisticsTask> tasks = new PriorityQueue<>();
+        int[] toRotate = new int[N_PRIORITIES];
+        Arrays.fill(toRotate, -1);
         for (int priority = logistics.size() - 1; priority >= 0; priority--) {
-            for (AbstractLogisticsFrameEntity requester : logistics.get(priority)) {
+            for (int requesterId = 0; requesterId < logistics.get(priority).size(); requesterId++) {
+                AbstractLogisticsFrameEntity requester = logistics.get(priority).get(requesterId);
                 if (droneAccess && requester.isObstructed(PathComputationType.AIR)) continue;
                 for (int i = 0; i < priority; i++) {
                     for (AbstractLogisticsFrameEntity provider : logistics.get(i)) {
@@ -82,12 +90,28 @@ public class LogisticsManager {
                             // it could be that the drone is carrying some item or fluid it can't drop off right now
                             // however it might still be able to transfer the other resource type (i.e. transfer items if
                             // it's holding a fluid, and vice versa)
+                            int taskCount = tasks.size();
                             tryProvide(provider, requester, tasks, item.isEmpty(), fluid.isEmpty());
+
+                            Log.info("tryProvide for prio " + priority + ", req " + requesterId);
+                            // if we provided something to the requester, move the requester to last in its list
+                            // so another requester gets the next delivery, effectively round robin.
+                            List<AbstractLogisticsFrameEntity> requesters = logistics.get(priority);
+                            if (toRotate[priority] == -1 && tasks.size() > taskCount && requesters.size() > 1) {
+                                Log.info("Move requester " + System.identityHashCode(requester) + " to end of list. List size is " + requesters.size());
+                                toRotate[priority] = requesterId;
+                            }
                         }
                     }
                 }
             }
         }
+        for (int i=0; i<N_PRIORITIES; i++) {
+            if (toRotate[i] != -1 && logistics.get(i).size() > 1) {
+                Log.info("Rotating index " + toRotate[i] + " in prio " + i);
+                logistics.get(i).add(logistics.get(i).remove(toRotate[i]));
+            }
+        }
         return tasks;
     }
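
To sketch the "proper solution" idea mentioned above (a hypothetical callback, not part of the mod; it assumes LogisticsTask exposes its requester): after the drone reports which tasks actually ran, only the requesters that were actually served get rotated.

import java.util.Collection;
import java.util.List;

public static void onTasksExecuted(Collection<LogisticsTask> executed,
                                   List<List<AbstractLogisticsFrameEntity>> logistics) {
    for (LogisticsTask task : executed) {
        for (List<AbstractLogisticsFrameEntity> frames : logistics) {
            // Move the served requester to the back of whichever priority list contains it.
            if (frames.size() > 1 && frames.remove(task.requester)) {
                frames.add(task.requester);
                break;
            }
        }
    }
}
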
commented

I'm still confused about the many tasks being added, though. If I add a bunch of stacks of the requested item to the provider chest, with only 1 item in each stack, each requester gets a task for each stack, but in the end only the first added task is executed per iteration.

[25Sep2023 16:36:31.603] [Server thread/INFO] [pneumaticcraft/]: doLogistics
[25Sep2023 16:36:31.603] [Server thread/INFO] [pneumaticcraft/]: Adding task 1212913382
[25Sep2023 16:36:31.603] [Server thread/INFO] [pneumaticcraft/]: Adding task 1921746182
[25Sep2023 16:36:31.603] [Server thread/INFO] [pneumaticcraft/]: Adding task 1532952646
[25Sep2023 16:36:31.603] [Server thread/INFO] [pneumaticcraft/]: Adding task 1985714756
[25Sep2023 16:36:31.603] [Server thread/INFO] [pneumaticcraft/]: Adding task 1655955151
[25Sep2023 16:36:31.603] [Server thread/INFO] [pneumaticcraft/]: Adding task 1279185962
[25Sep2023 16:36:31.603] [Server thread/INFO] [pneumaticcraft/]: Adding task 1866506459
[25Sep2023 16:36:31.603] [Server thread/INFO] [pneumaticcraft/]: Adding task 1172541133
[25Sep2023 16:36:31.603] [Server thread/INFO] [pneumaticcraft/]: Adding task 970618136
[25Sep2023 16:36:31.603] [Server thread/INFO] [pneumaticcraft/]: Adding task 58685837
[25Sep2023 16:36:31.603] [Server thread/INFO] [pneumaticcraft/]: Adding task 1775458113
[25Sep2023 16:36:31.603] [Server thread/INFO] [pneumaticcraft/]: Adding task 815737168
[25Sep2023 16:36:31.603] [Server thread/INFO] [pneumaticcraft/]: Adding task 1518860526
[25Sep2023 16:36:31.603] [Server thread/INFO] [pneumaticcraft/]: Adding task 575422754
[25Sep2023 16:36:31.603] [Server thread/INFO] [pneumaticcraft/]: tryProvide for prio 3, req 0
[25Sep2023 16:36:31.603] [Server thread/INFO] [pneumaticcraft/]: Move requester 1305127558 to end of list. List size is 5
[25Sep2023 16:36:31.603] [Server thread/INFO] [pneumaticcraft/]: Adding task 1373004661
[25Sep2023 16:36:31.603] [Server thread/INFO] [pneumaticcraft/]: Adding task 407508406
[25Sep2023 16:36:31.604] [Server thread/INFO] [pneumaticcraft/]: Adding task 2112872153
[25Sep2023 16:36:31.604] [Server thread/INFO] [pneumaticcraft/]: Adding task 1030411543
[25Sep2023 16:36:31.604] [Server thread/INFO] [pneumaticcraft/]: Adding task 1598411780
[25Sep2023 16:36:31.604] [Server thread/INFO] [pneumaticcraft/]: Adding task 401247743
[25Sep2023 16:36:31.604] [Server thread/INFO] [pneumaticcraft/]: Adding task 342921421
[25Sep2023 16:36:31.604] [Server thread/INFO] [pneumaticcraft/]: Adding task 2144943417
[25Sep2023 16:36:31.604] [Server thread/INFO] [pneumaticcraft/]: Adding task 248324914
[25Sep2023 16:36:31.604] [Server thread/INFO] [pneumaticcraft/]: Adding task 1301004897
[25Sep2023 16:36:31.604] [Server thread/INFO] [pneumaticcraft/]: Adding task 1575863781
[25Sep2023 16:36:31.604] [Server thread/INFO] [pneumaticcraft/]: Adding task 1802181524
[25Sep2023 16:36:31.604] [Server thread/INFO] [pneumaticcraft/]: Adding task 207122808
[25Sep2023 16:36:31.604] [Server thread/INFO] [pneumaticcraft/]: Adding task 533934680
[25Sep2023 16:36:31.604] [Server thread/INFO] [pneumaticcraft/]: tryProvide for prio 3, req 1
[25Sep2023 16:36:31.604] [Server thread/INFO] [pneumaticcraft/]: Adding task 748818090
[25Sep2023 16:36:31.604] [Server thread/INFO] [pneumaticcraft/]: Adding task 522585873
[25Sep2023 16:36:31.604] [Server thread/INFO] [pneumaticcraft/]: Adding task 1354393148
[25Sep2023 16:36:31.604] [Server thread/INFO] [pneumaticcraft/]: Adding task 1459478401
[25Sep2023 16:36:31.604] [Server thread/INFO] [pneumaticcraft/]: Adding task 1779850410
[25Sep2023 16:36:31.604] [Server thread/INFO] [pneumaticcraft/]: Adding task 1160233078
[25Sep2023 16:36:31.604] [Server thread/INFO] [pneumaticcraft/]: Adding task 1177535848
[25Sep2023 16:36:31.604] [Server thread/INFO] [pneumaticcraft/]: Adding task 2123762683
[25Sep2023 16:36:31.604] [Server thread/INFO] [pneumaticcraft/]: Adding task 649984911
[25Sep2023 16:36:31.604] [Server thread/INFO] [pneumaticcraft/]: Adding task 1575140190
[25Sep2023 16:36:31.604] [Server thread/INFO] [pneumaticcraft/]: Adding task 1700667663
[25Sep2023 16:36:31.604] [Server thread/INFO] [pneumaticcraft/]: Adding task 727267212
[25Sep2023 16:36:31.604] [Server thread/INFO] [pneumaticcraft/]: Adding task 1651622123
[25Sep2023 16:36:31.604] [Server thread/INFO] [pneumaticcraft/]: Adding task 713142192
[25Sep2023 16:36:31.604] [Server thread/INFO] [pneumaticcraft/]: tryProvide for prio 3, req 2
[25Sep2023 16:36:31.604] [Server thread/INFO] [pneumaticcraft/]: Adding task 1044620726
[25Sep2023 16:36:31.604] [Server thread/INFO] [pneumaticcraft/]: Adding task 1556349093
[25Sep2023 16:36:31.604] [Server thread/INFO] [pneumaticcraft/]: Adding task 819918158
[25Sep2023 16:36:31.604] [Server thread/INFO] [pneumaticcraft/]: Adding task 386216138
[25Sep2023 16:36:31.604] [Server thread/INFO] [pneumaticcraft/]: Adding task 885545353
[25Sep2023 16:36:31.604] [Server thread/INFO] [pneumaticcraft/]: Adding task 1224062621
[25Sep2023 16:36:31.604] [Server thread/INFO] [pneumaticcraft/]: Adding task 264658504
[25Sep2023 16:36:31.604] [Server thread/INFO] [pneumaticcraft/]: Adding task 838577178
[25Sep2023 16:36:31.604] [Server thread/INFO] [pneumaticcraft/]: Adding task 1200209853
[25Sep2023 16:36:31.604] [Server thread/INFO] [pneumaticcraft/]: Adding task 581806223
[25Sep2023 16:36:31.604] [Server thread/INFO] [pneumaticcraft/]: Adding task 522487436
[25Sep2023 16:36:31.604] [Server thread/INFO] [pneumaticcraft/]: Adding task 1934848526
[25Sep2023 16:36:31.604] [Server thread/INFO] [pneumaticcraft/]: Adding task 2033016718
[25Sep2023 16:36:31.604] [Server thread/INFO] [pneumaticcraft/]: Adding task 915106292
[25Sep2023 16:36:31.604] [Server thread/INFO] [pneumaticcraft/]: tryProvide for prio 3, req 3
[25Sep2023 16:36:31.604] [Server thread/INFO] [pneumaticcraft/]: Adding task 220240207
[25Sep2023 16:36:31.604] [Server thread/INFO] [pneumaticcraft/]: Adding task 1695238631
[25Sep2023 16:36:31.604] [Server thread/INFO] [pneumaticcraft/]: Adding task 1992932327
[25Sep2023 16:36:31.604] [Server thread/INFO] [pneumaticcraft/]: Adding task 459323395
[25Sep2023 16:36:31.604] [Server thread/INFO] [pneumaticcraft/]: Adding task 302231745
[25Sep2023 16:36:31.604] [Server thread/INFO] [pneumaticcraft/]: Adding task 762877719
[25Sep2023 16:36:31.604] [Server thread/INFO] [pneumaticcraft/]: Adding task 374675420
[25Sep2023 16:36:31.604] [Server thread/INFO] [pneumaticcraft/]: Adding task 149620149
[25Sep2023 16:36:31.605] [Server thread/INFO] [pneumaticcraft/]: Adding task 132196817
[25Sep2023 16:36:31.605] [Server thread/INFO] [pneumaticcraft/]: Adding task 1850263061
[25Sep2023 16:36:31.605] [Server thread/INFO] [pneumaticcraft/]: Adding task 1352521719
[25Sep2023 16:36:31.605] [Server thread/INFO] [pneumaticcraft/]: Adding task 9156348
[25Sep2023 16:36:31.605] [Server thread/INFO] [pneumaticcraft/]: Adding task 237676701
[25Sep2023 16:36:31.605] [Server thread/INFO] [pneumaticcraft/]: Adding task 645574435
[25Sep2023 16:36:31.605] [Server thread/INFO] [pneumaticcraft/]: tryProvide for prio 3, req 4
[25Sep2023 16:36:31.605] [Server thread/INFO] [pneumaticcraft/]: Rotating index 0 in prio 3
[25Sep2023 16:36:31.605] [Server thread/INFO] [pneumaticcraft/]: executing task 1212913382
commented

Did some more testing, and it seems to me that we always execute only the first task in the list returned by getTasks before calling getTasks again. It's possible I'm missing something given my short history of contributing to the mod, so I might not have the full picture. I tested with the following changes:

  • return early when the first task has been created
  • rotate both provider and requester

With a setup of 4 requesters and 2 providers, it round-robins both requesters and providers perfectly. If this is an option, it would also save creating many tasks that end up never getting executed.

Edit: this testing was only for drones. I see now that for tube modules it does actually iterate over all tasks, so this change would have to apply only to drones.

@@ -60,7 +65,8 @@ public class LogisticsManager {
                 AbstractLogisticsFrameEntity requester = logistics.get(priority).get(requesterId);
                 if (droneAccess && requester.isObstructed(PathComputationType.AIR)) continue;
                 for (int i = 0; i < priority; i++) {
-                    for (AbstractLogisticsFrameEntity provider : logistics.get(i)) {
+                    for (int providerId = 0; providerId < logistics.get(i).size(); providerId++) {
+                        AbstractLogisticsFrameEntity provider = logistics.get(i).get(providerId);
                         if (droneAccess && provider.isObstructed(PathComputationType.AIR)) continue;
                         if (provider.shouldProvideTo(priority)) {
                             if (!item.isEmpty()) {
@@ -83,14 +89,16 @@ public class LogisticsManager {
                             // it could be that the drone is carrying some item or fluid it can't drop off right now
                             // however it might still be able to transfer the other resource type (i.e. transfer items if
                             // it's holding a fluid, and vice versa)
-                            int taskCount = tasks.size();
                             tryProvide(provider, requester, tasks, item.isEmpty(), fluid.isEmpty());

-                            // if we provided something to the requester, move the requester to last in its list
-                            // so another requester gets the next delivery, effectively round robin.
-                            List<AbstractLogisticsFrameEntity> requesters = logistics.get(priority);
-                            if (tasks.size() > taskCount && requesters.size() > 1) {
-                                requesters.add(requesters.remove(requesterId));
+                            // if we provided something to the requester, move the requester and provider to last in their
+                            // lists so another frame gets selected first, effectively round robin.
+                            if (tasks.size() > 0) {
+                                if (logistics.get(priority).size() > 1)
+                                    logistics.get(priority).add(logistics.get(priority).remove(requesterId));
+                                if (logistics.get(i).size() > 1)
+                                    logistics.get(i).add(logistics.get(i).remove(providerId));
+                                return tasks;
                             }
                         }
                     }
@@ -115,6 +123,7 @@ public class LogisticsManager {
                             ItemStack stack = providingStack.copy();
                             stack.setCount(requestedAmount);
                             tasks.add(new LogisticsTask(provider, requester, stack));
+                            return;
                         }
                     }
                 }
commented

Sorry, I've been a little busy so I hadn't replied yet; I will go over these changes in detail and let you know. They look good, though. And yes, this would need to be just for drones, but that's super easy to distinguish (there's a boolean droneAccess parameter to the method 😄).
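
Roughly along these lines, against your diff above (just a sketch, not necessarily what will get committed):

if (droneAccess && !tasks.isEmpty()) {
    // Drones only ever execute the first task, so stop after one and rotate;
    // logistics modules keep scanning because they process every task that is found.
    if (logistics.get(priority).size() > 1)
        logistics.get(priority).add(logistics.get(priority).remove(requesterId));
    if (logistics.get(i).size() > 1)
        logistics.get(i).add(logistics.get(i).remove(providerId));
    return tasks;
}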

commented

Did a bit of testing - yep, I think this is good! Drones and logistics modules behaving nicely.

Just pushed a commit to 1.20.1 (a 6.0.8 release is imminent for that version) and will also cherry-pick it into 1.19.2 shortly. I included the suggestion of bailing after finding one task when the actor is a drone (logistics modules do indeed process all the tasks that are found).

commented

Awesome, thanks!!

commented

I think this can be closed, right?

commented

Yeah I reckon so :) Thanks again.