Task partitioning with direct object transfer based on penalty and reward
Authors
Edgar Buchanan, Andrew Pomfret, Jon Timmis
Abstract:
This paper is concerned with foraging robots that retrieve items to a destination using odometry for navigation in enclosed environments, and with their susceptibility to dead-reckoning noise. Such noise causes the locations of targets recorded by the robots to appear to change over time, reducing a robot's ability to return to the same location. Previous work attempted to use task partitioning to decrease this error and increase the rate of item collection by making the robots travel shorter distances. In this paper we study a Dynamic Partitioning Strategy (DPS) which adjusts the travelling distance from the items' location to a collection point, as the robots locate the items, through the use of a penalty and reward mechanism. Results show that the robots adapt according to their dead-reckoning error rates, and that the probability of finding items is related to the ratio between the penalty and reward parameters.
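As a rough illustration of the penalty-and-reward idea, the sketch below shortens a robot's partition length P when it fails to find an item at the remembered location and lengthens it when it succeeds. The class name, gains and bounds are illustrative assumptions; the exact update rule used in the paper is in the controller source linked at the bottom of this page.

```cpp
#include <algorithm>
#include <iostream>

// Minimal sketch of a penalty/reward update for the partition length P.
// Names and constants are assumptions, not the paper's actual parameters.
class PartitionLength {
public:
    PartitionLength(double initial, double minP, double maxP)
        : m_p(initial), m_min(minP), m_max(maxP) {}

    // Reward: the item was found where expected, so a longer leg is affordable.
    void OnItemFound(double reward)    { m_p = std::min(m_p + reward, m_max); }

    // Penalty: the item was not where odometry said, so travel a shorter leg.
    void OnItemNotFound(double penalty) { m_p = std::max(m_p - penalty, m_min); }

    double Value() const { return m_p; }

private:
    double m_p;    // current partition length (m)
    double m_min;  // lower bound on P (m)
    double m_max;  // upper bound on P (m)
};

int main() {
    PartitionLength p(2.0, 0.5, 4.0); // initial P = 2 m; bounds are assumptions
    p.OnItemNotFound(0.3);            // dead-reckoning error made the robot miss
    p.OnItemFound(0.1);               // successful pickup at the expected spot
    std::cout << "P = " << p.Value() << " m\n";
    return 0;
}
```

With this kind of rule the ratio between the penalty and the reward determines where P settles, which is why the abstract relates that ratio to the probability of finding items.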
Video material:
This section contains videos of the swarm converging to a solution using the Dynamic Partitioning Strategy (DPS) in different scenarios. The first four videos show the positions, recorded from the simulations, at which the object was transferred from one robot to another over time, together with a graph showing the convergence of the partition length (P) to a solution. The last video shows an actual run in the ARGoS simulator, in which the positions where the object was transferred are shown in a similar way.
Videos 1 and 2
A group of 6 robots retrieve items from an environment where the distance between the nest and the source is 4m. All robots start with an initial P of 0.5m (video 1) and 3.5m (video 2), and in both cases they divide the task into two parts at the same distance of 2.5m.
Videos 3 and 4
A group of 30 robots retrieve items from an environment where the distance between the nest and the source is 6m. All robots start with an initial P of 2.0m, with two different error levels: 0.5σ (video 3) and 1.5σ (video 4). The robots divide the task into two and four parts, respectively.
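To illustrate why a higher error level favours splitting the journey into more, shorter parts, the sketch below accumulates zero-mean Gaussian position noise per step and compares the resulting dead-reckoning error over a full 6m leg against a 1.5m leg. The noise model and all constants are deliberately simplistic assumptions, not the model used in the paper or in ARGoS.

```cpp
#include <cmath>
#include <iostream>
#include <random>

// Illustrative only: add zero-mean Gaussian noise to the believed position at
// each step and report the final position error for a leg of a given length.
double FinalError(double distance, double sigmaPerStep, double stepLen,
                  std::mt19937& rng) {
    std::normal_distribution<double> noise(0.0, sigmaPerStep);
    double ex = 0.0, ey = 0.0;
    for (double d = 0.0; d < distance; d += stepLen) {
        ex += noise(rng); // error added to the believed x position this step
        ey += noise(rng); // error added to the believed y position this step
    }
    return std::sqrt(ex * ex + ey * ey);
}

int main() {
    std::mt19937 rng(42);
    // Longer legs accumulate more odometric error, so higher noise levels
    // favour splitting the 6m nest-to-source journey into shorter parts.
    for (double sigma : {0.5, 1.5}) {
        double errFull = FinalError(6.0, 0.01 * sigma, 0.1, rng); // one 6m leg
        double errPart = FinalError(1.5, 0.01 * sigma, 0.1, rng); // one 1.5m leg
        std::cout << "sigma=" << sigma << "  error over 6m: " << errFull
                  << "  error over 1.5m: " << errPart << "\n";
    }
    return 0;
}
```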
Video 5
This video shares the same characteristics as the first video; however, it is taken directly from the ARGoS simulator. Small red circles represent where each robot believes the item source to be.
Supporting Data
Data set archives
(NB these are very large data sets and may require a Google Drive account to access; please contact edgar.buchanan@york.ac.uk if there are any problems accessing the files)
Sections A and B: Cost experiments - [138 MB] Link (https://drive.google.com/open?id=1sztKB-U0yNrQMjBq99mdOttEPxjbXf4A)
Section C: Alpha experiments - [1.5 GB] Link (https://drive.google.com/open?id=10R0GmFeGPVH9QPke--lTiqtMLpKlKdSy)
Source code for the controller:
https://github.com/edgarbuchanan/dps