
Tracking Teammates

For well-coordinated positioning and sophisticated passing, it is important for every robot to know the positions of its teammates. To compute these positions, there are currently two sources of information: the detection of robots in images and the communication of self-localization results among the robots. Neither is sufficient on its own to compute a reliable model of the teammates' positions.

Similar to the ball, robots only communicate information about their position and walk target if a change of this information is considered relevant. Thus, this kind of information might be quite sparse. However, the different kinds of information are not sent separately but in one combined message. Thus, if a changed ball position results in the sending of a new message, an updated robot pose is sent, too. Nevertheless, due to the strict communication limit, it is quite common that robots do not communicate for several seconds, although their position or walking direction might have changed significantly. This makes communication an unreliable source of position information.
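As a rough illustration, the following sketch shows the idea of relevance-triggered, combined messages. It is not B-Human's actual message format or sending logic; all names, fields, and thresholds are assumptions.

```cpp
#include <cmath>

struct Vector2 { float x = 0.f, y = 0.f; };

// One combined message: pose, walk target, and ball are always sent together.
struct TeamMessage
{
  Vector2 robotPosition; // current self-localization result
  Vector2 walkTarget;    // where the robot intends to walk
  Vector2 ballPosition;  // last ball estimate
};

static float distance(const Vector2& a, const Vector2& b)
{
  return std::hypot(a.x - b.x, a.y - b.y);
}

// Send only if some part of the message changed relevantly and the budget allows it.
// Note that a relevant ball change also delivers an updated pose and walk target.
bool shouldSend(const TeamMessage& current, const TeamMessage& lastSent, int remainingMessageBudget)
{
  if(remainingMessageBudget <= 0)
    return false; // strict communication limit
  const bool ballChanged = distance(current.ballPosition, lastSent.ballPosition) > 500.f;
  const bool poseChanged = distance(current.robotPosition, lastSent.robotPosition) > 300.f
                           || distance(current.walkTarget, lastSent.walkTarget) > 300.f;
  return ballChanged || poseChanged;
}
```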

B-Human has a quite reliable implementation for detecting robots in camera images, which is, however, not capable of distinguishing between different robots or different teams. Thus, a subsequent processing step tries to compute the jersey color for each detected robot. This implementation is not described in this documentation, as it is still work in progress and its reliability is limited, depending on the jersey colors that are currently on the pitch. The quality can be roughly described as: "If B-Human plays in red jerseys, the jersey detector often gets it right." Furthermore, as all robots look the same, identifying individual team members would only be possible by detecting the numbers on their jerseys. We have not tried to do this (yet).

Given the problems of these different kinds of information, our approach is to combine them to properly track our teammates. This is done by the GlobalTeammatesTracker, which computes the GlobalTeammatesModel. All teammates communicate their current walk target and speed, which already allows a basic prediction of a teammate's position for a given point in time. Nevertheless, these predictions often deviate from the real position, for instance, if the robot falls or is blocked by opponents. In case of a large deviation, the teammate is supposed to send a new message. However, this message might arrive delayed to remain within the planned message budget. Furthermore, minor deviations remain unnoticed and could result in a lack of precision when playing a pass. To overcome these problems, the position predictions are corrected by local observations of the teammates (see example below): Whenever a robot wearing a jersey of our team is detected, its relative position, which is assumed to be more precise than the communicated and predicted position, can be used to update the position estimate of a matching teammate. In this process, the orientation of the teammates is not considered. Thus, the implementation corresponds to a basic 2D Kalman filter.
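The following is a minimal sketch of this predict/correct cycle, not B-Human's actual implementation: all type names, noise parameters, and units are assumptions. The prediction moves the estimate toward the communicated walk target at the communicated speed; the correction fuses an associated local observation using a simple per-axis Kalman update, ignoring orientation as described above.

```cpp
#include <algorithm>
#include <cmath>

struct Vector2 { float x = 0.f, y = 0.f; };

struct TeammateEstimate
{
  Vector2 position;          // current position estimate (field coordinates, mm)
  Vector2 walkTarget;        // last communicated walk target
  float speed = 0.f;         // last communicated walking speed (mm/s)
  float variance = 250000.f; // position uncertainty per axis (mm^2), kept isotropic here
};

// Predict: move the estimate toward the communicated walk target at the
// communicated speed and let the uncertainty grow with the elapsed time.
void predict(TeammateEstimate& t, float dt, float processNoise /* mm^2/s */)
{
  const float dx = t.walkTarget.x - t.position.x;
  const float dy = t.walkTarget.y - t.position.y;
  const float dist = std::sqrt(dx * dx + dy * dy);
  if(dist > 1.f)
  {
    const float step = std::min(dist, t.speed * dt);
    t.position.x += dx / dist * step;
    t.position.y += dy / dist * step;
  }
  t.variance += processNoise * dt;
}

// Correct: fuse an observed position (a robot percept that has already been
// associated with this teammate and transformed to field coordinates).
void correct(TeammateEstimate& t, const Vector2& observed, float measurementNoise /* mm^2 */)
{
  // Kalman gain for a direct position measurement with isotropic noise.
  const float gain = t.variance / (t.variance + measurementNoise);
  t.position.x += gain * (observed.x - t.position.x);
  t.position.y += gain * (observed.y - t.position.y);
  t.variance *= 1.f - gain;
}
```

In this simplified form, the covariance is reduced to a single per-axis variance, which keeps the update cheap while still letting precise local observations dominate stale communicated positions.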

Teammates model example

Sample situation of the GlobalTeammatesModel of the goalkeeper (leftmost robot, depicted with its current field of view). The red circles outlined in yellow depict the interpolated positions of the teammates based on their most recently communicated data. The dashed violet lines connect their last known positions with their walk targets. The semitransparent black circles with white numbers show their computed positions after incorporating local perceptions.

