
Self-Localization

Robust and precise self-localization has always been an important requirement for successfully participating in the Standard Platform League. B-Human has consistently based its self-localization solutions on probabilistic approaches1, as this paradigm has proven to provide robust and precise results in a variety of robot state estimation tasks. Overall, B-Human's self-localization implementation has not changed significantly in recent years, except for some minor adaptations and bug fixes.

Probabilistic State Estimation

The pose estimation is handled by multiple hypotheses, each modeled as an Unscented Kalman Filter (UKF)2. The management and resetting of these hypotheses is realized as a particle filter3. Both approaches are straightforward textbook implementations1, except for some adaptations for handling certain RoboCup-specific game states, such as the positioning after returning from a penalty. In addition, we use only a very small number of particles (currently 12), as multimodalities rarely occur in RoboCup games.
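To make the interplay of the two layers concrete, the following minimal sketch (all names, such as PoseHypothesis and resample, are hypothetical and not the actual B-Human interfaces) shows a particle that wraps a full UKF state together with a validity value that drives the resampling:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Sketch of the two-layer estimator: each "particle" wraps a UKF over
// the pose (x, y, rotation) plus a validity value used for resampling.
struct PoseHypothesis
{
  float mean[3] = {};          // (x, y, rotation) estimate of this UKF
  float cov[3][3] = {};        // pose covariance maintained by the UKF
  float validity = 1.f;        // ratio of successfully registered perceptions

  void predict() { /* UKF time update based on odometry */ }
  void update() { /* UKF measurement update for a registered perception */ }
};

// Keep only a small, fixed number of hypotheses (currently 12 in B-Human)
// and drop the least valid ones, making room for sensor-resetting candidates.
void resample(std::vector<PoseHypothesis>& hypotheses, std::size_t maxCount = 12)
{
  std::sort(hypotheses.begin(), hypotheses.end(),
            [](const PoseHypothesis& a, const PoseHypothesis& b)
            { return a.validity > b.validity; });
  if(hypotheses.size() > maxCount)
    hypotheses.resize(maxCount);
}
```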

The current self-localization implementation is the SelfLocator module, which provides the RobotPose representation: a 2D position and rotation along with uncertainty information. Furthermore, for debugging purposes, the module also provides the loggable representation SelfLocalizationHypotheses, which contains the states of all currently maintained internal pose hypotheses.
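The following sketch illustrates, under assumed field names (RobotPose2D is illustrative, not the actual declaration), what such a pose representation typically carries and how it relates global field coordinates to robot-relative ones:

```cpp
#include <cmath>

// Illustrative sketch of the information carried by a robot pose
// representation (field names are hypothetical).
struct RobotPose2D
{
  float x = 0.f;               // position on the field in mm
  float y = 0.f;
  float rotation = 0.f;        // heading in radians
  float covariance[3][3] = {}; // uncertainty of (x, y, rotation)
  float validity = 0.f;        // quality measure in [0, 1]
};

// Transform a point from global field coordinates into robot-relative
// coordinates, e.g. to compare a model element against a perception.
void fieldToRobot(const RobotPose2D& pose, float fieldX, float fieldY,
                  float& robotX, float& robotY)
{
  const float dx = fieldX - pose.x, dy = fieldY - pose.y;
  const float c = std::cos(pose.rotation), s = std::sin(pose.rotation);
  robotX = c * dx + s * dy;    // rotate the difference by -rotation
  robotY = -s * dx + c * dy;
}
```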

Overall, this combination of probabilistic methods enables a robust, precise, and efficient self-localization on the RoboCup field. In most situations, the deviation between the position estimate and the actual robot position is in the range of a few centimeters. The following image shows a comparison between the estimates (dashed circles) and the positions determined by an external video tracking system (full circles) during the RoboCup 2023 final against HTWK Robots (in blue).

Self-localization example

Perceptions Used for Self-Localization

Several years ago, B-Human used goals as a dominant feature for self-localization. When the field was smaller and the goal posts were painted yellow, they were easy to perceive from most positions and provided precise and valuable measurements for the pose estimation process. In particular, the sensor resetting part, i.e. the creation of alternative pose estimates in case of a delocalization4, was almost completely based on the perceived goal posts. In 2015, we still relied on this approach, using a detector for the white goals. However, as it turned out that this detector required too much computation time and did not work reliably in some environments (requiring considerable calibration effort), we decided to perform self-localization without goals, using complex field features instead5. That said, the new NAO's computing power and our recent progress in the application of deep neural networks might lead to new robust goal perceptions in the near future. Such perceptions could easily be integrated into the current implementation.

The currently used complex field features are the combination of the center circle with the center line as well as the combination of a penalty mark with the outer line of the penalty area. In addition, the self-localization still uses basic features such as field lines, their crossings, the penalty mark, and the center circle as measurements. All these field elements are distributed over the whole field and can be detected very reliably, providing a steady stream of measurements in most situations.

The complex field features are also used as measurements, but not as perceptions of relative landmarks. Instead, an artificial measurement of a global pose is generated, reducing the translational error in both dimensions as well as the rotational error at once. The field's symmetry, which actually yields two possible poses, is handled similarly to the sensor resetting described in the following section.
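A minimal sketch of how such a global pose measurement can be derived, assuming a simple Pose2f type and hypothetical helper functions: the robot's pose follows from composing the feature's known pose on the field with the inverse of its perceived relative pose, and the field's symmetry makes the point-mirrored pose a second, equally valid candidate.

```cpp
#include <cmath>

struct Pose2f { float x, y, rotation; };

// Compose two poses: apply b in the coordinate frame of a.
Pose2f compose(const Pose2f& a, const Pose2f& b)
{
  const float c = std::cos(a.rotation), s = std::sin(a.rotation);
  return {a.x + c * b.x - s * b.y, a.y + s * b.x + c * b.y, a.rotation + b.rotation};
}

// Invert a pose, i.e. the transformation back into its parent frame.
Pose2f inverse(const Pose2f& p)
{
  const float c = std::cos(p.rotation), s = std::sin(p.rotation);
  return {-(c * p.x + s * p.y), s * p.x - c * p.y, -p.rotation};
}

// A complex field feature has a known pose in global field coordinates
// and is perceived at a pose relative to the robot. Composing the former
// with the inverse of the latter yields the robot's global pose.
Pose2f robotPoseFromFeature(const Pose2f& featureOnField, const Pose2f& perceivedRelative)
{
  return compose(featureOnField, inverse(perceivedRelative));
}

// Due to the field's symmetry, the point-mirrored pose is equally
// plausible (rotation is left unnormalized here for brevity).
Pose2f mirrored(const Pose2f& p)
{
  return {-p.x, -p.y, p.rotation + 3.14159265f};
}
```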

To use any perception in the UKF's measurement step, it first needs to be associated with a field element in global field coordinates. For instance, given a perceived line and a particle's current pose estimate, the field line that is most plausible regarding length, angle, and distance is determined. In a second step, it is checked whether the difference between model and perception is small enough to consider the perception for the measurement step. For all kinds of perceptions, different thresholds exist, depending on the likelihood of false positives and the assumed precision of the perception modules. For this process, the module PerceptRegistrationProvider provides the representation PerceptRegistration, which contains a set of functions that are called by the SelfLocator. Furthermore, the ratio between the number of successfully assigned perceptions and their total number is considered a measure of reliability and is thus used for the particle filter's resampling step.
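The following sketch illustrates the idea of this registration step for lines (the names and the distance-only matching are simplifications; as described above, the actual module also compares length and angle). The perceived line is assumed to be already transformed into field coordinates using the particle's pose estimate:

```cpp
#include <cmath>
#include <limits>
#include <vector>

struct Line { float x1, y1, x2, y2; };

// Find the model line that best matches a perceived line (here by midpoint
// distance only) and accept the pairing only if the residual stays below a
// perception-specific threshold; otherwise the perception is discarded.
const Line* registerLine(const Line& perceived, const std::vector<Line>& model,
                         float maxDistance)
{
  const float px = (perceived.x1 + perceived.x2) * 0.5f;
  const float py = (perceived.y1 + perceived.y2) * 0.5f;
  const Line* best = nullptr;
  float bestDistance = std::numeric_limits<float>::max();
  for(const Line& m : model)
  {
    const float mx = (m.x1 + m.x2) * 0.5f, my = (m.y1 + m.y2) * 0.5f;
    const float d = std::hypot(px - mx, py - my);
    if(d < bestDistance)
    {
      bestDistance = d;
      best = &m;
    }
  }
  // Reject implausible pairings instead of feeding them to the UKF.
  return bestDistance <= maxDistance ? best : nullptr;
}
```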

Sensor Resetting based on Field Features

When using a particle filter on a computationally limited platform, only a few particles can be used, which cover the state space very sparsely. Therefore, to recover from a delocalization, it is a common approach to perform sensor resetting, i.e. to insert new particles based on recent measurements4. The complex field features provide exactly this information, as they combine multiple basic field elements in a way that allows a robot pose (in global field coordinates) to be derived directly, and are thus used for creating new particles. These particles are provided as the AlternativeRobotPoseHypothesis representation by the module AlternativeRobotPoseProvider.

As the field features can include false positives, for instance caused by robot parts overlapping parts of lines and thereby inducing a wrong constellation of elements, an additional filtering step is necessary. All robot poses that can be derived from recently observed field features are clustered, and only the largest cluster, which also needs to contain a minimum number of elements, is considered as a candidate for a new sample. This candidate is only inserted into the sample set if it differs significantly from the current robot pose estimate.
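A minimal sketch of such a clustering step (the names and the simple seed-based clustering are assumptions, not the actual AlternativeRobotPoseProvider code): each candidate pose serves as a cluster seed, and the mean of the largest cluster is returned only if enough poses agree.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

struct Pose2f { float x, y, rotation; };

// Cluster the robot poses derived from recent field features and return
// the largest cluster's mean, but only if enough observations agree
// (false positives rarely produce consistent poses over time).
bool largestClusterMean(const std::vector<Pose2f>& candidates,
                        float maxDistance, std::size_t minClusterSize,
                        Pose2f& result)
{
  std::size_t bestSize = 0;
  for(const Pose2f& center : candidates) // use each pose as a cluster seed
  {
    std::size_t size = 0;
    float sx = 0.f, sy = 0.f, sSin = 0.f, sCos = 0.f;
    for(const Pose2f& p : candidates)
      if(std::hypot(p.x - center.x, p.y - center.y) <= maxDistance)
      {
        ++size;
        sx += p.x;
        sy += p.y;
        sSin += std::sin(p.rotation); // circular mean for the rotation
        sCos += std::cos(p.rotation);
      }
    if(size > bestSize)
    {
      bestSize = size;
      result = {sx / size, sy / size, std::atan2(sSin, sCos)};
    }
  }
  return bestSize >= minClusterSize;
}
```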

To resolve the field's symmetry when handling the field features, we use the constraints given by the rules, i.e. all robots are in their own half (plus the opponent's half of the center circle) when the game state switches to playing or when they return from a penalty, as well as the assumption that the alternative that is more compatible with the previous robot pose is more likely than the other one. An example is depicted in the figure below. This assumption can be made because no teleportation happens in real games. Instead, most localization errors result from situations in which robots slightly lose track of their original position and accumulate a significant error by repeatedly associating new perceptions with the wrong parts of the field model.

Pose alternatives

Whenever a new alternative pose is inserted into the sample set, it must be decided whether it can be used directly or whether it needs to be mirrored. This decision depends on the current robot pose. The figure above visualizes the possible decisions for one example robot pose (located on the right side of the own half). The stars denote example alternative poses. A red line means: "if a new pose is computed here, it must be mirrored before its insertion into the sample set". Consequently, black lines mean that the pose can be used directly. As one can see, the current formula prefers the direction towards the opponent goal over the current position.
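The decision can be sketched as follows, in a deliberately simplified, distance-only form (resolveSymmetry is a hypothetical helper; as described above, the actual formula also takes the heading into account, which is why it prefers the direction towards the opponent goal):

```cpp
#include <cmath>

struct Pose2f { float x, y, rotation; };

// Compare an alternative pose and its point-mirrored twin to the current
// estimate and keep the closer one; real games contain no teleportation,
// so the more compatible candidate is the more likely one.
Pose2f resolveSymmetry(const Pose2f& alternative, const Pose2f& current)
{
  const Pose2f mirroredAlt{-alternative.x, -alternative.y,
                           alternative.rotation + 3.14159265f};
  const float dDirect = std::hypot(alternative.x - current.x,
                                   alternative.y - current.y);
  const float dMirrored = std::hypot(mirroredAlt.x - current.x,
                                     mirroredAlt.y - current.y);
  return dDirect <= dMirrored ? alternative : mirroredAlt;
}
```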

Mirrored Pose Estimates

As mentioned above, the self-localization already includes some mechanisms that keep a robot playing in the right direction. However, it might still happen that a single robot loses track of its playing direction. In the past, the B-Human software included a component for detecting such situations by comparing a robot's ball observations with those of its teammates. If a constellation indicated that the robot had probably been flipped (multiple teammates recently saw the ball at a position that is point-mirrored to the own ball state estimate), all particles inside the SelfLocator were point-mirrored around the center circle to set the robot back on track.

Such a feature does not exist anymore in the B-Human codebase for multiple reasons:

  • Having a component that is capable of turning the robot's localization in the opposite direction carries a risk: In case of false ball perceptions - either by the robot itself or by multiple teammates at the same time - a previously well-localized robot might be flipped. This happened in the past, raising the question of whether such a feature is actually helpful. One could reduce this risk by setting strict parameters, but this in turn might prevent the mechanism from working correctly in the desired situations.
  • During the whole RoboCup 2019 competition, we did not observe any robot that required such a correction of its pose estimate. Thus, one can assume that the other self-localization components are currently precise and reliable enough to render such a feature useless.
  • Last but not least, in recent years, the Standard Platform League significantly reduced the number of messages that robots are allowed to send. Consequently, each robot receives less information about balls seen by other robots. Furthermore, B-Human's current communication scheme6 prevents robots from sending information that can be assumed to be already known, i.e. if one teammate recently communicated a ball position, other teammates will not send this ball position again (unless there are other reasons to communicate). Overall, the current setup renders our previous communication-based approach for localization useless.

For RoboCup 2023, we deleted our implementation.


  1. Sebastian Thrun, Wolfram Burgard, and Dieter Fox: Probabilistic Robotics. MIT Press, Cambridge, 2005. 

  2. Simon J. Julier, Jeffrey K. Uhlmann, and Hugh F. Durrant-Whyte: A New Approach for Filtering Nonlinear Systems. In Proceedings of the American Control Conference, volume 3, pages 1628–1632, 1995. 

  3. Dieter Fox, Wolfram Burgard, Frank Dellaert, and Sebastian Thrun: Monte-Carlo Localization: Efficient Position Estimation for Mobile Robots. In Proceedings of the Sixteenth National Conference on Artificial Intelligence, pages 343–349, Orlando, FL, USA, 1999. 

  4. Scott Lenser and Manuela Veloso: Sensor Resetting Localization for Poorly Modelled Mobile Robots. In Proceedings of the 2000 IEEE International Conference on Robotics and Automation (ICRA 2000), volume 2, pages 1225–1232, San Francisco, CA, USA, 2000. 

  5. Thomas Röfer, Tim Laue, and Jesse Richter-Klug: B-Human 2016 – Robust approaches for perception and state estimation under more natural conditions. In Sven Behnke, Raymond Sheh, Sanem Sarıel, and Daniel D. Lee, editors, RoboCup 2016: Robot World Cup XX, volume 9776 of Lecture Notes in Artificial Intelligence, pages 503–514. Springer, 2017. 

  6. Thomas Röfer, Tim Laue, Arne Hasselbring, Jo Lienhoop, Yannik Meinken, and Philip Reichenberg: B-Human 2022 – More Team Play with Less Communication. In Amy Eguchi, Nuno Lau, Maike Paetzel-Prüsmann, and Thanapat Wanichanon, editors, RoboCup 2022: Robot World Cup XXV, volume 13561 of Lecture Notes in Artificial Intelligence, pages 287–299. Springer, 2023. 

