
Change Log

2024

Getting Started

macOS

Make/macOS/generate now creates Xcode projects with reproducible GUIDs. The tool Util/SimRobot/Make/macOS/pbxgen replaces the random GUIDs created by CMake with ones derived from the project structure. As a result, Xcode no longer creates a new set of precompiled headers whenever the project file is regenerated. In addition, Make/macOS/generate is much faster now.
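
The core idea is that each GUID is derived deterministically from a stable property of the project item, such as its path within the project tree, instead of a random source. A minimal sketch of this approach, assuming an FNV-1a hash; pbxgen's actual derivation may differ:

```cpp
#include <cstdint>
#include <cstdio>
#include <string>

// Derive a 96-bit Xcode-style GUID (24 hex digits) from a stable string,
// e.g. the path of a file within the project (sketch; details assumed).
std::string stableGUID(const std::string& itemPath)
{
  // FNV-1a over the path, run with three different seeds to fill 96 bits.
  auto fnv1a = [](const std::string& s, uint32_t seed) -> uint32_t
  {
    uint32_t h = seed;
    for(char c : s)
    {
      h ^= static_cast<uint8_t>(c);
      h *= 16777619u;
    }
    return h;
  };
  char buf[25];
  std::snprintf(buf, sizeof(buf), "%08X%08X%08X",
                fnv1a(itemPath, 2166136261u),
                fnv1a(itemPath, 0x9e3779b9u),
                fnv1a(itemPath, 0x85ebca6bu));
  return buf;
}
```

Because the same path always hashes to the same GUID, regenerating the project leaves all GUIDs unchanged, so Xcode's caches stay valid.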

Architecture

Team Communication

We improved our communication and decision-making to determine when robots should send messages to their teammates. Previously, once our budget limit implementation allowed one message to be sent, all robots would send at once. This caused a deficit of up to six messages and therefore prevented any further messages from being sent for the next six seconds. As a result, multiple robots often wanted to play the ball and were unable to resolve this situation for a long period of time. We added a restriction that prioritizes messages from robots playing the ball. In addition, robots further away from the ball now delay their messages. With those adjustments, we no longer overshoot our own budget and allow for smoother communication.
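
The resulting send decision can be pictured roughly as follows; this is a minimal sketch with assumed names and scaling, not the actual implementation, which is spread across our communication modules:

```cpp
// Sketch of the send decision described above (all names assumed).
// Robots playing the ball send first; others wait longer the further
// they are from the ball, and nobody sends without remaining budget.
bool maySendMessage(bool isPlayingBall, float distanceToBall,
                    int remainingBudget, float secondsSinceLastEvent)
{
  if(remainingBudget <= 0)  // never overshoot the message budget
    return false;
  if(isPlayingBall)         // ball-playing robots have priority
    return true;
  // Other robots delay proportionally to their distance to the ball
  // (in mm), so they do not all send in the same instant.
  const float delay = 0.5f + distanceToBall / 1000.f; // assumed scaling
  return secondsSinceLastEvent >= delay;
}
```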

Perception

Ball and Penalty Mark Perceptor

We combined the ball perceptor network and the penalty mark classifier network into a unified model (see this section).

CNN-based Goal Posts Detection

We developed a method that finds candidate regions for goal posts and then uses a convolutional neural network to either confirm or reject those candidates (see this section).

Improved Line Intersection Classification

The classification of intersections of field lines was split into two modules, and the classification network was retrained. In addition to the inputs of our previous approach, the network now receives the distance to the intersection candidate as an additional input (see this section).

Modeling

Whistle Detection

B-Human's WhistleRecognizer was replaced by the WhistleDetector developed by the Nao Devils. We adapted their module to our code base and modified it to better work with our team-wide acceptance of the whistle. We also added a minimum confidence required for both individual detection methods (physical model and neural network) to accept a whistle.
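
Conceptually, the acceptance condition now looks like the following sketch; the thresholds and names are assumed, and this is only one plausible reading of how the two confidences are combined:

```cpp
// Sketch: a whistle is only accepted if each detection method reaches
// its own minimum confidence (names and thresholds assumed).
bool acceptWhistle(float physicalModelConfidence, float networkConfidence)
{
  const float minPhysicalConfidence = 0.5f; // assumed value
  const float minNetworkConfidence = 0.5f;  // assumed value
  return physicalModelConfidence >= minPhysicalConfidence
      && networkConfidence >= minNetworkConfidence;
}
```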

Behavior Control

CABSL 2

The per-robot behavior control (SkillBehaviorControl) consists of high-level actions and skills. Before 2024, the high-level actions were implemented as so-called options in the C-based Agent Behavior Specification Language (CABSL). Skills were written in a specialized skill framework, either as plain C++ code or also as CABSL options. This is now consolidated into a new version of CABSL, implementing everything as options. In comparison to CABSL 1, CABSL 2 adds the ability to split the behavior into multiple compilation units, it uses a different syntax for passing arguments, and it adds constant definitions as well as state variables. The standalone version of CABSL 2 is described here. However, the version integrated into the B-Human framework differs in a few aspects from the standalone version:

  • Rather than using the iostream framework from the C++ standard library for streaming arguments and reading configuration files, the B-Human streaming framework is used.
  • In addition to referring to options by their names as strings, the B-Human version also creates an enumeration type Option within the behavior. It overloads methods such as execute and select_option to also accept this type.
  • Per-option definitions can be read and changed using the modify mechanism.
  • The SkillBehaviorControl is a module, and its options inherit access to all representations it requires and can set all representations it provides.
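
For orientation, an option is a state machine written with a set of macros. The following schematic sketch uses CABSL 1-style syntax and hypothetical conditions and sub-options; it does not compile on its own, and CABSL 2 changes the argument-passing syntax and adds constants and state variables, as described in the linked documentation:

```cpp
// Schematic CABSL option (CABSL 1-style syntax; CABSL 2 differs in details).
option(KickAtGoal)
{
  initial_state(walkToBall)
  {
    transition
    {
      if(nearBall())   // hypothetical condition
        goto kick;
    }
    action
    {
      WalkToBall();    // hypothetical sub-option/skill
    }
  }

  state(kick)
  {
    transition
    {
      if(kickDone())   // hypothetical condition
        goto walkToBall;
    }
    action
    {
      Kick();          // hypothetical sub-option/skill
    }
  }
}
```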

Handling the Standby State

For detecting the referee signal in the Standby state, we use the same approach as last year (see this section), but we extract a smaller patch from the top of the upper camera image. The patch is smaller because the referee appears much smaller in the image when robots have to look at him/her from the opposing touchline. The robots look at the assumed position of the referee such that he/she appears in the upper part of the upper camera image, which minimizes blooming effects from the ceiling lights.

As misdetections cannot be ruled out, a single robot starts to enter the pitch after the signal was detected. This gives the referee enough time to penalize that robot before the rest of the team walks in, which it would not do if the pawn sacrifice were penalized. The robot walks in from the side where the referee stands, which means that even if it were removed, the same number of robots could still look at the referee afterwards. There is a maximum of two pawn sacrifices before the robots just wait for the GameController to announce the Ready state.

If the team has missed the referee signal and receives the delayed Ready signal from the GameController, the robots enter the pitch at maximum speed. Most of them will still reach their kick-off positions within the time window of 15 seconds.

Indirect Kick

To implement the indirect kick rule, each robot shares the timestamp of the last kick it performed with its teammates. If any of the timestamps received from teammates is more recent than the last set play, a robot is allowed to kick at the goal. The first robot that kicks the ball after a set play sends this information as a priority message (see B-Human Team Report and Code Release 2022 Section 3.2.1) to ensure that its teammates know that they can score. To preserve the message budget, further kicks are only announced as a byproduct of the regular team communication.
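
The check itself is simple; a minimal sketch with assumed units (timestamps in milliseconds) and hypothetical names:

```cpp
#include <vector>

// Sketch of the indirect kick check (names assumed): a robot may kick
// at the goal if any teammate kicked after the last set play started.
bool mayKickAtGoal(const std::vector<unsigned>& teammateLastKickTimes,
                   unsigned lastSetPlayTime)
{
  for(unsigned kickTime : teammateLastKickTimes)
    if(kickTime > lastSetPlayTime)
      return true;
  return false;
}
```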

If a robot is not allowed to kick at the goal, it will pass to a teammate or dribble instead, just as it would do if the way to the goal is blocked or the goal is too far away.

Calibration

The calibration procedure got a handful of fixes. It can now be aborted at any time and restarted by simply bringing the robot back into the unstiff state with the head sensors. For two full calibrations back to back, a restart of the software is still necessary.

Motion Control

Walk Stability

We made smaller adjustments and improvements to the already existing balancing features:

  • The step adjustment velocity, which determines how fast the feet are allowed to move to the corrected position, is drastically reduced once the current step time exceeds the planned step duration (sketched after this list).
  • The side hip shift was improved with more suitable parameters.
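
The first point translates to a simple rule; a sketch with assumed names and an assumed reduction factor:

```cpp
// Sketch: drastically reduce the allowed foot adjustment velocity once
// the step runs longer than planned (names and factor assumed).
float adjustmentVelocity(float baseVelocity, float stepTime, float stepDuration)
{
  const float reductionFactor = 0.2f; // assumed value
  return stepTime > stepDuration ? baseVelocity * reductionFactor
                                 : baseVelocity;
}
```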

Walk Planning

Our step planning modules WalkToBallEngine and WalkToPoseEngine and the step planning utilities in WalkUtilities received multiple bug fixes to reduce the occurrence of steps that just let the robot stand still and waste up to 500 ms (two walking steps) of walking time.

In-Walk Kicks

We improved the stability, accuracy, and speed of execution of the in-walk kicks:

  • The decision whether a kick will be unstable, based on the previous step size, was improved, and a WalkDelayPhase is used to improve kick stability.
  • In combination with the more stable walk, the execution time of the kicks could be reduced by increasing the deviation thresholds.
  • The accuracy was improved by updating the kick parameters during the first few frames after the kick has started.
  • We added dynamic diagonal kicks. Those are an interpolation between the forward and side kicks and are dynamically calculated based on the ball center position and the closest point of the edge of the kicking sole (see the sketch after this list). This allows for fast play of the ball near other robots.
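
The diagonal kicks can be pictured as a weighted blend of the two existing kick types. A sketch with assumed names and an assumed mapping from the geometry to the interpolation factor:

```cpp
#include <cmath>

// Sketch (names assumed): blend forward and side kick parameters based
// on the direction from the kicking sole's edge to the ball center.
struct KickParams { float forwardComponent; float sideComponent; };

KickParams diagonalKick(float ballX, float ballY,
                        float soleEdgeX, float soleEdgeY)
{
  // Angle from the closest point of the sole's edge to the ball center,
  // mapped to an interpolation factor in [0, 1].
  const float angle = std::atan2(ballY - soleEdgeY, ballX - soleEdgeX);
  const float t = std::fmin(1.f, std::fabs(angle) / (3.14159265f / 2.f));
  return {1.f - t, t}; // t = 0: pure forward kick, t = 1: pure side kick
}
```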

Intercepting

We changed our intercepting behavior for the dynamic ball handling challenge in 2023. This version was refined and is now used in normal games. It replaces the simple intercepting by side steps that were requested by the behavior. The new version plans directly in the Motion process. It places the robot in the middle of the ball path and optimizes the translational step size as well as the rotational part. This automatically creates large side steps when the swing foot is the leg closer to the ball path, and large rotational steps otherwise. If there is not much time left, a step is calculated that places only one sole on the ball path. Also, a backward walking step is requested to avoid walking into the ball and to stop it more smoothly.

Additionally, we added the possibility to overwrite the currently executed walking step to execute an intercepting step for a rolling ball. Previously, if the robot had just started a walk step with the swing foot on the same side where the ball would cross the robot's y-axis, it needed to wait two full steps before it could start intercepting, wasting up to 500 ms. Now the robot can start intercepting even if the current step is close to being finished.
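
The core geometric computation is where and when the ball crosses the robot's y-axis. A minimal sketch in robot-relative coordinates (assumed convention: x forward, y left; names assumed):

```cpp
#include <optional>

// Sketch: predict where a rolling ball crosses the robot's y-axis
// (x = 0 in robot-relative coordinates) and how soon.
struct Interception { float y; float time; };

std::optional<Interception> ballCrossing(float ballX, float ballY,
                                         float velX, float velY)
{
  if(velX >= 0.f)                    // ball is not rolling towards the robot
    return std::nullopt;
  const float time = -ballX / velX;  // seconds until the ball reaches x = 0
  return Interception{ballY + velY * time, time};
}
```

Depending on the remaining time, either a full step into the middle of the ball path or a step that places only one sole on it is then requested.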

Goalie Jump

We changed the decision for when the goalie can jump while walking. This reduces the chance of a buffered jump when the robot still needs to finish its walking steps. Also, more intercepting by walking is used, which allows the goalie to be more active with the ball afterwards.

Technical Challenges

B-Human participated in the only technical challenge that was held this year.

2023

Getting Started

Support for Linux on ARM

On the PC, the B-Human system can now also be compiled and used on Linux for ARM, i.e. the architecture aarch64 is fully supported. Some compromises are made similar to the support for arm64 on macOS, i.e. our library CompiledNN is replaced by the ONNXruntime and x86 intrinsics are mapped to native instructions by the header-only library sse2neon.

Architecture

Message Queues

Message queues are B-Human's means to transmit and store sequences of typed data (basically bags). They were reimplemented with a more compact interface, which is now based on iterators rather than message numbers. Some specialized functions were moved to the only places in our code that actually use them to keep the implementation of the message queue compact.
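
With the iterator-based interface, a queue is consumed with a range-based loop over typed messages rather than by indexing message numbers. A self-contained sketch with a hypothetical stand-in type, not the actual interface:

```cpp
#include <cstdint>
#include <vector>

// Hypothetical stand-in for a message queue entry: a type id plus payload.
struct Message
{
  uint8_t id;                 // message type, e.g. an idBallPercept constant
  std::vector<char> payload;  // the serialized representation
};

// Iterator-based consumption: no message numbers, just a range-based loop.
void process(const std::vector<Message>& queue)
{
  for(const Message& message : queue)
  {
    // dispatch on message.id and deserialize message.payload here
  }
}
```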

Log Player

The log player was rewritten as well. Log files mainly consist of a streamed message queue and some additional information. They can be rather big (> 3 GB for a typical half of a game). The new log player opens such a file as a memory-mapped file to avoid reading the whole log into memory at once. Since our code uses some information that would usually require reading through the entire log to compute, some indices are created when a log file is first opened and are then appended to the log file. When the log file is opened again, the indices are read from the file instead of being recomputed. Thus, instead of reading more than 3 GB, less than 2 MB have to be read when opening a log file, which significantly speeds up the process.
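
Memory-mapping means the operating system pages log data in on demand instead of the program reading the whole file up front. A POSIX sketch of the opening step (shortened error handling; the actual log player is more involved):

```cpp
#include <cstddef>
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

// Map a log file into memory; pages are loaded lazily by the OS.
const char* mapLogFile(const char* path, size_t& size)
{
  const int fd = open(path, O_RDONLY);
  if(fd < 0)
    return nullptr;
  struct stat st;
  if(fstat(fd, &st) < 0)
  {
    close(fd);
    return nullptr;
  }
  size = static_cast<size_t>(st.st_size);
  void* data = mmap(nullptr, size, PROT_READ, MAP_PRIVATE, fd, 0);
  close(fd); // the mapping stays valid after closing the descriptor
  return data == MAP_FAILED ? nullptr : static_cast<const char*>(data);
}
```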

Perception

Robot Detection

We trained a new CNN-based robot detector, which now uses color images rather than grayscale ones. A brief description can be found here.

CNN-Based Preprocessing on Lower Camera

A CNN predicts heatmaps for the ball, obstacles, and the penalty mark on the lower camera directly from 320x240 YUYV camera images, leaving field line detection as the only scanline-based method on the lower camera.
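
Downstream, detections are typically extracted from such heatmaps by finding local maxima above a threshold. A minimal sketch (assumed threshold and row-major layout; the real postprocessing may differ):

```cpp
#include <utility>
#include <vector>

// Sketch: extract peak positions from a row-major heatmap (values in [0, 1]).
std::vector<std::pair<int, int>> peaks(const std::vector<float>& map,
                                       int width, int height,
                                       float threshold = 0.5f) // assumed
{
  std::vector<std::pair<int, int>> result;
  for(int y = 1; y < height - 1; ++y)
    for(int x = 1; x < width - 1; ++x)
    {
      const float v = map[y * width + x];
      if(v >= threshold
         && v >= map[y * width + x - 1] && v >= map[y * width + x + 1]
         && v >= map[(y - 1) * width + x] && v >= map[(y + 1) * width + x])
        result.emplace_back(x, y);
    }
  return result;
}
```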

Behavior Control

To find the ball more quickly whenever its position is unknown, a new ball search was implemented.

Sensing

Torso Orientation Estimation

The estimation of the robot's torso orientation, which is implemented in the module InertialDataProvider, was improved to better handle rough walking. Rough walking often caused the estimate to deviate a lot, up to a point where it resulted in worse walking and worse world modeling. Especially the walking then started to stumble multiple times in a row, which was no longer observed at the RoboCup competition.

Motion Control

Walking

The walking was improved to better handle worn-out robots, but also large diagonal and sideways walking steps, which is further described in Walking.

Also, the module FootSoleRotationCalibration and its calibration process became obsolete. It originally corrected mistakes in the torso orientation estimation; such a correction is no longer necessary.

Technical Challenges

B-Human participated in all three technical challenges, more on which you can find here.

Tools

SimRobot

Our robotics simulator SimRobot got a graphical overhaul, in particular on macOS (see screenshot in this section). The support for floating windows was removed.

In addition, the library that connects SimRobot to B-Human's robot code was improved in several ways:

Data Views

Data views allow displaying and editing data that is published by the macro MODIFY in our robot code. The appearance of these views was streamlined, and they now only exchange data with the robot code when they are visible. This allowed their creation to be changed completely. In the past, they had to be added manually using the console command vd. That command was removed. Instead, views corresponding to all data made available through MODIFY macros in the code are immediately available under the category data of a robot in SimRobot's scene graph, grouped by the threads they are defined in. As the hierarchy suggests, the data is now shown per thread, i.e. there are different views available, e.g. for a BallPercept from the thread Upper and one from the thread Lower.
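
From the robot code's perspective, nothing changes: data is exposed via MODIFY as before. A minimal usage sketch in which the name string and variable are illustrative:

```cpp
// Inside a module: expose a value so a view for it appears under
// data > <thread> > module:MyModule:myParameter in SimRobot's scene
// graph, where it can be displayed and edited (names illustrative).
MODIFY("module:MyModule:myParameter", myParameter);
```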

Commands per Thread

This per-thread handling is now also available for console commands. The prefix command for allows specifying the threads a certain command is meant for. For instance, for Upper Lower mr BallPercept BallPerceptor defines that in both the threads Upper and Lower a module called BallPerceptor should provide a representation named BallPercept. Under certain conditions that vary between different commands, it is still possible to omit the command prefix for (see this section).

Message ID Mapping

When SimRobot is connected to a real NAO through a network connection, information is exchanged through message queues. While it was already possible in the past to exchange data if SimRobot and the robot code did not have exactly the same version, such a connection was not possible if the number of message ids (i.e. the constants that encode the message types in queues) differed. Now, the number of data message ids is exchanged between SimRobot and the NAO when the connection is established, which allows mapping the relevant message ids correctly in most cases.
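
The mapping itself can be as simple as translating ids below the smaller of the two counts and dropping the rest. A sketch under the assumption that common ids agree and newer ids are appended at the end; the actual negotiation may be more nuanced:

```cpp
// Sketch: map a remote message id to a local one after both sides have
// exchanged their number of data message ids (simplifying assumptions
// stated above; -1 marks messages that should be skipped).
int mapMessageId(int remoteId, int numLocalIds, int numRemoteIds)
{
  const int numCommonIds = numLocalIds < numRemoteIds ? numLocalIds
                                                      : numRemoteIds;
  return remoteId < numCommonIds ? remoteId : -1;
}
```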

Deploy Dialog

In the past, B-Human had two different graphical frontends to deploy the code to the robots: On the one hand, a dialog could be displayed after having built for the NAO. It allowed specifying the parameters for the script deploy, which actually copies the data to the robot. On the other hand, the B-Human User Shell (bush) could be used to deploy the code to a whole team of robots. However, the layout of the bush made it difficult to increase the maximum jersey number that could be deployed without making the window overly wide. In addition, the software structure was rather complex. Therefore, a new deploy dialog was developed, which replaces the bush and is integrated into the build process for the NAO. It is described in this section.

Changes in 2022 and Before

Since 2023, we have used this public website for the documentation of our annual code release. Before that, we published a PDF document to highlight changes and give an introduction to our framework. You can find the most recent versions here.


Last update: October 14, 2024