Game Log Event Labeling Technical Challenge
Table of Contents

- 1. Goal of the Technical Challenge
- 2. Event Types
- 3. Procedure
- 4. Scoring
- 5. Software Tooling
- 6. File Format
1 Goal of the Technical Challenge
Label an SSL game log file for specific event types (with associated metadata). Being able to detect important game events can be useful for the decision-making algorithms of the robots. Additionally, in future years, being able to automatically label events with associated metadata may lead to more explainable auto-ref decisions and potentially automated commentary on a game.
2 Event Types
There are two event types: instantaneous and duration. Instantaneous events have a label associated with a specific time in the log. Duration events have a label associated with a range of time in the log.
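As a rough mental model, the two label shapes might be represented as follows (field names here are illustrative, not the official protobuf schema; see the File Format section for the real formats):

```python
from dataclasses import dataclass, field

@dataclass
class InstantaneousLabel:
    frame: int        # index of the log frame this label applies to
    state: str        # per-frame state, e.g. dribbling or not
    metadata: dict = field(default_factory=dict)  # robot ID, team, ...

@dataclass
class DurationLabel:
    start_time: float  # seconds into the log when the event begins
    end_time: float    # seconds into the log when the event ends
    metadata: dict = field(default_factory=dict)  # success flag, robot IDs, ...
```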
2.1 Dribbling
Detect when dribbling is occurring. This is an instantaneous event. Dribbling is defined as one robot touching the ball with its dribbler. The decision is based on the same criteria used in the excessive dribbling rule.
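The challenge does not prescribe how to detect this. One naive geometric sketch, assuming per-frame ball and robot poses are available from the pre-processed log (the contact tolerance and front-arc angle are illustrative guesses, not the rule's actual criteria):

```python
import math

BALL_RADIUS = 0.0215   # m, standard SSL golf ball
ROBOT_RADIUS = 0.09    # m, maximum SSL robot radius
CONTACT_SLACK = 0.005  # m, hypothetical tolerance for "touching"

def is_dribbling(ball_pos, robot_pos, robot_angle):
    """Naive per-frame check: is the ball touching the robot's front?

    ball_pos and robot_pos are (x, y) in meters; robot_angle is the
    robot's heading in radians. A real detector should mirror the
    excessive-dribbling rule's criteria more faithfully.
    """
    dx, dy = ball_pos[0] - robot_pos[0], ball_pos[1] - robot_pos[1]
    touching = math.hypot(dx, dy) <= ROBOT_RADIUS + BALL_RADIUS + CONTACT_SLACK
    # The dribbler sits on the robot's front, so require the ball to be
    # within a forward-facing arc (the 45-degree half-angle is a guess).
    bearing = math.atan2(dy, dx) - robot_angle
    bearing = (bearing + math.pi) % (2 * math.pi) - math.pi
    return touching and abs(bearing) < math.radians(45)
```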
The following table summarizes the metadata associated with this event.
| Field | Description |
|---|---|
| IsDribbling? | Is a robot dribbling in this frame? |
| Robot ID | Which robot is dribbling |
| Robot Team | Which team is dribbling |
2.2 Ball Possession
Detect which robot/team is in control of the ball (if any). This is an instantaneous event.

The following table summarizes the metadata associated with this event.
| Field | Description |
|---|---|
| State | One of the following values: yellow-posses, blue-posses, none |
| Robot ID | Which robot is in possession (ignored if the none state is chosen) |
The state refers to who is in control of the ball: either yellow, blue, or no team (none). The none state applies both when the ball is free on the field and when the ball is under contention from opposing team robots.
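One hypothetical heuristic for assigning these states, purely as an illustration (the proximity radius is an assumption, not part of the challenge definition):

```python
import math

def possession_state(ball_pos, robots, radius=0.15):
    """Hypothetical heuristic: a team is in possession iff only that
    team has robots within `radius` meters of the ball.

    `robots` is an iterable of (team, robot_id, (x, y)) tuples; the
    radius is an illustrative assumption, not a challenge rule.
    Returns (state, robot_id), with robot_id = None for the none state.
    """
    near = [(math.dist(pos, ball_pos), team, rid)
            for team, rid, pos in robots
            if math.dist(pos, ball_pos) <= radius]
    teams = {team for _, team, _ in near}
    if len(teams) != 1:
        return "none", None  # ball is loose, or contested by both teams
    dist, team, rid = min(near)  # nearest robot of the possessing team
    return team, rid
```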
2.3 Passing
Detect pass attempts. This is a duration event. The duration starts when the ball leaves the passer and ends when the pass either reaches the receiver or fails (out of bounds, intercepted, deflected).
| Field | Description |
|---|---|
| Start Time | When did the pass start |
| End Time | When did the pass end |
| Successful? | Did the pass reach a receiver on the same team? |
| Passer ID | Which robot passed the ball |
| Passer Team | Which team is passing |
| Receiver ID | Which robot was the intended recipient |
2.4 Goal Shots
Detect attempts at scoring a goal. This is a duration event. The duration starts when the ball leaves the shooter and ends when the ball either enters the goal or is stopped/deflected by a robot.
| Field | Description |
|---|---|
| Start Time | When did the shot start |
| End Time | When did the shot end |
| Successful? | Did the shot enter the goal? |
| Shooter ID | Which robot took the shot? |
| Shooter Team | Which team did the shooter belong to |
3 Procedure
During the competition, two game logs will be selected. These game logs will be pre-processed by the tools described in the software section below. Participants will receive a copy of the pre-processed game logs. A ground truth label file for these logs will be created by the TC. The TC will attempt to use game logs from games where none of the participants in this challenge played. This is to prevent an advantage for teams that may have saved additional information in their own log files. However, as this cannot be guaranteed, we will require all participants to label 2 different game logs, so any participant in this challenge will have competed in at most one of them.
Participants will produce their own set of labels for these game logs using the provided pre-processed data files. Teams will submit their label files to the TC for scoring. The submitted label files will be scored using the released scoring program. Participants will be ranked by score, with higher scores being better.

Participants can use any automated approach to label the data. In other words, you must label the data with an algorithm that does not require human interaction. You cannot hand-label the data, you cannot ask for human assistance, and you cannot correct the labels produced by your software.
You may use any algorithm for labeling (rule-based, deep learning, etc.). You can also use a non-causal labeling algorithm (i.e. your labeling can look at future events to help label the current event). Non-causal labeling may be useful in ambiguous situations.
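For instance, a simple non-causal post-processing step might replace each frame's label with a majority vote over a window that extends into both past and future frames; a minimal sketch:

```python
from collections import Counter

def smooth_labels(labels, window=5):
    """Non-causal majority-vote smoothing of per-frame labels.

    Each frame's label is replaced by the most common label within
    `window` frames of it in either direction; looking ahead at future
    frames helps suppress single-frame flicker in ambiguous moments.
    """
    smoothed = []
    for i in range(len(labels)):
        lo, hi = max(0, i - window), min(len(labels), i + window + 1)
        smoothed.append(Counter(labels[lo:hi]).most_common(1)[0][0])
    return smoothed
```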
Participants should label all frames with the appropriate instantaneous event types and metadata. To ease scoring, the number of duration events in the ground truth data will be provided to participants before they produce their labels. This avoids having to do sequence alignment of the ground truth and participant label files.
3.1 File Formats and Utilities
Figure 1 below shows a flow chart of the various utilities and file formats used in this challenge. The colors of the boxes indicate who is responsible for running each utility or producing each file. A PDF version of the figure, with clickable links to documentation for the different file formats and software utilities, is also available.

Figure 1: File Formats and Utilities (see the clickable PDF version for links to documentation).
4 Scoring

Scoring differs by event type.
4.1 Instantaneous Event Scoring
Each instantaneous event has a state label (e.g. dribbling or not). Each frame whose state matches the ground truth label earns +1 point. Each matching piece of metadata earns an additional +0.5 points.
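As a concrete reading of this rule, a minimal scoring sketch (assuming per-frame labels are (state, metadata) pairs; the official scoring program may differ in details):

```python
def score_instantaneous(truth, predicted):
    """Per-frame scoring: +1 for each frame whose state matches the
    ground truth, +0.5 for each matching metadata field.

    truth and predicted are equal-length lists of (state, metadata)
    pairs, where metadata maps field name -> value.
    """
    score = 0.0
    for (t_state, t_meta), (p_state, p_meta) in zip(truth, predicted):
        if t_state == p_state:
            score += 1.0
        score += 0.5 * sum(1 for k, v in t_meta.items() if p_meta.get(k) == v)
    return score
```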
4.2 Duration Event Scoring
The main score for duration events will be calculated using the Intersection over Union (IoU) of the start and end times. The IoU is equal to the area of overlap divided by the area of the union.
\begin{equation}
IoU = \frac{\text{area of overlap}}{\text{area of union}}
\end{equation}

The IoU is guaranteed to be \(\le 1\). The IoU values for all duration events will be summed. Additionally, each correct piece of metadata for an event will add +0.5 points to the score.
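For two time intervals this reduces to a short computation; a minimal sketch (interval endpoints in seconds):

```python
def interval_iou(truth, predicted):
    """IoU of two (start, end) time intervals; 0 when they are disjoint."""
    overlap = max(0.0, min(truth[1], predicted[1]) - max(truth[0], predicted[0]))
    union = (truth[1] - truth[0]) + (predicted[1] - predicted[0]) - overlap
    return overlap / union if union > 0 else 0.0
```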
4.3 Final Score
Each category will be ranked individually, so that each label category is equally important for the overall win. Based on the ranking in a category, each participant receives a number of overall points equal to \(\text{number of participants} - \text{placement}\), so that first place has the most points and last place has the least. For example, with 7 participants, first place would get 6 points, second place 5, and so on.
The points from each category will then be totaled to determine an overall winner. This way the accuracy on each event type is equally important, as all categories contribute evenly to the final score.
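A minimal sketch of this rank-to-points conversion (participant names are placeholders):

```python
def category_points(ranking):
    """Converts a ranked list of participants (best score first) into
    per-category points: number of participants minus placement.
    """
    n = len(ranking)
    return {name: n - place for place, name in enumerate(ranking, start=1)}

# Example with 7 participants: first place earns 6 points, last earns 0.
points = category_points(["A", "B", "C", "D", "E", "F", "G"])
assert points["A"] == 6 and points["G"] == 0
```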
5 Software Tooling

All software tooling is located in a separate repo: https://github.com/RoboCup-SSL/ssl-rust-tools

Below is an overview of the different parts, but refer to the repository for more details on usage and file formats.
5.1 Label Preprocessing

We have provided a simple tool that will process an SSL game log file. This will strip unnecessary messages as well as group messages into discrete frames that can be labeled.
5.2 Labeler

We have provided a simple UI for creating label files. This can be used to manually label training data for your algorithms. This will also be the tool used by the TC to produce the ground truth labels.
5.3 Scoring Program

We have provided a scoring tool. It will take in two label files, one ground truth and one to be scored, and produce a score for each category described above.
6 File Format

There are two formats: a data file format and a label file format.
The data file is produced by the log preprocessor tool. The label file contains the list of labels for each of the frames in the data file. Both are protobuf files and are designed to support fast random access.
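As a generic illustration of the access pattern involved (this is not the actual on-disk layout; the real data and label formats, including any headers or random-access indices, are documented in the repo), reading a stream of length-prefixed protobuf messages might look like:

```python
import struct

def read_messages(path):
    """Illustrative reader for length-prefixed protobuf messages.

    Assumes each message is preceded by a 4-byte big-endian length;
    this framing is a placeholder, not the documented file layout.
    """
    with open(path, "rb") as f:
        while True:
            header = f.read(4)
            if len(header) < 4:
                break  # end of file
            (size,) = struct.unpack(">I", header)
            yield f.read(size)  # raw bytes for a generated protobuf class to parse
```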
Please refer to the software repo for more information: https://github.com/RoboCup-SSL/ssl-rust-tools