Final Results

The STABILO Challenge has come to an end! We were impressed with the range of algorithms that you applied to this task of classifying 52 letters. Keeping in mind how difficult it is – even for humans – to distinguish characters like S, s, X and x, the best teams’ results are truly remarkable. The algorithms were evaluated on writing from 20 persons who did not contribute to the training set – ensuring that the generalization capabilities of the submissions were put to the test.

  • 1st place: TAL ML Team (72.02%)
    • Clever data augmentation techniques combined with an ensemble method secured the victory by a wide margin.
    • Affiliation: TAL Education Group (CN)
  • 2nd place: LME_SAGI (64.59%)
    • Affiliation: Pattern Recognition Lab at University of Erlangen-Nuremberg (GER)
  • 3rd place: ValueError shape mismatch (61.50%)
    • Affiliation: Human-Computer Interaction Group at University of Duisburg-Essen (GER)

Congratulations to the winning teams!

Thanks to everybody who participated in this challenge. We hope you enjoyed the process, even though the challenge ended after stage 2 without an opportunity to celebrate in Cancún. Cross your fingers for Ubicomp 2021 in Cancún – with a STABILO Challenge 2.0!


Stage 1 Results

The 44 participating teams (as of 2020/04/14) could make voluntary submissions to have their stage 1 algorithms evaluated. The teams that reached more than 80% accuracy are listed below:

  • EINNS: 83.71%
  • ValueError shape mismatch: 80.85%

Congratulations to the leading teams! Keep tuning your algorithms and good luck in stage 2!


Evaluation Metrics (Stage 2)

The winning teams are determined based on the stage 2 submissions.

  • The most important evaluation criterion is the accuracy obtained from testing the submitted executable file with data from unknown writers.
  • If the difference in accuracy between multiple teams is below 2 percentage points, the quality of the written report and the quality of the source code are taken into consideration.

What to submit after stage 2?

  • An executable of your classifier (more info below)
  • Source code necessary for preprocessing, training and testing your model (along with a common open source license of your choice). Please provide documentation.
  • A short written report (pdf, 2-6 pages) describing the algorithms that your team used.

Format of the Submitted Executable

All evaluations will be conducted on Windows 10 (64 bit), so please test your executable in that environment.

Requirements of the executable:

  • System: It’s an .exe file runnable on Windows 10 (64 bit)
  • Arguments: It takes the following command line arguments:
    • -p C:/path/to/folder/containing/split/csv/files/ (more info below)
    • -c C:/path/to/calibration/file/calibration.txt
  • Naming: Please call your .exe file TeamName_Stage2.exe.
  • Output: For every .csv file in the given folder, the path of the file and the predicted letter have to be printed. Paths and predictions are separated by *** and different files are separated by ~~~. The complete output has to be a single line. The order of the path***prediction tuples does not matter. Example: C:/validation/file1.csv***a~~~C:/validation/file2.csv***B
  • Speed: The executable loads your saved model only once and then runs inference on all the validation files in the given folder. If the evaluation takes an unusually long time, your submission might not be considered.
  • The executable has to work independently and off-line.
  • How-To: Please include a short, concise ReadMe file if necessary.
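The requirements above can be sketched as a minimal Python entry point. Only the argument names (-p, -c) and the output format come from this page; the prediction logic is a placeholder you would replace with your own model, and main() would be invoked by your packaged .exe:

```python
import argparse
import os


def format_output(predictions):
    """Join (path, letter) pairs into the required single-line format:
    path***prediction tuples separated by ~~~."""
    return "~~~".join(path + "***" + letter for path, letter in predictions)


def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("-p", dest="folder", required=True,
                        help="folder containing the split .csv files")
    parser.add_argument("-c", dest="calibration", required=True,
                        help="path to calibration.txt")
    args = parser.parse_args()

    # Load your saved model ONCE here, then run inference per file.
    predictions = []
    for name in sorted(os.listdir(args.folder)):
        if name.lower().endswith(".csv"):
            path = os.path.join(args.folder, name)
            letter = "a"  # placeholder: replace with your model's prediction
            predictions.append((path, letter))

    # The complete output must be a single line.
    print(format_output(predictions))
```

Note that the model is loaded once before the loop, which keeps the evaluation fast even for large validation folders.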

Format of the validation folder:

We did not publish the recordings of a number of volunteers. These are used as the validation set in stage 1 and stage 2.
Along with the training data, you received the Python split_characters script. For each person, it creates a csv folder containing the split single letters. These files have exactly the same format as the validation files: they contain 15 columns with a header.
Consequently, you can test your executable on the training data folders created by the split_characters script. (Unfortunately, the validation files will not contain the ground truth in their names 😛 )
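As a quick sanity check, a split file can be read and validated against the stated format (a header row followed by rows of 15 columns; the actual column names depend on the recording format and are not assumed here):

```python
import csv


def load_split_letter(path):
    """Read one split-letter .csv file: a header row followed by
    data rows of 15 sensor columns."""
    with open(path, newline="") as f:
        reader = csv.reader(f)
        header = next(reader)
        rows = list(reader)
    if len(header) != 15:
        raise ValueError("expected 15 columns, got %d" % len(header))
    return header, rows
```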

Useful code snippets:

  • An example class that loads the model and prints the output in the expected format. This one should be saved as an .exe file and handed in.
  • Some code that you can use to test your .exe file. Make sure the highlighted lines also work for your executable.
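If the original snippets are not at hand, a minimal sketch of such a test looks like this. The executable name and the folder/calibration paths are hypothetical and must be adjusted to your own environment; only the argument names and the output format come from this page:

```python
import subprocess


def parse_output(line):
    """Split the one-line output into (path, prediction) pairs."""
    return [tuple(item.split("***"))
            for item in line.strip().split("~~~") if item]


def run_executable(exe, folder, calibration):
    """Run the submitted .exe and return its parsed predictions."""
    result = subprocess.run([exe, "-p", folder, "-c", calibration],
                            capture_output=True, text=True, check=True)
    return parse_output(result.stdout)


# Example with hypothetical paths:
# predictions = run_executable("TeamName_Stage2.exe",
#                              "C:/data/person01/csv/",
#                              "C:/data/person01/calibration.txt")
```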


How to submit?

The participants will be given information on how to submit via email.