Submissions

Note: The content of this page has been adapted to reflect the changes due to Covid-19.

Stage 1 Submissions: Results

The 44 teams participating (as of 2020/04/14) could make a voluntary submission to have their stage 1 algorithms evaluated. The teams that reached an accuracy above 80% are listed below:

  • EINNS: 83.71%
  • ValueError shape mismatch: 80.85%

Congratulations to the leading teams! Keep tuning your algorithms and good luck in stage 2!


Evaluation Metrics (Stage 2)

The stage 2 submission determines the winning teams.

  • The most important evaluation criterion is the accuracy obtained from testing the submitted executable file with data from unknown writers.
  • If the difference in accuracy between multiple teams is below 2 percentage points, the quality of the written report and of the source code is taken into consideration.

What to submit after stage 2?

  • An executable of your classifier (more info below)
  • Source code necessary for preprocessing, training and testing your model (along with a common open source license of your choice). Please provide documentation.
  • A short written report (pdf, 2-6 pages) describing the algorithms that your team used.

Format of the Submitted Executable

All evaluations will be conducted on Windows 10 (64-bit), so please test your executable in that environment.

Requirements of the executable:

  • System: It must be an .exe file runnable on Windows 10 (64-bit)
  • Arguments: It takes the following command line arguments:
    • -p C:/path/to/folder/containing/split/csv/files/ (more info below)
    • -c C:/path/to/calibration/file/calibration.txt
  • Naming: Please call your .exe file TeamName_Stage2.exe.
  • Output: For every .csv file in the given folder, the path of the file and the predicted letter must be printed. Path and prediction are separated by ***, and different files are separated by ~~~. The complete output must be a single line. The order of the path***prediction tuples does not matter. Example:
    C:/path/0.csv***R~~~C:/path/1.csv***X~~~C:/path/10.csv***T~~~C:/path/100.csv***E~~~C:/path/101.csv***Y~~~...
  • Speed: The executable loads your saved model only once and then performs inference on all the validation files in the given folder. If the evaluation takes an unusually long time, your submission might not be considered.
  • Independence: The executable has to work standalone and offline.
  • How-To: Please include a short, concise ReadMe file if necessary.
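
The command-line interface and output format above can be sketched in Python as follows. This is a minimal illustration, not a reference implementation: `predict_letter` and `load_calibration` are hypothetical stand-ins for your own model inference and calibration handling.

```python
import argparse
import os

def load_calibration(path):
    # Hypothetical: read the calibration file; interpret it as your model requires.
    with open(path) as f:
        return f.read()

def predict_letter(csv_path, calibration):
    # Hypothetical classifier stub: replace with your model's inference.
    return "A"

def main(folder, calib_file):
    calibration = load_calibration(calib_file)
    # Load your saved model ONCE here, before the loop over files (see "Speed").
    parts = []
    for name in sorted(os.listdir(folder)):
        if name.endswith(".csv"):
            path = os.path.join(folder, name)
            parts.append(f"{path}***{predict_letter(path, calibration)}")
    # The complete output must be one line: path***prediction pairs joined by ~~~
    print("~~~".join(parts))

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("-p", required=True, help="folder containing split csv files")
    parser.add_argument("-c", required=True, help="path to calibration.txt")
    args = parser.parse_args()
    main(args.p, args.c)
```

A script like this can then be packaged into a standalone Windows executable (e.g. with a tool such as PyInstaller) and renamed to TeamName_Stage2.exe.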

Format of the validation folder:

We did not publish the recordings of a number of volunteers; these recordings are used as the validation set in stages 1 and 2.
Along with the training data, you received the Python script split_characters.py. For each person, it creates a csv folder containing the individual split letters. These files have exactly the same format as the validation files: 15 columns with a header.
Consequently, you can test your executable on the training-data folders created by split_characters.py. (Unfortunately, the validation file names will not contain the ground truth 😛)
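Each split file can be read with the standard csv module. A minimal check of the stated format (a header row followed by 15-column data rows) might look like this; the function name is only illustrative:

```python
import csv

def load_split_letter(path):
    # Read one split-letter csv: a header row followed by data rows.
    with open(path, newline="") as f:
        reader = csv.reader(f)
        header = next(reader)
        rows = list(reader)
    # The split files are documented to have exactly 15 columns.
    assert len(header) == 15, "expected 15 columns"
    return header, rows
```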

Useful code snippets:

    • An example class that loads the model and prints the output in the expected format. This class should be compiled to an .exe file and handed in.
    • Some code that you can use to test your .exe file. Make sure the highlighted lines also work for your executable.
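
A self-test along those lines can be sketched as follows. This is an assumption-laden sketch, not the official snippet: `run_and_check` invokes an executable named TeamName_Stage2.exe with the required -p and -c arguments, and `parse_output` splits the one-line output back into path/prediction pairs.

```python
import subprocess

def parse_output(line):
    """Split the one-line output into a {path: letter} mapping."""
    pairs = {}
    for chunk in line.strip().split("~~~"):
        if not chunk:
            continue  # tolerate a trailing ~~~ separator
        path, _, letter = chunk.partition("***")
        pairs[path] = letter
    return pairs

def run_and_check(exe, folder, calib):
    # Run the submitted executable exactly as the organizers will.
    result = subprocess.run(
        [exe, "-p", folder, "-c", calib],
        capture_output=True, text=True, check=True,
    )
    preds = parse_output(result.stdout)
    # Sanity check: every prediction should be a single uppercase letter.
    assert all(len(v) == 1 and v.isupper() for v in preds.values())
    return preds
```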


How to submit?

The participants will be given information on how to submit via email.