Submissions

Stage 1 Submissions: Results

The 44 teams participating by 2020/04/14 could make a voluntary submission to have their stage 1 algorithms evaluated. The teams that reached an accuracy above 80% are listed below:

  • EINNS: 83.71%
  • ValueError shape mismatch: 80.85%

Congratulations to the leading teams! Keep tuning your algorithms and good luck in stage 2!


Evaluation Metrics

Open Competition Metrics

Only the submissions of stage 2 of the open competition are considered to determine the three finalist teams.

  • The most important evaluation criterion is the accuracy obtained from testing the submitted executable file with data from unknown writers.
  • If the difference in accuracy between teams is below 2 percentage points, the quality of the written report and the quality of the source code are taken into consideration.

Final Competition Metrics

The finalist teams will be given the chance to adjust their algorithms and revise their reports for resubmission shortly before the conference. On site, new data will be recorded and used for the final evaluation of the three finalist teams. The evaluation metrics are similar to those of the open competition but also take the teams’ presentations into account:

  • The accuracy obtained from testing the submitted executable file with data from unknown conference attendees.
  • If the difference in accuracy between teams is below 1 percentage point, the quality of the teams’ presentations, the quality of the written report and the quality of the source code are taken into consideration.

Format of the Submitted Executable

For the two (voluntary) submissions after stage 1 and stage 2, only executables – no source code – need to be handed in. All evaluations will be conducted on Windows 10 (64 bit), so please test your executable in that environment.
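
If your model happens to be implemented in Python, one common way (a suggestion, not a requirement of the challenge) to obtain such a stand-alone .exe is a bundler like PyInstaller, e.g. pyinstaller --onefile --name TeamName_Stage1 predict.py, where predict.py is a hypothetical entry-point script; any other technology that produces a Windows 10 (64 bit) executable works just as well.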

Requirements of the executable:

  • System: It is an .exe file runnable on Windows 10 (64 bit)
  • Arguments: It takes the following command line arguments:
    • -p C:/path/to/folder/containing/split/csv/files/ (more info below)
    • -c C:/path/to/calibration/file/calibration.txt
  • Naming: Please call your .exe file TeamName_Stage1.exe.
  • Output: For every .csv file in the given folder, the path of the file and the predicted letter have to be printed. Path and prediction are separated by *** and different files are separated by ~~~. The complete output has to be one line. The order of the path***prediction tuples does not matter (a minimal sketch of such a program follows after this list). Example:
    C:/path/0.csv***R~~~C:/path/1.csv***X~~~C:/path/10.csv***T~~~C:/path/100.csv***E~~~C:/path/101.csv***Y~~~...
  • Speed: The executable loads your saved model only once and then runs inference on all validation files in the given folder. If the evaluation takes an unusually long time (longer than 30 seconds for 100 predictions), your submission might not be considered.
  • How-To: Please include a short, concise ReadMe file if necessary.
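
To illustrate the requirements above, here is a minimal sketch (in Python, before packaging into an .exe) of a program that parses the two command line arguments, loads a model once and prints all predictions in the expected single-line format. The file model.pkl, the placeholder feature extraction and the scikit-learn-style predict call are assumptions for illustration only; the officially provided example class (see the code snippets below) remains the reference.

    import argparse
    import glob
    import os
    import pickle

    import pandas as pd


    def load_model(model_path):
        # Load the trained model exactly once (model.pkl is a hypothetical pickle file).
        with open(model_path, "rb") as f:
            return pickle.load(f)


    def predict_letter(model, csv_path):
        # Read one split-letter recording (15 columns with a header) and return the
        # predicted letter. The feature extraction below is only a placeholder;
        # replace it with your own preprocessing.
        df = pd.read_csv(csv_path)
        features = df.mean(numeric_only=True).to_numpy()
        return str(model.predict([features])[0])


    def main():
        parser = argparse.ArgumentParser()
        parser.add_argument("-p", dest="folder", required=True,
                            help="folder containing the split .csv files")
        parser.add_argument("-c", dest="calibration", required=True,
                            help="path to calibration.txt")
        args = parser.parse_args()

        model = load_model("model.pkl")  # loaded once, reused for every file
        with open(args.calibration) as f:
            calibration = f.read()       # use the calibration data as needed

        parts = []
        for csv_path in glob.glob(os.path.join(args.folder, "*.csv")):
            parts.append(csv_path + "***" + predict_letter(model, csv_path))

        # One single output line: path***prediction tuples separated by ~~~
        print("~~~".join(parts))


    if __name__ == "__main__":
        main()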

Format of the validation folder:

We did not publish the recordings of a number of volunteers. These are used as a validation set in stage 1 and stage 2.
Along with the training data, you received the Python script split_characters.py. For each person, it creates a folder of .csv files containing the individual split letters. These files have exactly the same format as the validation files: they contain 15 columns with a header.
Consequently, you can test your executable on the training data folders created by the split_characters script. (Unfortunately, the validation files will not contain the ground truth in their names 😛)
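
As a quick sanity check that a split file matches the described layout (the actual column names are not assumed here), something along these lines can help:

    import pandas as pd

    # Placeholder path to one file produced by split_characters.py
    df = pd.read_csv("C:/path/to/csv/folder/0.csv")
    print(df.columns.tolist())                      # header of the 15 columns
    assert df.shape[1] == 15, "expected 15 columns"
    print(df.head())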

Useful code snippets:

    • An example class that loads the model and prints the output in the expected format. This is the one that should be packaged as an .exe file and handed in.
    • Some code that you can use to test your .exe file. Make sure the highlighted lines also work for your executable. (See the sketch below.)
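
A rough sketch of such a test script, assuming your executable is called TeamName_Stage1.exe and using placeholder paths, could look like this: it runs the executable exactly as the evaluation does, checks that the output is a single line and parses the path***prediction tuples.

    import subprocess

    EXE = "TeamName_Stage1.exe"                                # placeholder name
    FOLDER = "C:/path/to/folder/containing/split/csv/files/"   # placeholder path
    CALIB = "C:/path/to/calibration/file/calibration.txt"      # placeholder path

    # Run the executable with the same arguments the evaluation will use.
    result = subprocess.run(
        [EXE, "-p", FOLDER, "-c", CALIB],
        capture_output=True, text=True, timeout=300,
    )

    output = result.stdout.strip()
    assert "\n" not in output, "the complete output has to be one line"

    predictions = {}
    for pair in output.split("~~~"):
        if pair:
            path, letter = pair.split("***")
            predictions[path] = letter

    print("parsed", len(predictions), "predictions")

For the training folders created by split_characters.py you can then compare each prediction with the ground truth contained in the file names.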


How to submit?

The participants will receive information by email on how to submit their executables.