diff --git a/2025/somd25.html b/2025/somd25.html
index b8a2d97..19ec82c 100644
--- a/2025/somd25.html
+++ b/2025/somd25.html
@@ -76,7 +76,7 @@
 We will upload the dataset shortly after the competition begins.
-We evaluate submissions using the F1 score, a metric that reflects the accuracy and precision of the Named Entity Recognition (NER) and Relation Extraction (RE). We will calculate an exact match macro average F1 score for each of the two test phases.
+We evaluate submissions using the F1 score, a metric that balances the precision and recall of the Named Entity Recognition (NER) and Relation Extraction (RE) predictions. We will calculate a macro-averaged F1 score using exact-match criteria for each of the two test phases.
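For reference, below is a minimal sketch of how an exact-match macro-averaged F1 can be computed. The function name and the `(type, item)` tuple representation of gold and predicted annotations are illustrative assumptions, not the official scorer.

```python
from collections import defaultdict

def exact_match_macro_f1(gold, pred):
    """Macro-averaged F1 where a prediction counts as a true positive
    only if it exactly matches a gold item of the same type.

    gold, pred: iterables of (type, item) tuples, e.g. a hypothetical
    ("Software", (12, 18)) span for NER or ("version_of", (e1, e2)) for RE.
    """
    gold_by_type = defaultdict(set)
    pred_by_type = defaultdict(set)
    for t, item in gold:
        gold_by_type[t].add(item)
    for t, item in pred:
        pred_by_type[t].add(item)

    f1_scores = []
    for t in set(gold_by_type) | set(pred_by_type):
        tp = len(gold_by_type[t] & pred_by_type[t])  # exact matches only
        precision = tp / len(pred_by_type[t]) if pred_by_type[t] else 0.0
        recall = tp / len(gold_by_type[t]) if gold_by_type[t] else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        f1_scores.append(f1)

    # Macro average: unweighted mean of per-type F1, so rare types
    # weigh as much as frequent ones.
    return sum(f1_scores) / len(f1_scores) if f1_scores else 0.0
```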