diff --git a/2025/somd25.html b/2025/somd25.html
index 9ba3e5f..7b8b829 100644
--- a/2025/somd25.html
+++ b/2025/somd25.html
@@ -76,7 +76,7 @@
We will upload the dataset shortly after the competition begins.
-We evaluate submissions using the F1 score, a metric that reflects the accuracy and precision of the Named Entity Recognition (NER) and Relation Extraction (RE). We will calculate an F1 score for each of the two test phases. We determine the final score for each submission by averaging the F1 scores obtained from both test phases.
+We evaluate submissions using the F1 score, a metric that reflects both the precision and recall of the Named Entity Recognition (NER) and Relation Extraction (RE) predictions. We will calculate an <em>exact match macro average F1 score</em> for each of the two test phases.
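For participants who want to sanity-check their output locally, the following is a minimal Python sketch of how an exact match, macro averaged F1 could be computed. It is not the official scorer: the (label, span) tuple format, the example labels, and the two-phase averaging shown in the comments are illustrative assumptions.

# Minimal sketch (not the official scorer) of an exact match, macro averaged
# F1 for NER/RE output, assuming gold and predicted items are (label, span)
# tuples that must match exactly to count as a true positive.

def exact_match_macro_f1(gold, pred):
    """Unweighted mean of per-label F1, where a prediction counts only if
    an identical (label, span) tuple exists in the gold annotations."""
    labels = {lab for lab, _ in gold} | {lab for lab, _ in pred}
    scores = []
    for label in labels:
        g = {span for lab, span in gold if lab == label}
        p = {span for lab, span in pred if lab == label}
        tp = len(g & p)
        precision = tp / len(p) if p else 0.0
        recall = tp / len(g) if g else 0.0
        f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
        scores.append(f1)
    return sum(scores) / len(scores) if scores else 0.0

# Illustrative data: one exact entity match, one missed entity, one spurious relation.
gold = [("software", ("doc1", 10, 17)), ("software", ("doc1", 40, 52))]
pred = [("software", ("doc1", 10, 17)), ("version_of", ("doc1", 10, 17, 60, 63))]

phase_f1 = exact_match_macro_f1(gold, pred)
# The final score would then average the macro F1 of the two test phases,
# e.g. final_score = (phase1_f1 + phase2_f1) / 2
print(round(phase_f1, 3))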