From 80b856b0c4bf6ba7b1dfad0bd6d1cfd034d84890 Mon Sep 17 00:00:00 2001
From: sharmila upadhyaya
Date: Tue, 18 Feb 2025 22:19:41 +0100
Subject: [PATCH] Update somd25.html

---
 2025/somd25.html | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/2025/somd25.html b/2025/somd25.html
index 9ba3e5f..7b8b829 100644
--- a/2025/somd25.html
+++ b/2025/somd25.html
@@ -76,7 +76,7 @@

 Dataset
 
 We will upload the dataset shortly after the competition begins.
 
 Evaluation
-We evaluate submissions using the F1 score, a metric that reflects the accuracy and precision of the Named Entity Recognition (NER) and Relation Extraction (RE). We will calculate an F1 score for each of the two test phases. We determine the final score for each submission by averaging the F1 scores obtained from both test phases.
+We evaluate submissions using the F1 score, a metric that reflects the accuracy and precision of the Named Entity Recognition (NER) and Relation Extraction (RE). We will calculate an <em>exact match macro average F1 score</em> for each of the two test phases.
 Competition Timeline Overview
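
For reference, here is a minimal Python sketch of what the new wording describes: an exact-match F1 computed per label type and macro-averaged over types, with the final score taken as the mean of the two test-phase scores (as the removed sentence stated). Everything here is an assumption for illustration; the data structures, type names, and helper functions are hypothetical, not the organizers' official scorer.

```python
# Illustrative sketch only: NOT part of this patch or the official scorer.
# Assumes predictions are sets of exact (surface, start, end) tuples,
# grouped by label type.

def exact_match_f1(gold: set, pred: set) -> float:
    """F1 where a prediction counts only if it matches a gold item exactly."""
    if not gold and not pred:
        return 1.0
    tp = len(gold & pred)                       # true positives: exact matches
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

def macro_f1(gold_by_type: dict, pred_by_type: dict) -> float:
    """Macro average: F1 per label type, then the unweighted mean over types."""
    types = sorted(set(gold_by_type) | set(pred_by_type))
    scores = [exact_match_f1(gold_by_type.get(t, set()),
                             pred_by_type.get(t, set())) for t in types]
    return sum(scores) / len(scores) if scores else 0.0

# Hypothetical usage with toy NER annotations keyed by entity type.
gold = {"Software": {("spaCy", 10, 15)}, "Version": {("3.7", 20, 23)}}
pred = {"Software": {("spaCy", 10, 15)}, "Version": {("3.8", 20, 23)}}
phase1 = macro_f1(gold, pred)                   # 0.5: one type right, one wrong
phase2 = macro_f1(gold, gold)                   # 1.0: perfect predictions
final_score = (phase1 + phase2) / 2             # mean over the two test phases
print(f"final score: {final_score:.2f}")
```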