From 17fc66258d15a4f1b3afa83610519bf69f5af31d Mon Sep 17 00:00:00 2001 From: kmax-tech <64914021+kmax-tech@users.noreply.github.com> Date: Wed, 24 Apr 2024 07:20:58 +0200 Subject: [PATCH 01/38] Update image-retrieval-for-arguments.html --- .../image-retrieval-for-arguments.html | 63 +++++++++++++++++++ 1 file changed, 63 insertions(+) diff --git a/clef24/touche24-web/image-retrieval-for-arguments.html b/clef24/touche24-web/image-retrieval-for-arguments.html index ea43db5..89f1fd0 100644 --- a/clef24/touche24-web/image-retrieval-for-arguments.html +++ b/clef24/touche24-web/image-retrieval-for-arguments.html @@ -193,7 +193,70 @@
Images alone can be ambiguous and difficult to understand without context, e.g., if they rely on symbolism. That is why we offer the option to submit a rationale along with the image. The rationale is a piece of text that helps us understand the image, for example a caption or contextual information about the image. The image and rationale will be evaluated together to see how this combination conveys the premise.
++ + Image Retrieval + + If you have chosen the retrieval approach, please submit your results in a file called "results.jsonl". + + Each JSON object in your "results.jsonl" file should have the following keys: + + argument_id - id of the argument in the arguments.xml file + method - retrieval + image_id - the image's ID - it corresponds to the name of the image's directory in the released dataset + rationale - additional info or caption for the image to understand how it conveys the premise (optional) + rank - specifies the preference of your submissions - 1 is highest + tag - tag defined by you and your group, identifies your group and the method you used to obtain the results + + An example submission for argument "65302-a-2" would look like the following: + + { "argument_id" : "65302-a-2", + "method" : "retrieval", + "image_id" : "Iffdea3cd664722c736d7d667", + "rationale" : "space is the final frontier", + "rank" : 1, + "tag" : "touche organizers - example submission for image retrieval; manual selection of images" + } + + Image Generation + + If you are using image generation, submit a .zip file, which should contain a JSONL file called "results.jsonl" and a directory called "images" containing the generated images.
+ + Please use the following keys for your JSON Objects in the JSONL file: + + argument_id - id of the arguments in the arguments.xml file + method - generation + prompt - the prompt that you have used to generate the image + image - name of the generated image, which can be found in the images directory + rationale - additional info or caption for the image to understand how it conveys the premise (optional) + rank - specifies the preference of your submissions - 1 is highest + tag - tag defined by you and your group, identifies your group and the method you used to obtain the results + + An example looks like this: + + { "argument_id" : "65302-a-2", + "method" : "generation", + "prompt" : "cat looking into the stars", + "image_name" : "space-pic1.jpg", + "rationale" : "space is fascinating", + "rank" : 1, + "tag" : "touche organizers - example submission for image generation; manual prompt engineering" + } + + Therefore the corresponding zip would have the following structure: + + - submissions.jsonl + - generated_images + - space-pic1.jpg + +
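The JSONL-plus-zip layout described in this patch can be sketched in a few lines of Python. This is a minimal sketch: the record values are the illustrative ones from the example above, an empty placeholder file stands in for a real generated image, and it uses the "results.jsonl" name and "generated_images" directory that the later patches in this series settle on.

```python
import json
import zipfile

# Create a placeholder image so this sketch runs end to end; a real
# submission would package the actual generated images.
open("space-pic1.jpg", "wb").close()

# One JSON object per image assignment, using the keys listed above.
record = {
    "argument_id": "65302-a-2",
    "method": "generation",
    "prompt": "cat looking into the stars",
    "image": "space-pic1.jpg",
    "rationale": "space is fascinating",
    "rank": 1,
    "tag": "touche organizers - example submission for image generation; manual prompt engineering",
}

# results.jsonl is JSON Lines: one JSON object per line.
with open("results.jsonl", "w", encoding="utf-8") as f:
    f.write(json.dumps(record) + "\n")

# The zip contains results.jsonl at the top level and the images
# inside a generated_images directory.
with zipfile.ZipFile("generation.zip", "w") as zf:
    zf.write("results.jsonl", "results.jsonl")
    zf.write("space-pic1.jpg", "generated_images/space-pic1.jpg")
```

The second argument to `zf.write` sets the path inside the archive, which is how the flat local files end up in the required directory structure.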
@@ -208,7 +208,7 @@- You can find more information about JSON lines here and a results.jsonl example for retrieval here. + You can find more information about JSON lines here and a "results.jsonl" example for retrieval here. In each run you can assign up to 10 images to the same argument ID. The rank key is used to determine the preference order of your images for the corresponding argument, where 1 is the most relevant image. This means that if you submit, for example, 5 images for one argument, you need to use the rank values from 1 to 5. This also means that a run containing multiple image assignments for an argument with the same rank will not be valid. - You can find an examples for such multiple image asssignments to the same argument in the linked results.jsonl example. + You can find an example of such multiple image assignments to the same argument in the linked "results.jsonl" example.
Submission - Format
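The rank constraints above (at most 10 images per argument, and ranks for an argument forming exactly 1..N with no duplicates) are easy to check before submitting. A minimal sketch, assuming the submission keys described on this page; the official evaluator remains authoritative:

```python
import json
from collections import defaultdict

def check_ranks(path):
    """Return a list of problems with the rank assignments in a
    results.jsonl file: more than 10 images for one argument, or
    ranks that are not exactly 1..N without duplicates."""
    ranks = defaultdict(list)
    with open(path, encoding="utf-8") as f:
        for line in f:
            obj = json.loads(line)
            ranks[obj["argument_id"]].append(obj["rank"])
    problems = []
    for arg_id, rs in ranks.items():
        if len(rs) > 10:
            problems.append(f"{arg_id}: more than 10 images assigned")
        if sorted(rs) != list(range(1, len(rs) + 1)):
            problems.append(f"{arg_id}: ranks must be 1..{len(rs)} without duplicates")
    return problems

# Demo on a hypothetical two-line submission file.
with open("results.jsonl", "w", encoding="utf-8") as f:
    f.write(json.dumps({"argument_id": "65302-a-2", "method": "retrieval",
                        "image_id": "I0001", "rank": 1, "tag": "demo"}) + "\n")
    f.write(json.dumps({"argument_id": "65302-a-2", "method": "retrieval",
                        "image_id": "I0002", "rank": 2, "tag": "demo"}) + "\n")
```

Running `check_ranks` on a file that assigns the same rank twice to one argument reports a problem, matching the validity rule stated above.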
Each JSON object in your "results.jsonl" file should have the following keys: - argument_id - id of the argument in the arguments.xml file + argument_id - id of the argument in the arguments.xml file in the released dataset method - retrieval image_id - the image's ID - it corresponds to the name of the image's directory in the released dataset rationale - additional info or caption for the image to understand how it conveys the premise (optional) @@ -228,12 +228,12 @@Submission - Format
Image Generation - If you are using image generation, submit a .zip file, which should contain a JSONL file called "results.jsonl" + If you are using image generation, submit a zip file, which should contain a JSONL file called "results.jsonl" and a directory called "generated_images" containing the generated images. Please use the following keys for your JSON Objects in the JSONL file: - argument_id - id of the arguments in the arguments.xml file + argument_id - id of the argument in the arguments.xml file in the released dataset method - generation prompt - the prompt that you have used to generate the image image - name of the generated image, which can be found in the generated_images directory From 7c6d73428056a1566b69fe72a74638c9796e9b4a Mon Sep 17 00:00:00 2001 From: kmax-tech <64914021+kmax-tech@users.noreply.github.com> Date: Wed, 24 Apr 2024 16:39:42 +0200 Subject: [PATCH 07/38] Update image-retrieval-for-arguments.html --- clef24/touche24-web/image-retrieval-for-arguments.html | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/clef24/touche24-web/image-retrieval-for-arguments.html b/clef24/touche24-web/image-retrieval-for-arguments.html index 3f80eb4..cf07b85 100644 --- a/clef24/touche24-web/image-retrieval-for-arguments.html +++ b/clef24/touche24-web/image-retrieval-for-arguments.html @@ -255,7 +255,7 @@Submission - Format
Therefore the corresponding zip would have the following structure: - - submissions.jsonl (file) + - results.jsonl (file) - generated_images (directory) - space-pic1.jpg (file) From 9c69b854e3c08050602b58ee16702f44f07d8f26 Mon Sep 17 00:00:00 2001 From: cagri coltekinDate: Thu, 25 Apr 2024 23:55:46 +0200 Subject: [PATCH 08/38] Update ideology/power task page. --- ...er-identification-in-parliamentary-debates.html | 14 +++++++++++--- 1 file changed, 11 insertions(+), 3 deletions(-) diff --git a/clef24/touche24-web/ideology-and-power-identification-in-parliamentary-debates.html b/clef24/touche24-web/ideology-and-power-identification-in-parliamentary-debates.html index 40aa1d2..5e07e60 100644 --- a/clef24/touche24-web/ideology-and-power-identification-in-parliamentary-debates.html +++ b/clef24/touche24-web/ideology-and-power-identification-in-parliamentary-debates.html @@ -41,6 +41,9 @@ Synopsis
Sub-Task 2: Given a parliamentary speech in one of several languages, identify whether the speaker's party is currently governing or in opposition. Communication: [mailing lists: task, organizers] Training data: [download] +Test data: [download] +Submission: [baseline] [evaluator] [forum] [submit] + @@ -176,10 +179,15 @@Submission
The participants are allowed to use any external datasets, except the source data from ParlaMint. -The submission system will open soon. -Register on the mailing list to get notified. +Submissions are accepted through +TIRA. +You can submit both predictions +and dockerized software submissions (for better reproducibility). + We provide - a simple linear baseline with code for reading and writing the files. +a simple linear baseline and evaluation script, +which also include a toy example and examples of how to dockerize +your submission. +May 6, 2024: Approaches submission deadline. [register via submission system (SUBMIT)] May 31, 2024: Participant paper submission. June 21, 2024: Peer review notification. July 8, 2024: Camera-ready participant papers submission. From 3522f2dd33e3ad332c8dfc2f52462eb280198d16 Mon Sep 17 00:00:00 2001 From: Johannes Kiesel Date: Mon, 29 Apr 2024 13:20:51 +0200 Subject: [PATCH 21/38] late registration notice --- clef24/touche24-web/index.html | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/clef24/touche24-web/index.html b/clef24/touche24-web/index.html index 4a62e1c..86bf968 100644 --- a/clef24/touche24-web/index.html +++ b/clef24/touche24-web/index.html @@ -47,7 +47,7 @@ Shared Tasks
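The linked baseline and evaluation script are the authoritative starting point for this task. Purely as an illustration of what a simple linear approach to the governing-vs-opposition sub-task can look like, here is a self-contained perceptron sketch; the toy speeches, labels, and tokenization are hypothetical, and the real data format is the one defined by the provided baseline.

```python
from collections import defaultdict

def tokenize(text):
    # Naive whitespace tokenization, enough for this illustration.
    return text.lower().split()

def train_perceptron(examples, epochs=10):
    """examples: list of (speech_text, label) with label in {+1, -1},
    where +1 stands for a governing-party speaker and -1 for opposition."""
    w = defaultdict(float)
    for _ in range(epochs):
        for text, y in examples:
            score = sum(w[t] for t in tokenize(text))
            if y * score <= 0:  # misclassified: nudge weights toward y
                for t in tokenize(text):
                    w[t] += y
    return w

def predict(w, text):
    return 1 if sum(w[t] for t in tokenize(text)) > 0 else -1

# Hypothetical toy training speeches.
train = [
    ("we will continue our successful policy", 1),
    ("the government has failed the people", -1),
    ("our ministry delivers results", 1),
    ("we demand the minister resign", -1),
]
w = train_perceptron(train)
```

A real submission would replace the toy data with the ParlaMint-derived training files and output confidences per speech, as the baseline does.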
Important Dates
- Dec. 18, 2023: CLEF Registration opens. [register]
-- Apr. 22, 2024: CLEF Registration closes.
+- Apr. 22, 2024: CLEF Registration closes (late registration via the submission system; see the respective task pages).
- May 6, 2024: Approaches submission deadline.
- May 31, 2024: Participant paper submission.
- June 21, 2024: Peer review notification.
From ada3959e5505cd9ab9126698afae6c7832872e9a Mon Sep 17 00:00:00 2001 From: kmax-tech <64914021+kmax-tech@users.noreply.github.com> Date: Mon, 29 Apr 2024 18:16:56 +0200 Subject: [PATCH 22/38] Update image-retrieval-for-arguments.html --- .../touche24-web/image-retrieval-for-arguments.html | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/clef24/touche24-web/image-retrieval-for-arguments.html b/clef24/touche24-web/image-retrieval-for-arguments.html index 67fc288..e15174d 100644 --- a/clef24/touche24-web/image-retrieval-for-arguments.html +++ b/clef24/touche24-web/image-retrieval-for-arguments.html @@ -226,18 +226,18 @@Submission - Format
Image Generation - If you are using image generation, submit a file called "generation.zip", which should contain a JSONL file called "results.jsonl" + If you are using image generation or any other approach that generates images, submit a file called "generation.zip", which should contain a JSONL file called "results.jsonl" and a directory called "generated_images" containing the generated images. Please use the following keys for your JSON Objects in the JSONL file: argument_id - id of the argument in the arguments.xml file in the released dataset method - generation - prompt - the prompt that you have used to generate the image + prompt - the prompt that you have used to generate the image (leave empty if not applicable) image - name of the generated image, which can be found in the generated_images directory rationale - additional info or caption for the image to understand how it conveys the premise (optional) rank - specifies the preference of your image assignment (more below) - 1 is highest - tag - tag defined by you and your group, identifies your group and the particula method you used to obtain the results + tag - tag defined by you and your group, identifies your group and the particular method you used to obtain the results An example looks like this: @@ -251,7 +251,7 @@Submission - Format
"tag": "touche organizers - example submission for image generation; manual prompt engineering" } - Therefore the corresponding "generation.zip for this submission would have the following structure: + Therefore the corresponding "generation.zip" for this submission would have the following structure: - results.jsonl (file) - generated_images (directory) @@ -259,10 +259,10 @@Submission - Format
Interested?
+ Submit
If you would like to use the Stable Diffusion API for this task, just contact us and we will provide you with the details.
Submit your approach via TIRA. Ask in the Forum if you need help. You need to register your team (in addition to a registration at CLEF) and pick an alias for your team name (submission is anonymous; you can reveal your true team name after final paper acceptance). You submit a Docker image or submit from your Github repository (via automated Docker building). In case of trouble, you can also submit via run file upload (not recommended due to poor reproducibility; rather contact us if you need help with Dockerization). You can submit on the validation dataset to check how the submission works, or on the test dataset. You will not be able to see your results on the test dataset until after the deadline. Datasets are provided multilingual or in machine-translated English (see Data). [forum] [submit]
-We recommend to start your approach from one of our example approaches (in Python), which include the code for reading and writing the files and make it easy to later deploy your approach as server or submit and distribute it as Docker image. [random baseline: script, notebook] [bert baseline] [ollama baseline]
+We recommend starting your approach from one of our example approaches (in Python), which include the code for reading and writing the files and make it easy to later deploy your approach as a server or submit and distribute it as a Docker image. If you run them as a local server, you can use our web UI to interact with them. [random baseline: script, notebook] [bert baseline] [ollama baseline]
Approaches need to produce run files that have the same format as the labels.tsv
, but the numbers can be between 0 and 1 and are interpreted as the confidence of the approach (employed for evaluation via ROC-curves): [toy example]
For sub-task 1: For each sentence and value, the sum of the numbers in the attained and constrained columns should be the confidence of your approach that the sentence references the value. A sum ≥ 0.5 is treated as a positive prediction for purposes of evaluation with precision, recall, and F1-score.
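The thresholding described above can be sketched in a few lines: the attained and constrained confidences for a value are summed, and a sum ≥ 0.5 counts as a positive prediction, which then feeds into precision, recall, and F1. The toy confidences and gold labels below are hypothetical; the official evaluator linked from the task page is authoritative.

```python
def prf(gold, attained, constrained, threshold=0.5):
    """Precision, recall, and F1 for one value, thresholding the
    per-sentence sum of attained and constrained confidences."""
    pred = [a + c >= threshold for a, c in zip(attained, constrained)]
    tp = sum(p and g for p, g in zip(pred, gold))
    fp = sum(p and not g for p, g in zip(pred, gold))
    fn = sum(not p and g for p, g in zip(pred, gold))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Toy example: four sentences scored for one value.
gold = [True, True, False, False]
p, r, f = prf(gold,
              attained=[0.4, 0.1, 0.3, 0.0],
              constrained=[0.2, 0.1, 0.3, 0.1])
```

Here the summed confidences are 0.6, 0.2, 0.6, and 0.1, so sentences 1 and 3 are predicted positive; the confidences themselves (not the thresholded labels) are what the ROC-curve evaluation mentioned above uses.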
From fe521111e12f6bc671a28fbb4b17bb6474b94f04 Mon Sep 17 00:00:00 2001
From: Johannes Kiesel Important Dates
Important Dates
-
-Important Dates
-
- Important Dates
-Important Dates
Important Dates
Participants are not required to use the first four fields,
From c2a7469130f4ef8c0ab8e2989ea73f4842040198 Mon Sep 17 00:00:00 2001
From: Johannes Kiesel Data
non-English speeches.
Important Dates
Task
From 70bc114d0f175f64a4b2d9eca2c1d61b5b956b96 Mon Sep 17 00:00:00 2001
From: Johannes Kiesel Important Dates
Important Dates
Important Dates
Important Dates
Task
Debates in national parliaments do not only affect