From e68ff2771397e9e94902a2bdda5bd66d0cfa2220 Mon Sep 17 00:00:00 2001
From: Li Bo
Date: Tue, 14 Jan 2025 02:47:23 +0800
Subject: [PATCH] Update README.md

---
 lmms_eval/tasks/megabench/README.md | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/lmms_eval/tasks/megabench/README.md b/lmms_eval/tasks/megabench/README.md
index a82d3afb..066307c2 100644
--- a/lmms_eval/tasks/megabench/README.md
+++ b/lmms_eval/tasks/megabench/README.md
@@ -1,6 +1,8 @@
 # MEGA-Bench: Scaling Multimodal Evaluation to over 500 Real-World Tasks
 
-TODO: some introduction
+![image](https://github.com/user-attachments/assets/5fd44fa9-0ec2-4298-ad0c-e883cb1edf7f)
+
+MEGA-Bench contains 505 multimodal tasks with diverse data sources, input/output formats, and skill requirements. The taxonomy tree is derived from the application dimension, which guides and calibrates the annotation process. The benchmark is equipped with a suite of 45 evaluation metrics to handle various output formats beyond multiple-choice questions.
 
 ## Step-1: Get the model response files with lmms-eval