diff --git a/README.md b/README.md
index 5acec8c..fc97017 100644
--- a/README.md
+++ b/README.md
@@ -98,6 +98,7 @@ Move the `slider` to preview the positions and ID information of faces on differ
 * The result can `not` be directly converted to exactly the same [RTTM](./rttms/all.rttm), as some durations or face IDs are adjusted and off-screen speech is not included in this part. Note that the face identifiers within each video are unique and also differ from the identifiers in the [RTTM](./rttms/all.rttm) mentioned above.
 * Unlike the above-mentioned cropped faces, the annotation here gives the bounding box of the unprocessed face in the original video.
 * **Why are we releasing it now?** Our initial experiments were conducted on a training set built from cropped faces. However, we realized that facial tagging is extremely important for multi-modal speaker diarization. Consequently, following the publication of our work, we decided to embark on a frame-by-frame review. The undertaking was massive, involving the inspection of approximately 120,000 video frames while ensuring that the IDs remain consistent throughout each video. We also conducted a second round of checks for added accuracy. Only after this meticulous process are we able to release the dataset for public use.
+* **What is the relationship between audio labels and visual labels?** You can use this [link](https://github.com/X-LANCE/MSDWILD/tree/master/visualization) to visualize the relationship.
 * Note that this is merely **supplementary** material for this dataset. Possible future work we envision includes training an end-to-end multimodal speaker diarization system that incorporates facial location information, and an evaluation method for multimodal speaker diarization that takes face location into account.
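
For readers who want to line the face annotations up against the audio labels, a minimal sketch of reading the released RTTM may help. It assumes `rttms/all.rttm` follows the standard ten-field NIST RTTM layout (`SPEAKER <file> <channel> <onset> <duration> <NA> <NA> <speaker> <NA> <NA>`); the function name and return structure below are illustrative, not part of the repository.

```python
# Minimal sketch: load speaker turns from an RTTM file.
# Assumes the standard ten-field NIST RTTM layout; names are illustrative.
from collections import defaultdict

def load_rttm(path):
    """Return {file_id: [(onset, offset, speaker), ...]} from an RTTM file."""
    turns = defaultdict(list)
    with open(path) as f:
        for line in f:
            fields = line.split()
            if not fields or fields[0] != "SPEAKER":
                continue
            file_id = fields[1]
            onset, dur = float(fields[3]), float(fields[4])
            speaker = fields[7]
            turns[file_id].append((onset, onset + dur, speaker))
    return turns

# Example usage (path taken from the README above):
# turns = load_rttm("rttms/all.rttm")
# Note: as the README points out, the RTTM speaker labels will generally
# NOT match the per-video face IDs, and off-screen speech appears only
# in the RTTM, not in the face annotations.
```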