A benchmark toward an end-to-end machine learning system for acoustic monitoring of wildlife populations and ecosystems
Current passive acoustic monitoring datasets focus on single species, limiting their usefulness for conservation and policy. We propose a collaboration between biologists and computer scientists to create a comprehensive benchmark covering multiple taxa. Standardized annotations, preprocessing, and baseline models will bridge this gap toward a general system for assessing ecosystems and animal populations.
More information:
- ESP Atlas Multi-taxonomic Annotation Protocol (draft in Spanish)
- ESP Multi-taxonomic Dataset download (forthcoming)
- A more thorough dataset description is available in the original paper (forthcoming)
- Install Conda
- Clone this repository:
  git clone https://github.com/jscanass/esp_atlas/
- Create an environment and install requirements:
  cd esp_atlas
  conda create -n esp_env python=3.8 -y
  conda activate esp_env
  conda install pytorch torchvision torchaudio pytorch-cuda=11.6 -c pytorch -c nvidia
  pip install -r requirements.txt
Notes
- Installation of the dependencies was tested on Azure. If you run locally, you may need to install PyTorch differently; check the official PyTorch website for platform-specific installation instructions.
- On macOS, you may need to install chardet (The Universal Character Encoding Detector) with pip.
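After installing, a quick sanity check can confirm that PyTorch imports correctly inside the environment. This snippet is an illustrative sketch, not part of the repository:

```python
# Sanity check: confirm PyTorch is importable in the esp_env environment.
# Illustrative snippet only; it is not part of the esp_atlas repository.
import importlib.util

spec = importlib.util.find_spec("torch")
if spec is None:
    print("PyTorch is not installed; re-run the conda install step above.")
else:
    import torch  # safe to import: the module was found above
    print(f"PyTorch {torch.__version__}; CUDA available: {torch.cuda.is_available()}")
```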
- Download the data directly from Zenodo
- Train:
  python baseline/train.py --config baseline/configs/exp_resnet18.yaml
- Inference:
  python baseline/evaluate.py --config baseline/configs/exp_resnet18.yaml
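For multi-species benchmarks, per-species scores are often aggregated with a macro average so rare species weigh as much as common ones. As an illustration (not the repository's actual metric code, and with invented confusion counts), a macro-averaged F1 in pure Python looks like this:

```python
# Macro-averaged F1 across species: compute F1 per species, then take the
# unweighted mean. Confusion counts below are invented for illustration.

def f1(tp: int, fp: int, fn: int) -> float:
    """F1 = 2*TP / (2*TP + FP + FN); defined as 0 when the denominator is 0."""
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom else 0.0

# Per-species confusion counts: (true positives, false positives, false negatives)
per_species = {
    "species_a": (8, 2, 0),  # F1 = 16/18 ≈ 0.889
    "species_b": (3, 1, 3),  # F1 = 6/10 = 0.6
}

macro_f1 = sum(f1(*c) for c in per_species.values()) / len(per_species)
print(round(macro_f1, 3))  # → 0.744
```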
If you find this work useful for your research, please consider citing it as:
The authors acknowledge financial support from the ESP Grant.