
# A benchmark toward an end-to-end machine learning system for acoustic monitoring of wildlife populations and ecosystems


Current passive acoustic monitoring datasets focus on single species, which limits their usefulness for conservation and policy. We propose a collaboration between biologists and computer scientists to create a comprehensive benchmark covering multiple taxa. Standardized annotations, preprocessing, and baseline models will bridge this gap toward a general system for assessing ecosystems and animal populations.

More information is available here.

## Download

- ESP Atlas Multi-taxonomic Annotation Protocol (Draft in Spanish)
- Download the ESP Multi-taxonomic Dataset (Forthcoming)
- A more thorough dataset description is available in the original paper. (Forthcoming)
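Until the dataset is released, the sketch below shows one plausible way to load a recording and compute a log-mel spectrogram with torchaudio. The file name and all parameters are illustrative assumptions, not the project's standardized preprocessing:

```python
# Illustrative preprocessing sketch; "recording.wav" and all parameters are
# assumptions, not the project's official pipeline.
import torch
import torchaudio

waveform, sample_rate = torchaudio.load("recording.wav")  # (channels, samples)

mel = torchaudio.transforms.MelSpectrogram(
    sample_rate=sample_rate,
    n_fft=1024,
    hop_length=512,
    n_mels=128,
)(waveform)

log_mel = torch.log(mel + 1e-6)  # log compression stabilizes the dynamic range
print(log_mel.shape)  # (channels, n_mels, frames)
```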

## Installation instructions and reproduction of baseline results (Forthcoming)

1. Install Conda

2. Clone this repository:

   ```bash
   git clone https://github.com/jscanass/esp_atlas/
   ```

3. Create an environment and install the requirements (a sanity check follows this list):

   ```bash
   cd esp_atlas
   conda create -n esp_env python=3.8 -y
   conda activate esp_env
   conda install pytorch torchvision torchaudio pytorch-cuda=11.6 -c pytorch -c nvidia
   pip install -r requirements.txt
   ```
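After installation, a quick sanity check confirms that PyTorch and torchaudio import cleanly and that the GPU is visible. This snippet is ours, not part of the repository:

```python
# Environment sanity check (not part of the repository).
import torch
import torchaudio

print("torch:", torch.__version__)
print("torchaudio:", torchaudio.__version__)
print("CUDA available:", torch.cuda.is_available())  # expect True on a GPU machine
```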

## Notes

1. Download the data directly from Zenodo

2. Train the baseline (a sketch of the config-driven flow follows this list):

   ```bash
   python baseline/train.py --config baseline/configs/exp_resnet18.yaml
   ```

3. Run inference:

   ```bash
   python baseline/evaluate.py --config baseline/configs/exp_resnet18.yaml
   ```
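The sketch below illustrates what a config-driven ResNet-18 train/evaluate flow might look like; it is a minimal approximation, not the repository's actual scripts. The config keys (`num_classes`, `lr`), the dummy batch shape, and the single-channel input adaptation are all assumptions:

```python
# Minimal sketch of a config-driven train/evaluate step; the real
# baseline/train.py and baseline/evaluate.py may differ substantially.
import yaml  # assumes PyYAML is installed via requirements.txt
import torch
import torch.nn as nn
from torchvision.models import resnet18

with open("baseline/configs/exp_resnet18.yaml") as f:
    cfg = yaml.safe_load(f)  # keys below are assumed, not documented

num_classes = cfg.get("num_classes", 10)
model = resnet18(num_classes=num_classes)
# Spectrogram inputs have one channel; adapt the first convolution accordingly.
model.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)

optimizer = torch.optim.Adam(model.parameters(), lr=cfg.get("lr", 1e-3))
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch of mel spectrograms.
x = torch.randn(8, 1, 128, 256)  # (batch, channel, mels, frames)
y = torch.randint(0, num_classes, (8,))

model.train()
optimizer.zero_grad()
loss = criterion(model(x), y)
loss.backward()
optimizer.step()

# Inference mirrors evaluate.py: eval mode, gradients disabled.
model.eval()
with torch.no_grad():
    probs = torch.softmax(model(x), dim=1)
print(loss.item(), probs.shape)
```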

## Citing this work (Forthcoming)

If you find this work useful for your research, please consider citing it. A citation entry will be added here once the paper is available.

## Acknowledgments

The authors acknowledge financial support from the ESP Grant.