--- |
|
|
license: cc-by-nc-sa-4.0 |
|
|
base_model: |
|
|
- meta-llama/Llama-3.1-8B-Instruct |
|
|
tags: |
|
|
- EarthSpeciesProject |
|
|
- NatureLM |
|
|
--- |
|
|
|
|
|
# Model Card for NatureLM-audio |
|
|
|
|
|
NatureLM-audio is the first audio-language foundation model specifically designed for bioacoustics. It is trained on a diverse dataset of text-audio pairs spanning bioacoustics, speech, and music, enabling it to perform tasks such as species classification, detection, captioning, and life-stage classification. The model demonstrates strong generalization to unseen taxa and tasks, setting a new state of the art on several bioacoustics benchmarks.
|
|
|
|
|
## Model Details |
|
|
|
|
|
### Model Description |
|
|
|
|
|
NatureLM-audio is an audio-language model designed to address bioacoustic tasks such as species classification, detection, and captioning. It leverages a combination of bioacoustic, speech, and music data to learn robust representations that generalize across domains. |
|
|
|
|
|
- **Developed by:** David Robinson, Marius Miron, Masato Hagiwara, Milad Alizadeh, Gagan Narula, Sara Keen, Benno Weck, Matthieu Geist, Olivier Pietquin (Earth Species Project) |
|
|
- **Funded by:** More info at [https://www.earthspecies.org/about-us#support](https://www.earthspecies.org/about-us#support)
|
|
- **Shared by:** Earth Species Project |
|
|
- **Model type:** Audio-language foundation model |
|
|
- **Language(s) (NLP):** English |
|
|
- **License:** CC-BY-NC-SA 4.0
|
|
- **Finetuned from model:** Llama-3.1-8B-Instruct, [Fine-tuned BEATs_iter3+ (AS2M) (cpt2)](https://github.com/microsoft/unilm/tree/master/beats)
|
|
|
|
|
### Model Sources |
|
|
|
|
|
- **Repository:** [https://github.com/earthspecies/naturelm-audio](https://github.com/earthspecies/naturelm-audio) |
|
|
- **Paper:** [NatureLM-audio: An Audio-Language Foundation Model for Bioacoustics](https://arxiv.org/abs/2411.07186) |
|
|
- **Demo:** [https://earthspecies.github.io/naturelm-audio-demo/](https://earthspecies.github.io/naturelm-audio-demo/) |
|
|
- **Hugging Face Space - UI Demo:** [https://huggingface.co/spaces/EarthSpeciesProject/NatureLM-Audio](https://huggingface.co/spaces/EarthSpeciesProject/NatureLM-Audio) |
|
|
|
|
|
## Uses |
|
|
|
|
|
### Direct Use |
|
|
|
|
|
NatureLM-audio can be used directly for bioacoustic tasks such as species classification, detection, and captioning. It is particularly useful for biodiversity monitoring, conservation, and animal behavior studies. |
|
|
|
|
|
Example prompts: |
|
|
|
|
|
Prompt: What is the common name for the focal species in the audio? |
|
|
Answer: Humpback Whale |
|
|
|
|
|
Prompt: Which of these, if any, are present in the audio recording? Single pulse gibbon call, Multiple pulse gibbon call, Gibbon duet, None. |
|
|
Answer: Gibbon duet |
|
|
|
|
|
Prompt: What is the common name for the focal species in the audio? |
|
|
Answer: Spectacled Tetraka |
|
|
|
|
|
Prompt: What is the life stage of the focal species in the audio? |
|
|
Answer: Juvenile |
|
|
|
|
|
Prompt: What type of vocalization is heard from the focal species in the audio? Answer with either 'call' or 'song'.
|
|
|
|
|
Prompt: Caption the audio, using the common name for any animal species. |
|
|
|
|
|
### Downstream Use |
|
|
|
|
|
The model can be used to structure audio for ethology research, be integrated into larger ecological monitoring systems, or be fine-tuned for specific bioacoustic tasks. |
|
|
|
|
|
### Out-of-Scope Use |
|
|
|
|
|
The model is not designed for tasks outside of bioacoustics. It has not been tested for tasks such as individual ID, and call-type and life-stage classification have only been tested on birds. Tasks beyond those evaluated in the paper may require in-context learning or fine-tuning. The model does not currently perform fine-grained detection with exact timestamps.
|
|
|
|
|
### Bias, Risks, and Limitations |
|
|
|
|
|
- **Bias:** The model may exhibit biases towards bird vocalizations due to the overrepresentation of bird datasets in the training data. This could limit its effectiveness for other taxa. Further, the model may inherit biases from the parent Llama model. |
|
|
- **Risks:** The model’s ability to detect and classify endangered species could be misused for illegal activities such as poaching. |
|
|
- **Limitations:** The model’s performance may be limited for under-represented taxa. |
|
|
- **Red-teaming results:** We ran a red-teaming assessment by first defining 16 risk categories adapted for AI safety in the context of animals, ecosystems, and the environment, such as Wildlife Exploitation, Non-Compliance with Environmental Laws, and Biodiversity Loss. We then used an LLM to generate adversarial prompts that could potentially elicit harmful output, and evaluated the responses for safety. While the majority of responses from NatureLM-audio were safe, often providing no content for problematic prompts, we identified several scenarios where the model's responses were potentially harmful, including cases where the model failed to discourage unethical actions related to wildlife exploitation and environmental harm.
|
|
|
|
|
### Recommendations |
|
|
|
|
|
Users should be aware of the risks, biases, and limitations of the model. It is recommended to use the model in conjunction with other ecological monitoring tools and to validate its predictions in real-world settings. |
|
|
|
|
|
## How to Get Started with the Model |
|
|
|
|
|
Instantiating the model: |
|
|
|
|
|
```python |
|
|
from NatureLM.models import NatureLM |
|
|
|
|
|
# Download the model from HuggingFace |
|
|
model = NatureLM.from_pretrained("EarthSpeciesProject/NatureLM-audio") |
|
|
model = model.eval().to("cuda") |
|
|
``` |
|
|
|
|
|
Using the model: |
|
|
|
|
|
```python |
|
|
from NatureLM.infer import Pipeline |
|
|
|
|
|
audio_paths = ["assets/nri-GreenTreeFrogEvergladesNP.mp3"] |
|
|
queries = ["What is the common name for the focal species in the audio? Answer:"] |
|
|
|
|
|
pipeline = Pipeline(model=model) |
|
|
|
|
|
# Run the model over the audio in sliding windows of 10 seconds with a hop length of 10 seconds |
|
|
results = pipeline(audio_paths, queries, window_length_seconds=10.0, hop_length_seconds=10.0) |
|
|
|
|
|
print(results) |
|
|
# ['#0.00s - 10.00s#: Green Treefrog\n'] |
|
|
``` |
|
|
|
|
|
Refer to the GitHub [repository](https://github.com/earthspecies/naturelm-audio) for more details. |
|
|
|
|
|
## Training Details |
|
|
|
|
|
### Training Data |
|
|
|
|
|
The model is trained on a diverse dataset of text-audio pairs, including bioacoustic recordings, general audio, speech, and music datasets. The training data includes datasets such as Xeno-canto, iNaturalist, and Watkins. We have released the [training dataset](https://huggingface.co/datasets/EarthSpeciesProject/NatureLM) on Hugging Face. |
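
To inspect the released data, a minimal sketch with the Hugging Face `datasets` library follows; the split name and record layout are assumptions, so consult the dataset card for the actual schema.

```python
# Minimal sketch: browse the NatureLM training data in streaming mode.
# The "train" split name and field layout are assumptions -- check the
# dataset card on Hugging Face for the actual schema.
from datasets import load_dataset

ds = load_dataset("EarthSpeciesProject/NatureLM", split="train", streaming=True)
print(next(iter(ds)))  # one text-audio training pair
```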
|
|
|
|
|
### Training Procedure |
|
|
|
|
|
The model is trained in two stages: |
|
|
|
|
|
1. **Perception Pretraining** on species classification. |
|
|
2. **Generalization Fine-tuning** on a variety of bioacoustic tasks. |
|
|
|
|
|
#### Training Hyperparameters |
|
|
|
|
|
- **Learning rate:** 9.0e-5 (peak), 2.0e-5 (end) |
|
|
- **Batch size:** 128 |
|
|
- **Training steps:** 5.0e5 (Stage 1), 1.6e6 (Stage 2)
|
|
|
|
|
For the full list of hyperparameters, consult the NatureLM-audio repository.
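
As a compact illustration only (these field names are invented for readability, not the repository's actual config schema), the headline settings above amount to:

```python
# Illustrative summary of the headline training settings; the real
# config format lives in the NatureLM-audio repository.
training_config = {
    "peak_learning_rate": 9.0e-5,
    "end_learning_rate": 2.0e-5,
    "batch_size": 128,
    "steps": {
        "stage_1_perception": 500_000,        # 5.0e5
        "stage_2_generalization": 1_600_000,  # 1.6e6
    },
}
```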
|
|
|
|
|
## Evaluation |
|
|
|
|
|
### Testing Data, Factors & Metrics |
|
|
|
|
|
#### Testing Data |
|
|
|
|
|
The model is evaluated on the [BEANS-Zero](https://huggingface.co/datasets/EarthSpeciesProject/BEANS-Zero) benchmark, which includes tasks such as species classification, detection, and captioning. |
|
|
|
|
|
#### Metrics |
|
|
|
|
|
- **Accuracy** for classification |
|
|
- **F1** for detection |
|
|
- **SPIDEr** for captioning |
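
As a hedged sketch of how the classification and detection scores could be computed (via scikit-learn; the labels are made-up examples, and SPIDEr requires a dedicated captioning-metrics implementation that is omitted here):

```python
# Sketch: scoring predictions against references with scikit-learn.
from sklearn.metrics import accuracy_score, f1_score

y_true = ["Humpback Whale", "Gibbon duet", "None"]
y_pred = ["Humpback Whale", "Gibbon duet", "Gibbon duet"]

print(accuracy_score(y_true, y_pred))             # classification accuracy
print(f1_score(y_true, y_pred, average="macro"))  # macro-averaged F1
```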
|
|
|
|
|
### Results |
|
|
|
|
|
The model achieves state-of-the-art performance on several bioacoustics tasks, including zero-shot classification of unseen species. |
|
|
|
|
|
## Environmental Impact |
|
|
|
|
|
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). |
|
|
|
|
|
- **Hardware Type:** 8xH100 |
|
|
- **Hours used:** 216 |
|
|
- **Cloud Provider:** Lambda Labs
|
|
- **Compute Region:** central-texas |
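
For a rough, unofficial sense of scale (assuming ~0.7 kW draw per H100 and a grid intensity of ~0.4 kg CO2eq/kWh, neither of which is a reported figure): 8 GPUs × 216 h × 0.7 kW ≈ 1,210 kWh, on the order of 480 kg CO2eq. The calculator linked above yields a more careful estimate.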
|
|
|
|
|
## Technical Specifications |
|
|
|
|
|
### Model Architecture and Objective |
|
|
|
|
|
The model uses a BEATs audio encoder, Q-Former for connecting audio embeddings to the LLM, and Llama-3.1-8B-Instruct as the text generator. |
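
As a conceptual sketch of that data flow only (every module below is an illustrative stand-in with made-up dimensions, not the model's real components): encoder frames are distilled by a fixed set of Q-Former queries, projected into the LLM embedding space, and prepended to the embedded text prompt.

```python
# Illustrative stand-ins for the BEATs -> Q-Former -> LLM data flow.
import torch
import torch.nn as nn

audio_dim, llm_dim, n_queries = 768, 4096, 32

# Stand-in for BEATs output: frame-level audio embeddings
audio_frames = torch.randn(1, 250, audio_dim)   # (batch, frames, dim)

# A fixed set of queries cross-attends over the audio frames, distilling
# variable-length audio into a fixed number of tokens
queries = torch.randn(1, n_queries, audio_dim)
attn = nn.MultiheadAttention(audio_dim, num_heads=8, batch_first=True)
audio_tokens, _ = attn(queries, audio_frames, audio_frames)

# Project into the LLM embedding space and prepend to the prompt embeddings
proj = nn.Linear(audio_dim, llm_dim)
audio_prefix = proj(audio_tokens)               # (1, 32, 4096)
text_embeds = torch.randn(1, 12, llm_dim)       # stand-in for embedded prompt
llm_input = torch.cat([audio_prefix, text_embeds], dim=1)
print(llm_input.shape)                          # torch.Size([1, 44, 4096])
```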
|
|
|
|
|
### Compute Infrastructure |
|
|
|
|
|
- **Hardware:** 8xH100 |
|
|
- **Software:** PyTorch
|
|
|
|
|
## Citation |
|
|
|
|
|
**BibTeX:** |
|
|
|
|
|
```bibtex
|
|
@inproceedings{naturelm-audio, |
|
|
title={NatureLM-audio: An Audio-Language Foundation Model for Bioacoustics}, |
|
|
author={Robinson, David and Miron, Marius and Hagiwara, Masato and Pietquin, Olivier}, |
|
|
booktitle={Proceedings of the International Conference on Learning Representations}, |
|
|
year={2025} |
|
|
} |
|
|
``` |
|
|
|
|
|
**APA:** |
|
|
|
|
|
Robinson, D., Miron, M., Hagiwara, M., & Pietquin, O. (2025). NatureLM-audio: An audio-language foundation model for bioacoustics. *Proceedings of the International Conference on Learning Representations (ICLR 2025)*.
|
|
|
|
|
## Glossary |
|
|
|
|
|
- **Bioacoustics:** The study of sound production and reception in animals. |
|
|
- **Zero-shot learning:** The ability of a model to perform tasks it has not explicitly been trained on. |
|
|
- **Taxa:** Groups of organisms treated as classification units, such as species, genera, or families (singular: taxon).
|
|
|
|
|
## More Information |
|
|
|
|
|
For more information, please visit the [project page](https://earthspecies.github.io/naturelm-audio-demo/). |
|
|
|
|
|
## Model Card Authors |
|
|
|
|
|
- David Robinson (Earth Species Project) |
|
|
- Marius Miron (Earth Species Project) |
|
|
- Masato Hagiwara (Earth Species Project) |
|
|
- Milad Alizadeh (Earth Species Project) |
|
|
- Gagan Narula (Earth Species Project) |
|
|
- Sara Keen (Earth Species Project) |
|
|
- Benno Weck (Earth Species Project) |
|
|
- Matthieu Geist (Earth Species Project) |
|
|
- Olivier Pietquin (Earth Species Project) |
|
|
|
|
|
## Model Card Contact |
|
|
|
|
|
Contact: [[email protected]](mailto:[email protected]) |
|
|
|