Update app.py
app.py CHANGED
@@ -7,12 +7,17 @@ BANNER = f'<div style="display: flex; justify-content: space-around;"><img src="
 INTRODUCTION_TEXT = """
 📖**Open Universal Arabic ASR Leaderboard**📖 benchmarks multi-dialect Arabic ASR models on various multi-dialect datasets.
 \nApart from the WER%/CER% for each test set, we also report the Average WER%/CER% and rank the models based on the Average WER, from lowest to highest.
-\nTo reproduce the benchmark numbers and request a model that is not listed, you can launch an issue/PR in our GitHub repo😊.
-\nFor more detailed analysis such as models' robustness, speaker adaptation, model efficiency and memory usage, please check our paper.
+\nTo reproduce the benchmark numbers and request a model that is not listed, you can launch an issue/PR in our [GitHub repo](https://github.com/Natural-Language-Processing-Elm/open_universal_arabic_asr_leaderboard)😊.
+\nFor more detailed analysis such as models' robustness, speaker adaptation, model efficiency and memory usage, please check our [paper](https://arxiv.org/pdf/2412.13788).
 """
 
 CITATION_BUTTON_TEXT = """
-
+@article{wang2024open,
+  title={Open Universal Arabic ASR Leaderboard},
+  author={Wang, Yingzhi and Alhmoud, Anas and Alqurishi, Muhammad},
+  journal={arXiv preprint arXiv:2412.13788},
+  year={2024}
+}
 """
 
 METRICS_TAB_TEXT = """
@@ -20,7 +25,7 @@ METRICS_TAB_TEXT = """
 We report both the Word Error Rate (WER) and Character Error Rate (CER) metrics.
 ## Reproduction
 The Open Universal Arabic ASR Leaderboard will be a continuous benchmark project.
-\nWe open-source the evaluation scripts at our GitHub repo.
+\nWe open-source the evaluation scripts at our [GitHub repo](https://github.com/Natural-Language-Processing-Elm/open_universal_arabic_asr_leaderboard).
 \nPlease launch a discussion in our GitHub repo to let us know if you want to learn about the performance of a new model.
 
 ## Benchmark datasets
@@ -33,8 +38,8 @@ The Open Universal Arabic ASR Leaderboard will be a continuous benchmark project
 | [MGB-2](http://www.mgb-challenge.org/MGB-2.html) | Unspecified | 9.6 |
 
 ## In-depth Analysis
-We also provide a comprehensive analysis of models' robustness, speaker adaptation, inference efficiency and memory consumption.
-\nPlease check our paper to learn more.
+We also provide a comprehensive analysis of the models' robustness, speaker adaptation, inference efficiency and memory consumption.
+\nPlease check our [paper](https://arxiv.org/pdf/2412.13788) to learn more.
 """
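These constants are display copy for the leaderboard Space. The diff shows only the text, not where it is consumed; a typical Gradio leaderboard app wires such constants up roughly as sketched below. The layout, component choices, and the `demo` name are assumptions for illustration, not the repo's actual code.

```python
# Hypothetical wiring -- the diff above only defines the text constants,
# not how app.py actually lays out the UI.
import gradio as gr

INTRODUCTION_TEXT = "..."     # as defined in the diff (abbreviated here)
CITATION_BUTTON_TEXT = "..."  # BibTeX entry added by this commit
METRICS_TAB_TEXT = "..."      # metrics / reproduction / datasets text

with gr.Blocks() as demo:
    # Markdown rendering makes the [GitHub repo](...) and [paper](...)
    # links added in this commit clickable.
    gr.Markdown(INTRODUCTION_TEXT)
    with gr.Tab("Metrics"):
        gr.Markdown(METRICS_TAB_TEXT)
    with gr.Accordion("Citation", open=False):
        gr.Textbox(value=CITATION_BUTTON_TEXT, label="BibTeX", lines=6,
                   show_copy_button=True)

demo.launch()
```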
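The metrics tab reports WER and CER; the leaderboard's real evaluation scripts live in the linked GitHub repo. As a minimal sketch of what the two metrics measure, the commonly used `jiwer` package computes both from a reference transcript and an ASR hypothesis; the example strings below are made up for illustration.

```python
# Illustration of WER/CER only -- not the leaderboard's evaluation pipeline.
import jiwer

reference = "السلام عليكم ورحمة الله"    # ground-truth transcript (hypothetical)
hypothesis = "السلام عليكم ورحمه الله"   # ASR output with a one-character error

wer = jiwer.wer(reference, hypothesis)  # word-level edits / reference word count
cer = jiwer.cer(reference, hypothesis)  # character-level edits / reference character count
print(f"WER: {wer:.2%}, CER: {cer:.2%}")
```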