add evaluate question (2nd take) (#5)
Commit 35a196ef99a6cc19ae7ceacf4e58a493443f3469
Co-authored-by: Leandro von Werra <[email protected]>
app.py
CHANGED
@@ -78,13 +78,23 @@ References
 - Codex paper: https://arxiv.org/abs/2107.03374
 """
 
+evaluate_question = """
+Use the `evaluate` library to compute the BLEU score of the model generation `"Evaluate is a library to evaluate Machine Learning models"` and the reference solution `"Evaluate is a library to evaluate ML models"`. Round the result to two digits after the comma.
+<br/>
+<br/>
+References
+<br/>
+- `evaluate` library: https://huggingface.co/docs/evaluate/index
+- BLEU score: https://en.wikipedia.org/wiki/BLEU
+"""
+
 internships = {
     'Accelerate': default_question,
     'Diffusion distillation': default_question,
     'Skops & Scikit-Learn': skops_question,
     "Code Generation": code_question,
     "Document AI Democratization": default_question,
-    "Evaluate":
+    "Evaluate": evaluate_question,
     "ASR": default_question,
     "Efficient video pretraining": default_question,
     "Embodied AI": default_question,
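For context, a minimal sketch (not part of the commit) of how the BLEU computation asked for in the new `evaluate_question` could be done with the `evaluate` library; the two strings are taken verbatim from the question text, and the rounding follows its instruction:

import evaluate

# Load the BLEU metric from the Hugging Face `evaluate` library.
bleu = evaluate.load("bleu")

prediction = "Evaluate is a library to evaluate Machine Learning models"
reference = "Evaluate is a library to evaluate ML models"

# `compute` takes a list of predictions and, for each prediction,
# a list of acceptable references.
result = bleu.compute(predictions=[prediction], references=[[reference]])

# Round to two digits after the decimal point, as the question asks.
print(round(result["bleu"], 2))

`references` is passed as a list of reference lists because BLEU supports multiple acceptable references per prediction; here each prediction has a single reference.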