---
license: mit
language:
- en
base_model:
- meta-llama/Llama-3.1-70B-Instruct
---

# Llama-3.1-70B-Instruct + ToolQA (Finetuned)

This model is based on **Llama-3.1-70B-Instruct**, fine-tuned on the [ToolQA dataset](https://github.com/night-chen/ToolQA) for multi-step tool-use reasoning tasks. |
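As a quick orientation, here is a minimal sketch of how a single-turn tool-use query could be formatted for this model. The special tokens follow the published Llama 3.1 chat format; in practice you would load the checkpoint with `transformers` and let `tokenizer.apply_chat_template` do this assembly. The system prompt and question below are illustrative assumptions, not taken from ToolQA.

```python
# Sketch: hand-assemble a Llama-3.1-style chat prompt for a tool-use query.
# The special tokens below follow the Llama 3.1 chat format; with the real
# tokenizer you would call tokenizer.apply_chat_template instead.

def build_llama31_prompt(system: str, user: str) -> str:
    """Assemble a single-turn prompt in the Llama 3.1 chat format."""
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

# Illustrative ToolQA-style question (hypothetical, not from the dataset).
prompt = build_llama31_prompt(
    system="You are an agent that answers questions by calling tools step by step.",
    user="What was the average departure delay at SFO on 2022-01-01?",
)
```

The model then continues generation from the trailing assistant header, producing its next tool call or final answer.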

## Training Details

* **Dataset**: [ToolQA](https://github.com/night-chen/ToolQA) – a benchmark designed for evaluating agents' tool-use capabilities in complex environments.
* **Training Framework**: [Memento-No-More](https://arxiv.org/abs/2502.01562) – a framework that coaches agents to internalize hints and perform multi-skill reasoning.
* **Fine-tuning Rounds**: 3
* **Base Model**: [Llama-3.1-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-70B-Instruct)

## Reference

For detailed information on the training methodology, architecture, and evaluations, please refer to our paper: |
> **Alakuijala, M., Gao, Y., Ananov, G., Kaski, S., Marttinen, P., Ilin, A., & Valpola, H.** (2025). *Memento No More: Coaching AI Agents to Master Multiple Tasks via Hints Internalization*. arXiv preprint [arXiv:2502.01562](https://arxiv.org/abs/2502.01562). |