---
library_name: peft
license: mit
base_model: NousResearch/Nous-Hermes-llama-2-7b
tags:
- generated_from_trainer
model-index:
- name: root/workspace/outputs/9/abe7c8e9-05cf-4aaf-bfb5-98750b471a5a
  results: []
---

# root/workspace/outputs/9/abe7c8e9-05cf-4aaf-bfb5-98750b471a5a

This model is a fine-tuned version of [NousResearch/Nous-Hermes-llama-2-7b](https://huggingface.co/NousResearch/Nous-Hermes-llama-2-7b) on the /root/workspace/input_data/ff010c3c53b2877d_train_data.json dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0012

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

### Framework versions

- PEFT 0.14.0
- Transformers 4.48.3
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0
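
### Loading the adapter (sketch)

Since this repository contains a PEFT adapter trained on top of the base model, a minimal loading sketch is shown below using Transformers and PEFT. The adapter path and the prompt are placeholders, not values taken from the training configuration; adjust dtype and device placement for your hardware.

```python
# Minimal sketch, assuming a standard PEFT (LoRA-style) adapter on top of the base model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model_id = "NousResearch/Nous-Hermes-llama-2-7b"
adapter_path = "path/to/this-adapter"  # placeholder: this repo id or the local output directory

tokenizer = AutoTokenizer.from_pretrained(base_model_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Attach the fine-tuned adapter weights to the frozen base model.
model = PeftModel.from_pretrained(base_model, adapter_path)
model.eval()

prompt = "Hello, how are you?"  # placeholder prompt; match the format used in the training data
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```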