Instructions for using Angelectronic/gemma-QA-ViMMRC-Squad-v1.1 with libraries, inference providers, notebooks, and local apps.
- Libraries
- PEFT
How to use Angelectronic/gemma-QA-ViMMRC-Squad-v1.1 with PEFT:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

# Load the 4-bit base model, then attach the fine-tuned PEFT adapter.
base_model = AutoModelForCausalLM.from_pretrained("unsloth/gemma-1.1-7b-it-bnb-4bit")
model = PeftModel.from_pretrained(base_model, "Angelectronic/gemma-QA-ViMMRC-Squad-v1.1")
```
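A minimal generation sketch on top of the loaded adapter, assuming bitsandbytes and a CUDA device are available (the base checkpoint is 4-bit quantized) and that the base model's tokenizer is the right one to use; the prompt text is illustrative:

```python
from transformers import AutoTokenizer

# Assumption: the adapter repo may not ship tokenizer files, so use the base model's.
tokenizer = AutoTokenizer.from_pretrained("unsloth/gemma-1.1-7b-it-bnb-4bit")

# Illustrative prompt in the format this card lists under "Intended uses".
prompt = "Given the following reference, create a question and a corresponding answer to the question: ..."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```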
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- Unsloth Studio
How to use Angelectronic/gemma-QA-ViMMRC-Squad-v1.1 with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
```sh
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# and search for Angelectronic/gemma-QA-ViMMRC-Squad-v1.1 to start chatting
```
Install Unsloth Studio (Windows)
```powershell
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# and search for Angelectronic/gemma-QA-ViMMRC-Squad-v1.1 to start chatting
```
Using HuggingFace Spaces for Unsloth
```sh
# No setup required: open https://huggingface.co/spaces/unsloth/studio in your browser
# and search for Angelectronic/gemma-QA-ViMMRC-Squad-v1.1 to start chatting
```
Load model with FastModel
```sh
pip install unsloth
```

```python
from unsloth import FastModel

model, tokenizer = FastModel.from_pretrained(
    model_name="Angelectronic/gemma-QA-ViMMRC-Squad-v1.1",
    max_seq_length=2048,
)
```
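A minimal chat sketch following the load above; it assumes the saved tokenizer ships Gemma's chat template and that a GPU is available, and the context string is illustrative:

```python
context = "..."  # illustrative; replace with your own reference passage
messages = [{
    "role": "user",
    "content": "Given the following reference, create a question and a "
               "corresponding answer to the question: " + context,
}]

# Build Gemma-formatted input ids and generate a question-answer pair.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```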
gemma-QA-ViMMRC-Squad-v1.1
This model is a fine-tuned version of unsloth/gemma-1.1-7b-it-bnb-4bit; the training dataset is not named in this card (the model name suggests ViMMRC- and SQuAD-style QA data). It achieves the following results on the evaluation set:
- Loss: 3.3372
Model description
More information needed
Intended uses & limitations
The model is fine-tuned to generate questions and answers from a reference passage, using two prompt formats (filled-in examples follow the list):
- Prompt 1: Given the following reference, create a question and a corresponding answer to the question: + [context]
- Prompt 2: Given the following reference, create a multiple-choice question and its corresponding answer: + [context]
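As an illustration of how the two templates might be filled, with a hypothetical passage:

```python
context = "The Red River flows through northern Vietnam into the Gulf of Tonkin."  # hypothetical

prompt_1 = ("Given the following reference, create a question and a "
            "corresponding answer to the question: " + context)
prompt_2 = ("Given the following reference, create a multiple-choice question "
            "and its corresponding answer: " + context)
```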
Training and evaluation data
More information needed
Training procedure
Training hyperparameters
The following hyperparameters were used during training (see the TrainingArguments sketch after the list):
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 3407
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- num_epochs: 3
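A sketch of how these settings map onto transformers.TrainingArguments; the actual training script is not included in this card, so the trainer wiring and output path are assumptions:

```python
from transformers import TrainingArguments

# Mirrors the hyperparameters above; the effective batch size is
# 16 (per device) x 4 (gradient accumulation) = 64.
args = TrainingArguments(
    output_dir="gemma-QA-ViMMRC-Squad-v1.1",  # hypothetical output path
    learning_rate=2e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=3407,
    gradient_accumulation_steps=4,
    lr_scheduler_type="cosine",
    warmup_steps=5,
    num_train_epochs=3,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```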
Training results
| Training Loss | Epoch | Step | Validation Loss |
|---|---|---|---|
| 0.8 | 0.2307 | 320 | 1.9584 |
| 0.4031 | 0.4614 | 640 | 2.0371 |
| 0.4 | 0.6921 | 960 | 2.1358 |
| 0.4 | 0.9229 | 1280 | 2.2552 |
| 0.2328 | 1.1536 | 1600 | 2.4241 |
| 0.2 | 1.3843 | 1920 | 2.5637 |
| 0.2 | 1.6150 | 2240 | 2.7250 |
| 0.1117 | 1.8457 | 2560 | 2.8899 |
| 0.1008 | 2.0764 | 2880 | 3.1551 |
| 0.0578 | 2.3071 | 3200 | 3.2185 |
| 0.0566 | 2.5379 | 3520 | 3.3025 |
| 0.0555 | 2.7686 | 3840 | 3.3309 |
| 0.0516 | 2.9993 | 4160 | 3.3372 |
Framework versions
- PEFT 0.10.0
- Transformers 4.40.2
- Pytorch 2.3.0
- Datasets 2.19.1
- Tokenizers 0.19.1
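To approximate this environment, the listed releases can be pinned with pip (a sketch assuming a platform where a compatible build of PyTorch 2.3.0 is available):

```sh
pip install peft==0.10.0 transformers==4.40.2 torch==2.3.0 datasets==2.19.1 tokenizers==0.19.1
```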
Base model: unsloth/gemma-1.1-7b-it-bnb-4bit