## Model Overview
Model license: cc-by-nc-4.0
This model is based on EleutherAI/pythia-1.4b-deduped, LoRA-finetuned on the vicgalle/alpaca-gpt4 dataset.
Prompt Template: Alpaca

```
<system_prompt>

### Instruction:
<user_message>

### Response:
<assistant_response>
```
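As a sketch, the template above can be assembled with a small helper; the function name and the example strings below are illustrative, not part of this model card:

```python
def build_alpaca_prompt(system_prompt: str, user_message: str) -> str:
    """Assemble the Alpaca-style prompt expected by this model.

    The assistant's reply is generated after the trailing
    '### Response:' header.
    """
    return (
        f"{system_prompt}\n\n"
        "### Instruction:\n"
        f"{user_message}\n\n"
        "### Response:\n"
    )

# Example invocation (system prompt text is the common Alpaca preamble,
# assumed here rather than quoted from this card):
prompt = build_alpaca_prompt(
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.",
    "Name the capital of France.",
)
```

Whatever text the model emits after the final `### Response:` header is the assistant's answer.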
## Intended Use
THIS IS A TEST MODEL; IT IS NOT INTENDED FOR REAL APPLICATIONS. HOWEVER, A NEW MODEL ON THE SAME TOPIC IS COMING.
This model series is aimed at small-scale but demanding applications.
## Training Details
This model took 2:31:23 to train with QLoRA on a single T4 GPU.
- epochs: 1
- train batch size: 12
- eval batch size: 12
- gradient accumulation steps: 1
- maximum gradient norm: 0.3
- learning rate: 2e-4
- weight decay: 0.001
- optimizer: paged_adamw_32bit
- learning rate schedule: cosine
- warmup ratio (linear): 0.03
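Assuming the usual `transformers` + `peft` + `bitsandbytes` QLoRA stack (the card does not show the actual training code), the hyperparameters above would map roughly onto a configuration like this; only the listed values come from the card, everything else (quantization settings, LoRA rank, output directory) is assumed:

```python
# Hypothetical reconstruction of the training configuration.
from transformers import BitsAndBytesConfig, TrainingArguments
from peft import LoraConfig

# QLoRA: base model weights quantized to 4-bit NF4 (assumed settings).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype="float16",
)

# LoRA adapter config; rank and alpha are placeholders, not from the card.
lora_config = LoraConfig(task_type="CAUSAL_LM", r=16, lora_alpha=32)

# Values below mirror the hyperparameter list in this section.
training_args = TrainingArguments(
    output_dir="out",
    num_train_epochs=1,
    per_device_train_batch_size=12,
    per_device_eval_batch_size=12,
    gradient_accumulation_steps=1,
    max_grad_norm=0.3,
    learning_rate=2e-4,
    weight_decay=0.001,
    optim="paged_adamw_32bit",
    lr_scheduler_type="cosine",
    warmup_ratio=0.03,
)
```

The paged 32-bit AdamW optimizer keeps optimizer state in pageable memory, which is what makes a 1.4B-parameter finetune feasible on a single 16 GB T4.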