Add comprehensive README
README.md
CHANGED
@@ -1,207 +1,143 @@

*(Removed: the auto-generated PEFT model card stub — front matter with `base_model: microsoft/DialoGPT-medium`, `library_name: peft`, `pipeline_tag: text-generation`, and a `lora` tag, followed by the standard template sections (Model Details, Training Details, Evaluation, Environmental Impact, Technical Specifications, Citation, Framework versions), each left as "[More Information Needed]".)*
---
license: mit
base_model: microsoft/DialoGPT-medium
tags:
- peft
- lora
- conversational-ai
- instruction-following
- fine-tuned
- alpaca-dataset
language:
- en
library_name: peft
---

# DialoGPT-Medium LoRA Fine-tuned on Alpaca Dataset

This model is a LoRA (Low-Rank Adaptation) fine-tuned version of `microsoft/DialoGPT-medium` trained on a subset of the Alpaca instruction-following dataset.

## Model Details

- **Base Model**: microsoft/DialoGPT-medium (345M parameters)
- **Training Method**: LoRA (Low-Rank Adaptation)
- **LoRA Configuration**:
  - Rank (r): 32
  - Alpha: 64
  - Dropout: 0.1
  - Target modules: c_attn, c_proj, c_fc
- **Dataset**: Stanford Alpaca (1000 samples)
- **Training Split**: 800 train, 200 validation
- **Epochs**: 3
- **Final Training Loss**: 3.45
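For reference, the LoRA settings above correspond roughly to the following PEFT `LoraConfig`. This is a minimal sketch, not the original training script (which is not included in this repository):

```python
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForCausalLM

# Assumed LoraConfig mirroring the values listed above, not the exact script
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=32,                                         # LoRA rank
    lora_alpha=64,                                # scaling factor
    lora_dropout=0.1,
    target_modules=["c_attn", "c_proj", "c_fc"],  # GPT-2-style attention/MLP projections
)

base = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-medium")
peft_model = get_peft_model(base, lora_config)
peft_model.print_trainable_parameters()  # only the adapter weights are trainable
```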

## Training Setup

- **Hardware**: Apple Silicon (MPS)
- **Precision**: FP32 for numerical stability
- **Batch Size**: 4 per device
- **Gradient Accumulation**: 2 steps
- **Learning Rate**: 1e-4
- **Scheduler**: Cosine
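These settings map onto Hugging Face `TrainingArguments` roughly as follows, under the assumption that the standard `Trainer` API was used (the original script is not part of this repository):

```python
from transformers import TrainingArguments

# Assumed TrainingArguments mirroring the setup listed above
training_args = TrainingArguments(
    output_dir="./dialogpt-lora-alpaca",  # illustrative path
    num_train_epochs=3,
    per_device_train_batch_size=4,
    gradient_accumulation_steps=2,        # effective batch size of 8
    learning_rate=1e-4,
    lr_scheduler_type="cosine",
    logging_steps=10,                     # assumed logging cadence
    save_strategy="epoch",
    fp16=False,
    bf16=False,                           # keep everything in FP32
)
```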

## Performance

This model demonstrates improved instruction-following capabilities compared to the base DialoGPT-medium model, with responses that:

- Follow instruction format better
- Provide more detailed explanations
- Handle diverse question types (AI/ML, technical concepts, etc.)

## Usage

### Loading the Model

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

# Load base model and tokenizer
base_model_name = "microsoft/DialoGPT-medium"
tokenizer = AutoTokenizer.from_pretrained(base_model_name)
base_model = AutoModelForCausalLM.from_pretrained(base_model_name)

# Load LoRA adapter
model = PeftModel.from_pretrained(base_model, "Tanaybh/dialogpt-medium-qlora-alpaca")

# Add padding token
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
```

### Inference

```python
import torch

def generate_response(prompt):
    input_text = f"Instruction: {prompt}\nResponse:"
    inputs = tokenizer.encode(input_text, return_tensors="pt")

    with torch.no_grad():
        outputs = model.generate(
            inputs,
            max_new_tokens=80,
            temperature=0.7,
            do_sample=True,
            pad_token_id=tokenizer.eos_token_id,
            top_p=0.8,
            repetition_penalty=1.2
        )

    response = tokenizer.decode(outputs[0], skip_special_tokens=True)
    return response.split("Response:")[-1].strip()

# Example usage
response = generate_response("What is machine learning?")
print(response)
```

## Example Outputs

**Q: What is artificial intelligence?**
A: It's a computer program designed to detect patterns in images and predict them based on what they are. It can be used for machine learning, image recognition, or machine translation, among other things.

**Q: What is the difference between AI and ML?**
A: AI is a programming language for AI systems. ML is an algorithm for ML systems, like Deep Learning, Deep Recognition, etc. It uses the same algorithms as other algorithms, but is used in more advanced applications.

## Training Details

This model was fine-tuned using a Mac-optimized approach that provides QLoRA-like benefits without requiring CUDA-specific quantization libraries:

- Used higher LoRA rank (32 vs typical 8-16)
- Targeted more modules for broader adaptation
- Leveraged Apple Silicon GPU (MPS) for efficient training
- Applied FP32 precision for numerical stability
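Concretely, the training loop amounts to full-precision LoRA on the MPS device. The snippet below is an assumed reconstruction that reuses `peft_model` and `training_args` from the sketches above; `train_dataset` and `eval_dataset` are hypothetical names for the tokenized 800/200 Alpaca splits:

```python
import torch
from transformers import Trainer

# Assumed setup: train in FP32 on the Apple Silicon GPU when available
device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")
peft_model = peft_model.float().to(device)

trainer = Trainer(
    model=peft_model,
    args=training_args,            # from the Training Setup sketch above
    train_dataset=train_dataset,   # hypothetical tokenized 800-sample split
    eval_dataset=eval_dataset,     # hypothetical tokenized 200-sample split
)
trainer.train()
```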

## Limitations

- Responses may occasionally be verbose or repetitive
- Performance varies by question complexity
- Optimized for instructional/educational content
- May require generation parameter tuning for best results
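If outputs come back repetitive or rambling, tightening the decoding settings is a reasonable first step. The values below are illustrative starting points rather than tuned recommendations, and they reuse `model`, `inputs`, and `tokenizer` from the inference example above:

```python
# Illustrative, more conservative decoding settings (adjust to taste)
outputs = model.generate(
    inputs,
    max_new_tokens=80,
    do_sample=True,
    temperature=0.5,           # lower temperature -> less random output
    top_p=0.9,
    repetition_penalty=1.3,    # push harder against repeated phrases
    no_repeat_ngram_size=3,    # block verbatim 3-gram repeats
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```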

## Technical Notes

- This is a LoRA adapter, not a full model
- Requires the base DialoGPT-medium model to function
- Trained on Mac hardware using MPS acceleration
- Compatible with standard PEFT/transformers libraries
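If a standalone checkpoint is more convenient than loading the adapter at runtime, the LoRA weights can be folded into the base model with PEFT's `merge_and_unload`; a brief sketch (the output directory name is only an example):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-medium")
adapter = PeftModel.from_pretrained(base, "Tanaybh/dialogpt-medium-qlora-alpaca")

# Fold the LoRA deltas into the base weights and drop the PEFT wrapper
merged = adapter.merge_and_unload()

# Save as a plain transformers checkpoint (directory name is illustrative)
merged.save_pretrained("dialogpt-medium-alpaca-merged")
AutoTokenizer.from_pretrained("microsoft/DialoGPT-medium").save_pretrained("dialogpt-medium-alpaca-merged")
```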

## Citation

If you use this model, please cite:

```bibtex
@misc{dialogpt-medium-qlora-alpaca,
  author = {Tanay Bhardwaj},
  title  = {DialoGPT-Medium LoRA Fine-tuned on Alpaca Dataset},
  year   = {2025},
  url    = {https://huggingface.co/Tanaybh/dialogpt-medium-qlora-alpaca}
}
```

## License

MIT License - see base model license for additional terms.