---
license: mit
library_name: peft
tags:
- generated_from_trainer
- text-generation
- autogenerated-modelcard
base_model: microsoft/phi-2
model-index:
- name: Trial1-phi2
  results: []
language:
- en
metrics:
- accuracy
widget:
- text: >-
    ['slum', 'redevelopment', 'dharavi', 'group', 'project', 'adani', 'collect',
    'data', 'mumbai', 'residents']
---

# Trial1-phi2

This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on an unpublished keyword-to-sentence dataset.

## Model description

This is a fine-tuned Phi-2 text-generation model that crafts sentences from input keywords. Trained on keyword-input, sentence-output pairs, it learns to produce contextually coherent sentences aligned with the provided keywords.
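
As a quick sketch of how the adapter might be loaded and queried (the adapter id `Trial1-phi2` and the `Keywords: ... Sentence:` prompt format are assumptions, not the exact training template):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the phi-2 base model, then attach this card's PEFT adapter.
base = AutoModelForCausalLM.from_pretrained(
    "microsoft/phi-2", torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "Trial1-phi2")  # adapter path/hub id is an assumption
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2")

# Illustrative prompt format; the exact template used in training is not documented.
keywords = ["slum", "redevelopment", "dharavi", "mumbai", "residents"]
prompt = f"Keywords: {', '.join(keywords)}\nSentence:"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=60, do_sample=False)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```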

## Intended uses & limitations

This model is intended for keyword-driven text generation, such as content creation and assistive writing, but it may struggle with ambiguous keywords or nuanced language beyond its training data.

## Training and evaluation data

The training data consists of keyword lists paired with corresponding sentences, enabling the model to learn to generate text based on provided keywords. Evaluation involves assessing the model's performance in generating coherent sentences aligned with the given keywords, measuring its accuracy and fluency.
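
For illustration, a single training record might look like the sketch below; the field names and prompt template are assumptions, since the dataset schema was not published:

```python
# Hypothetical record structure; actual column names were not released with this card.
record = {
    "keywords": ["slum", "redevelopment", "dharavi", "mumbai", "residents"],
    "sentence": "Dharavi residents in Mumbai shared their views on the slum redevelopment plan.",
}

def to_training_text(rec: dict) -> str:
    """Join keywords and target sentence into one causal-LM training string."""
    return f"Keywords: {', '.join(rec['keywords'])}\nSentence: {rec['sentence']}"
```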

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
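
As a rough sketch, these values map onto a `transformers`/`peft` configuration like the one below; the LoRA settings and `output_dir` are illustrative guesses, and only the hyperparameters listed above come from the actual run:

```python
from transformers import TrainingArguments
from peft import LoraConfig

# Values below mirror the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="Trial1-phi2",        # illustrative; actual output path unknown
    learning_rate=2e-4,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=1,
    # optimizer defaults to AdamW with betas=(0.9, 0.999) and eps=1e-8
)

# Assumed adapter settings, not documented in this card.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "dense"],
    task_type="CAUSAL_LM",
)
```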

### Training results

No evaluation results were logged for this run.

### Framework versions

- PEFT 0.10.0
- Transformers 4.39.3
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2