ericflo committed (verified)
Commit 5fa2176 · Parent(s): 4a9b6e6

Update README.md

Files changed (1): README.md (+50 -160)

README.md (updated)
pipeline_tag: text-generation
---

# Model Card: Custom LLM with High-Rank Adapter

## Model Overview

This model is a custom-trained language model based on the Meta-Llama-3.1-8B architecture. Unlike most instruction-tuned models, it was trained directly on a mixture of high-quality datasets spanning general text completion, code completion, and instruction following. A high-rank adapter (rank 128) is used to increase learning capacity while mitigating catastrophic forgetting, which distinguishes this model from common low-rank fine-tuning approaches.

- **Developer:** Eric Florenzano
- **Model Type:** Large Language Model (LLM)
- **Language(s):** English, with a focus on Python for code-related tasks
- **License:** Apache-2.0
- **Base Model:** meta-llama/Meta-Llama-3.1-8B

## Model Sources

- **Repository:** [Custom Llama-3.1-8B Training](https://huggingface.co/ericflo/Llama-3.1-8B-ContinuedTraining)

## Model Training and Approach

### Unique Training Approach

Instead of fine-tuning an instruction-tuned model, the base Meta-Llama-3.1-8B model was trained on a diverse set of high-quality pretraining and instruction datasets, covering both text completion/prediction and instruction-following tasks.

Key features of the training process:

- **Training Data**: A blend of high-quality data sources, each serving a different purpose:
  - **FineTome-100k**: High-quality instruction-tuned data for general language understanding and task completion.
  - **dclm-baseline-1.0-parquet**: A pretraining corpus from Apple, used for standard text completion/prediction tasks.
  - **English Wikipedia**: Used for text completion tasks with a focus on broad language understanding.
  - **Starcoder**: A high-quality, Python-focused code dataset used for code completion tasks.
- **Instruction Tuning**: The model alternates randomly between ChatML and the Llama chat template during training, so it learns a general-purpose instruction-following format rather than one tied to a single style (see the sketch after this list).
- **Strata Information**: Training examples are prefixed with contextual information (e.g., the URL of a Wikipedia article) to address data imbalance, allowing the model to weigh different data sources appropriately. The prefix is applied only during training; inference relies on the model's learned representations.
- **High-Rank Adapter**: The model uses a high-rank adapter (rank 128) to learn more complex representations and reduce the risk of catastrophic forgetting, in contrast to the commonly used low-rank adaptation (LoRA) settings.
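
To make the template alternation and strata prefixing concrete, here is a minimal sketch of both steps. The actual preprocessing code is not published, so the function names, the 50/50 split, and the bare-URL prefix are illustrative assumptions; only the two chat formats themselves (ChatML and Llama 3) are standard.

```python
# Illustrative sketch only; not the model's actual training pipeline.
import random

def format_instruction(messages: list[dict]) -> str:
    """Render one conversation in either ChatML or Llama 3 chat format,
    chosen at random so the model learns both styles (assumed 50/50 split)."""
    if random.random() < 0.5:
        # ChatML: <|im_start|>role\ncontent<|im_end|>
        return "".join(
            f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n" for m in messages
        )
    # Llama 3 chat template
    return "<|begin_of_text|>" + "".join(
        f"<|start_header_id|>{m['role']}<|end_header_id|>\n\n{m['content']}<|eot_id|>"
        for m in messages
    )

def format_pretraining(text: str, source_url: str | None = None) -> str:
    """Prefix raw text with its source ('strata information').

    Applied only at training time; at inference the model sees plain input.
    """
    return f"{source_url}\n\n{text}" if source_url else text

print(format_instruction([
    {"role": "user", "content": "What is the capital of France?"},
    {"role": "assistant", "content": "Paris."},
]))
```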

### Training Procedure

The model was trained for 650 steps on the datasets listed above, with the focus on keeping learning balanced across the different task types (text completion, code completion, instruction following). The high-rank adapter maintains model capacity while keeping the number of trained parameters, and hence the computational cost, well below that of full fine-tuning.
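
One plausible way to implement that balance is probabilistic interleaving with the `datasets` library, sketched below on toy in-memory rows. The 50/50 weights are an assumption; the card does not publish the actual sampling recipe.

```python
# Balanced sampling across data sources via probabilistic interleaving (sketch).
from datasets import Dataset, interleave_datasets

wiki = Dataset.from_dict({"text": [
    "https://en.wikipedia.org/wiki/Alan_Turing\n\nAlan Turing was a mathematician..."
]})
code = Dataset.from_dict({"text": [
    "def fibonacci(n: int) -> int:\n    ..."
]})

mixed = interleave_datasets(
    [wiki, code],
    probabilities=[0.5, 0.5],  # assumed weights; the real mixture is unpublished
    seed=42,
)
for row in mixed:
    print(row["text"][:60])
```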

#### Training Hyperparameters

- **Adapter Rank:** 128
- **Training Steps:** 650
- **Base Model:** meta-llama/Meta-Llama-3.1-8B
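
For reference, a sketch of how a rank-128 adapter can be attached with PEFT. Only the rank and base model come from this card; the alpha, dropout, and target modules are assumed, commonly used defaults.

```python
# Attach a high-rank (r=128) adapter to the base model with PEFT (sketch).
import torch
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3.1-8B", torch_dtype=torch.bfloat16
)
config = LoraConfig(
    r=128,              # the high rank stated on this card
    lora_alpha=256,     # assumed; often set to ~2x the rank
    lora_dropout=0.05,  # assumed
    target_modules=[    # assumed: all attention and MLP projections
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, config)
model.print_trainable_parameters()  # adapter weights as a share of the 8B total
```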

## Intended Uses

This model is designed for a variety of natural language processing tasks, including:

- **Text Completion and Generation**: Generating and predicting text from a given input.
- **Code Completion**: Assisting with Python code generation and completion tasks.
- **Instruction Following**: Following complex instructions across multiple domains.
- **General Language Understanding**: Leveraging its diverse training data for broad language comprehension tasks.
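
A minimal generation sketch covering these uses, assuming the repository linked above hosts the PEFT adapter weights and that `transformers`, `peft`, and `accelerate` are installed; the prompt and sampling settings are illustrative.

```python
# Load the adapter (plus its base model) and run a code-completion prompt.
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

repo = "ericflo/Llama-3.1-8B-ContinuedTraining"
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3.1-8B")
model = AutoPeftModelForCausalLM.from_pretrained(
    repo, torch_dtype=torch.bfloat16, device_map="auto"
)

prompt = "def fibonacci(n: int) -> int:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```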

### Out-of-Scope Use

- **Real-Time Knowledge**: The model has no access to real-time data or to events after its training period.
- **Harmful or Biased Content**: The model should not be used to generate harmful, biased, or misleading information.
- **Critical Decision-Making**: The model should not be relied upon for critical tasks that require human oversight and judgment.

## Bias, Risks, and Limitations

While this model was trained on a mix of high-quality datasets, it may still exhibit biases present in the training data, especially in domains with limited or skewed representation. Users should:

- Be aware of potential biases, particularly in sensitive domains.
- Review model outputs for accuracy, especially for code generation and decision-making tasks.
- Use the model as a tool to assist human decision-making, not as a replacement for it.
- Understand that performance may vary across domains and task types.

## Evaluation

The model has not yet been evaluated; evaluation metrics will be added as they become available.

## Technical Specifications

### Model Architecture

- **Base Model**: meta-llama/Meta-Llama-3.1-8B
- **High-Rank Adapter**: A rank-128 adapter used to learn more complex patterns while reducing catastrophic forgetting.
- **Objective**: Multi-task learning across text completion, code completion, and instruction following.

### Compute Infrastructure

#### Software

- **Library**: PEFT 0.12.0 for parameter-efficient fine-tuning.
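
If the published checkpoint is a PEFT adapter (the card lists PEFT 0.12.0), it can optionally be merged into the base weights for adapter-free deployment; a brief sketch, assuming the repository id above:

```python
# Fold the rank-128 adapter into the base weights (optional, for deployment).
from peft import AutoPeftModelForCausalLM

model = AutoPeftModelForCausalLM.from_pretrained("ericflo/Llama-3.1-8B-ContinuedTraining")
merged = model.merge_and_unload()            # returns a plain transformers model
merged.save_pretrained("llama31-8b-merged")  # standard checkpoint, no PEFT needed
```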

## Model Card Contact

For inquiries about this model, please contact Eric Florenzano through the [model repository](https://huggingface.co/ericflo/Llama-3.1-8B-ContinuedTraining).