# Annotated Model Card Template
## Template
[modelcard_template.md file](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md)
## Directions
Fully filling out a model card requires input from a few different roles. (One person may have more than one role.)  We’ll refer to these roles as the **developer**, who writes the code and runs training; the **sociotechnic**, who is skilled at analyzing the interaction of technology and society long-term (this includes lawyers, ethicists, sociologists, or rights advocates); and the **project organizer**, who understands the overall scope and reach of the model, can roughly fill out each part of the card, and who serves as a contact person for model card updates.
* The **developer** is necessary for filling out [Training Procedure](#training-procedure-optional) and [Technical Specifications](#technical-specifications-optional). They are also particularly useful for the “Limitations” section of [Bias, Risks, and Limitations](#bias-risks-and-limitations). They are responsible for providing [Results](#results) for the Evaluation, and ideally work with the other roles to define the rest of the Evaluation: [Testing Data, Factors & Metrics](#testing-data-factors--metrics).
* The **sociotechnic** is necessary for filling out “Bias” and “Risks” within [Bias, Risks, and Limitations](#bias-risks-and-limitations), and particularly useful for “Out of Scope Use” within [Uses](#uses).
* The **project organizer** is necessary for filling out [Model Details](#model-details) and [Uses](#uses). They might also fill out
[Training Data](#training-data). Project organizers could also be in charge of [Citation](#citation-optional), [Glossary](#glossary-optional), 
[Model Card Contact](#model-card-contact), [Model Card Authors](#model-card-authors-optional), and [More Information](#more-information-optional).
_Instructions are provided below, in italics._
Template variable names appear in `monospace`.
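Concretely, filling out the card means substituting values for these variables; `huggingface_hub` does this with a Jinja template via `ModelCard.from_template`. As a dependency-free illustration of the idea, here is a minimal stdlib sketch (all values are hypothetical):

```python
from string import Template

# A fragment of a model card in placeholder form. The real template uses
# Jinja syntax ({{ model_id }}); string.Template is a stdlib stand-in.
fragment = Template(
    "# $model_id\n"
    "\n"
    "$model_summary\n"
    "\n"
    "- **License:** $license\n"
)

# Hypothetical values a project organizer might supply.
card_text = fragment.substitute(
    model_id="my-org/my-model",
    model_summary="A small text classifier for product reviews.",
    license="apache-2.0",
)
print(card_text)
```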
--- 
# Model Name
**Section Overview:**  Provide the model name and a 1-2 sentence summary of what the model is. 
`model_id`
`model_summary`
# Table of Contents
**Section Overview:** Provide links to each section so readers can easily jump around, reuse the file elsewhere with the TOC preserved, print out the content, etc.
# Model Details
**Section Overview:** This section provides basic information about what the model is, its current status, and where it came from. It should be useful for anyone who wants to reference the model.
## Model Description
`model_description`
_Provide basic details about the model. This includes the architecture, version, if it was introduced in a paper, if an original implementation is available, and the creators. Any copyright should be attributed here. General information about training procedures, parameters, and important disclaimers can also be mentioned in this section._
* **Developed by:** `developers`
_List (and ideally link to) the people who built the model._
* **Funded by:** `funded_by`
  
_List (and ideally link to) the funding sources that financially, computationally, or otherwise supported or enabled this model._
* **Shared by [optional]:** `shared_by`
_List (and ideally link to) the people/organization making the model available online._
* **Model type:** `model_type`
_You can name the “type” as:_
_1. Supervision/Learning Method_
_2. Machine Learning Type_
_3. Modality_
* **Language(s) [NLP]:** `language`
_Use this field when the system uses or processes natural (human) language._
* **License:** `license`
_Name and link to the license being used._
* **Finetuned From Model [optional]:** `base_model`
_If this model has another model as its base, link to that model here._
## Model Sources [optional]
* **Repository:** `repo`
* **Paper [optional]:** `paper`
* **Demo [optional]:** `demo`
_Provide sources for the user to directly see the model and its details. Additional kinds of resources – training logs, lessons learned, etc. – belong in the [More Information](#more-information-optional) section. If you include one thing for this section, link to the repository._
# Uses
**Section Overview:** This section addresses questions around how the model is intended to be used in different applied contexts, discusses the foreseeable users of the model (including those affected by the model), and describes uses that are considered out of scope or misuse of the model.  Note this section is not intended to include the license usage details. For that, link directly to the license.
## Direct Use
`direct_use`
_Explain how the model can be used without fine-tuning, post-processing, or plugging into a pipeline. An example code snippet is recommended._
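For instance, a "Direct Use" snippet for a hypothetical text-classification model might look like this (the model id is a placeholder, and the `transformers` import is deferred so the sketch loads even without the library installed):

```python
# Sketch of a "Direct Use" snippet; "my-org/my-model" is a placeholder id.
def classify(texts, model_id="my-org/my-model"):
    # Deferred import: only needed when the function is actually called.
    from transformers import pipeline

    classifier = pipeline("text-classification", model=model_id)
    return classifier(texts)

# Example call (downloads the checkpoint on first use):
# classify(["This model card is thorough and easy to follow."])
```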
## Downstream Use [optional]
`downstream_use`
_Explain how this model can be used when fine-tuned for a task or when plugged into a larger ecosystem or app. An example code snippet is recommended._
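As an illustration, a downstream fine-tuning snippet might be sketched as follows (all ids are placeholders, imports are deferred so the sketch loads without `transformers`/`datasets` installed, and the hyperparameters are arbitrary):

```python
def finetune(model_id="my-org/my-model", dataset_id="my-org/my-dataset",
             output_dir="finetuned-model"):
    # Deferred imports; all ids above are placeholders, not real checkpoints.
    from datasets import load_dataset
    from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                              Trainer, TrainingArguments)

    dataset = load_dataset(dataset_id, split="train")
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForSequenceClassification.from_pretrained(model_id, num_labels=2)

    def tokenize(batch):
        return tokenizer(batch["text"], truncation=True,
                         padding="max_length", max_length=128)

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir=output_dir, num_train_epochs=1),
        train_dataset=dataset.map(tokenize, batched=True),
    )
    trainer.train()
    trainer.save_model(output_dir)
```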
## Out-of-Scope Use
`out_of_scope_use`
_List how the model may foreseeably be misused (used in ways the model will not work well for) and address what users ought not do with the model._
# Bias, Risks, and Limitations
**Section Overview:** This section identifies foreseeable harms, misunderstandings, and technical and sociotechnical limitations. It also provides information on warnings and potential mitigations. Bias, risks, and limitations can sometimes be inseparable/refer to the same issues. Generally, bias and risks are sociotechnical, while limitations are technical: 
- A **bias** is a stereotype or disproportionate performance (skew) for some subpopulations. 
- A **risk** is a socially-relevant issue that the model might cause.
- A **limitation** is a likely failure mode that can be addressed following the listed Recommendations.
`bias_risks_limitations`
_What are the known or foreseeable issues stemming from this model?_
## Recommendations
`bias_recommendations`
_What are recommendations with respect to the foreseeable issues? This can include everything from “downsample your image” to filtering explicit content._
# Training Details
**Section Overview:** This section provides information to describe and replicate training, including the training data, the speed and size of training elements, and the environmental impact of training. This relates heavily to the [Technical Specifications](#technical-specifications-optional) as well, and content here should link to that section when it is relevant to the training procedure.  It is useful for people who want to learn more about the model inputs and training footprint.
It is relevant for anyone who wants to know the basics of what the model is learning.
## Training Data
`training_data`
_Write 1-2 sentences on what the training data is. Ideally this links to a Dataset Card for further information. Links to documentation related to data pre-processing or additional filtering may go here as well as in [More Information](#more-information-optional)._
 
## Training Procedure [optional]
### Preprocessing
`preprocessing`
_Detail tokenization, resizing/rewriting (depending on the modality), etc._
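For a text model, for example, the tokenization step described here might be documented with a short sketch like this (placeholder model id, deferred import, arbitrary `max_length`):

```python
def preprocess(texts, model_id="my-org/my-model", max_length=128):
    # Deferred import; "my-org/my-model" is a placeholder checkpoint id.
    from transformers import AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    # Truncate/pad to a fixed length so batches have a uniform shape.
    return tokenizer(texts, truncation=True, padding="max_length",
                     max_length=max_length)
```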
### Speeds, Sizes, Times
`speeds_sizes_times`
_Detail throughput, start/end time, checkpoint sizes, etc._
# Evaluation
**Section Overview:** This section describes the evaluation protocols, what is being measured in the evaluation, and provides the results.  Evaluation ideally has at least two parts, with one part looking at quantitative measurement of general performance ([Testing Data, Factors & Metrics](#testing-data-factors--metrics)), such as may be done with benchmarking; and another looking at performance with respect to specific social safety issues ([Societal Impact Assessment](#societal-impact-assessment-optional)), such as may be done with red-teaming. You can also specify your model's evaluation results in a structured way in the model card metadata. Results are parsed by the Hub and displayed in a widget on the model page. See https://huggingface.co/docs/hub/model-cards#evaluation-results.
## Testing Data, Factors & Metrics
_Evaluation is ideally **disaggregated** with respect to different factors, such as task, domain and population subgroup; and calculated with metrics that are most meaningful for foreseeable contexts of use. Equal evaluation performance across different subgroups is said to be "fair" across those subgroups; target fairness metrics should be decided based on which errors are more likely to be problematic in light of the model use. However, this section is most commonly used to report aggregate evaluation performance on different task benchmarks._
### Testing Data
`testing_data`
_Describe testing data or link to its Dataset Card._
### Factors
`testing_factors`
_What are the foreseeable characteristics that will influence how the model behaves? Evaluation should ideally be disaggregated across these factors in order to uncover disparities in performance._
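Disaggregation across factors can be as simple as grouping the metric by each factor value. A minimal sketch in plain Python (the predictions, labels, and domains below are hypothetical):

```python
from collections import defaultdict

def disaggregated_accuracy(preds, labels, groups):
    """Accuracy overall and per subgroup (e.g., domain or language)."""
    correct, total = defaultdict(int), defaultdict(int)
    for pred, label, group in zip(preds, labels, groups):
        for key in (group, "overall"):
            total[key] += 1
            correct[key] += int(pred == label)
    return {key: correct[key] / total[key] for key in total}

# Hypothetical results over two domains:
scores = disaggregated_accuracy(
    preds=[1, 0, 1, 1],
    labels=[1, 0, 0, 1],
    groups=["news", "news", "reviews", "reviews"],
)
# A gap between groups (as here) is exactly the kind of disparity
# this section should surface; equal scores would indicate parity.
```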
### Metrics
`testing_metrics`
_What metrics will be used for evaluation?_
## Results
`results`
_Results should be based on the Factors and Metrics defined above._
### Summary
`results_summary`
_What do the results say? This can function as a kind of tl;dr for general audiences._
## Societal Impact Assessment [optional]
_Use this free text section to explain how this model has been evaluated for risk of societal harm, such as for child safety, NCII, privacy, and violence. This might take the form of answers to the following questions:_
- _Is this model safe for kids to use? Why or why not?_
- _Has this model been tested to evaluate risks pertaining to non-consensual intimate imagery (including CSEM)?_
- _Has this model been tested to evaluate risks pertaining to violent activities, or depictions of violence? What were the results?_
_Quantitative numbers on each issue may also be provided._
# Model Examination [optional]
**Section Overview:** This is an experimental section some developers are beginning to add, where work on explainability/interpretability may go.
`model_examination`
# Environmental Impact
**Section Overview:** Summarizes the information necessary to calculate environmental impacts, such as electricity usage and carbon emissions.