wacc2 committed (verified)
Commit 9af11e0 · Parent(s): 9c030a1

Update README.md

Files changed (1): README.md (+5 -3)
README.md CHANGED
````diff
@@ -3,12 +3,15 @@ language:
 - pt
 metrics:
 - accuracy
+- f1
+- pearsonr
 base_model:
 - Qwen/Qwen2.5-7B
 pipeline_tag: text-generation
 library_name: transformers
 tags:
 - text-generation-inference
+license: apache-2.0
 ---
 
 ### Amadeus-Verbo-Qwen2.5-7B-PT-BR-Instruct
@@ -38,7 +41,7 @@ KeyError: 'qwen2'
 Below, we have provided a simple example of how to load the model and generate text:
 
 #### Quickstart
-The following code snippet uses apply_chat_template to show how to load the tokenizer, the model, and how to generate content.
+The following code snippet uses `pipeline`, `AutoTokenizer`, `AutoModelForCausalLM` and apply_chat_template to show how to load the tokenizer, the model, and how to generate content.
 
 Using the pipeline:
 ```python
@@ -50,8 +53,7 @@ messages = [
 pipe = pipeline("text-generation", model="amadeusai/qwen2.5-7B-PT-BR-Instruct")
 pipe(messages)
 ```
-
-The following code snippet uses `AutoTokenizer`, `AutoModelForCausalLM` and apply_chat_template to show how to load the tokenizer, the model, and how to generate content.
+OR
 ```python
 from transformers import AutoModelForCausalLM, AutoTokenizer
 
````
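
The hunks above cut off inside the Quickstart code blocks. For reference, a minimal runnable version of the pipeline snippet the updated README describes might look like the sketch below; the repo id is the one shown in the diff, while the `messages` content is a placeholder (the README's actual prompt is truncated out of the hunk). Note the `KeyError: 'qwen2'` hunk context: the `qwen2` architecture is only registered in transformers 4.37.0 and later.

```python
from transformers import pipeline

# Placeholder prompt; the README's actual `messages` body is not visible in the hunk.
messages = [{"role": "user", "content": "Quem é você?"}]

# The pipeline applies the model's chat template internally before generating.
pipe = pipeline("text-generation", model="amadeusai/qwen2.5-7B-PT-BR-Instruct")
print(pipe(messages))
```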
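
Likewise, the explicit `AutoTokenizer`/`AutoModelForCausalLM` path with apply_chat_template might be fleshed out as follows; the `torch_dtype="auto", device_map="auto"` loading options follow the upstream Qwen2.5 README convention and are an assumption, not something shown in this diff.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "amadeusai/qwen2.5-7B-PT-BR-Instruct"

# Loading options assumed from the upstream Qwen2.5 README, not from this diff.
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_id)

messages = [{"role": "user", "content": "Quem é você?"}]  # placeholder prompt

# Render the chat turns into the model's prompt format, appending the assistant header.
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer([text], return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=256)

# Decode only the newly generated tokens, skipping the echoed prompt.
reply = tokenizer.decode(output_ids[0][inputs.input_ids.shape[1]:], skip_special_tokens=True)
print(reply)
```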