Update README.md
README.md
CHANGED
@@ -14,6 +14,15 @@ Vocabulary size: 65103
 - relevant special tokens for T5 training added
 - post processor updated following t5's tokenizer
 
+usage:
+
+
+```py
+from transformers import AutoTokenizer
+tk = AutoTokenizer.from_pretrained('BEE-spoke-data/claude-tokenizer-forT5')
+inputs = tk("here are some words", return_tensors="pt")
+```
+
 ## post processor
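A quick way to sanity-check the two changes above (the added T5 special tokens and the T5-style post processor) is the sketch below; the exact token strings (e.g. `</s>` as EOS, `<extra_id_*>` sentinels) are assumptions carried over from T5's defaults, not verified against this repo:

```py
from transformers import AutoTokenizer

tk = AutoTokenizer.from_pretrained('BEE-spoke-data/claude-tokenizer-forT5')

# special tokens registered for T5 training (exact values are an assumption)
print(tk.eos_token, tk.unk_token, tk.pad_token)
print(tk.additional_special_tokens[:5])  # sentinel tokens, if present

# a T5-style post processor appends EOS to every encoding,
# so the last token of the encoding should be the EOS token
ids = tk("here are some words")["input_ids"]
print(tk.convert_ids_to_tokens(ids))
```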