add files
- README.md +120 -0
- config.json +46 -0
- preprocessor_config.json +11 -0
- pytorch_model.bin +3 -0
- sentencepiece.bpe.model +3 -0
- special_tokens_map.json +1 -0
- tokenizer_config.json +1 -0
- vocab.json +203 -0
README.md
ADDED
@@ -0,0 +1,120 @@
---
language:
- de
- en
datasets:
- covost2
tags:
- audio
- speech-translation
- automatic-speech-recognition
license: mit
---

# S2T-SMALL-COVOST2-DE-EN-ST

`s2t-small-covost2-de-en-st` is a Speech to Text Transformer (S2T) model trained for end-to-end Speech Translation (ST).
The S2T model was proposed in [this paper](https://arxiv.org/abs/2010.05171) and released in
[this repository](https://github.com/pytorch/fairseq/tree/master/examples/speech_to_text).

## Model description

S2T is a transformer-based seq2seq (encoder-decoder) model designed for end-to-end Automatic Speech Recognition (ASR) and Speech
Translation (ST). It uses a convolutional downsampler to reduce the length of speech inputs by 3/4 (i.e. 4x subsampling) before they are
fed into the encoder. The model is trained with standard autoregressive cross-entropy loss and generates the
transcripts/translations autoregressively.
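
The architecture behind that description can be read directly from the `config.json` added in this commit. A small sketch (values shown are the ones in this repository; each convolutional layer subsamples by a factor of 2, which is where the 4x reduction comes from):

```python
from transformers import Speech2TextConfig

# Inspect the checkpoint's architecture hyper-parameters.
config = Speech2TextConfig.from_pretrained("facebook/s2t-small-covost2-de-en-st")
print(config.num_conv_layers, config.conv_kernel_sizes)              # 2, [5, 5]
print(config.d_model, config.encoder_layers, config.decoder_layers)  # 256, 12, 6
```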

## Intended uses & limitations

This model can be used for end-to-end German speech to English text translation.
See the [model hub](https://huggingface.co/models?filter=speech_to_text) to look for other S2T checkpoints.

### How to use

As this is a standard sequence-to-sequence transformer model, you can use the `generate` method to generate the
translations by passing the speech features to the model.

*Note: The `Speech2TextProcessor` object uses [torchaudio](https://github.com/pytorch/audio) to extract the
filter bank features. Make sure to install the `torchaudio` package before running this example.*

You can either install those as extra speech dependencies with
`pip install "transformers[speech,sentencepiece]"` or install the packages separately
with `pip install torchaudio sentencepiece`.

```python
import torch
from transformers import Speech2TextProcessor, Speech2TextForConditionalGeneration
from datasets import load_dataset
import soundfile as sf

model = Speech2TextForConditionalGeneration.from_pretrained("facebook/s2t-small-covost2-de-en-st")
processor = Speech2TextProcessor.from_pretrained("facebook/s2t-small-covost2-de-en-st")

def map_to_array(batch):
    speech, _ = sf.read(batch["file"])
    batch["speech"] = speech
    return batch

# Dummy dataset used only to demonstrate the API: the audio is English
# LibriSpeech; for real translation, pass German speech sampled at 48 kHz.
ds = load_dataset(
    "patrickvonplaten/librispeech_asr_dummy",
    "clean",
    split="validation"
)
ds = ds.map(map_to_array)

inputs = processor(
    ds["speech"][0],
    sampling_rate=48_000,
    return_tensors="pt"
)
generated_ids = model.generate(inputs["input_features"], attention_mask=inputs["attention_mask"])

translation = processor.batch_decode(generated_ids, skip_special_tokens=True)
```
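
The LibriSpeech clip above is English audio and only demonstrates the API. Below is a minimal sketch for translating an actual German recording, assuming a hypothetical local file `german_audio.wav`, resampled to the 48 kHz rate this checkpoint's feature extractor expects:

```python
import torchaudio
from transformers import Speech2TextProcessor, Speech2TextForConditionalGeneration

model = Speech2TextForConditionalGeneration.from_pretrained("facebook/s2t-small-covost2-de-en-st")
processor = Speech2TextProcessor.from_pretrained("facebook/s2t-small-covost2-de-en-st")

# "german_audio.wav" is a placeholder path for any mono German recording.
waveform, sample_rate = torchaudio.load("german_audio.wav")
waveform = torchaudio.functional.resample(waveform, sample_rate, 48_000)

inputs = processor(waveform.squeeze(0).numpy(), sampling_rate=48_000, return_tensors="pt")
generated_ids = model.generate(inputs["input_features"], attention_mask=inputs["attention_mask"])
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```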

## Training data

The s2t-small-covost2-de-en-st is trained on the German-English subset of [CoVoST2](https://github.com/facebookresearch/covost).
CoVoST is a large-scale multilingual ST corpus based on [Common Voice](https://arxiv.org/abs/1912.06670), created to foster
ST research with the largest ever open dataset.

## Training procedure

### Preprocessing

The speech data is pre-processed by extracting Kaldi-compliant 80-channel log mel-filter bank features automatically from
WAV/FLAC audio files via PyKaldi or torchaudio. Utterance-level CMVN (cepstral mean and variance normalization)
is then applied to each example.
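
As a rough illustration of this step (not the exact fairseq pipeline), Kaldi-compatible 80-dimensional log mel filter bank features plus utterance-level CMVN can be computed with torchaudio roughly as follows; `example.wav` is a hypothetical mono recording:

```python
import torchaudio

waveform, sample_rate = torchaudio.load("example.wav")  # hypothetical mono file

# Kaldi-compliant 80-channel log mel filter bank features.
features = torchaudio.compliance.kaldi.fbank(
    waveform, num_mel_bins=80, sample_frequency=sample_rate
)  # shape: (num_frames, 80)

# Utterance-level CMVN: zero mean / unit variance per feature dimension.
features = (features - features.mean(dim=0)) / (features.std(dim=0) + 1e-5)
```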

The texts are lowercased and tokenized using a character-based SentencePiece vocabulary.

### Training

The model is trained with standard autoregressive cross-entropy loss, with [SpecAugment](https://arxiv.org/abs/1904.08779) applied to the input features.
The encoder receives speech features, and the decoder generates the translations autoregressively. To accelerate
model training and for better performance, the encoder is pre-trained for English ASR.
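
For reference, SpecAugment-style frequency and time masking can be sketched with torchaudio's transforms; the masking widths below are illustrative and are not the values used to train this checkpoint:

```python
import torch
import torchaudio.transforms as T

# A hypothetical batch of log mel features: (batch, num_frames, 80).
features = torch.randn(8, 500, 80)

# The masking transforms expect (..., freq, time), so swap the last two dims.
spec = features.transpose(1, 2)
spec = T.FrequencyMasking(freq_mask_param=27)(spec)
spec = T.TimeMasking(time_mask_param=100)(spec)
augmented = spec.transpose(1, 2)  # back to (batch, num_frames, 80)
```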

## Evaluation results

CoVoST2 test results for de-en (BLEU score): 17.58
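
If you want to compute a comparable corpus-level BLEU yourself, `sacrebleu` can score the decoded translations against the reference texts (the sentences below are placeholders, not CoVoST2 data):

```python
import sacrebleu

# Hypothetical decoded translations and their English references.
hypotheses = ["a man is speaking", "the weather is nice today"]
references = [["a man is talking", "the weather is nice today"]]

bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(bleu.score)
```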

### BibTeX entry and citation info

```bibtex
@inproceedings{wang2020fairseqs2t,
  title = {fairseq S2T: Fast Speech-to-Text Modeling with fairseq},
  author = {Changhan Wang and Yun Tang and Xutai Ma and Anne Wu and Dmytro Okhonko and Juan Pino},
  booktitle = {Proceedings of the 2020 Conference of the Asian Chapter of the Association for Computational Linguistics (AACL): System Demonstrations},
  year = {2020},
}
```
config.json
ADDED
@@ -0,0 +1,46 @@
{
  "activation_dropout": 0.1,
  "activation_function": "relu",
  "architectures": [
    "Speech2TextForConditionalGeneration"
  ],
  "attention_dropout": 0.1,
  "bos_token_id": 0,
  "classifier_dropout": 0.0,
  "conv_channels": 1024,
  "conv_kernel_sizes": [
    5,
    5
  ],
  "d_model": 256,
  "decoder_attention_heads": 4,
  "decoder_ffn_dim": 2048,
  "decoder_layerdrop": 0.0,
  "decoder_layers": 6,
  "decoder_start_token_id": 2,
  "dropout": 0.1,
  "early_stopping": true,
  "encoder_attention_heads": 4,
  "encoder_ffn_dim": 2048,
  "encoder_layerdrop": 0.0,
  "encoder_layers": 12,
  "eos_token_id": 2,
  "gradient_checkpointing": false,
  "init_std": 0.02,
  "input_channels": 1,
  "input_feat_per_channel": 80,
  "is_encoder_decoder": true,
  "max_length": 200,
  "max_source_positions": 6000,
  "max_target_positions": 1024,
  "model_type": "speech_to_text",
  "num_beams": 5,
  "num_conv_layers": 2,
  "num_hidden_layers": 12,
  "pad_token_id": 1,
  "scale_embedding": true,
  "tie_word_embeddings": false,
  "transformers_version": "4.4.0.dev0",
  "use_cache": true,
  "vocab_size": 201
}
preprocessor_config.json
ADDED
@@ -0,0 +1,11 @@
{
  "do_ceptral_normalize": true,
  "feature_size": 80,
  "normalize_means": true,
  "normalize_vars": true,
  "num_mel_bins": 80,
  "padding_side": "right",
  "padding_value": 0.0,
  "return_attention_mask": true,
  "sampling_rate": 48000
}
pytorch_model.bin
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d9c37134bd61e3dcf4ca533173f76e4ce90273760f338c5d0e6da2b99d09ec06
size 108439045
sentencepiece.bpe.model
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f3ecdf6958849f645b3335d2926180724ec693685e7d91082f5b2a905b834deb
size 239865
special_tokens_map.json
ADDED
@@ -0,0 +1 @@
{"bos_token": "<s>", "eos_token": "</s>", "unk_token": "<unk>", "pad_token": "<pad>"}
tokenizer_config.json
ADDED
@@ -0,0 +1 @@
{"bos_token": "<s>", "eos_token": "</s>", "unk_token": "<unk>", "pad_token": "<pad>", "do_upper_case": false, "do_lower_case": false, "tgt_lang": null, "lang_codes": null, "tokenizer_file": null}
vocab.json
ADDED
@@ -0,0 +1,203 @@
{
  "<s>": 0,
  "<pad>": 1,
  "</s>": 2,
  "<unk>": 3,
  "\u2581": 4,
  "e": 5,
  "t": 6,
  "a": 7,
  "o": 8,
  "i": 9,
  "n": 10,
  "s": 11,
  "r": 12,
  "h": 13,
  "l": 14,
  "d": 15,
  "c": 16,
  "u": 17,
  "m": 18,
  "f": 19,
  ".": 20,
  "p": 21,
  "y": 22,
  "g": 23,
  "w": 24,
  "b": 25,
  "v": 26,
  "k": 27,
  "T": 28,
  ",": 29,
  "I": 30,
  "A": 31,
  "H": 32,
  "S": 33,
  "W": 34,
  "x": 35,
  "B": 36,
  "?": 37,
  "\u2019": 38,
  "-": 39,
  "M": 40,
  "C": 41,
  "z": 42,
  "D": 43,
  "F": 44,
  "G": 45,
  "P": 46,
  "j": 47,
  "E": 48,
  "L": 49,
  "O": 50,
  "N": 51,
  "q": 52,
  "R": 53,
  "!": 54,
  "\"": 55,
  "'": 56,
  "Y": 57,
  "K": 58,
  "J": 59,
  "\u201d": 60,
  "\u201c": 61,
  "U": 62,
  "V": 63,
  "\u00fc": 64,
  "\u00f6": 65,
  "Z": 66,
  "\u2018": 67,
  "\u00e4": 68,
  ":": 69,
  "\u00df": 70,
  "Q": 71,
  ";": 72,
  "X": 73,
  "\u00e1": 74,
  "(": 75,
  ")": 76,
  "\u0301": 77,
  "\u00f3": 78,
  "\u00e9": 79,
  "\u00ed": 80,
  "/": 81,
  "[": 82,
  "]": 83,
  "1": 84,
  "\u2013": 85,
  "0": 86,
  "\u014d": 87,
  "\u201e": 88,
  "\u0161": 89,
  "\u00d6": 90,
  "\u0107": 91,
  "\u00dc": 92,
  "3": 93,
  "\u0131": 94,
  "\u0142": 95,
  "7": 96,
  "\u00e2": 97,
  "\u0159": 98,
  "\u00c9": 99,
  "\u00e3": 100,
  "4": 101,
  "9": 102,
  "\u00fd": 103,
  "\u0101": 104,
  "\u010d": 105,
  "\u016b": 106,
  "\u017e": 107,
  "2": 108,
  "6": 109,
  "\u00eb": 110,
  "\u00f8": 111,
  "&": 112,
  "\u00f4": 113,
  "\u00fa": 114,
  "\u00f1": 115,
  "\u010c": 116,
  "\u012b": 117,
  "\u0160": 118,
  "\u0219": 119,
  "\u00c4": 120,
  "\u02bf": 121,
  "\u011b": 122,
  "\u015f": 123,
  "5": 124,
  "8": 125,
  "\u00c1": 126,
  "\u0144": 127,
  "\u014c": 128,
  "\u00e7": 129,
  "=": 130,
  "\u00e0": 131,
  "\u00e8": 132,
  "\u015e": 133,
  "\u017d": 134,
  "\u021b": 135,
  "\u2014": 136,
  "\u00e5": 137,
  "\u00ea": 138,
  "\u00ee": 139,
  "\u00f2": 140,
  "\u0103": 141,
  "\u0105": 142,
  "\u0130": 143,
  "\u015b": 144,
  "\u0259": 145,
  "%": 146,
  "\u00ce": 147,
  "\u00e6": 148,
  "\u0110": 149,
  "\u0141": 150,
  "\u015a": 151,
  "\u016f": 152,
  "#": 153,
  "`": 154,
  "\u00ab": 155,
  "\u00bb": 156,
  "\u00d3": 157,
  "\u00da": 158,
  "\u00ef": 159,
  "\u00fb": 160,
  "\u011f": 161,
  "\u0148": 162,
  "\u0165": 163,
  "\u1e2a": 164,
  "\u201a": 165,
  "\u2060": 166,
  "$": 167,
  "*": 168,
  "+": 169,
  "<": 170,
  ">": 171,
  "_": 172,
  "\u00c2": 173,
  "\u00c6": 174,
  "\u00c7": 175,
  "\u00d4": 176,
  "\u00d8": 177,
  "\u00f0": 178,
  "\u00f5": 179,
  "\u00f9": 180,
  "\u010f": 181,
  "\u0111": 182,
  "\u0117": 183,
  "\u0126": 184,
  "\u012a": 185,
  "\u0146": 186,
  "\u0151": 187,
  "\u0158": 188,
  "\u017a": 189,
  "\u017c": 190,
  "\u03bc": 191,
  "\u1e63": 192,
  "\u1eaf": 193,
  "\u2212": 194,
  "\u2261": 195,
  "\u30ab": 196,
  "\u4e34": 197,
  "\u5b59": 198,
  "\u5c23": 199,
  "\u9053": 200
}