modelId | lastModified | tags | pipeline_tag | files | publishedBy | downloads_last_month | library | modelCard
---|---|---|---|---|---|---|---|---
voidful/dpr-ctx_encoder-bert-base-multilingual | 2021-02-21T09:00:44.000Z | [
"pytorch",
"dpr",
"multilingual",
"dataset:NQ",
"dataset:Trivia",
"dataset:SQuAD",
"dataset:MLQA",
"dataset:DRCD",
"arxiv:2004.04906",
"transformers"
] | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"tokenizer.json",
"tokenizer_config.json",
"vocab.txt"
] | voidful | 130 | transformers | ---
language: multilingual
datasets:
- NQ
- Trivia
- SQuAD
- MLQA
- DRCD
---
# dpr-ctx_encoder-bert-base-multilingual
## Description
Multilingual DPR model based on bert-base-multilingual-cased.
[DPR model](https://arxiv.org/abs/2004.04906)
[DPR repo](https://github.com/facebookresearch/DPR)
## Data
1. [NQ](https://github.com/facebookresearch/DPR/blob/master/data/download_data.py)
2. [Trivia](https://github.com/facebookresearch/DPR/blob/master/data/download_data.py)
3. [SQuAD](https://github.com/facebookresearch/DPR/blob/master/data/download_data.py)
4. [DRCD*](https://github.com/DRCKnowledgeTeam/DRCD)
5. [MLQA*](https://github.com/facebookresearch/MLQA)
`question pairs for train`: 644,217
`question pairs for dev`: 73,710
*DRCD and MLQA are converted to DPR format using the haystack script [squad_to_dpr.py](https://github.com/deepset-ai/haystack/blob/master/haystack/retriever/squad_to_dpr.py).
## Training Script
I used the training script from haystack's [Tutorial 9 (DPR training) notebook](https://colab.research.google.com/github/deepset-ai/haystack/blob/master/tutorials/Tutorial9_DPR_training.ipynb).
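Roughly, the fine-tuning step in that notebook boils down to calling `DensePassageRetriever.train()` on DPR-format JSON files. The sketch below is only an illustration: `document_store` is assumed to be an existing haystack DocumentStore, the data paths are placeholders, and the hyperparameters are not the ones used for this model -- check the linked notebook for the exact signature.
```python
# Hedged sketch of DPR fine-tuning following haystack's Tutorial 9 workflow.
# document_store, data paths and hyperparameters are placeholders/assumptions.
from haystack.retriever.dense import DensePassageRetriever

retriever = DensePassageRetriever(
    document_store=document_store,  # any haystack DocumentStore instance
    query_embedding_model="bert-base-multilingual-cased",
    passage_embedding_model="bert-base-multilingual-cased",
    max_seq_len_query=64,
    max_seq_len_passage=256,
)
retriever.train(
    data_dir="data/dpr_multilingual",  # folder holding the DPR-format JSON files
    train_filename="train.json",
    dev_filename="dev.json",
    n_epochs=3,
    batch_size=16,
    save_dir="saved_models/dpr-multilingual",
)
```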
## Usage
```python
from transformers import DPRContextEncoder, DPRContextEncoderTokenizer
tokenizer = DPRContextEncoderTokenizer.from_pretrained('voidful/dpr-ctx_encoder-bert-base-multilingual')
model = DPRContextEncoder.from_pretrained('voidful/dpr-ctx_encoder-bert-base-multilingual')
input_ids = tokenizer("Hello, is my dog cute ?", return_tensors='pt')["input_ids"]
embeddings = model(input_ids).pooler_output
```
Follow the tutorial from `haystack`:
[Better Retrievers via "Dense Passage Retrieval"](https://colab.research.google.com/github/deepset-ai/haystack/blob/master/tutorials/Tutorial6_Better_Retrieval_via_DPR.ipynb)
```python
from haystack.retriever.dense import DensePassageRetriever
retriever = DensePassageRetriever(document_store=document_store,
query_embedding_model="voidful/dpr-question_encoder-bert-base-multilingual",
passage_embedding_model="voidful/dpr-ctx_encoder-bert-base-multilingual",
max_seq_len_query=64,
max_seq_len_passage=256,
batch_size=16,
use_gpu=True,
embed_title=True,
use_fast_tokenizers=True)
```
|
|
voidful/dpr-question_encoder-bert-base-multilingual | 2021-02-21T09:00:19.000Z | [
"pytorch",
"dpr",
"multilingual",
"dataset:NQ",
"dataset:Trivia",
"dataset:SQuAD",
"dataset:MLQA",
"dataset:DRCD",
"arxiv:2004.04906",
"transformers"
] | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"tokenizer.json",
"tokenizer_config.json",
"vocab.txt"
] | voidful | 133 | transformers | ---
language: multilingual
datasets:
- NQ
- Trivia
- SQuAD
- MLQA
- DRCD
---
# dpr-question_encoder-bert-base-multilingual
## Description
Multilingual DPR model based on bert-base-multilingual-cased.
[DPR model](https://arxiv.org/abs/2004.04906)
[DPR repo](https://github.com/facebookresearch/DPR)
## Data
1. [NQ](https://github.com/facebookresearch/DPR/blob/master/data/download_data.py)
2. [Trivia](https://github.com/facebookresearch/DPR/blob/master/data/download_data.py)
3. [SQuAD](https://github.com/facebookresearch/DPR/blob/master/data/download_data.py)
4. [DRCD*](https://github.com/DRCKnowledgeTeam/DRCD)
5. [MLQA*](https://github.com/facebookresearch/MLQA)
`question pairs for train`: 644,217
`question pairs for dev`: 73,710
*DRCD and MLQA are converted to DPR format using the haystack script [squad_to_dpr.py](https://github.com/deepset-ai/haystack/blob/master/haystack/retriever/squad_to_dpr.py).
## Training Script
I used the training script from haystack's [Tutorial 9 (DPR training) notebook](https://colab.research.google.com/github/deepset-ai/haystack/blob/master/tutorials/Tutorial9_DPR_training.ipynb).
## Usage
```python
from transformers import DPRQuestionEncoder, DPRQuestionEncoderTokenizer
tokenizer = DPRQuestionEncoderTokenizer.from_pretrained('voidful/dpr-question_encoder-bert-base-multilingual')
model = DPRQuestionEncoder.from_pretrained('voidful/dpr-question_encoder-bert-base-multilingual')
input_ids = tokenizer("Hello, is my dog cute ?", return_tensors='pt')["input_ids"]
embeddings = model(input_ids).pooler_output
```
Follow the tutorial from `haystack`:
[Better Retrievers via "Dense Passage Retrieval"](https://colab.research.google.com/github/deepset-ai/haystack/blob/master/tutorials/Tutorial6_Better_Retrieval_via_DPR.ipynb)
```python
from haystack.retriever.dense import DensePassageRetriever
retriever = DensePassageRetriever(document_store=document_store,
query_embedding_model="voidful/dpr-question_encoder-bert-base-multilingual",
passage_embedding_model="voidful/dpr-ctx_encoder-bert-base-multilingual",
max_seq_len_query=64,
max_seq_len_passage=256,
batch_size=16,
use_gpu=True,
embed_title=True,
use_fast_tokenizers=True)
```
|
|
voidful/gpt2-base-ptt | 2021-06-19T15:24:20.000Z | [
"pytorch",
"jax",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
] | text-generation | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer.json",
"tokenizer_config.json",
"vocab.txt"
] | voidful | 100 | transformers | |
voidful/question-answering-zh | 2021-05-20T09:01:40.000Z | [
"pytorch",
"jax",
"bert",
"question-answering",
"transformers"
] | question-answering | [
".gitattributes",
".gitignore",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
] | voidful | 96 | transformers | |
voidful/wav2vec2-large-xlsr-53-hk | 2021-03-30T17:06:30.000Z | [
"pytorch",
"wav2vec2",
"zh",
"dataset:common_voice",
"transformers",
"audio",
"automatic-speech-recognition",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0"
] | automatic-speech-recognition | [
".gitattributes",
"README.md",
"config.json",
"feature_extractor_config.json",
"preprocessor_config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
] | voidful | 35 | transformers | ---
language: zh
datasets:
- common_voice
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Cantonese (Hong Kong) by Voidful
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice zh-HK
type: common_voice
args: zh-HK
metrics:
- name: Test CER
type: cer
value: 16.41
---
# Wav2Vec2-Large-XLSR-53-hk
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Cantonese using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
[Colab trial](https://colab.research.google.com/drive/1nBRLf4Pwiply_y5rXWoaIB8LxX41tfEI?usp=sharing)
```python
import torchaudio
from datasets import load_dataset, load_metric
from transformers import (
Wav2Vec2ForCTC,
Wav2Vec2Processor,
)
import torch
import re
import sys
model_name = "voidful/wav2vec2-large-xlsr-53-hk"
device = "cuda"
processor_name = "voidful/wav2vec2-large-xlsr-53-hk"
chars_to_ignore_regex = r"[¥•"#$%&'()*+,-/:;<=>@[\]^_`{|}~⦅⦆「」、 、〃〈〉《》「」『』【】〔〕〖〗〘〙〚〛〜〝〞〟〰〾〿–—‘’‛“”„‟…‧﹏﹑﹔·'℃°•·.﹑︰〈〉─《﹖﹣﹂﹁﹔!?。。"#$%&'()*+,﹐-/:;<=>@[\]^_`{|}~⦅⦆「」、、〃》「」『』【】〔〕〖〗〘〙〚〛〜〝〞〟〰〾〿–—‘’‛“”„‟…‧﹏..!\\"#$%&()*+,\\-.\\:;<=>?@\\[\\]\\\\\\/^_`{|}~]"
model = Wav2Vec2ForCTC.from_pretrained(model_name).to(device)
processor = Wav2Vec2Processor.from_pretrained(processor_name)
resampler = torchaudio.transforms.Resample(orig_freq=48_000, new_freq=16_000)
def load_file_to_data(file):
batch = {}
speech, _ = torchaudio.load(file)
batch["speech"] = resampler.forward(speech.squeeze(0)).numpy()
batch["sampling_rate"] = resampler.new_freq
return batch
def predict(data):
features = processor(data["speech"], sampling_rate=data["sampling_rate"], padding=True, return_tensors="pt")
input_values = features.input_values.to(device)
attention_mask = features.attention_mask.to(device)
with torch.no_grad():
logits = model(input_values, attention_mask=attention_mask).logits
pred_ids = torch.argmax(logits, dim=-1)
return processor.batch_decode(pred_ids)
```
Predict
```python
predict(load_file_to_data('voice file path'))
```
## Evaluation
The model can be evaluated as follows on the Cantonese (Hong Kong) test data of Common Voice.
The CER calculation follows https://huggingface.co/ctl/wav2vec2-large-xlsr-cantonese
```python
!mkdir cer
!wget -O cer/cer.py https://huggingface.co/ctl/wav2vec2-large-xlsr-cantonese/raw/main/cer.py
!pip install jiwer
import torchaudio
from datasets import load_dataset, load_metric
from transformers import (
Wav2Vec2ForCTC,
Wav2Vec2Processor,
)
import torch
import re
import sys
cer = load_metric("./cer")
model_name = "voidful/wav2vec2-large-xlsr-53-hk"
device = "cuda"
processor_name = "voidful/wav2vec2-large-xlsr-53-hk"
chars_to_ignore_regex = r"[¥•"#$%&'()*+,-/:;<=>@[\]^_`{|}~⦅⦆「」、 、〃〈〉《》「」『』【】〔〕〖〗〘〙〚〛〜〝〞〟〰〾〿–—‘’‛“”„‟…‧﹏﹑﹔·'℃°•·.﹑︰〈〉─《﹖﹣﹂﹁﹔!?。。"#$%&'()*+,﹐-/:;<=>@[\]^_`{|}~⦅⦆「」、、〃》「」『』【】〔〕〖〗〘〙〚〛〜〝〞〟〰〾〿–—‘’‛“”„‟…‧﹏..!\\"#$%&()*+,\\-.\\:;<=>?@\\[\\]\\\\\\/^_`{|}~]"
model = Wav2Vec2ForCTC.from_pretrained(model_name).to(device)
processor = Wav2Vec2Processor.from_pretrained(processor_name)
ds = load_dataset("common_voice", 'zh-HK', data_dir="./cv-corpus-6.1-2020-12-11", split="test")
resampler = torchaudio.transforms.Resample(orig_freq=48_000, new_freq=16_000)
def map_to_array(batch):
speech, _ = torchaudio.load(batch["path"])
batch["speech"] = resampler.forward(speech.squeeze(0)).numpy()
batch["sampling_rate"] = resampler.new_freq
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'")
return batch
ds = ds.map(map_to_array)
def map_to_pred(batch):
features = processor(batch["speech"], sampling_rate=batch["sampling_rate"][0], padding=True, return_tensors="pt")
input_values = features.input_values.to(device)
attention_mask = features.attention_mask.to(device)
with torch.no_grad():
logits = model(input_values, attention_mask=attention_mask).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["predicted"] = processor.batch_decode(pred_ids)
batch["target"] = batch["sentence"]
return batch
result = ds.map(map_to_pred, batched=True, batch_size=16, remove_columns=list(ds.features.keys()))
print("CER: {:2f}".format(100 * cer.compute(predictions=result["predicted"], references=result["target"])))
```
`CER 16.41`
|
voidful/wav2vec2-large-xlsr-53-tw-gpt | 2021-04-20T18:05:25.000Z | [
"pytorch",
"wav2vec2",
"zh",
"dataset:common_voice",
"transformers",
"audio",
"automatic-speech-recognition",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0"
] | automatic-speech-recognition | [
".gitattributes",
"README.md",
"config.json",
"feature_extractor_config.json",
"preprocessor_config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
] | voidful | 206 | transformers | ---
language: zh
datasets:
- common_voice
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Taiwanese Mandarin(zh-tw) by Voidful
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice zh-TW
type: common_voice
args: zh-TW
metrics:
- name: Test CER
type: cer
value: 25.57
---
# Wav2Vec2-Large-XLSR-53-tw-gpt
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on zh-TW using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
[Colab trial](https://colab.research.google.com/drive/1e_z5jQHYbO2YKEaUgzb1ww1WwiAyydAj?usp=sharing)
```python
import torchaudio
from datasets import load_dataset, load_metric
from transformers import (
Wav2Vec2ForCTC,
Wav2Vec2Processor,
AutoTokenizer,
AutoModelWithLMHead
)
import torch
import re
import sys
model_name = "voidful/wav2vec2-large-xlsr-53-tw-gpt"
device = "cuda"
processor_name = "voidful/wav2vec2-large-xlsr-53-tw-gpt"
chars_to_ignore_regex = r"[¥•"#$%&'()*+,-/:;<=>@[\]^_`{|}~⦅⦆「」、 、〃〈〉《》「」『』【】〔〕〖〗〘〙〚〛〜〝〞〟〰〾〿–—‘’‛“”„‟…‧﹏﹑﹔·'℃°•·.﹑︰〈〉─《﹖﹣﹂﹁﹔!?。。"#$%&'()*+,﹐-/:;<=>@[\]^_`{|}~⦅⦆「」、、〃》「」『』【】〔〕〖〗〘〙〚〛〜〝〞〟〰〾〿–—‘’‛“”„‟…‧﹏..!\"#$%&()*+,\-.\:;<=>?@\[\]\\\/^_`{|}~]"
model = Wav2Vec2ForCTC.from_pretrained(model_name).to(device)
processor = Wav2Vec2Processor.from_pretrained(processor_name)
tokenizer = AutoTokenizer.from_pretrained("ckiplab/gpt2-base-chinese")
gpt_model = AutoModelWithLMHead.from_pretrained("ckiplab/gpt2-base-chinese").to(device)
resampler = torchaudio.transforms.Resample(orig_freq=48_000, new_freq=16_000)
def load_file_to_data(file):
batch = {}
speech, _ = torchaudio.load(file)
batch["speech"] = resampler.forward(speech.squeeze(0)).numpy()
batch["sampling_rate"] = resampler.new_freq
return batch
def predict(data):
features = processor(data["speech"], sampling_rate=data["sampling_rate"], padding=True, return_tensors="pt")
input_values = features.input_values.to(device)
attention_mask = features.attention_mask.to(device)
with torch.no_grad():
logits = model(input_values, attention_mask=attention_mask).logits
decoded_results = []
for logit in logits:
pred_ids = torch.argmax(logit, dim=-1)
mask = pred_ids.ge(1).unsqueeze(-1).expand(logit.size())
vocab_size = logit.size()[-1]
voice_prob = torch.nn.functional.softmax((torch.masked_select(logit, mask).view(-1,vocab_size)),dim=-1)
gpt_input = torch.cat((torch.tensor([tokenizer.cls_token_id]).to(device),pred_ids[pred_ids>0]), 0)
gpt_prob = torch.nn.functional.softmax(gpt_model(gpt_input).logits, dim=-1)[:voice_prob.size()[0],:]
comb_pred_ids = torch.argmax(gpt_prob*voice_prob, dim=-1)
decoded_results.append(processor.decode(comb_pred_ids))
return decoded_results
```
Predict
```python
predict(load_file_to_data('voice file path'))
```
## Evaluation
The model can be evaluated as follows on the zh-tw test data of Common Voice.
The CER calculation follows https://huggingface.co/ctl/wav2vec2-large-xlsr-cantonese
env setup:
```
!mkdir cer
!wget -O cer/cer.py https://huggingface.co/ctl/wav2vec2-large-xlsr-cantonese/raw/main/cer.py
!pip install jiwer
!pip install torchaudio
!pip install datasets transformers
```
## Evaluation without LM:
```python
import torchaudio
from datasets import load_dataset, load_metric
from transformers import (
Wav2Vec2ForCTC,
Wav2Vec2Processor,
)
import torch
import re
import sys
model_name = "voidful/wav2vec2-large-xlsr-53-tw-gpt"
device = "cuda"
processor_name = "voidful/wav2vec2-large-xlsr-53-tw-gpt"
chars_to_ignore_regex = r"[¥•"#$%&'()*+,-/:;<=>@[\]^_`{|}~⦅⦆「」、 、〃〈〉《》「」『』【】〔〕〖〗〘〙〚〛〜〝〞〟〰〾〿–—‘’‛“”„‟…‧﹏﹑﹔·'℃°•·.﹑︰〈〉─《﹖﹣﹂﹁﹔!?。。"#$%&'()*+,﹐-/:;<=>@[\]^_`{|}~⦅⦆「」、、〃》「」『』【】〔〕〖〗〘〙〚〛〜〝〞〟〰〾〿–—‘’‛“”„‟…‧﹏..!\"#$%&()*+,\-.\:;<=>?@\[\]\\\/^_`{|}~]"
model = Wav2Vec2ForCTC.from_pretrained(model_name).to(device)
processor = Wav2Vec2Processor.from_pretrained(processor_name)
ds = load_dataset("common_voice", 'zh-TW', split="test")
resampler = torchaudio.transforms.Resample(orig_freq=48_000, new_freq=16_000)
def map_to_array(batch):
speech, _ = torchaudio.load(batch["path"])
batch["speech"] = resampler.forward(speech.squeeze(0)).numpy()
batch["sampling_rate"] = resampler.new_freq
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'")
return batch
ds = ds.map(map_to_array)
def map_to_pred(batch):
features = processor(batch["speech"], sampling_rate=batch["sampling_rate"][0], padding=True, return_tensors="pt")
input_values = features.input_values.to(device)
attention_mask = features.attention_mask.to(device)
with torch.no_grad():
logits = model(input_values, attention_mask=attention_mask).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["predicted"] = processor.batch_decode(pred_ids)
batch["target"] = batch["sentence"]
return batch
result = ds.map(map_to_pred, batched=True, batch_size=16, remove_columns=list(ds.features.keys()))
cer = load_metric("./cer")
print("CER: {:2f}".format(100 * cer.compute(predictions=result["predicted"], references=result["target"])))
```
`CER: 28.79`.
`TIME: 05:23 min`
## Evaluation with GPT:
```python
import torchaudio
from datasets import load_dataset, load_metric
from transformers import (
Wav2Vec2ForCTC,
Wav2Vec2Processor,
)
import torch
import re
import sys
from transformers import AutoTokenizer, AutoModelWithLMHead
model_name = "voidful/wav2vec2-large-xlsr-53-tw-gpt"
device = "cuda"
processor_name = "voidful/wav2vec2-large-xlsr-53-tw-gpt"
chars_to_ignore_regex = r"[¥•"#$%&'()*+,-/:;<=>@[\]^_`{|}~⦅⦆「」、 、〃〈〉《》「」『』【】〔〕〖〗〘〙〚〛〜〝〞〟〰〾〿–—‘’‛“”„‟…‧﹏﹑﹔·'℃°•·.﹑︰〈〉─《﹖﹣﹂﹁﹔!?。。"#$%&'()*+,﹐-/:;<=>@[\]^_`{|}~⦅⦆「」、、〃》「」『』【】〔〕〖〗〘〙〚〛〜〝〞〟〰〾〿–—‘’‛“”„‟…‧﹏..!\"#$%&()*+,\-.\:;<=>?@\[\]\\\/^_`{|}~]"
tokenizer = AutoTokenizer.from_pretrained("ckiplab/gpt2-base-chinese")
lm_model = AutoModelWithLMHead.from_pretrained("ckiplab/gpt2-base-chinese").to(device)
model = Wav2Vec2ForCTC.from_pretrained(model_name).to(device)
processor = Wav2Vec2Processor.from_pretrained(processor_name)
ds = load_dataset("common_voice", 'zh-TW', data_dir="./cv-corpus-6.1-2020-12-11", split="test")
resampler = torchaudio.transforms.Resample(orig_freq=48_000, new_freq=16_000)
def map_to_array(batch):
speech, _ = torchaudio.load(batch["path"])
batch["speech"] = resampler.forward(speech.squeeze(0)).numpy()
batch["sampling_rate"] = resampler.new_freq
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'")
return batch
ds = ds.map(map_to_array)
def map_to_pred(batch):
features = processor(batch["speech"], sampling_rate=batch["sampling_rate"][0], padding=True, return_tensors="pt")
input_values = features.input_values.to(device)
attention_mask = features.attention_mask.to(device)
with torch.no_grad():
logits = model(input_values, attention_mask=attention_mask).logits
decoded_results = []
for logit in logits:
pred_ids = torch.argmax(logit, dim=-1)
mask = pred_ids.ge(1).unsqueeze(-1).expand(logit.size())
vocab_size = logit.size()[-1]
voice_prob = torch.nn.functional.softmax((torch.masked_select(logit, mask).view(-1,vocab_size)),dim=-1)
lm_input = torch.cat((torch.tensor([tokenizer.cls_token_id]).to(device),pred_ids[pred_ids>0]), 0)
lm_prob = torch.nn.functional.softmax(lm_model(lm_input).logits, dim=-1)[:voice_prob.size()[0],:]
comb_pred_ids = torch.argmax(lm_prob*voice_prob, dim=-1)
decoded_results.append(processor.decode(comb_pred_ids))
batch["predicted"] = decoded_results
batch["target"] = batch["sentence"]
return batch
result = ds.map(map_to_pred, batched=True, batch_size=16, remove_columns=list(ds.features.keys()))
cer = load_metric("./cer")
print("CER: {:2f}".format(100 * cer.compute(predictions=result["predicted"], references=result["target"])))
```
`CER 25.75`.
`TIME: 06:04 min`
## Evaluation with BERT:
```python
import torchaudio
from datasets import load_dataset, load_metric
from transformers import (
Wav2Vec2ForCTC,
Wav2Vec2Processor,
)
import torch
import re
import sys
from transformers import AutoTokenizer, AutoModelForMaskedLM
model_name = "voidful/wav2vec2-large-xlsr-53-tw-gpt"
device = "cuda"
processor_name = "voidful/wav2vec2-large-xlsr-53-tw-gpt"
chars_to_ignore_regex = r"[¥•"#$%&'()*+,-/:;<=>@[\]^_`{|}~⦅⦆「」、 、〃〈〉《》「」『』【】〔〕〖〗〘〙〚〛〜〝〞〟〰〾〿–—‘’‛“”„‟…‧﹏﹑﹔·'℃°•·.﹑︰〈〉─《﹖﹣﹂﹁﹔!?。。"#$%&'()*+,﹐-/:;<=>@[\]^_`{|}~⦅⦆「」、、〃》「」『』【】〔〕〖〗〘〙〚〛〜〝〞〟〰〾〿–—‘’‛“”„‟…‧﹏..!\"#$%&()*+,\-.\:;<=>?@\[\]\\\/^_`{|}~]"
tokenizer = AutoTokenizer.from_pretrained("bert-base-chinese")
lm_model = AutoModelForMaskedLM.from_pretrained("bert-base-chinese").to(device)
model = Wav2Vec2ForCTC.from_pretrained(model_name).to(device)
processor = Wav2Vec2Processor.from_pretrained(processor_name)
ds = load_dataset("common_voice", 'zh-TW', data_dir="./cv-corpus-6.1-2020-12-11", split="test")
resampler = torchaudio.transforms.Resample(orig_freq=48_000, new_freq=16_000)
def map_to_array(batch):
speech, _ = torchaudio.load(batch["path"])
batch["speech"] = resampler.forward(speech.squeeze(0)).numpy()
batch["sampling_rate"] = resampler.new_freq
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'")
return batch
ds = ds.map(map_to_array)
def map_to_pred(batch):
features = processor(batch["speech"], sampling_rate=batch["sampling_rate"][0], padding=True, return_tensors="pt")
input_values = features.input_values.to(device)
attention_mask = features.attention_mask.to(device)
with torch.no_grad():
logits = model(input_values, attention_mask=attention_mask).logits
decoded_results = []
for logit in logits:
pred_ids = torch.argmax(logit, dim=-1)
mask = ~pred_ids.eq(tokenizer.pad_token_id).unsqueeze(-1).expand(logit.size())
vocab_size = logit.size()[-1]
voice_prob = torch.nn.functional.softmax((torch.masked_select(logit, mask).view(-1,vocab_size)),dim=-1)
lm_input = torch.masked_select(pred_ids, ~pred_ids.eq(tokenizer.pad_token_id)).unsqueeze(0)
mask_lm_prob = voice_prob.clone()
for i in range(lm_input.shape[-1]):
masked_lm_input = lm_input.clone()
masked_lm_input[0][i] = torch.tensor(tokenizer.mask_token_id).to('cuda')
lm_prob = torch.nn.functional.softmax(lm_model(masked_lm_input).logits, dim=-1).squeeze(0)
mask_lm_prob[i] = lm_prob[i]
comb_pred_ids = torch.argmax(mask_lm_prob*voice_prob, dim=-1)
decoded_results.append(processor.decode(comb_pred_ids))
batch["predicted"] = decoded_results
batch["target"] = batch["sentence"]
return batch
result = ds.map(map_to_pred, batched=True, batch_size=1, remove_columns=list(ds.features.keys()))
cer = load_metric("./cer")
print("CER: {:2f}".format(100 * cer.compute(predictions=result["predicted"], references=result["target"])))
```
`CER 25.57`.
`TIME: 09:49 min`
## Evaluation with T-TA:
setup
```
!git clone https://github.com/voidful/pytorch-tta.git
!mv ./pytorch-tta/tta ./tta
!wget https://github.com/voidful/pytorch-tta/releases/download/wiki_zh/wiki_zh.pt
```
```python
import torchaudio
from datasets import load_dataset, load_metric
from transformers import (
Wav2Vec2ForCTC,
Wav2Vec2Processor,
)
import torch
import re
import sys
from tta.modeling_tta import TTALMModel
from transformers import AutoTokenizer
import torch
model_name = "voidful/wav2vec2-large-xlsr-53-tw-gpt"
device = "cuda"
processor_name = "voidful/wav2vec2-large-xlsr-53-tw-gpt"
chars_to_ignore_regex = r"[¥•"#$%&'()*+,-/:;<=>@[\]^_`{|}~⦅⦆「」、 、〃〈〉《》「」『』【】〔〕〖〗〘〙〚〛〜〝〞〟〰〾〿–—‘’‛“”„‟…‧﹏﹑﹔·'℃°•·.﹑︰〈〉─《﹖﹣﹂﹁﹔!?。。"#$%&'()*+,﹐-/:;<=>@[\]^_`{|}~⦅⦆「」、、〃》「」『』【】〔〕〖〗〘〙〚〛〜〝〞〟〰〾〿–—‘’‛“”„‟…‧﹏..!\"#$%&()*+,\-.\:;<=>?@\[\]\\\/^_`{|}~]"
tokenizer = AutoTokenizer.from_pretrained("bert-base-chinese")
lm_model = TTALMModel("bert-base-chinese")
tokenizer = AutoTokenizer.from_pretrained("bert-base-chinese")
lm_model.load_state_dict(torch.load("./wiki_zh.pt",map_location=torch.device('cuda')))
lm_model.to('cuda')
lm_model.eval()
model = Wav2Vec2ForCTC.from_pretrained(model_name).to(device)
processor = Wav2Vec2Processor.from_pretrained(processor_name)
ds = load_dataset("common_voice", 'zh-TW', data_dir="./cv-corpus-6.1-2020-12-11", split="test")
resampler = torchaudio.transforms.Resample(orig_freq=48_000, new_freq=16_000)
def map_to_array(batch):
speech, _ = torchaudio.load(batch["path"])
batch["speech"] = resampler.forward(speech.squeeze(0)).numpy()
batch["sampling_rate"] = resampler.new_freq
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'")
return batch
ds = ds.map(map_to_array)
def map_to_pred(batch):
features = processor(batch["speech"], sampling_rate=batch["sampling_rate"][0], padding=True, return_tensors="pt")
input_values = features.input_values.to(device)
attention_mask = features.attention_mask.to(device)
with torch.no_grad():
logits = model(input_values, attention_mask=attention_mask).logits
decoded_results = []
for logit in logits:
pred_ids = torch.argmax(logit, dim=-1)
mask = ~pred_ids.eq(tokenizer.pad_token_id).unsqueeze(-1).expand(logit.size())
vocab_size = logit.size()[-1]
voice_prob = torch.nn.functional.softmax((torch.masked_select(logit, mask).view(-1,vocab_size)),dim=-1)
lm_input = torch.masked_select(pred_ids, ~pred_ids.eq(tokenizer.pad_token_id)).unsqueeze(0)
lm_prob = torch.nn.functional.softmax(lm_model.forward(lm_input)[0], dim=-1).squeeze(0)
comb_pred_ids = torch.argmax(lm_prob*voice_prob, dim=-1)
decoded_results.append(processor.decode(comb_pred_ids))
batch["predicted"] = decoded_results
batch["target"] = batch["sentence"]
return batch
result = ds.map(map_to_pred, batched=True, batch_size=16, remove_columns=list(ds.features.keys()))
cer = load_metric("./cer")
print("CER: {:2f}".format(100 * cer.compute(predictions=result["predicted"], references=result["target"])))
```
`CER: 25.77`.
`TIME: 06:01 min`
|
voidful/wav2vec2-xlsr-multilingual-56 | 2021-05-29T12:47:44.000Z | [
"pytorch",
"wav2vec2",
"multilingual",
"dataset:common_voice",
"transformers",
"audio",
"automatic-speech-recognition",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0"
] | automatic-speech-recognition | [
".gitattributes",
"README.md",
"config.json",
"lang_ids.pk",
"preprocessor_config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
] | voidful | 712 | transformers | ---
language: multilingual
datasets:
- common_voice
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 for 56 language by Voidful
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice
type: common_voice
---
# wav2vec2-xlsr-multilingual-56
*56 languages, 1 model: multilingual ASR*
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on 56 languages using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
For more details, see [https://github.com/voidful/wav2vec2-xlsr-multilingual-56](https://github.com/voidful/wav2vec2-xlsr-multilingual-56).
## Env setup:
```
!pip install torchaudio
!pip install datasets transformers
!pip install asrp
!wget -O lang_ids.pk https://huggingface.co/voidful/wav2vec2-xlsr-multilingual-56/raw/main/lang_ids.pk
```
## Usage
```python
import torchaudio
from datasets import load_dataset, load_metric
from transformers import (
Wav2Vec2ForCTC,
Wav2Vec2Processor,
AutoTokenizer,
AutoModelWithLMHead
)
import torch
import re
import sys
import soundfile as sf
model_name = "voidful/wav2vec2-xlsr-multilingual-56"
device = "cuda"
processor_name = "voidful/wav2vec2-xlsr-multilingual-56"
import pickle
with open("lang_ids.pk", 'rb') as f:
    lang_ids = pickle.load(f)
model = Wav2Vec2ForCTC.from_pretrained(model_name).to(device)
processor = Wav2Vec2Processor.from_pretrained(processor_name)
model.eval()
def load_file_to_data(file, sampling_rate=16_000):
    batch = {}
    speech, _ = torchaudio.load(file)
    # resample only if the audio is not already at 16 kHz
    if sampling_rate != 16_000:
        resampler = torchaudio.transforms.Resample(orig_freq=sampling_rate, new_freq=16_000)
        batch["speech"] = resampler.forward(speech.squeeze(0)).numpy()
        batch["sampling_rate"] = resampler.new_freq
    else:
        batch["speech"] = speech.squeeze(0).numpy()
        batch["sampling_rate"] = 16_000
    return batch
def predict(data):
features = processor(data["speech"], sampling_rate=data["sampling_rate"], padding=True, return_tensors="pt")
input_values = features.input_values.to(device)
attention_mask = features.attention_mask.to(device)
with torch.no_grad():
logits = model(input_values, attention_mask=attention_mask).logits
decoded_results = []
for logit in logits:
pred_ids = torch.argmax(logit, dim=-1)
mask = pred_ids.ge(1).unsqueeze(-1).expand(logit.size())
vocab_size = logit.size()[-1]
voice_prob = torch.nn.functional.softmax((torch.masked_select(logit, mask).view(-1,vocab_size)),dim=-1)
comb_pred_ids = torch.argmax(voice_prob, dim=-1)
decoded_results.append(processor.decode(comb_pred_ids))
return decoded_results
def predict_lang_specific(data,lang_code):
features = processor(data["speech"], sampling_rate=data["sampling_rate"], padding=True, return_tensors="pt")
input_values = features.input_values.to(device)
attention_mask = features.attention_mask.to(device)
with torch.no_grad():
logits = model(input_values, attention_mask=attention_mask).logits
decoded_results = []
for logit in logits:
pred_ids = torch.argmax(logit, dim=-1)
mask = ~pred_ids.eq(processor.tokenizer.pad_token_id).unsqueeze(-1).expand(logit.size())
vocab_size = logit.size()[-1]
voice_prob = torch.nn.functional.softmax((torch.masked_select(logit, mask).view(-1,vocab_size)),dim=-1)
filtered_input = pred_ids[pred_ids!=processor.tokenizer.pad_token_id].view(1,-1).to(device)
if len(filtered_input[0]) == 0:
decoded_results.append("")
else:
lang_mask = torch.empty(voice_prob.shape[-1]).fill_(0)
lang_index = torch.tensor(sorted(lang_ids[lang_code]))
lang_mask.index_fill_(0, lang_index, 1)
lang_mask = lang_mask.to(device)
comb_pred_ids = torch.argmax(lang_mask*voice_prob, dim=-1)
decoded_results.append(processor.decode(comb_pred_ids))
return decoded_results
predict(load_file_to_data('audio file path'))
predict_lang_specific(load_file_to_data('audio file path'),'en')
```
## Result
| Common Voice Languages | Num. of data | Hour | CER |
|------------------------|--------------|--------|-------|
| ar | 21744 | 81.5 | 31.27 |
| as | 394 | 1.1 | 46.03 |
| br | 4777 | 7.4 | 41.14 |
| ca | 301308 | 692.8 | 10.39 |
| cnh | 1563 | 2.4 | 23.11 |
| cs | 9773 | 39.5 | 12.57 |
| cv | 1749 | 5.9 | 34.01 |
| cy | 11615 | 106.7 | 23.93 |
| de | 262113 | 822.8 | 6.51 |
| dv | 4757 | 18.6 | 30.18 |
| el | 3717 | 11.1 | 58.69 |
| en | 580501 | 1763.6 | 14.84 |
| eo | 28574 | 162.3 | 6.23 |
| es | 176902 | 337.7 | 5.42 |
| et | 5473 | 35.9 | 20.80 |
| eu | 12677 | 90.2 | 7.32 |
| fa | 12806 | 290.6 | 15.09 |
| fi | 875 | 2.6 | 27.60 |
| fr | 314745 | 664.1 | 13.94 |
| fy-NL | 6717 | 27.2 | 26.58 |
| ga-IE | 1038 | 3.5 | 50.98 |
| hi | 292 | 2.0 | 57.34 |
| hsb | 980 | 2.3 | 27.18 |
| hu | 4782 | 9.3 | 36.74 |
| ia | 5078 | 10.4 | 11.37 |
| id | 3965 | 9.9 | 22.82 |
| it | 70943 | 178.0 | 8.72 |
| ja | 1308 | 8.2 | 61.91 |
| ka | 1585 | 4.0 | 18.57 |
| ky | 3466 | 12.2 | 19.83 |
| lg | 1634 | 17.1 | 43.84 |
| lt | 1175 | 3.9 | 26.82 |
| lv | 4554 | 6.3 | 30.79 |
| mn | 4020 | 11.6 | 30.15 |
| mt | 3552 | 7.8 | 22.94 |
| nl | 14398 | 71.8 | 19.01 |
| or | 517 | 0.9 | 27.42 |
| pa-IN | 255 | 0.8 | 42.00 |
| pl | 12621 | 112.0 | 12.07 |
| pt | 11106 | 61.3 | 16.33 |
| rm-sursilv | 2589 | 5.9 | 23.30 |
| rm-vallader | 931 | 2.3 | 21.70 |
| ro | 4257 | 8.7 | 21.93 |
| ru | 23444 | 119.1 | 15.18 |
| sah | 1847 | 4.4 | 38.47 |
| sl | 2594 | 6.7 | 20.52 |
| sv-SE | 4350 | 20.8 | 30.78 |
| ta | 3788 | 18.4 | 21.60 |
| th | 4839 | 11.7 | 37.24 |
| tr | 3478 | 22.3 | 15.55 |
| tt | 13338 | 26.7 | 33.59 |
| uk | 7271 | 39.4 | 14.35 |
| vi | 421 | 1.7 | 66.31 |
| zh-CN | 27284 | 58.7 | 23.94 |
| zh-HK | 12678 | 92.1 | 18.82 |
| zh-TW | 6402 | 56.6 | 29.08 | |
voidism/12to6_distilbert | 2020-04-24T16:06:51.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"masked-lm",
"transformers",
"fill-mask"
] | fill-mask | [
".gitattributes",
"checkpoint.pth",
"config.json",
"git_log.json",
"model_epoch_0.pth",
"parameters.json",
"pytorch_model.bin",
"vocab.txt",
"log/train/events.out.tfevents.1576343235.vi0p30ctr1576334306773-w45z7.3798.0"
] | voidism | 22 | transformers | |
vr25/fin_BERT-v1 | 2021-05-20T23:05:00.000Z | [
"pytorch",
"jax",
"roberta",
"masked-lm",
"transformers",
"fill-mask"
] | fill-mask | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"merges.txt",
"optimizer.pt",
"pytorch_model.bin",
"scheduler.pt",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
] | vr25 | 23 | transformers | |
vr25/fin_RoBERTa-v1 | 2021-05-20T23:06:21.000Z | [
"pytorch",
"jax",
"roberta",
"masked-lm",
"transformers",
"fill-mask"
] | fill-mask | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"merges.txt",
"optimizer.pt",
"pytorch_model.bin",
"scheduler.pt",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
] | vr25 | 28 | transformers | |
vslaykovsky/roberta-news-duplicates | 2021-05-20T23:07:11.000Z | [
"pytorch",
"jax",
"roberta",
"text-classification",
"transformers"
] | text-classification | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
] | vslaykovsky | 15 | transformers | |
vtass/SentimentAnalysis | 2021-03-02T14:40:41.000Z | [] | [
".gitattributes",
"main.py"
] | vtass | 0 | |||
vumichien/wav2vec2-large-xlsr-japanese-hiragana | 2021-06-18T11:22:28.000Z | [
"pytorch",
"wav2vec2",
"ja",
"dataset:common_voice",
"transformers",
"audio",
"automatic-speech-recognition",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0"
] | automatic-speech-recognition | [
".gitattributes",
"README.md",
"config.json",
"preprocessor_config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
] | vumichien | 0 | transformers | |
vumichien/wav2vec2-large-xlsr-japanese | 2021-04-07T01:31:25.000Z | [
"pytorch",
"wav2vec2",
"ja",
"dataset:common_voice",
"transformers",
"audio",
"automatic-speech-recognition",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0"
] | automatic-speech-recognition | [
".gitattributes",
"Fine-Tune-Wav2Vec2-Large-XLSR-Japan.ipynb",
"README.md",
"config.json",
"preprocessor_config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
] | vumichien | 465 | transformers | ---
language: ja
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Japanese by Chien Vu
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice Japanese
type: common_voice
args: ja
metrics:
- name: Test WER
type: wer
value: 30.84
- name: Test CER
type: cer
value: 17.85
---
# Wav2Vec2-Large-XLSR-53-Japanese
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Japanese using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset and [JSUT](https://sites.google.com/site/shinnosuketakamichi/publication/jsut), the Japanese speech corpus of Saruwatari Lab, University of Tokyo.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
!pip install mecab-python3
!pip install unidic-lite
!python -m unidic download
import torch
import torchaudio
import librosa
from datasets import load_dataset
import MeCab
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
# config
wakati = MeCab.Tagger("-Owakati")
chars_to_ignore_regex = '[\,\、\。\.\「\」\…\?\・]'
# load data, processor and model
test_dataset = load_dataset("common_voice", "ja", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("vumichien/wav2vec2-large-xlsr-japanese")
model = Wav2Vec2ForCTC.from_pretrained("vumichien/wav2vec2-large-xlsr-japanese")
resampler = lambda sr, y: librosa.resample(y.numpy().squeeze(), sr, 16_000)
# Preprocessing the datasets.
def speech_file_to_array_fn(batch):
batch["sentence"] = wakati.parse(batch["sentence"]).strip()
batch["sentence"] = re.sub(chars_to_ignore_regex,'', batch["sentence"]).strip()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(sampling_rate, speech_array).squeeze()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Japanese test data of Common Voice.
```python
!pip install mecab-python3
!pip install unidic-lite
!python -m unidic download
import torch
import librosa
import torchaudio
from datasets import load_dataset, load_metric
import MeCab
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
#config
wakati = MeCab.Tagger("-Owakati")
chars_to_ignore_regex = '[\,\、\。\.\「\」\…\?\・]'
# load data, processor and model
test_dataset = load_dataset("common_voice", "ja", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("vumichien/wav2vec2-large-xlsr-japanese")
model = Wav2Vec2ForCTC.from_pretrained("vumichien/wav2vec2-large-xlsr-japanese")
model.to("cuda")
resampler = lambda sr, y: librosa.resample(y.numpy().squeeze(), sr, 16_000)
# Preprocessing the datasets.
def speech_file_to_array_fn(batch):
batch["sentence"] = wakati.parse(batch["sentence"]).strip()
batch["sentence"] = re.sub(chars_to_ignore_regex,'', batch["sentence"]).strip()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(sampling_rate, speech_array).squeeze()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# evaluate function
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
## Test Result
**WER:** 30.84%,
**CER:** 17.85%
## Training
The Common Voice `train` and `validation` splits, together with the `basic5000` subset of the JSUT Japanese speech corpus, were used for training.
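As a rough illustration of how such a training set can be assembled (a sketch, not the author's actual script; the JSUT handling is an assumption, since `basic5000` is distributed as wav files with transcripts rather than as a Hugging Face dataset):
```python
from datasets import load_dataset, concatenate_datasets

# Common Voice Japanese train + validation splits, as described above
cv_train = load_dataset("common_voice", "ja", split="train")
cv_valid = load_dataset("common_voice", "ja", split="validation")
train_data = concatenate_datasets([cv_train, cv_valid])

# The JSUT basic5000 recordings would be loaded from their wav/transcript files
# separately and mapped to the same {"path", "sentence"} columns before being
# concatenated with the Common Voice data.
```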
|
w11wo/indo-gpt2-small | 2021-05-23T13:41:42.000Z | [
"pytorch",
"tf",
"jax",
"gpt2",
"lm-head",
"causal-lm",
"id",
"dataset:wikipedia",
"transformers",
"indo-gpt2-small",
"license:mit",
"text-generation"
] | text-generation | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.json"
] | w11wo | 45 | transformers | ---
language: id
tags:
- indo-gpt2-small
license: mit
datasets:
- wikipedia
widget:
- text: "Nama saya Budi, dari Indonesia"
---
## Indo GPT-2 Small
Indo GPT-2 Small is a language model based on the [GPT-2 model](https://cdn.openai.com/better-language-models/language_models_are_unsupervised_multitask_learners.pdf). It was trained on the latest (late December 2020) Indonesian Wikipedia articles.
The model was originally HuggingFace's pretrained [English GPT-2 model](https://huggingface.co/transformers/model_doc/gpt2.html) and is later fine-tuned on the Indonesian dataset. Many of the techniques used
are based on a [notebook](https://github.com/piegu/fastai-projects/blob/master/finetuning-English-GPT2-any-language-Portuguese-HuggingFace-fastaiv2.ipynb)/[blog](https://medium.com/@pierre_guillou/faster-than-training-from-scratch-fine-tuning-the-english-gpt-2-in-any-language-with-hugging-f2ec05c98787) shared by [Pierre Guillou](https://medium.com/@pierre_guillou), in which he fine-tuned the English GPT-2 model on a Portuguese dataset.
Frameworks used include HuggingFace's [Transformers](https://huggingface.co/transformers) and fast.ai's [Deep Learning library](https://docs.fast.ai/). PyTorch was used as the backend framework during training, but the model remains compatible with TensorFlow nonetheless.
## Model
| Model | #params | Arch. | Training /Validation data (text) |
|-------------------|---------|-------------|---------------------------------------|
| `indo-gpt2-small` | 124M | GPT-2 Small | Indonesian Wikipedia (3.1 GB of text) |
## Evaluation Results
The model was trained for only 1 epoch and the following is the final result once the training ended.
| epoch | train loss | valid loss | perplexity | total time |
|-------|------------|------------|------------|------------|
| 0 | 2.981 | 2.936 | 18.85 | 2:45:25 |
## How to Use (PyTorch)
### Load Model and Byte-level Tokenizer
```python
from transformers import GPT2TokenizerFast, GPT2LMHeadModel
pretrained_name = "w11wo/indo-gpt2-small"
tokenizer = GPT2TokenizerFast.from_pretrained(pretrained_name)
tokenizer.model_max_length = 1024
model = GPT2LMHeadModel.from_pretrained(pretrained_name)
```
### Generate a Sequence
```python
# sample prompt
prompt = "Nama saya Budi, dari Indonesia"
input_ids = tokenizer.encode(prompt, return_tensors='pt')
model.eval()
# generate output using top-k sampling
sample_outputs = model.generate(input_ids,
pad_token_id=50256,
do_sample=True,
max_length=40,
min_length=40,
top_k=40,
num_return_sequences=1)
for i, sample_output in enumerate(sample_outputs):
print(tokenizer.decode(sample_output.tolist()))
```
## Disclaimer
Do remember that although the dataset originated from Wikipedia, the model may not always generate factual texts. Additionally, the biases which came from the Wikipedia articles may be carried over into the results of this model.
## Credits
Major thanks to Pierre Guillou for sharing his work, which did not only enable me to realize this project but also taught me tons of new, exciting stuff.
## Author
Indo GPT-2 Small was trained and evaluated by [Wilson Wongso](https://w11wo.github.io/). All computation and development are done on Google Colaboratory using their free GPU access.
|
w11wo/indo-roberta-small | 2021-05-20T23:08:29.000Z | [
"pytorch",
"tf",
"jax",
"roberta",
"masked-lm",
"id",
"dataset:wikipedia",
"arxiv:1907.11692",
"transformers",
"indo-roberta-small",
"license:mit",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.json"
] | w11wo | 49 | transformers | ---
language: id
tags:
- indo-roberta-small
license: mit
datasets:
- wikipedia
widget:
- text: "Karena pandemi ini, kita harus <mask> di rumah saja."
---
## Indo RoBERTa Small
Indo RoBERTa Small is a masked language model based on the [RoBERTa model](https://arxiv.org/abs/1907.11692). It was trained on the latest (late December 2020) Indonesian Wikipedia articles.
The model was trained from scratch and achieved a perplexity of 48.27 on the validation dataset (20% of the articles). Many of the techniques used
are based on a Hugging Face tutorial [notebook](https://github.com/huggingface/notebooks/blob/master/examples/language_modeling.ipynb) written by [Sylvain Gugger](https://github.com/sgugger), where Sylvain Gugger fine-tuned a [DistilGPT-2](https://huggingface.co/distilgpt2) on [Wikitext2](https://render.githubusercontent.com/view/ipynb?color_mode=dark&commit=43d63e390e8a82f7ae49aa1a877419343a213cb4&enc_url=68747470733a2f2f7261772e67697468756275736572636f6e74656e742e636f6d2f68756767696e67666163652f6e6f7465626f6f6b732f343364363365333930653861383266376165343961613161383737343139333433613231336362342f6578616d706c65732f6c616e67756167655f6d6f64656c696e672e6970796e62&nwo=huggingface%2Fnotebooks&path=examples%2Flanguage_modeling.ipynb&repository_id=272452525&repository_type=Repository).
Hugging Face's [Transformers](https://huggingface.co/transformers) library was used to train the model -- utilizing the base RoBERTa model and their `Trainer` class. PyTorch was used as the backend framework during training, but the model remains compatible with TensorFlow nonetheless.
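A compressed sketch of that from-scratch masked-language-model training with the `Trainer` class is shown below. The tokenizer path, text file, and hyperparameters are illustrative assumptions rather than the configuration actually used for this card, and the default `RobertaConfig` is larger than the 84M-parameter model described here.
```python
# Hedged sketch of from-scratch MLM pretraining with the Hugging Face Trainer.
# Paths and hyperparameters are assumptions; see the linked tutorial notebook.
from datasets import load_dataset
from transformers import (DataCollatorForLanguageModeling, RobertaConfig,
                          RobertaForMaskedLM, RobertaTokenizerFast, Trainer,
                          TrainingArguments)

# Assumed: a byte-level BPE tokenizer already trained on the Indonesian corpus
tokenizer = RobertaTokenizerFast.from_pretrained("./indo-roberta-tokenizer")
model = RobertaForMaskedLM(RobertaConfig(vocab_size=tokenizer.vocab_size))

# Assumed: the Wikipedia dump exported to a plain-text file, one article per line
raw = load_dataset("text", data_files={"train": "id_wiki.txt"})["train"]
tokenized = raw.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=256),
    batched=True, remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="indo-roberta-small", num_train_epochs=3,
                           per_device_train_batch_size=16),
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=True,
                                                  mlm_probability=0.15),
    train_dataset=tokenized,
)
trainer.train()
```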
## Model
| Model | #params | Arch. | Training/Validation data (text) |
|----------------------|---------|----------|---------------------------------------|
| `indo-roberta-small` | 84M | RoBERTa | Indonesian Wikipedia (3.1 GB of text) |
## Evaluation Results
The model was trained for 3 epochs and the following is the final result once the training ended.
| train loss | valid loss | perplexity | total time |
|------------|------------|------------|------------|
| 4.071 | 3.876 | 48.27 | 3:40:55 |
## How to Use
### As Masked Language Model
```python
from transformers import pipeline
pretrained_name = "w11wo/indo-roberta-small"
fill_mask = pipeline(
"fill-mask",
model=pretrained_name,
tokenizer=pretrained_name
)
fill_mask("Budi sedang <mask> di sekolah.")
```
### Feature Extraction in PyTorch
```python
from transformers import RobertaModel, RobertaTokenizerFast
pretrained_name = "w11wo/indo-roberta-small"
model = RobertaModel.from_pretrained(pretrained_name)
tokenizer = RobertaTokenizerFast.from_pretrained(pretrained_name)
prompt = "Budi sedang berada di sekolah."
encoded_input = tokenizer(prompt, return_tensors='pt')
output = model(**encoded_input)
```
## Disclaimer
Do remember that although the dataset originated from Wikipedia, the model may not always generate factual texts. Additionally, the biases which came from the Wikipedia articles may be carried over into the results of this model.
## Author
Indo RoBERTa Small was trained and evaluated by [Wilson Wongso](https://w11wo.github.io/). All computation and development are done on Google Colaboratory using their free GPU access. |
w11wo/javanese-bert-small-imdb-classifier | 2021-05-20T09:02:37.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"text-classification",
"jv",
"dataset:w11wo/imdb-javanese",
"arxiv:1810.04805",
"transformers",
"javanese-bert-small-imdb-classifier",
"license:mit"
] | text-classification | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
] | w11wo | 30 | transformers | ---
language: jv
tags:
- javanese-bert-small-imdb-classifier
license: mit
datasets:
- w11wo/imdb-javanese
widget:
- text: "Dhuh Gusti, film iki elek banget. Aku getun ndelok !!!"
---
## Javanese BERT Small IMDB Classifier
Javanese BERT Small IMDB Classifier is a movie-classification model based on the [BERT model](https://arxiv.org/abs/1810.04805). It was trained on Javanese IMDB movie reviews.
The model was originally [`w11wo/javanese-bert-small-imdb`](https://huggingface.co/w11wo/javanese-bert-small-imdb) which is then fine-tuned on the [`w11wo/imdb-javanese`](https://huggingface.co/datasets/w11wo/imdb-javanese) dataset consisting of Javanese IMDB movie reviews. It achieved an accuracy of 76.37% on the validation dataset. Many of the techniques used are based on a Hugging Face tutorial [notebook](https://github.com/huggingface/notebooks/blob/master/examples/text_classification.ipynb) written by [Sylvain Gugger](https://github.com/sgugger).
Hugging Face's `Trainer` class from the [Transformers](https://huggingface.co/transformers) library was used to train the model. PyTorch was used as the backend framework during training, but the model remains compatible with TensorFlow nonetheless.
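For reference, a hedged sketch of such a classification fine-tuning run with the `Trainer` class (the dataset column and split names, and all hyperparameters, are assumptions rather than the card's actual settings):
```python
# Hedged sketch of the fine-tuning step; column/split names are assumptions.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

base = "w11wo/javanese-bert-small-imdb"  # the pretrained checkpoint named above
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForSequenceClassification.from_pretrained(base, num_labels=2)

ds = load_dataset("w11wo/imdb-javanese")  # assumed columns: "text", "label"
ds = ds.map(lambda batch: tokenizer(batch["text"], truncation=True), batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="javanese-bert-small-imdb-classifier",
                           num_train_epochs=5, per_device_train_batch_size=16),
    train_dataset=ds["train"],
    eval_dataset=ds["test"],   # assumed split name for the held-out reviews
    tokenizer=tokenizer,       # enables dynamic padding via the default collator
)
trainer.train()
```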
## Model
| Model | #params | Arch. | Training/Validation data (text) |
|---------------------------------------|----------|----------------|---------------------------------|
| `javanese-bert-small-imdb-classifier` | 110M | BERT Small | Javanese IMDB (47.5 MB of text) |
## Evaluation Results
The model was trained for 5 epochs and the following is the final result once the training ended.
| train loss | valid loss | accuracy | total time |
|------------|------------|------------|------------|
| 0.131 | 1.113 | 0.763 | 59:16 |
## How to Use
### As Text Classifier
```python
from transformers import pipeline
pretrained_name = "w11wo/javanese-bert-small-imdb-classifier"
nlp = pipeline(
"sentiment-analysis",
model=pretrained_name,
tokenizer=pretrained_name
)
nlp("Film sing apik banget!")
```
## Disclaimer
Do consider the biases which came from the IMDB review that may be carried over into the results of this model.
## Author
Javanese BERT Small IMDB Classifier was trained and evaluated by [Wilson Wongso](https://w11wo.github.io/). All computation and development are done on Google Colaboratory using their free GPU access. |
w11wo/javanese-bert-small-imdb | 2021-05-20T09:03:35.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"masked-lm",
"jv",
"dataset:w11wo/imdb-javanese",
"arxiv:1810.04805",
"transformers",
"javanese-bert-small-imdb",
"license:mit",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
] | w11wo | 10 | transformers | ---
language: jv
tags:
- javanese-bert-small-imdb
license: mit
datasets:
- w11wo/imdb-javanese
widget:
- text: "Fast and Furious iku film sing [MASK]."
---
## Javanese BERT Small IMDB
Javanese BERT Small IMDB is a masked language model based on the [BERT model](https://arxiv.org/abs/1810.04805). It was trained on Javanese IMDB movie reviews.
The model was originally the pretrained [Javanese BERT Small model](https://huggingface.co/w11wo/javanese-bert-small) and is later fine-tuned on the Javanese IMDB movie review dataset. It achieved a perplexity of 19.87 on the validation dataset. Many of the techniques used are based on a Hugging Face tutorial [notebook](https://github.com/huggingface/notebooks/blob/master/examples/language_modeling.ipynb) written by [Sylvain Gugger](https://github.com/sgugger).
Hugging Face's `Trainer` class from the [Transformers](https://huggingface.co/transformers) library was used to train the model. PyTorch was used as the backend framework during training, but the model remains compatible with TensorFlow nonetheless.
## Model
| Model | #params | Arch. | Training/Validation data (text) |
|----------------------------|----------|----------------|---------------------------------|
| `javanese-bert-small-imdb` | 110M | BERT Small | Javanese IMDB (47.5 MB of text) |
## Evaluation Results
The model was trained for 5 epochs and the following is the final result once the training ended.
| train loss | valid loss | perplexity | total time |
|------------|------------|------------|-------------|
| 3.070 | 2.989 | 19.87 | 3:12:33 |
## How to Use
### As Masked Language Model
```python
from transformers import pipeline
pretrained_name = "w11wo/javanese-bert-small-imdb"
fill_mask = pipeline(
"fill-mask",
model=pretrained_name,
tokenizer=pretrained_name
)
fill_mask("Aku mangan sate ing [MASK] bareng konco-konco")
```
### Feature Extraction in PyTorch
```python
from transformers import BertModel, BertTokenizerFast
pretrained_name = "w11wo/javanese-bert-small-imdb"
model = BertModel.from_pretrained(pretrained_name)
tokenizer = BertTokenizerFast.from_pretrained(pretrained_name)
prompt = "Indonesia minangka negara gedhe."
encoded_input = tokenizer(prompt, return_tensors='pt')
output = model(**encoded_input)
```
## Disclaimer
Do consider the biases which came from the IMDB review that may be carried over into the results of this model.
## Author
Javanese BERT Small was trained and evaluated by [Wilson Wongso](https://w11wo.github.io/). All computation and development are done on Google Colaboratory using their free GPU access. |
w11wo/javanese-bert-small | 2021-05-20T09:04:44.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"masked-lm",
"jv",
"dataset:wikipedia",
"arxiv:1810.04805",
"transformers",
"javanese-bert-small",
"license:mit",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
] | w11wo | 32 | transformers | ---
language: jv
tags:
- javanese-bert-small
license: mit
datasets:
- wikipedia
widget:
- text: "Aku mangan sate ing [MASK] bareng konco-konco"
---
## Javanese BERT Small
Javanese BERT Small is a masked language model based on the [BERT model](https://arxiv.org/abs/1810.04805). It was trained on the latest (late December 2020) Javanese Wikipedia articles.
The model was originally HuggingFace's pretrained [English BERT model](https://huggingface.co/bert-base-uncased) and is later fine-tuned on the Javanese dataset. It achieved a perplexity of 22.00 on the validation dataset (20% of the articles). Many of the techniques used are based on a Hugging Face tutorial [notebook](https://github.com/huggingface/notebooks/blob/master/examples/language_modeling.ipynb) written by [Sylvain Gugger](https://github.com/sgugger), and [fine-tuning tutorial notebook](https://github.com/piegu/fastai-projects/blob/master/finetuning-English-GPT2-any-language-Portuguese-HuggingFace-fastaiv2.ipynb) written by [Pierre Guillou](https://huggingface.co/pierreguillou).
Hugging Face's [Transformers](https://huggingface.co/transformers) library was used to train the model -- utilizing the base BERT model and their `Trainer` class. PyTorch was used as the backend framework during training, but the model remains compatible with TensorFlow nonetheless.
## Model
| Model | #params | Arch. | Training/Validation data (text) |
|-----------------------|----------|----------------|-------------------------------------|
| `javanese-bert-small` | 110M | BERT Small | Javanese Wikipedia (319 MB of text) |
## Evaluation Results
The model was trained for 5 epochs and the following is the final result once the training ended.
| train loss | valid loss | perplexity | total time |
|------------|------------|------------|------------|
| 3.116 | 3.091 | 22.00 | 2:7:42 |
## How to Use
### As Masked Language Model
```python
from transformers import pipeline
pretrained_name = "w11wo/javanese-bert-small"
fill_mask = pipeline(
"fill-mask",
model=pretrained_name,
tokenizer=pretrained_name
)
fill_mask("Aku mangan sate ing [MASK] bareng konco-konco")
```
### Feature Extraction in PyTorch
```python
from transformers import BertModel, BertTokenizerFast
pretrained_name = "w11wo/javanese-bert-small"
model = BertModel.from_pretrained(pretrained_name)
tokenizer = BertTokenizerFast.from_pretrained(pretrained_name)
prompt = "Indonesia minangka negara gedhe."
encoded_input = tokenizer(prompt, return_tensors='pt')
output = model(**encoded_input)
```
## Disclaimer
Do remember that although the dataset originated from Wikipedia, the model may not always generate factual texts. Additionally, the biases which came from the Wikipedia articles may be carried over into the results of this model.
## Author
Javanese BERT Small was trained and evaluated by [Wilson Wongso](https://w11wo.github.io/). All computation and development are done on Google Colaboratory using their free GPU access. |
w11wo/javanese-distilbert-small-imdb-classifier | 2021-05-14T08:14:14.000Z | [
"pytorch",
"tf",
"distilbert",
"text-classification",
"jv",
"dataset:w11wo/imdb-javanese",
"arxiv:1910.01108",
"transformers",
"javanese-distilbert-small-imdb-classifier",
"license:mit"
] | text-classification | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
] | w11wo | 23 | transformers | ---
language: jv
tags:
- javanese-distilbert-small-imdb-classifier
license: mit
datasets:
- w11wo/imdb-javanese
widget:
- text: "Aku babar pisan ora nikmati film iki."
---
## Javanese DistilBERT Small IMDB Classifier
Javanese DistilBERT Small IMDB Classifier is a movie review classification model based on the [DistilBERT model](https://arxiv.org/abs/1910.01108). It was trained on Javanese IMDB movie reviews.
The model was originally [`w11wo/javanese-distilbert-small-imdb`](https://huggingface.co/w11wo/javanese-distilbert-small-imdb), which was then fine-tuned on the [`w11wo/imdb-javanese`](https://huggingface.co/datasets/w11wo/imdb-javanese) dataset consisting of Javanese IMDB movie reviews. It achieved an accuracy of 76.04% on the validation dataset. Many of the techniques used are based on a Hugging Face tutorial [notebook](https://github.com/huggingface/notebooks/blob/master/examples/text_classification.ipynb) written by [Sylvain Gugger](https://github.com/sgugger).
Hugging Face's `Trainer` class from the [Transformers](https://huggingface.co/transformers) library was used to train the model. PyTorch was used as the backend framework during training, but the model remains compatible with TensorFlow nonetheless.
## Model
| Model | #params | Arch. | Training/Validation data (text) |
|---------------------------------------------|---------|------------------|---------------------------------|
| `javanese-distilbert-small-imdb-classifier` | 66M | DistilBERT Small | Javanese IMDB (47.5 MB of text) |
## Evaluation Results
The model was trained for 5 epochs and the following is the final result once the training ended.
| train loss | valid loss | accuracy | total time |
|------------|------------|------------|------------|
| 0.131 | 1.113 | 0.760 | 1:26:4 |
## How to Use
### As Text Classifier
```python
from transformers import pipeline
pretrained_name = "w11wo/javanese-distilbert-small-imdb-classifier"
nlp = pipeline(
"sentiment-analysis",
model=pretrained_name,
tokenizer=pretrained_name
)
nlp("Film sing apik banget!")
```
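### Using the Model Directly in PyTorch
For finer control than the pipeline offers (e.g. to inspect the full probability distribution), the classifier can also be called directly. This is a minimal sketch, assuming the standard `AutoModelForSequenceClassification` loading path; check the model's `config.json` for the actual `id2label` mapping:
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

pretrained_name = "w11wo/javanese-distilbert-small-imdb-classifier"
tokenizer = AutoTokenizer.from_pretrained(pretrained_name)
model = AutoModelForSequenceClassification.from_pretrained(pretrained_name)

inputs = tokenizer("Film sing apik banget!", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# convert logits into class probabilities and print one score per label
probs = torch.softmax(logits, dim=-1).squeeze()
for label_id, prob in enumerate(probs.tolist()):
    print(model.config.id2label[label_id], round(prob, 3))
```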
## Disclaimer
Do consider the biases from the IMDB reviews, which may be carried over into the results of this model.
## Author
Javanese DistilBERT Small IMDB Classifier was trained and evaluated by [Wilson Wongso](https://w11wo.github.io/). All computation and development are done on Google Colaboratory using their free GPU access. |
w11wo/javanese-distilbert-small-imdb | 2021-05-14T08:08:47.000Z | [
"pytorch",
"tf",
"distilbert",
"masked-lm",
"jv",
"dataset:w11wo/imdb-javanese",
"arxiv:1910.01108",
"transformers",
"javanese-distilbert-small-imdb",
"license:mit",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
] | w11wo | 9 | transformers | ---
language: jv
tags:
- javanese-distilbert-small-imdb
license: mit
datasets:
- w11wo/imdb-javanese
widget:
- text: "Film favoritku yaiku Interstellar [MASK] Christopher Nolan."
---
## Javanese DistilBERT Small IMDB
Javanese DistilBERT Small IMDB is a masked language model based on the [DistilBERT model](https://arxiv.org/abs/1910.01108). It was trained on Javanese IMDB movie reviews.
The model was originally the pretrained [Javanese DistilBERT Small model](https://huggingface.co/w11wo/javanese-distilbert-small) and was later fine-tuned on the Javanese IMDB movie review dataset. It achieved a perplexity of 21.01 on the validation dataset. Many of the techniques used are based on a Hugging Face tutorial [notebook](https://github.com/huggingface/notebooks/blob/master/examples/language_modeling.ipynb) written by [Sylvain Gugger](https://github.com/sgugger).
Hugging Face's `Trainer` class from the [Transformers](https://huggingface.co/transformers) library was used to train the model. PyTorch was used as the backend framework during training, but the model remains compatible with TensorFlow nonetheless.
## Model
| Model | #params | Arch. | Training/Validation data (text) |
|----------------------------------|----------|----------------------|---------------------------------|
| `javanese-distilbert-small-imdb` | 66M | DistilBERT Small | Javanese IMDB (47.5 MB of text) |
## Evaluation Results
The model was trained for 5 epochs and the following is the final result once the training ended.
| train loss | valid loss | perplexity | total time |
|------------|------------|------------|-------------|
| 3.126 | 3.039 | 21.01 | 5:6:4 |
## How to Use
### As Masked Language Model
```python
from transformers import pipeline
pretrained_name = "w11wo/javanese-distilbert-small-imdb"
fill_mask = pipeline(
"fill-mask",
model=pretrained_name,
tokenizer=pretrained_name
)
fill_mask("Aku mangan sate ing [MASK] bareng konco-konco")
```
### Feature Extraction in PyTorch
```python
from transformers import DistilBertModel, DistilBertTokenizerFast
pretrained_name = "w11wo/javanese-distilbert-small-imdb"
model = DistilBertModel.from_pretrained(pretrained_name)
tokenizer = DistilBertTokenizerFast.from_pretrained(pretrained_name)
prompt = "Indonesia minangka negara gedhe."
encoded_input = tokenizer(prompt, return_tensors='pt')
output = model(**encoded_input)
```
## Disclaimer
Do consider the biases from the IMDB reviews, which may be carried over into the results of this model.
## Author
Javanese DistilBERT Small was trained and evaluated by [Wilson Wongso](https://w11wo.github.io/). All computation and development are done on Google Colaboratory using their free GPU access. |
w11wo/javanese-distilbert-small | 2021-04-13T08:45:01.000Z | [
"pytorch",
"tf",
"distilbert",
"masked-lm",
"jv",
"dataset:wikipedia",
"arxiv:1910.01108",
"transformers",
"javanese-distilbert-small",
"license:mit",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
] | w11wo | 11 | transformers | ---
language: jv
tags:
- javanese-distilbert-small
license: mit
datasets:
- wikipedia
widget:
- text: "Joko [MASK] wis kelas siji SMA."
---
## Javanese DistilBERT Small
Javanese DistilBERT Small is a masked language model based on the [DistilBERT model](https://arxiv.org/abs/1910.01108). It was trained on the latest (late December 2020) Javanese Wikipedia articles.
The model was originally HuggingFace's pretrained [English DistilBERT model](https://huggingface.co/distilbert-base-uncased) and was later fine-tuned on the Javanese dataset. It achieved a perplexity of 23.54 on the validation dataset (20% of the articles). Many of the techniques used are based on a Hugging Face tutorial [notebook](https://github.com/huggingface/notebooks/blob/master/examples/language_modeling.ipynb) written by [Sylvain Gugger](https://github.com/sgugger), and a [fine-tuning tutorial notebook](https://github.com/piegu/fastai-projects/blob/master/finetuning-English-GPT2-any-language-Portuguese-HuggingFace-fastaiv2.ipynb) written by [Pierre Guillou](https://huggingface.co/pierreguillou).
Hugging Face's [Transformers](https://huggingface.co/transformers) library was used to train the model -- utilizing the base DistilBERT model and their `Trainer` class. PyTorch was used as the backend framework during training, but the model remains compatible with TensorFlow nonetheless.
## Model
| Model | #params | Arch. | Training/Validation data (text) |
|-----------------------------|---------|------------------|-------------------------------------|
| `javanese-distilbert-small` | 66M | DistilBERT Small | Javanese Wikipedia (319 MB of text) |
## Evaluation Results
The model was trained for 5 epochs and the following is the final result once the training ended.
| train loss | valid loss | perplexity | total time |
|------------|------------|------------|------------|
| 3.088 | 3.153 | 23.54 | 1:46:37 |
## How to Use
### As Masked Language Model
```python
from transformers import pipeline
pretrained_name = "w11wo/javanese-distilbert-small"
fill_mask = pipeline(
"fill-mask",
model=pretrained_name,
tokenizer=pretrained_name
)
fill_mask("Aku mangan sate ing [MASK] bareng konco-konco")
```
### Feature Extraction in PyTorch
```python
from transformers import DistilBertModel, DistilBertTokenizerFast
pretrained_name = "w11wo/javanese-distilbert-small"
model = DistilBertModel.from_pretrained(pretrained_name)
tokenizer = DistilBertTokenizerFast.from_pretrained(pretrained_name)
prompt = "Indonesia minangka negara gedhe."
encoded_input = tokenizer(prompt, return_tensors='pt')
output = model(**encoded_input)
```
## Disclaimer
Do remember that although the dataset originated from Wikipedia, the model may not always generate factual texts. Additionally, the biases which came from the Wikipedia articles may be carried over into the results of this model.
## Author
Javanese DistilBERT Small was trained and evaluated by [Wilson Wongso](https://w11wo.github.io/). All computation and development are done on Google Colaboratory using their free GPU access. |
w11wo/javanese-gpt2-small-imdb-classifier | 2021-05-23T13:42:30.000Z | [
"pytorch",
"tf",
"gpt2",
"text-classification",
"jv",
"dataset:w11wo/imdb-javanese",
"transformers",
"javanese-gpt2-small-imdb-classifier",
"license:mit"
] | text-classification | [
".gitattributes",
"README.md",
"config.json",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer.json",
"tokenizer_config.json",
"vocab.json"
] | w11wo | 14 | transformers | ---
language: jv
tags:
- javanese-gpt2-small-imdb-classifier
license: mit
datasets:
- w11wo/imdb-javanese
widget:
- text: "Film sing apik banget!"
---
## Javanese GPT-2 Small IMDB Classifier
Javanese GPT-2 Small IMDB Classifier is a movie review classification model based on the [GPT-2 model](https://cdn.openai.com/better-language-models/language_models_are_unsupervised_multitask_learners.pdf). It was trained on Javanese IMDB movie reviews.
The model was originally [`w11wo/javanese-gpt2-small-imdb`](https://huggingface.co/w11wo/javanese-gpt2-small-imdb), which was then fine-tuned on the [`w11wo/imdb-javanese`](https://huggingface.co/datasets/w11wo/imdb-javanese) dataset consisting of Javanese IMDB movie reviews. It achieved an accuracy of 76.70% on the validation dataset. Many of the techniques used are based on a Hugging Face tutorial [notebook](https://github.com/huggingface/notebooks/blob/master/examples/text_classification.ipynb) written by [Sylvain Gugger](https://github.com/sgugger).
Hugging Face's `Trainer` class from the [Transformers](https://huggingface.co/transformers) library was used to train the model. PyTorch was used as the backend framework during training, but the model remains compatible with TensorFlow nonetheless.
## Model
| Model | #params | Arch. | Training/Validation data (text) |
|---------------------------------------|----------|-----------------|---------------------------------|
| `javanese-gpt2-small-imdb-classifier` | 124M | GPT-2 Small | Javanese IMDB (47.5 MB of text) |
## Evaluation Results
The model was trained for 5 epochs and the following is the final result once the training ended.
| train loss | valid loss | accuracy | total time |
|------------|------------|------------|-------------|
| 0.324 | 0.574 | 0.767 | 2:0:14 |
## How to Use
### As Text Classifier
```python
from transformers import pipeline
pretrained_name = "w11wo/javanese-gpt2-small-imdb-classifier"
nlp = pipeline(
"sentiment-analysis",
model=pretrained_name,
tokenizer=pretrained_name
)
nlp("Film sing apik banget!")
```
## Disclaimer
Do consider the biases from the IMDB reviews, which may be carried over into the results of this model.
## Author
Javanese GPT-2 Small IMDB Classifier was trained and evaluated by [Wilson Wongso](https://w11wo.github.io/). All computation and development are done on Google Colaboratory using their free GPU access. |
w11wo/javanese-gpt2-small-imdb | 2021-05-23T13:43:42.000Z | [
"pytorch",
"tf",
"jax",
"gpt2",
"lm-head",
"causal-lm",
"jv",
"dataset:w11wo/imdb-javanese",
"transformers",
"javanese-gpt2-small-imdb",
"license:mit",
"text-generation"
] | text-generation | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer.json",
"tokenizer_config.json",
"vocab.json"
] | w11wo | 12 | transformers | ---
language: jv
tags:
- javanese-gpt2-small-imdb
license: mit
datasets:
- w11wo/imdb-javanese
widget:
- text: "Train to Busan yaiku film sing digawe ing Korea Selatan"
---
## Javanese GPT-2 Small IMDB
Javanese GPT-2 Small IMDB is a causal language model based on the [GPT-2 model](https://cdn.openai.com/better-language-models/language_models_are_unsupervised_multitask_learners.pdf). It was trained on Javanese IMDB movie reviews.
The model was originally the pretrained [Javanese GPT-2 Small model](https://huggingface.co/w11wo/javanese-gpt2-small) and was later fine-tuned on the Javanese IMDB movie review dataset. It achieved a perplexity of 60.54 on the validation dataset. Many of the techniques used are based on a Hugging Face tutorial [notebook](https://github.com/huggingface/notebooks/blob/master/examples/language_modeling.ipynb) written by [Sylvain Gugger](https://github.com/sgugger).
Hugging Face's `Trainer` class from the [Transformers](https://huggingface.co/transformers) library was used to train the model. PyTorch was used as the backend framework during training, but the model remains compatible with TensorFlow nonetheless.
## Model
| Model | #params | Arch. | Training/Validation data (text) |
|----------------------------|----------|-----------------|---------------------------------|
| `javanese-gpt2-small-imdb` | 124M | GPT-2 Small | Javanese IMDB (47.5 MB of text) |
## Evaluation Results
The model was trained for 5 epochs and the following is the final result once the training ended.
| train loss | valid loss | perplexity | total time |
|------------|------------|------------|------------|
| 4.135 | 4.103 | 60.54 | 6:22:40 |
## How to Use (PyTorch)
### As Causal Language Model
```python
from transformers import pipeline
pretrained_name = "w11wo/javanese-gpt2-small-imdb"
nlp = pipeline(
"text-generation",
model=pretrained_name,
tokenizer=pretrained_name
)
nlp("Jenengku Budi, saka Indonesia")
```
### Feature Extraction in PyTorch
```python
from transformers import GPT2LMHeadModel, GPT2TokenizerFast
pretrained_name = "w11wo/javanese-gpt2-small-imdb"
model = GPT2LMHeadModel.from_pretrained(pretrained_name)
tokenizer = GPT2TokenizerFast.from_pretrained(pretrained_name)
prompt = "Indonesia minangka negara gedhe."
encoded_input = tokenizer(prompt, return_tensors='pt')
output = model(**encoded_input)
```
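### Scoring Text with the Language Model
The perplexity above is a corpus-level figure; as a rough illustration, the sketch below scores a single sentence by passing it back in as labels, so the model returns the mean cross-entropy loss whose exponential is the sentence-level perplexity (the example sentence is made up):
```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

pretrained_name = "w11wo/javanese-gpt2-small-imdb"
model = GPT2LMHeadModel.from_pretrained(pretrained_name)
tokenizer = GPT2TokenizerFast.from_pretrained(pretrained_name)

text = "Film iki apik banget lan aku seneng banget."
encodings = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    # passing labels makes the model return the LM loss (mean cross-entropy)
    outputs = model(**encodings, labels=encodings["input_ids"])

print("Sentence perplexity:", torch.exp(outputs.loss).item())
```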
## Disclaimer
Do consider the biases from the IMDB reviews, which may be carried over into the results of this model.
## Author
Javanese GPT-2 Small was trained and evaluated by [Wilson Wongso](https://w11wo.github.io/). All computation and development are done on Google Colaboratory using their free GPU access. |
w11wo/javanese-gpt2-small | 2021-05-23T13:44:51.000Z | [
"pytorch",
"tf",
"jax",
"gpt2",
"lm-head",
"causal-lm",
"jv",
"dataset:wikipedia",
"transformers",
"javanese-gpt2-small",
"license:mit",
"text-generation"
] | text-generation | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.json"
] | w11wo | 29 | transformers | ---
language: jv
tags:
- javanese-gpt2-small
license: mit
datasets:
- wikipedia
widget:
- text: "Jenengku Budi, saka Indonesia"
---
## Javanese GPT-2 Small
Javanese GPT-2 Small is a language model based on the [GPT-2 model](https://cdn.openai.com/better-language-models/language_models_are_unsupervised_multitask_learners.pdf). It was trained on the latest (late December 2020) Javanese Wikipedia articles.
The model was originally HuggingFace's pretrained [English GPT-2 model](https://huggingface.co/transformers/model_doc/gpt2.html) and was later fine-tuned on the Javanese dataset. Many of the techniques used
are based on a [notebook](https://github.com/piegu/fastai-projects/blob/master/finetuning-English-GPT2-any-language-Portuguese-HuggingFace-fastaiv2.ipynb)/[blog](https://medium.com/@pierre_guillou/faster-than-training-from-scratch-fine-tuning-the-english-gpt-2-in-any-language-with-hugging-f2ec05c98787) shared by [Pierre Guillou](https://medium.com/@pierre_guillou), in which he fine-tuned the English GPT-2 model on a Portuguese dataset.
Frameworks used include HuggingFace's [Transformers](https://huggingface.co/transformers) and fast.ai's [Deep Learning library](https://docs.fast.ai/). PyTorch was used as the backend framework during training, but the model remains compatible with TensorFlow nonetheless.
## Model
| Model | #params | Arch. | Training/Validation data (text) |
|-----------------------|---------|-------------|-------------------------------------|
| `javanese-gpt2-small` | 124M | GPT-2 Small | Javanese Wikipedia (319 MB of text) |
## Evaluation Results
Before fine-tuning, the English GPT-2 model went through a validation step just to see how the model fares prior to training.
| valid loss | perplexity |
|------------|------------|
| 10.845 | 51313.62 |
The model was then trained afterwards for 5 epochs and the following are the results.
| epoch | train loss | valid loss | perplexity | total time |
|-------|------------|------------|------------|------------|
| 0 | 4.336 | 4.110 | 60.94 | 22:28 |
| 1 | 3.598 | 3.543 | 34.58 | 23:27 |
| 2 | 3.161 | 3.331 | 27.98 | 24:17 |
| 3 | 2.974 | 3.265 | 26.18 | 25:03 |
| 4 | 2.932 | 3.234 | 25.39 | 25:06 |
## How to Use (PyTorch)
### Load Model and Byte-level Tokenizer
```python
from transformers import GPT2TokenizerFast, GPT2LMHeadModel
pretrained_name = "w11wo/javanese-gpt2-small"
tokenizer = GPT2TokenizerFast.from_pretrained(pretrained_name)
tokenizer.model_max_length = 1024
model = GPT2LMHeadModel.from_pretrained(pretrained_name)
```
### Generate a Sequence
```python
# sample prompt
prompt = "Jenengku Budi, saka Indonesia"
input_ids = tokenizer.encode(prompt, return_tensors='pt')
model.eval()
# generate output using top-k sampling
sample_outputs = model.generate(input_ids,
pad_token_id=50256,
do_sample=True,
max_length=40,
min_length=40,
top_k=40,
num_return_sequences=1)
for i, sample_output in enumerate(sample_outputs):
print(tokenizer.decode(sample_output.tolist()))
```
## Disclaimer
Do remember that although the dataset originated from Wikipedia, the model may not always generate factual texts. Additionally, the biases which came from the Wikipedia articles may be carried over into the results of this model.
## Credits
Major thanks to Pierre Guillou for sharing his work, which not only enabled me to realize this project but also taught me tons of new, exciting stuff.
## Author
Javanese GPT-2 Small was trained and evaluated by [Wilson Wongso](https://w11wo.github.io/). All computation and development are done on Google Colaboratory using their free GPU access.
|
w11wo/javanese-roberta-small-imdb-classifier | 2021-05-20T23:09:25.000Z | [
"pytorch",
"tf",
"jax",
"roberta",
"text-classification",
"jv",
"dataset:w11wo/imdb-javanese",
"arxiv:1907.11692",
"transformers",
"javanese-roberta-small-imdb-classifier",
"license:mit"
] | text-classification | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer.json",
"tokenizer_config.json",
"vocab.json"
] | w11wo | 18 | transformers | ---
language: jv
tags:
- javanese-roberta-small-imdb-classifier
license: mit
datasets:
- w11wo/imdb-javanese
widget:
- text: "Aku bakal menehi rating film iki 1 bintang."
---
## Javanese RoBERTa Small IMDB Classifier
Javanese RoBERTa Small IMDB Classifier is a movie review classification model based on the [RoBERTa model](https://arxiv.org/abs/1907.11692). It was trained on Javanese IMDB movie reviews.
The model was originally [`w11wo/javanese-roberta-small-imdb`](https://huggingface.co/w11wo/javanese-roberta-small-imdb), which was then fine-tuned on the [`w11wo/imdb-javanese`](https://huggingface.co/datasets/w11wo/imdb-javanese) dataset consisting of Javanese IMDB movie reviews. It achieved an accuracy of 77.70% on the validation dataset. Many of the techniques used are based on a Hugging Face tutorial [notebook](https://github.com/huggingface/notebooks/blob/master/examples/text_classification.ipynb) written by [Sylvain Gugger](https://github.com/sgugger).
Hugging Face's `Trainer` class from the [Transformers](https://huggingface.co/transformers) library was used to train the model. PyTorch was used as the backend framework during training, but the model remains compatible with TensorFlow nonetheless.
## Model
| Model | #params | Arch. | Training/Validation data (text) |
|------------------------------------------|---------|------------------|---------------------------------|
| `javanese-roberta-small-imdb-classifier` | 124M | RoBERTa Small | Javanese IMDB (47.5 MB of text) |
## Evaluation Results
The model was trained for 5 epochs and the following is the final result once the training ended.
| train loss | valid loss | accuracy | total time |
|------------|------------|------------|-------------|
| 0.281 | 0.593 | 0.777 | 1:48:31 |
## How to Use
### As Text Classifier
```python
from transformers import pipeline
pretrained_name = "w11wo/javanese-roberta-small-imdb-classifier"
nlp = pipeline(
"sentiment-analysis",
model=pretrained_name,
tokenizer=pretrained_name
)
nlp("Film sing apik banget!")
```
## Disclaimer
Do consider the biases from the IMDB reviews, which may be carried over into the results of this model.
## Author
Javanese RoBERTa Small IMDB Classifier was trained and evaluated by [Wilson Wongso](https://w11wo.github.io/). All computation and development are done on Google Colaboratory using their free GPU access. |
w11wo/javanese-roberta-small-imdb | 2021-05-20T23:10:31.000Z | [
"pytorch",
"tf",
"jax",
"roberta",
"masked-lm",
"jv",
"dataset:w11wo/imdb-javanese",
"arxiv:1907.11692",
"transformers",
"javanese-roberta-small-imdb",
"license:mit",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer.json",
"tokenizer_config.json",
"vocab.json"
] | w11wo | 12 | transformers | ---
language: jv
tags:
- javanese-roberta-small-imdb
license: mit
datasets:
- w11wo/imdb-javanese
widget:
- text: "Aku bakal menehi rating film iki 5 <mask>."
---
## Javanese RoBERTa Small IMDB
Javanese RoBERTa Small IMDB is a masked language model based on the [RoBERTa model](https://arxiv.org/abs/1907.11692). It was trained on Javanese IMDB movie reviews.
The model was originally the pretrained [Javanese RoBERTa Small model](https://huggingface.co/w11wo/javanese-roberta-small) and was later fine-tuned on the Javanese IMDB movie review dataset. It achieved a perplexity of 20.83 on the validation dataset. Many of the techniques used are based on a Hugging Face tutorial [notebook](https://github.com/huggingface/notebooks/blob/master/examples/language_modeling.ipynb) written by [Sylvain Gugger](https://github.com/sgugger).
Hugging Face's `Trainer` class from the [Transformers](https://huggingface.co/transformers) library was used to train the model. PyTorch was used as the backend framework during training, but the model remains compatible with TensorFlow nonetheless.
## Model
| Model | #params | Arch. | Training/Validation data (text) |
|-------------------------------|----------|-------------------|---------------------------------|
| `javanese-roberta-small-imdb` | 124M | RoBERTa Small | Javanese IMDB (47.5 MB of text) |
## Evaluation Results
The model was trained for 5 epochs and the following is the final result once the training ended.
| train loss | valid loss | perplexity | total time |
|------------|------------|------------|-------------|
| 3.140 | 3.036 | 20.83 | 2:59:28 |
## How to Use
### As Masked Language Model
```python
from transformers import pipeline
pretrained_name = "w11wo/javanese-roberta-small-imdb"
fill_mask = pipeline(
"fill-mask",
model=pretrained_name,
tokenizer=pretrained_name
)
fill_mask("Aku mangan sate ing <mask> bareng konco-konco")
```
### Feature Extraction in PyTorch
```python
from transformers import RobertaModel, RobertaTokenizerFast
pretrained_name = "w11wo/javanese-roberta-small-imdb"
model = RobertaModel.from_pretrained(pretrained_name)
tokenizer = RobertaTokenizerFast.from_pretrained(pretrained_name)
prompt = "Indonesia minangka negara gedhe."
encoded_input = tokenizer(prompt, return_tensors='pt')
output = model(**encoded_input)
```
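### Sentence Embeddings in PyTorch
If a single sentence-level vector is needed rather than per-token hidden states, one common (if simplistic) approach is to mean-pool the last hidden state over the attention mask. This is a sketch, not part of the original training recipe:
```python
import torch
from transformers import RobertaModel, RobertaTokenizerFast

pretrained_name = "w11wo/javanese-roberta-small-imdb"
model = RobertaModel.from_pretrained(pretrained_name)
tokenizer = RobertaTokenizerFast.from_pretrained(pretrained_name)

encoded_input = tokenizer("Indonesia minangka negara gedhe.", return_tensors="pt")
with torch.no_grad():
    last_hidden = model(**encoded_input).last_hidden_state  # (1, seq_len, hidden_size)

# mean-pool over real tokens only, using the attention mask
mask = encoded_input["attention_mask"].unsqueeze(-1).float()  # (1, seq_len, 1)
sentence_embedding = (last_hidden * mask).sum(dim=1) / mask.sum(dim=1)
print(sentence_embedding.shape)  # (1, hidden_size), e.g. 768 for a base-sized model
```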
## Disclaimer
Do consider the biases from the IMDB reviews, which may be carried over into the results of this model.
## Author
Javanese RoBERTa Small was trained and evaluated by [Wilson Wongso](https://w11wo.github.io/). All computation and development are done on Google Colaboratory using their free GPU access. |
w11wo/javanese-roberta-small | 2021-05-20T23:13:35.000Z | [
"pytorch",
"tf",
"jax",
"roberta",
"masked-lm",
"jv",
"dataset:wikipedia",
"arxiv:1907.11692",
"transformers",
"javanese-roberta-small",
"license:mit",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.json"
] | w11wo | 16 | transformers | ---
language: jv
tags:
- javanese-roberta-small
license: mit
datasets:
- wikipedia
widget:
- text: "Ing mangsa rendheng awakedhewe kudu pinter njaga <mask>."
---
## Javanese RoBERTa Small
Javanese RoBERTa Small is a masked language model based on the [RoBERTa model](https://arxiv.org/abs/1907.11692). It was trained on the latest (late December 2020) Javanese Wikipedia articles.
The model was originally HuggingFace's pretrained [English RoBERTa model](https://huggingface.co/roberta-base) and was later fine-tuned on the Javanese dataset. It achieved a perplexity of 33.30 on the validation dataset (20% of the articles). Many of the techniques used are based on a Hugging Face tutorial [notebook](https://github.com/huggingface/notebooks/blob/master/examples/language_modeling.ipynb) written by [Sylvain Gugger](https://github.com/sgugger), and a [fine-tuning tutorial notebook](https://github.com/piegu/fastai-projects/blob/master/finetuning-English-GPT2-any-language-Portuguese-HuggingFace-fastaiv2.ipynb) written by [Pierre Guillou](https://huggingface.co/pierreguillou).
Hugging Face's [Transformers](https://huggingface.co/transformers) library was used to train the model -- utilizing the base RoBERTa model and their `Trainer` class. PyTorch was used as the backend framework during training, but the model remains compatible with TensorFlow nonetheless.
## Model
| Model | #params | Arch. | Training/Validation data (text) |
|--------------------------|---------|----------|-------------------------------------|
| `javanese-roberta-small` | 124M | RoBERTa | Javanese Wikipedia (319 MB of text) |
## Evaluation Results
The model was trained for 5 epochs and the following is the final result once the training ended.
| train loss | valid loss | perplexity | total time |
|------------|------------|------------|------------|
| 3.481 | 3.506 | 33.30 | 1:11:43 |
## How to Use
### As Masked Language Model
```python
from transformers import pipeline
pretrained_name = "w11wo/javanese-roberta-small"
fill_mask = pipeline(
"fill-mask",
model=pretrained_name,
tokenizer=pretrained_name
)
fill_mask("Meja lan kursine lagi <mask>.")
```
### Feature Extraction in PyTorch
```python
from transformers import RobertaModel, RobertaTokenizerFast
pretrained_name = "w11wo/javanese-roberta-small"
model = RobertaModel.from_pretrained(pretrained_name)
tokenizer = RobertaTokenizerFast.from_pretrained(pretrained_name)
prompt = "Indonesia minangka negara gedhe."
encoded_input = tokenizer(prompt, return_tensors='pt')
output = model(**encoded_input)
```
## Disclaimer
Do remember that although the dataset originated from Wikipedia, the model may not always generate factual texts. Additionally, the biases which came from the Wikipedia articles may be carried over into the results of this model.
## Author
Javanese RoBERTa Small was trained and evaluated by [Wilson Wongso](https://w11wo.github.io/). All computation and development are done on Google Colaboratory using their free GPU access. |
w11wo/malaysian-distilbert-small | 2021-04-13T08:52:57.000Z | [
"pytorch",
"tf",
"distilbert",
"masked-lm",
"ms",
"dataset:oscar",
"arxiv:1910.01108",
"transformers",
"malaysian-distilbert-small",
"license:mit",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
] | w11wo | 14 | transformers | ---
language: ms
tags:
- malaysian-distilbert-small
license: mit
datasets:
- oscar
widget:
- text: "Hari ini adalah hari yang [MASK]!"
---
## Malaysian DistilBERT Small
Malaysian DistilBERT Small is a masked language model based on the [DistilBERT model](https://arxiv.org/abs/1910.01108). It was trained on the [OSCAR](https://huggingface.co/datasets/oscar) dataset, specifically the `unshuffled_original_ms` subset.
The model was originally HuggingFace's pretrained [English DistilBERT model](https://huggingface.co/distilbert-base-uncased) and was later fine-tuned on the Malaysian dataset. It achieved a perplexity of 10.33 on the validation dataset (20% of the dataset). Many of the techniques used are based on a Hugging Face tutorial [notebook](https://github.com/huggingface/notebooks/blob/master/examples/language_modeling.ipynb) written by [Sylvain Gugger](https://github.com/sgugger), and a [fine-tuning tutorial notebook](https://github.com/piegu/fastai-projects/blob/master/finetuning-English-GPT2-any-language-Portuguese-HuggingFace-fastaiv2.ipynb) written by [Pierre Guillou](https://huggingface.co/pierreguillou).
Hugging Face's [Transformers](https://huggingface.co/transformers) library was used to train the model -- utilizing the base DistilBERT model and their `Trainer` class. PyTorch was used as the backend framework during training, but the model remains compatible with TensorFlow nonetheless.
## Model
| Model | #params | Arch. | Training/Validation data (text) |
|------------------------------|---------|------------------|----------------------------------------|
| `malaysian-distilbert-small` | 66M | DistilBERT Small | OSCAR `unshuffled_original_ms` Dataset |
## Evaluation Results
The model was trained for 1 epoch and the following is the final result once the training ended.
| train loss | valid loss | perplexity | total time |
|------------|------------|------------|------------|
| 2.476 | 2.336 | 10.33 | 0:40:05 |
## How to Use
### As Masked Language Model
```python
from transformers import pipeline
pretrained_name = "w11wo/malaysian-distilbert-small"
fill_mask = pipeline(
"fill-mask",
model=pretrained_name,
tokenizer=pretrained_name
)
fill_mask("Henry adalah seorang lelaki yang tinggal di [MASK].")
```
### Feature Extraction in PyTorch
```python
from transformers import DistilBertModel, DistilBertTokenizerFast
pretrained_name = "w11wo/malaysian-distilbert-small"
model = DistilBertModel.from_pretrained(pretrained_name)
tokenizer = DistilBertTokenizerFast.from_pretrained(pretrained_name)
prompt = "Bolehkah anda [MASK] Bahasa Melayu?"
encoded_input = tokenizer(prompt, return_tensors='pt')
output = model(**encoded_input)
```
## Disclaimer
Do consider the biases from the OSCAR dataset, which may be carried over into the results of this model.
## Author
Malaysian DistilBERT Small was trained and evaluated by [Wilson Wongso](https://w11wo.github.io/). All computation and development are done on Google Colaboratory using their free GPU access. |
w414313506/disease-ner | 2021-05-17T05:40:53.000Z | [] | [
".gitattributes"
] | w414313506 | 0 | |||
walexi4great/biasjs | 2021-04-26T00:47:59.000Z | [] | [
".gitattributes"
] | walexi4great | 0 | |||
wangj2/domaingen | 2021-05-23T13:46:02.000Z | [
"pytorch",
"jax",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
] | text-generation | [
".gitattributes",
"added_tokens.json",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
] | wangj2 | 10 | transformers | |
wangyuwei/bert_cn_finetuning | 2021-05-20T09:05:36.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | [
".gitattributes",
"config.json",
"eval_results.txt",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
] | wangyuwei | 17 | transformers | |
wangyuwei/bert_finetuning_test | 2021-05-20T09:06:29.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | [
".gitattributes",
"config.json",
"eval_results.txt",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
] | wangyuwei | 27 | transformers | |
webek18735/rsdfvdgffffddddd | 2021-04-16T15:46:01.000Z | [] | [
".gitattributes"
] | webek18735 | 0 | |||
wego/Emotion | 2021-03-24T07:19:41.000Z | [] | [
".gitattributes"
] | wego | 0 | |||
weizhen/prophetnet-large-uncased-squad-qg | 2020-10-20T18:25:13.000Z | [
"pytorch",
"prophetnet",
"seq2seq",
"transformers",
"text2text-generation"
] | text2text-generation | [
".gitattributes",
"config.json",
"prophetnet.tokenizer",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json"
] | weizhen | 33 | transformers | |
wietsedv/bert-base-dutch-cased-finetuned-conll2002-ner | 2021-05-20T09:07:16.000Z | [
"pytorch",
"jax",
"bert",
"token-classification",
"transformers"
] | token-classification | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
] | wietsedv | 153 | transformers | |
wietsedv/bert-base-dutch-cased-finetuned-lassysmall-pos | 2021-05-20T09:08:08.000Z | [
"pytorch",
"jax",
"bert",
"token-classification",
"transformers"
] | token-classification | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
] | wietsedv | 17 | transformers | |
wietsedv/bert-base-dutch-cased-finetuned-sentiment | 2021-05-20T09:09:04.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
] | wietsedv | 4,148 | transformers | |
wietsedv/bert-base-dutch-cased-finetuned-sonar-ner | 2021-05-20T09:09:52.000Z | [
"pytorch",
"jax",
"bert",
"token-classification",
"transformers"
] | token-classification | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
] | wietsedv | 67 | transformers | |
wietsedv/bert-base-dutch-cased-finetuned-udlassy-ner | 2021-05-20T09:10:49.000Z | [
"pytorch",
"jax",
"bert",
"token-classification",
"transformers"
] | token-classification | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
] | wietsedv | 28 | transformers | |
wietsedv/bert-base-dutch-cased-finetuned-udlassy-pos | 2021-05-20T09:11:56.000Z | [
"pytorch",
"jax",
"bert",
"token-classification",
"transformers"
] | token-classification | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
] | wietsedv | 14 | transformers | |
wietsedv/bert-base-dutch-cased | 2021-05-20T09:12:57.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"masked-lm",
"transformers",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"checkpoint",
"config.json",
"flax_model.msgpack",
"model.ckpt.data-00000-of-00001",
"model.ckpt.index",
"model.ckpt.meta",
"pretraining_output_eval_results.txt",
"pytorch_model.bin",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
] | wietsedv | 30,853 | transformers | # BERTje: A Dutch BERT model
BERTje is a Dutch pre-trained BERT model developed at the University of Groningen.
⚠️ **The new home of this model is the [GroNLP](https://huggingface.co/GroNLP) organization.**
BERTje now lives at: [`GroNLP/bert-base-dutch-cased`](https://huggingface.co/GroNLP/bert-base-dutch-cased)
The model weights of the versions at `wietsedv/` and `GroNLP/` are the same, so do not worry if you use(d) `wietsedv/bert-base-dutch-cased`.
<img src="https://raw.githubusercontent.com/wietsedv/bertje/master/bertje.png" height="250">
|
wietsedv/bert-base-multilingual-cased-finetuned-conll2002-ner | 2021-05-20T09:13:44.000Z | [
"pytorch",
"jax",
"bert",
"token-classification",
"transformers"
] | token-classification | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
] | wietsedv | 285 | transformers | |
wietsedv/bert-base-multilingual-cased-finetuned-sonar-ner | 2021-05-20T09:15:08.000Z | [
"pytorch",
"jax",
"bert",
"token-classification",
"transformers"
] | token-classification | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
] | wietsedv | 34 | transformers | |
wietsedv/bert-base-multilingual-cased-finetuned-udlassy-ner | 2021-05-20T09:16:27.000Z | [
"pytorch",
"jax",
"bert",
"token-classification",
"transformers"
] | token-classification | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
] | wietsedv | 42 | transformers | |
wietsedv/wav2vec2-large-xlsr-53-dutch | 2021-03-28T18:23:29.000Z | [
"pytorch",
"wav2vec2",
"nl",
"dataset:common_voice",
"transformers",
"audio",
"automatic-speech-recognition",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0"
] | automatic-speech-recognition | [
".gitattributes",
"README.md",
"config.json",
"preprocessor_config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
] | wietsedv | 188 | transformers | ---
language: nl
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Dutch XLSR Wav2Vec2 Large 53 by Wietse de Vries
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice nl
type: common_voice
args: nl
metrics:
- name: Test WER
type: wer
value: 17.09
---
# Wav2Vec2-Large-XLSR-53-Dutch
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Dutch using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "nl", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("wietsedv/wav2vec2-large-xlsr-53-dutch")
model = Wav2Vec2ForCTC.from_pretrained("wietsedv/wav2vec2-large-xlsr-53-dutch")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Dutch test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "nl", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("wietsedv/wav2vec2-large-xlsr-53-dutch")
model = Wav2Vec2ForCTC.from_pretrained("wietsedv/wav2vec2-large-xlsr-53-dutch")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\'\“\%\‘\”]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run the model on the preprocessed speech and decode the predictions
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 17.09 %
## Training
The Common Voice `train` and `validation` datasets were used for training.
|
wietsedv/wav2vec2-large-xlsr-53-frisian | 2021-03-28T20:09:35.000Z | [
"pytorch",
"wav2vec2",
"fy-NL",
"dataset:common_voice",
"transformers",
"audio",
"automatic-speech-recognition",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0"
] | automatic-speech-recognition | [
".gitattributes",
"README.md",
"config.json",
"preprocessor_config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
] | wietsedv | 16 | transformers | ---
language: fy-NL
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Frisian XLSR Wav2Vec2 Large 53 by Wietse de Vries
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice fy-NL
type: common_voice
args: fy-NL
metrics:
- name: Test WER
type: wer
value: 16.25
---
# Wav2Vec2-Large-XLSR-53-Frisian
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Frisian using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "fy-NL", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("wietsedv/wav2vec2-large-xlsr-53-frisian")
model = Wav2Vec2ForCTC.from_pretrained("wietsedv/wav2vec2-large-xlsr-53-frisian")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Frisian test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "fy-NL", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("wietsedv/wav2vec2-large-xlsr-53-frisian")
model = Wav2Vec2ForCTC.from_pretrained("wietsedv/wav2vec2-large-xlsr-53-frisian")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\'\“\%\‘\”]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run the model on the preprocessed speech and decode the predictions
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 16.25 %
## Training
The Common Voice `train` and `validation` datasets were used for training.
|
wilderspells/WilderSpells | 2021-03-27T02:37:38.000Z | [] | [
".gitattributes"
] | wilderspells | 0 | |||
willemjan/gado_gado | 2021-05-26T11:03:16.000Z | [
"pytorch"
] | [
".gitattributes",
"added_tokens.json",
"config.json",
"eval_results.txt",
"model_config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
] | willemjan | 1 | |||
wissamantoun/araelectra-base-artydiqa | 2021-04-05T11:58:31.000Z | [
"pytorch",
"electra",
"question-answering",
"ar",
"dataset:tydiqa",
"arxiv:2012.15516",
"transformers"
] | question-answering | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"trainer_state.json",
"training_args.bin",
"vocab.txt"
] | wissamantoun | 228 | transformers | ---
language: ar
datasets:
- tydiqa
widget:
- text: "ما هو نظام الحكم في لبنان؟"
context: "لبنان أو (رسميا: الجمهورية اللبنانية)، هي دولة عربية واقعة في الشرق الأوسط في غرب القارة الآسيوية. تحدها سوريا من الشمال و الشرق، و فلسطين المحتلة - إسرائيل من الجنوب، وتطل من جهة الغرب على البحر الأبيض المتوسط. هو بلد ديمقراطي جمهوري طوائفي. معظم سكانه من العرب المسلمين و المسيحيين. وبخلاف غالبية الدول العربية هناك وجود فعال للمسيحيين في الحياة العامة والسياسية. هاجر وانتشر أبناؤه حول العالم منذ أيام الفينيقيين، وحاليا فإن عدد اللبنانيين المهاجرين يقدر بضعف عدد اللبنانيين المقيمين. واجه لبنان منذ القدم تعدد الحضارات التي عبرت فيه أو احتلت أراضيه وذلك لموقعه الوسطي بين الشمال الأوروبي والجنوب العربي والشرق الآسيوي والغرب الأفريقي، ويعد هذا الموقع المتوسط من أبرز الأسباب لتنوع الثقافات في لبنان، وفي الوقت ذاته من الأسباب المؤدية للحروب والنزاعات على مر العصور تجلت بحروب أهلية ونزاع مصيري مع إسرائيل. ويعود أقدم دليل على استيطان الإنسان في لبنان ونشوء حضارة على أرضه إلى أكثر من 7000 سنة. في القدم، سكن الفينيقيون أرض لبنان الحالية مع جزء من أرض سوريا و فلسطين، وهؤلاء قوم ساميون اتخذوا من الملاحة والتجارة مهنة لهم، وازدهرت حضارتهم طيلة 2500 سنة تقريبا (من حوالي سنة 3000 حتى سنة 539 ق.م). وقد مرت على لبنان عدة حضارات وشعوب استقرت فيه منذ عهد الفينيقين، مثل المصريين القدماء، الآشوريين، الفرس، الإغريق، الرومان، الروم البيزنطيين، العرب، الصليبيين، الأتراك العثمانيين، فالفرنسيين."
---
<img src="https://raw.githubusercontent.com/WissamAntoun/arabic-wikipedia-qa-streamlit/main/is2alni_logo.png" width="150" align="center"/>
# Arabic QA
AraELECTRA powered Arabic Wikipedia QA system with Streamlit [![Open in Streamlit](https://static.streamlit.io/badges/streamlit_badge_black_white.svg)](https://share.streamlit.io/wissamantoun/arabic-wikipedia-qa-streamlit/main)
This model is trained on the Arabic section of ArTyDiQA using the colab here [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1hik0L_Dxg6WwJFcDPP1v74motSkst4gE?usp=sharing)
# How to use:
```bash
git clone https://github.com/aub-mind/arabert
pip install pyarabic
```
```python
from arabert.preprocess import ArabertPreprocessor
from transformers import pipeline
prep = ArabertPreprocessor("aubmindlab/araelectra-base-discriminator") #or empty string it's the same
qa_pipe =pipeline("question-answering",model="wissamantoun/araelectra-base-artydiqa")
text = " ما هو نظام الحكم في لبنان؟"
context = """
لبنان أو (رسميًّا: الجُمْهُورِيَّة اللبنانيَّة)، هي دولة عربيّة واقِعَة في الشَرق الأوسط في غرب القارة الآسيويّة. تَحُدّها سوريا من الشمال والشرق، وفلسطين المحتلة - إسرائيل من الجنوب، وتطل من جهة الغرب على البحر الأبيض المتوسط. هو بلد ديمقراطي جمهوري طوائفي. مُعظم سكانه من العرب المسلمين والمسيحيين. وبخلاف غالبيّة الدول العربيّة هناك وجود فعّال للمسيحيين في الحياة العامّة والسياسيّة. هاجر وانتشر أبناؤه حول العالم منذ أيام الفينيقيين، وحاليًّا فإن عدد اللبنانيين المهاجرين يُقدَّر بضعف عدد اللبنانيين المقيمين.
واجه لبنان منذ القدم تعدد الحضارات التي عبرت فيه أو احتلّت أراضيه وذلك لموقعه الوسطي بين الشمال الأوروبي والجنوب العربي والشرق الآسيوي والغرب الأفريقي، ويعد هذا الموقع المتوسط من أبرز الأسباب لتنوع الثقافات في لبنان، وفي الوقت ذاته من الأسباب المؤدية للحروب والنزاعات على مر العصور تجلت بحروب أهلية ونزاع مصيري مع إسرائيل. ويعود أقدم دليل على استيطان الإنسان في لبنان ونشوء حضارة على أرضه إلى أكثر من 7000 سنة.
في القدم، سكن الفينيقيون أرض لبنان الحالية مع جزء من أرض سوريا وفلسطين، وهؤلاء قوم ساميون اتخذوا من الملاحة والتجارة مهنة لهم، وازدهرت حضارتهم طيلة 2500 سنة تقريبًا (من حوالي سنة 3000 حتى سنة 539 ق.م). وقد مرّت على لبنان عدّة حضارات وشعوب استقرت فيه منذ عهد الفينيقين، مثل المصريين القدماء، الآشوريين، الفرس، الإغريق، الرومان، الروم البيزنطيين، العرب، الصليبيين، الأتراك العثمانيين، فالفرنسيين.
"""
context = prep.preprocess(context)# don't forget to preprocess the question and the context to get the optimal results
result = qa_pipe(question=text,context=context)
"""
{'answer': 'ديمقراطي جمهوري طوائفي',
'end': 241,
'score': 0.4910127818584442,
'start': 219}
"""
```
# If you used this model please cite us as :
```
@misc{antoun2020araelectra,
title={AraELECTRA: Pre-Training Text Discriminators for Arabic Language Understanding},
author={Wissam Antoun and Fady Baly and Hazem Hajj},
year={2020},
eprint={2012.15516},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
wmeddie/foo | 2021-03-04T08:30:13.000Z | [] | [
".gitattributes"
] | wmeddie | 0 | |||
wonbae/test | 2021-04-15T08:59:08.000Z | [] | [
".gitattributes"
] | wonbae | 0 | |||
wongjompo1/Model | 2021-05-29T19:23:58.000Z | [] | [
".gitattributes"
] | wongjompo1 | 0 | |||
workshopso/aitest | 2021-06-11T18:01:50.000Z | [] | [
".gitattributes"
] | workshopso | 0 | |||
workshopso/test | 2021-06-11T17:22:55.000Z | [] | [
".gitattributes"
] | workshopso | 0 | |||
worsterman/DialoGPT-small-mulder | 2021-06-18T23:23:45.000Z | [
"pytorch",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"conversational",
"text-generation"
] | conversational | [
".gitattributes",
"README.md",
"config.json",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer.json",
"tokenizer_config.json",
"vocab.json"
] | worsterman | 0 | transformers | |
wowhdmovie/asasdadad | 2021-05-21T19:46:20.000Z | [] | [
".gitattributes",
"dsdd"
] | wowhdmovie | 0 | |||
wowhdmovie/freeonline | 2021-06-02T08:36:02.000Z | [] | [
".gitattributes",
"README.md",
"fddfsf",
"ssss"
] | wowhdmovie | 0 | |||
wpnbos/xlm-roberta-base-conll2002-dutch | 2021-06-02T20:15:15.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"arxiv:1911.02116",
"transformers"
] | token-classification | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"rng_state.pth",
"scheduler.pt",
"sentencepiece.bpe.model",
"special_tokens_map.json",
"tokenizer.json",
"tokenizer_config.json",
"trainer_state.json",
"training_args.bin"
] | wpnbos | 11 | transformers | # XLM-RoBERTa base CoNLL-2002 Dutch
XLM-RoBERTa base model fine-tuned on the CoNLL-2002 Dutch train set, which is a Named Entity Recognition dataset containing the following classes: PER, LOC, ORG and MISC.
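No usage snippet was included here; a minimal sketch with the token-classification pipeline (the `aggregation_strategy` argument assumes a recent transformers release -- older versions used `grouped_entities=True` instead) would look like this:
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="wpnbos/xlm-roberta-base-conll2002-dutch",
    aggregation_strategy="simple",  # group word pieces into whole entities
)

# example Dutch sentence (made up for illustration)
print(ner("Mark Rutte woont in Den Haag en werkt voor de Nederlandse regering."))
```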
Results from https://arxiv.org/pdf/1911.02116.pdf were reproduced (the original paper reports 90.39 F1; this fine-tuned version scored 90.57). |
wptoux/albert-chinese-large-qa | 2021-03-09T07:48:40.000Z | [
"pytorch",
"albert",
"question-answering",
"zh",
"dataset:webqa",
"dataset:dureader",
"transformers",
"Question Answering",
"license:apache-2.0"
] | question-answering | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
] | wptoux | 2,681 | transformers | ---
language:
- zh
tags:
- Question Answering
license: apache-2.0
datasets:
- webqa
- dureader
---
# albert-chinese-large-qa
ALBERT large QA model fine-tuned on the Baidu WebQA and Baidu DuReader datasets.
## Data source
+ baidu webqa 1.0
+ baidu dureader
## Training Method
We combined the two datasets and created a new dataset in SQuAD format, with 705,139 samples for training and 69,638 samples for validation.
We fine-tuned the model starting from the ALBERT Chinese large model.
## Hyperparams
+ learning_rate 1e-5
+ max_seq_length 512
+ max_query_length 50
+ max_answer_length 300
+ doc_stride 256
+ num_train_epochs 2
+ warmup_steps 1000
+ per_gpu_train_batch_size 8
+ gradient_accumulation_steps 3
+ n_gpu 2 (Nvidia Tesla P100)
## Usage
```
from transformers import AutoModelForQuestionAnswering, BertTokenizer
model = AutoModelForQuestionAnswering.from_pretrained('wptoux/albert-chinese-large-qa')
tokenizer = BertTokenizer.from_pretrained('wptoux/albert-chinese-large-qa')
```
***Important: use BertTokenizer***
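Extending the snippet above, a minimal extractive-QA sketch (the question and context below are made up for illustration) could look like:
```python
import torch
from transformers import AutoModelForQuestionAnswering, BertTokenizer

model = AutoModelForQuestionAnswering.from_pretrained('wptoux/albert-chinese-large-qa')
tokenizer = BertTokenizer.from_pretrained('wptoux/albert-chinese-large-qa')

question = "北京是哪个国家的首都?"
context = "北京是中华人民共和国的首都,也是全国的政治和文化中心。"

inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# take the most likely start/end positions and decode the answer span
start = torch.argmax(outputs.start_logits)
end = torch.argmax(outputs.end_logits) + 1
print(tokenizer.decode(inputs["input_ids"][0][start:end]))
```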
## More Info
Please visit https://github.com/wptoux/albert-chinese-large-webqa for details.
|
wrice/wav2vec2-base-960h | 2021-05-29T22:44:18.000Z | [
"tf",
"wav2vec2",
"transformers"
] | [
".gitattributes",
"README.md",
"config.json",
"tf_model.h5"
] | wrice | 831 | transformers | ||
wws/demo | 2020-12-15T08:41:35.000Z | [] | [
".gitattributes"
] | wws | 0 | |||
xacaxulu/learningHF | 2021-02-03T15:21:09.000Z | [] | [
".gitattributes"
] | xacaxulu | 0 | |||
xcjthu/Lawformer | 2021-05-05T11:57:20.000Z | [
"pytorch",
"longformer",
"masked-lm",
"transformers",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
] | xcjthu | 28 | transformers | ## Lawformer
### Introduction
This repository provides the source code and checkpoints of the paper "Lawformer: A Pre-trained Language Model for Chinese Legal Long Documents". You can download the checkpoint from the [huggingface model hub](https://huggingface.co/xcjthu/Lawformer) or from [here](https://data.thunlp.org/legal/Lawformer.zip).
### Easy Start
We have uploaded our model to the huggingface model hub. Make sure you have installed transformers.
```python
>>> from transformers import AutoModel, AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("hfl/chinese-roberta-wwm-ext")
>>> model = AutoModel.from_pretrained("xcjthu/Lawformer")
>>> inputs = tokenizer("任某提起诉讼,请求判令解除婚姻关系并对夫妻共同财产进行分割。", return_tensors="pt")
>>> outputs = model(**inputs)
```
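Because Lawformer targets long legal documents, inputs well beyond the usual 512-token limit can be encoded. The sketch below truncates to 4096 tokens, which is a common Longformer-style limit; check `model.config.max_position_embeddings` for the actual value:
```python
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("hfl/chinese-roberta-wwm-ext")
model = AutoModel.from_pretrained("xcjthu/Lawformer")

# stand-in for a long legal judgment
long_document = "任某提起诉讼,请求判令解除婚姻关系并对夫妻共同财产进行分割。" * 100
inputs = tokenizer(
    long_document,
    truncation=True,
    max_length=4096,  # assumed long-context limit; verify against the model config
    return_tensors="pt",
)
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)
```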
### Cite
If you use the pre-trained models, please cite this paper:
```
@article{xiao2021lawformer,
title={Lawformer: A Pre-trained Language Model for Chinese Legal Long Documents},
author={Xiao, Chaojun and Hu, Xueyu and Liu, Zhiyuan and Tu, Cunchao and Sun, Maosong},
year={2021}
}
```
|
xgx/NeuBA | 2021-06-10T04:30:57.000Z | [] | [
".gitattributes"
] | xgx | 0 | |||
xhlu/electra-medal | 2020-11-16T18:44:46.000Z | [
"pytorch",
"tf",
"electra",
"transformers"
] | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
] | xhlu | 69 | transformers | ||
xhlu/electra-model | 2020-11-16T18:40:41.000Z | [] | [
".gitattributes"
] | xhlu | 0 | |||
xianghua/K-Roberta | 2021-05-03T03:55:41.000Z | [] | [
".gitattributes",
"KRoberta",
"README.md"
] | xianghua | 0 | |||
xianghua/Kmodel | 2021-05-03T03:56:56.000Z | [] | [
".gitattributes",
"README.md"
] | xianghua | 0 | |||
xianghua/berthua | 2021-05-04T04:44:51.000Z | [] | [
".gitattributes",
"README.md"
] | xianghua | 0 | |||
xiaoguoer/bert-base-cased | 2021-03-27T03:07:17.000Z | [] | [
".gitattributes"
] | xiaoguoer | 0 | |||
xinzhi/biobert_large | 2020-09-23T13:43:17.000Z | [
"transformers"
] | [
".gitattributes",
"biobert_large.pytorch.bin",
"config.json",
"vocab.txt"
] | xinzhi | 27 | transformers | ||
xinzhi/biobert_v1.0_pmc | 2020-09-23T14:08:22.000Z | [
"transformers"
] | [
"._bert_config.json",
"._vocab.txt",
".gitattributes",
"biobert_v1.0_pmc.pytorch.bin",
"config.json",
"vocab.txt"
] | xinzhi | 14 | transformers | ||
xinzhi/biobert_v1.0_pubmed | 2020-09-23T14:12:28.000Z | [
"transformers"
] | [
"._bert_config.json",
"._vocab.txt",
".gitattributes",
"biobert_v1.0_pubmed.pytorch.bin",
"config.json",
"vocab.txt"
] | xinzhi | 9 | transformers | ||
xinzhi/biobert_v1.0_pubmed_pmc | 2020-09-23T14:20:44.000Z | [
"transformers"
] | [
"._bert_config.json",
"._vocab.txt",
".gitattributes",
"biobert_v1.0_pubmed_pmc.pytorch.bin",
"config.json",
"vocab.txt"
] | xinzhi | 13 | transformers | ||
xinzhi/biobert_v1.1_pubmed | 2020-09-23T13:20:15.000Z | [
"transformers"
] | [
".gitattributes",
"biobert_v1.1_pubmed.pytorch.bin",
"config.json",
"vocab.txt"
] | xinzhi | 15 | transformers | ||
xlch1127/My_Bert | 2020-12-30T06:17:14.000Z | [] | [
".gitattributes"
] | xlch1127 | 0 | |||
xsway/wav2vec2-large-xlsr-georgian | 2021-03-29T21:07:53.000Z | [
"pytorch",
"wav2vec2",
"ka",
"dataset:common_voice",
"transformers",
"audio",
"automatic-speech-recognition",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0"
] | automatic-speech-recognition | [
".gitattributes",
"README.md",
"config.json",
"preprocessor_config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
] | xsway | 44 | transformers | ---
language: ka
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec finetuned for Georgian
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice ka
type: common_voice
args: ka
metrics:
- name: Test WER
type: wer
value: 45.28
---
# Wav2Vec2-Large-XLSR-53-Georgian
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Georgian using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import librosa
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "ka", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("xsway/wav2vec2-large-xlsr-georgian")
model = Wav2Vec2ForCTC.from_pretrained("xsway/wav2vec2-large-xlsr-georgian")
resampler = lambda sampling_rate, y: librosa.resample(y.numpy().squeeze(), sampling_rate, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(sampling_rate, speech_array).squeeze()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Georgian test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
import librosa
test_dataset = load_dataset("common_voice", "ka", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("xsway/wav2vec2-large-xlsr-georgian")
model = Wav2Vec2ForCTC.from_pretrained("xsway/wav2vec2-large-xlsr-georgian")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“]'
resampler = lambda sampling_rate, y: librosa.resample(y.numpy().squeeze(), sampling_rate, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(sampling_rate, speech_array).squeeze()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 45.28 %
## Training
The Common Voice `train` and `validation` datasets were used for training.
The script used for training can be found [here](...)
|
xz/test1 | 2021-04-16T09:10:34.000Z | [] | [
".gitattributes"
] | xz | 0 | |||
yacov/yacov-athena-DistilBertSC | 2021-03-12T19:40:04.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
] | yacov | 6 | transformers | hello
|
yair/HeadlineGeneration-sagemaker | 2021-05-17T05:39:29.000Z | [
"pytorch",
"bart",
"seq2seq",
"transformers",
"text2text-generation"
] | text2text-generation | [
".gitattributes",
"README.md",
"all_results.json",
"config.json",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"test_generations.txt",
"test_results.json",
"tokenizer_config.json",
"train_results.json",
"trainer_state.json",
"training_args.bin",
"vocab.json"
] | yair | 39 | transformers |
---
language: en
tags:
- sagemaker
- bart
- summarization
license: apache-2.0
---
|
yair/HeadlineGeneration-sagemaker2 | 2021-05-18T08:45:49.000Z | [
"pytorch",
"bart",
"seq2seq",
"transformers",
"text2text-generation"
] | text2text-generation | [
".gitattributes",
"README.md",
"all_results.json",
"config.json",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"test_generations.txt",
"test_results.json",
"tokenizer_config.json",
"train_results.json",
"trainer_state.json",
"training_args.bin",
"vocab.json"
] | yair | 72 | transformers |
---
language: en
tags:
- sagemaker
- bart
- summarization
license: apache-2.0
---
- Training: 3000 examples
|
yair/HeadlineGeneration-sagemaker3 | 2021-05-19T00:35:41.000Z | [] | [
".gitattributes"
] | yair | 0 | |||
yair/HeadlineGeneration | 2021-05-05T07:26:40.000Z | [
"pytorch",
"bart",
"seq2seq",
"transformers",
"text2text-generation"
] | text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"merges.txt",
"model_args.json",
"optimizer.pt",
"pytorch_model.bin",
"scheduler.pt",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
] | yair | 110 | transformers | hello
|
yair/SummaryGeneration-sagemaker3 | 2021-05-19T01:16:23.000Z | [
"pytorch",
"bart",
"seq2seq",
"transformers",
"text2text-generation"
] | text2text-generation | [
".gitattributes",
"README.md",
"all_results.json",
"config.json",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"test_generations.txt",
"test_results.json",
"tokenizer_config.json",
"train_results.json",
"trainer_state.json",
"training_args.bin",
"vocab.json"
] | yair | 67 | transformers |
---
language: en
tags:
- sagemaker
- bart
- summarization
license: apache-2.0
---
- Training: 3000 examples
|
yair/SummaryGeneration-sagemaker4 | 2021-05-19T23:50:41.000Z | [] | [
".gitattributes"
] | yair | 0 | |||
yanguojun123/bert-base-multilingual-uncased | 2021-03-11T12:50:55.000Z | [] | [
".gitattributes"
] | yanguojun123 | 0 | |||
yannis-papanikolaou/t5-code-generation | 2021-01-19T14:46:48.000Z | [
"arxiv:2101.07138"
] | [
".gitattributes",
"README.md",
"t5-large-CoNaLa/config.json",
"t5-large-CoNaLa/pytorch_model.bin",
"t5-small-CoNaLa/config.json",
"t5-small-CoNaLa/pytorch_model.bin"
] | yannis-papanikolaou | 0 | # T5 for Semantic Parsing
## Model description
T5 (small and large) finetuned on CoNaLa for semantic parsing (Natural Language descriptions to Python code)
Paper: https://arxiv.org/pdf/2101.07138.pdf
Code, data and how to use: https://github.com/ypapanik/t5-for-code-generation
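A minimal generation sketch is shown below. It assumes the `config.json` and `pytorch_model.bin` from one of the subfolders (e.g. `t5-small-CoNaLa/`) have been downloaded into a local directory, that the checkpoint loads with the standard T5 classes, and that the stock `t5-small` tokenizer is used (this repo ships no tokenizer files). The prompt format is illustrative only; see the linked repo for the exact setup.
```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

# Assumption: the stock t5-small tokenizer; the repo itself ships no tokenizer files.
tokenizer = T5Tokenizer.from_pretrained("t5-small")
# Assumption: config.json + pytorch_model.bin copied into a local folder named t5-small-CoNaLa.
model = T5ForConditionalGeneration.from_pretrained("t5-small-CoNaLa")

# Natural-language description -> Python code
description = "sort a list of dicts by the key 'price' in descending order"
inputs = tokenizer(description, return_tensors="pt")
output_ids = model.generate(**inputs, max_length=64, num_beams=5)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```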
### Cite
```
@misc{papanikolaou2021teach,
title={Teach me how to Label: Labeling Functions from Natural Language with Text-to-text Transformers},
author={Yannis Papanikolaou},
year={2021},
eprint={2101.07138},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
||
yanyk13/models | 2021-03-11T06:05:23.000Z | [] | [
".gitattributes"
] | yanyk13 | 0 | |||
yaoyinnan/bert-base-chinese-covid19 | 2021-05-20T09:18:12.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"optimizer.pt",
"pytorch_model.bin",
"scheduler.pt",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
] | yaoyinnan | 7,530 | transformers | |
yaoyinnan/roberta-fakeddit | 2021-05-20T23:15:25.000Z | [
"pytorch",
"jax",
"roberta",
"text-classification",
"transformers"
] | text-classification | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"merges.txt",
"optimizer.pt",
"pytorch_model.bin",
"scheduler.pt",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
] | yaoyinnan | 34 | transformers | |
yasirabd/wave2vec2-large-xlsr-indonesian | 2021-03-27T09:07:55.000Z | [] | [
".gitattributes"
] | yasirabd | 0 | |||
yasser-lin/NLP | 2020-11-23T15:21:58.000Z | [] | [
".gitattributes"
] | yasser-lin | 0 | |||
yazdipour/QALD | 2021-01-19T18:56:39.000Z | [] | [
".gitattributes"
] | yazdipour | 0 | |||
ycchen/macbert_large_drcd | 2021-05-19T15:44:11.000Z | [] | [
".gitattributes"
] | ycchen | 0 | |||
ydshieh/wav2vec2-large-xlsr-53-French | 2021-04-09T17:19:33.000Z | [
"pytorch",
"wav2vec2",
"transformers"
] | [
".gitattributes",
"README.md",
"config.json",
"optimizer.pt",
"pred.json",
"preprocessor_config.json",
"pytorch_model.bin",
"scheduler.pt",
"trainer_state.json",
"training_args.bin"
] | ydshieh | 7 | transformers | hello
|
|
ydshieh/wav2vec2-large-xlsr-53-chinese-zh-cn-gpt | 2021-04-01T14:09:29.000Z | [
"pytorch",
"wav2vec2",
"zh",
"dataset:common_voice",
"transformers",
"audio",
"automatic-speech-recognition",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0"
] | automatic-speech-recognition | [
".gitattributes",
"README.md",
"config.json",
"optimizer.pt",
"pred.json",
"preprocessor_config.json",
"pytorch_model.bin",
"scheduler.pt",
"special_tokens_map.json",
"tokenizer_config.json",
"trainer_state.json",
"training_args.bin",
"vocab.json"
] | ydshieh | 1,261 | transformers | ---
language: zh
datasets:
- common_voice
metrics:
- cer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Large 53 - Chinese (zh-CN), by Yih-Dar SHIEH
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice zh-CN
type: common_voice
args: zh-CN
metrics:
- name: Test CER
type: cer
value: 20.90
---
# Wav2Vec2-Large-XLSR-53-Chinese-zh-cn-gpt
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Chinese (zh-CN) using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset, together with the [Common Voice](https://huggingface.co/datasets/common_voice) Chinese (zh-TW) dataset (with its label text converted to simplified Chinese).
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "zh-CN", split="test")
processor = Wav2Vec2Processor.from_pretrained("ydshieh/wav2vec2-large-xlsr-53-chinese-zh-cn-gpt")
model = Wav2Vec2ForCTC.from_pretrained("ydshieh/wav2vec2-large-xlsr-53-chinese-zh-cn-gpt")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset[:2]["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset[:2]["sentence"])
```
## Evaluation
The model can be evaluated as follows on the zh-CN test data of Common Voice.
The original CER calculation follows https://huggingface.co/ctl/wav2vec2-large-xlsr-cantonese.
```python
#!pip install datasets==1.4.1
#!pip install transformers==4.4.0
#!pip install torchaudio
#!pip install jiwer
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
import jiwer
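# Character error rate (CER): flatten each sequence into characters, then let jiwer
# count hits (H), substitutions (S), deletions (D) and insertions (I); processing the
# test set in chunks keeps memory bounded. CER = (S + D + I) / (H + S + D).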
def chunked_cer(targets, predictions, chunk_size=None):
_predictions = [char for seq in predictions for char in list(seq)]
_targets = [char for seq in targets for char in list(seq)]
if chunk_size is None: return jiwer.wer(_targets, _predictions)
start = 0
end = chunk_size
H, S, D, I = 0, 0, 0, 0
while start < len(targets):
_predictions = [char for seq in predictions[start:end] for char in list(seq)]
_targets = [char for seq in targets[start:end] for char in list(seq)]
chunk_metrics = jiwer.compute_measures(_targets, _predictions)
H = H + chunk_metrics["hits"]
S = S + chunk_metrics["substitutions"]
D = D + chunk_metrics["deletions"]
I = I + chunk_metrics["insertions"]
start += chunk_size
end += chunk_size
return float(S + D + I) / float(H + S + D)
test_dataset = load_dataset("common_voice", "zh-CN", split="test")
processor = Wav2Vec2Processor.from_pretrained("ydshieh/wav2vec2-large-xlsr-53-chinese-zh-cn-gpt")
model = Wav2Vec2ForCTC.from_pretrained("ydshieh/wav2vec2-large-xlsr-53-chinese-zh-cn-gpt")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:"\“\%\‘\”\�\.\⋯\!\-\:\–\。\》\,\)\,\?\;\~\~\…\︰\,\(\」\‧\《\﹔\、\—\/\,\「\﹖\·\×\̃\̌\ε\λ\μ\и\т\─\□\〈\〉\『\』\ア\オ\カ\チ\ド\ベ\ャ\ヤ\ン\・\丶abfginpt' + "\']"
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'") + " "
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("CER: {:2f}".format(100 * chunked_cer(predictions=result["pred_strings"], targets=result["sentence"], chunk_size=1000)))
```
**Test Result**: 20.902244 %
## Training
The Common Voice zh-CN `train` and `validation` datasets were used for training, together with the Common Voice zh-TW `train`, `validation` and `test` datasets.
The script used for training can be found [to be uploaded later](...) |
yechen/bert-base-chinese-jinyong | 2021-05-20T09:20:01.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"masked-lm",
"zh",
"transformers",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"tf_model.h5",
"vocab.txt"
] | yechen | 38 | transformers | ---
language: zh
---
|