| Column | Type | Lengths / values |
|---|---|---|
| modelId | string | lengths 4–112 |
| lastModified | string | lengths 24–24 |
| tags | sequence | |
| pipeline_tag | stringclasses | 21 values |
| files | sequence | |
| publishedBy | string | lengths 2–37 |
| downloads_last_month | int32 | 0–9.44M |
| library | stringclasses | 15 values |
| modelCard | large_string | lengths 0–100k |
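The rows that follow use this schema, one field per line per model (modelCard content is flattened onto a single line). As a purely illustrative sketch — assuming the dump has been exported to a hypothetical `models.jsonl` file with the columns above — such a metadata table could be filtered like this:

```python
import pandas as pd

# Hypothetical export of the rows below; the file name and format are assumptions.
df = pd.read_json("models.jsonl", lines=True)

# Keep only fill-mask models and sort by last month's downloads.
fill_mask = (
    df[df["pipeline_tag"] == "fill-mask"]
    .sort_values("downloads_last_month", ascending=False)
)
print(fill_mask[["modelId", "publishedBy", "downloads_last_month", "library"]].head())
```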
AlexaRyck/KEITH
2021-01-21T15:42:09.000Z
[]
[ ".gitattributes" ]
AlexaRyck
0
AlexeyIgnatov/albert-xlarge-v2-squad-v2
2021-03-26T11:37:40.000Z
[]
[ ".gitattributes" ]
AlexeyIgnatov
1
Alfia/anekdotes
2021-02-28T21:02:56.000Z
[]
[ ".gitattributes" ]
Alfia
0
Amir99/toxic
2021-04-09T10:47:58.000Z
[]
[ ".gitattributes" ]
Amir99
0
AmirServi/MyModel
2021-03-24T12:57:36.000Z
[]
[ ".gitattributes", "README.md" ]
AmirServi
0
Amro-Kamal/gpt
2020-12-19T13:24:23.000Z
[]
[ ".gitattributes" ]
Amro-Kamal
0
Amrrs/wav2vec2-large-xlsr-53-tamil
2021-03-22T07:04:07.000Z
[ "pytorch", "wav2vec2", "ta", "dataset:common_voice", "transformers", "audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week", "license:apache-2.0" ]
automatic-speech-recognition
[ ".gitattributes", "README.md", "config.json", "preprocessor_config.json", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "trainer_state.json", "vocab.json" ]
Amrrs
18
transformers
--- language: ta datasets: - common_voice tags: - audio - automatic-speech-recognition - speech - xlsr-fine-tuning-week license: apache-2.0 model-index: - name: XLSR Wav2Vec2 Tamil by Amrrs results: - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice ta type: common_voice args: ta metrics: - name: Test WER type: wer value: 82.94 --- # Wav2Vec2-Large-XLSR-53-Tamil Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Tamil using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset. When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ```python import torch import torchaudio from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor test_dataset = load_dataset("common_voice", "ta", split="test[:2%]") processor = Wav2Vec2Processor.from_pretrained("Amrrs/wav2vec2-large-xlsr-53-tamil") model = Wav2Vec2ForCTC.from_pretrained("Amrrs/wav2vec2-large-xlsr-53-tamil") resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the audio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) print("Prediction:", processor.batch_decode(predicted_ids)) print("Reference:", test_dataset["sentence"][:2]) ``` ## Evaluation The model can be evaluated as follows on the Tamil test data of Common Voice. ```python import torch import torchaudio from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import re test_dataset = load_dataset("common_voice", "ta", split="test") wer = load_metric("wer") processor = Wav2Vec2Processor.from_pretrained("Amrrs/wav2vec2-large-xlsr-53-tamil") model = Wav2Vec2ForCTC.from_pretrained("Amrrs/wav2vec2-large-xlsr-53-tamil") model.to("cuda") chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“]' resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the audio files as arrays def speech_file_to_array_fn(batch): batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) # Preprocessing the datasets. 
# We need to read the audio files as arrays def evaluate(batch): inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1) batch["pred_strings"] = processor.batch_decode(pred_ids) return batch result = test_dataset.map(evaluate, batched=True, batch_size=8) print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"]))) ``` **Test Result**: 82.94 % ## Training The Common Voice `train` and `validation` splits were used for training. The script used for training can be found [here](https://colab.research.google.com/drive/1-Klkgr4f-C9SanHfVC5RhP0ELUH6TYlN?usp=sharing).
AnnettJaeger/AnneJae
2021-01-19T17:24:27.000Z
[]
[ ".gitattributes" ]
AnnettJaeger
0
Anonymous/ReasonBERT-BERT
2021-05-23T02:33:35.000Z
[ "pytorch", "bert", "transformers" ]
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin" ]
Anonymous
13
transformers
Pre-trained to have better reasoning ability; try this if you are working with tasks like QA. For more details please see https://openreview.net/forum?id=cGB7CMFtrSx. This is based on the bert-base-uncased model and pre-trained for text input.
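The card above lists only config and weights, so here is a minimal, non-authoritative usage sketch for loading the checkpoint as a plain encoder; the choice of the `bert-base-uncased` tokenizer is an assumption based on the card's statement that the model builds on it.

```python
import torch
from transformers import AutoModel, AutoTokenizer

# The repo ships config.json + pytorch_model.bin; the tokenizer choice below is an
# assumption (the card says the model is based on bert-base-uncased).
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("Anonymous/ReasonBERT-BERT")

inputs = tokenizer("Who wrote the paper introducing ReasonBERT?", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (1, seq_len, hidden_size) contextual embeddings
```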
Anonymous/ReasonBERT-RoBERTa
2021-05-23T02:34:08.000Z
[ "pytorch", "roberta", "transformers" ]
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin" ]
Anonymous
9
transformers
Pre-trained to have better reasoning ability; try this if you are working with tasks like QA. For more details please see https://openreview.net/forum?id=cGB7CMFtrSx. This is based on the roberta-base model and pre-trained for text input.
Anonymous/ReasonBERT-TAPAS
2021-05-23T02:34:38.000Z
[ "pytorch", "tapas", "transformers" ]
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin" ]
Anonymous
10
transformers
Pre-trained to have better reasoning ability; try this if you are working with tasks like QA. For more details please see https://openreview.net/forum?id=cGB7CMFtrSx. This is based on the tapas-base (no_reset) model and pre-trained for table input.
AnonymousNLP/pretrained-model-1
2021-05-21T09:27:54.000Z
[ "pytorch", "gpt2", "transformers" ]
[ ".gitattributes", "added_tokens.json", "config.json", "merges.txt", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "vocab.json" ]
AnonymousNLP
10
transformers
AnonymousNLP/pretrained-model-2
2021-05-21T09:28:24.000Z
[ "pytorch", "gpt2", "transformers" ]
[ ".gitattributes", "added_tokens.json", "config.json", "merges.txt", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "vocab.json" ]
AnonymousNLP
9
transformers
AnonymousSubmission/pretrained-model-1
2021-02-01T09:22:13.000Z
[]
[ ".gitattributes" ]
AnonymousSubmission
0
Aries/T5_question_answering
2020-12-11T17:10:33.000Z
[ "pytorch", "t5", "seq2seq", "transformers", "text2text-generation" ]
text2text-generation
[ ".gitattributes", "added_tokens.json", "config.json", "pytorch_model.bin", "special_tokens_map.json", "spiece.model", "tokenizer_config.json" ]
Aries
12
transformers
Aries/T5_question_generation
2020-11-28T20:11:38.000Z
[ "pytorch", "t5", "seq2seq", "transformers", "text2text-generation" ]
text2text-generation
[ ".gitattributes", "added_tokens.json", "config.json", "pytorch_model.bin", "special_tokens_map.json", "spiece.model", "tokenizer_config.json" ]
Aries
73
transformers
ArseniyBolotin/bert-multi-PAD-ner
2021-05-18T17:06:50.000Z
[ "pytorch", "jax", "bert", "token-classification", "transformers" ]
token-classification
[ ".gitattributes", "config.json", "flax_model.msgpack", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "training_args.bin", "vocab.txt" ]
ArseniyBolotin
20
transformers
Ashl3y/model_name
2021-05-14T15:54:02.000Z
[]
[ ".gitattributes" ]
Ashl3y
0
Ateeb/EmotionDetector
2021-03-22T18:03:50.000Z
[ "pytorch", "funnel", "text-classification", "transformers" ]
text-classification
[ ".gitattributes", "config.json", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "vocab.txt" ]
Ateeb
32
transformers
Ateeb/FullEmotionDetector
2021-03-22T19:28:37.000Z
[ "pytorch", "funnel", "text-classification", "transformers" ]
text-classification
[ ".gitattributes", "config.json", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "vocab.txt" ]
Ateeb
22
transformers
Ateeb/QA
2021-05-03T11:41:12.000Z
[ "pytorch", "distilbert", "question-answering", "transformers" ]
question-answering
[ ".gitattributes", "__init__.py", "config.json", "main.py", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "vocab.txt", "__pycache__/preprocess.cpython-37.pyc" ]
Ateeb
21
transformers
Ateeb/SquadQA
2021-05-03T09:47:52.000Z
[]
[ ".gitattributes" ]
Ateeb
0
Ateeb/asd
2021-05-03T09:31:28.000Z
[]
[ ".gitattributes" ]
Ateeb
0
Atlasky/Turkish-Negator
2021-01-24T09:27:53.000Z
[]
[ ".gitattributes", "README.md" ]
Atlasky
0
Placeholder
Atlasky/turkish-negator-nn
2021-01-24T09:57:49.000Z
[]
[ ".gitattributes" ]
Atlasky
0
Aurora/asdawd
2021-04-06T19:15:11.000Z
[]
[ ".gitattributes", "README.md" ]
Aurora
0
https://www.geogebra.org/m/bbuczchu https://www.geogebra.org/m/xwyasqje https://www.geogebra.org/m/mx2cqkwr https://www.geogebra.org/m/tkqqqthm https://www.geogebra.org/m/asdaf9mj https://www.geogebra.org/m/ywuaj7p5 https://www.geogebra.org/m/jkfkayj3 https://www.geogebra.org/m/hptnn7ar https://www.geogebra.org/m/de9cwmrf https://www.geogebra.org/m/yjc5hdep https://www.geogebra.org/m/nm8r56w5 https://www.geogebra.org/m/j7wfcpxj
Aurora/community.afpglobal
2021-04-08T08:34:53.000Z
[]
[ ".gitattributes", "README.md" ]
Aurora
0
https://community.afpglobal.org/network/members/profile?UserKey=b0b38adc-86c7-4d30-85c6-ac7d15c5eeb0 https://community.afpglobal.org/network/members/profile?UserKey=f4ddef89-b508-4695-9d1e-3d4d1a583279 https://community.afpglobal.org/network/members/profile?UserKey=36081479-5e7b-41ba-8370-ecf72989107a https://community.afpglobal.org/network/members/profile?UserKey=e1a88332-be7f-4997-af4e-9fcb7bb366da https://community.afpglobal.org/network/members/profile?UserKey=4738b405-2017-4025-9e5f-eadbf7674840 https://community.afpglobal.org/network/members/profile?UserKey=eb96d91c-31ae-46e1-8297-a3c8551f2e6a https://u.mpi.org/network/members/profile?UserKey=9867e2d9-d22a-4dab-8bcf-3da5c2f30745 https://u.mpi.org/network/members/profile?UserKey=5af232f2-a66e-438f-a5ab-9768321f791d https://community.afpglobal.org/network/members/profile?UserKey=481305df-48ea-4c50-bca4-a82008efb427 https://u.mpi.org/network/members/profile?UserKey=039fbb91-52c6-40aa-b58d-432fb4081e32 https://www.geogebra.org/m/jkfkayj3 https://www.geogebra.org/m/hptnn7ar https://www.geogebra.org/m/de9cwmrf https://www.geogebra.org/m/yjc5hdep https://www.geogebra.org/m/nm8r56w5 https://www.geogebra.org/m/j7wfcpxj https://www.geogebra.org/m/bbuczchu https://www.geogebra.org/m/xwyasqje https://www.geogebra.org/m/mx2cqkwr https://www.geogebra.org/m/tkqqqthm https://www.geogebra.org/m/asdaf9mj https://www.geogebra.org/m/ywuaj7p5
Aviora/news2vec
2021-01-29T08:11:40.000Z
[]
[ ".gitattributes", "README.md" ]
Aviora
0
# w2v with news
Aviora/phobert-ner
2021-04-29T06:49:47.000Z
[]
[ ".gitattributes" ]
Aviora
0
Azura/data
2021-03-01T08:08:20.000Z
[]
[ ".gitattributes", "README.md" ]
Azura
0
BOON/electra-xlnet
2021-02-11T05:57:07.000Z
[]
[ ".gitattributes" ]
BOON
0
BOON/electra_qa
2021-02-11T05:45:36.000Z
[]
[ ".gitattributes" ]
BOON
0
Bakkes/BakkesModWiki
2021-04-06T17:04:42.000Z
[]
[ ".gitattributes", "README.md" ]
Bakkes
0
BaptisteDoyen/camembert-base-xlni
2021-04-08T14:11:55.000Z
[ "pytorch", "camembert", "text-classification", "fr", "dataset:xnli", "transformers", "zero-shot-classification", "xnli", "nli", "license:mit", "pipeline_tag:zero-shot-classification" ]
zero-shot-classification
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "sentencepiece.bpe.model", "special_tokens_map.json", "tokenizer_config.json" ]
BaptisteDoyen
3,725
transformers
--- language: - fr thumbnail: tags: - zero-shot-classification - xnli - nli - fr license: mit pipeline_tag: zero-shot-classification datasets: - xnli metrics: - accuracy --- # camembert-base-xnli ## Model description Camembert-base model fine-tuned on the French part of the XNLI dataset. <br> One of the few zero-shot classification models working on French 🇫🇷 ## Intended uses & limitations #### How to use Two different usages: - As a zero-shot sequence classifier: ```python from transformers import pipeline classifier = pipeline("zero-shot-classification", model="BaptisteDoyen/camembert-base-xnli") sequence = "L'équipe de France joue aujourd'hui au Parc des Princes" candidate_labels = ["sport","politique","science"] hypothesis_template = "Ce texte parle de {}." classifier(sequence, candidate_labels, hypothesis_template=hypothesis_template) # outputs : # {'sequence': "L'équipe de France joue aujourd'hui au Parc des Princes", # 'labels': ['sport', 'politique', 'science'], # 'scores': [0.8595073223114014, 0.10821866989135742, 0.0322740375995636]} ``` - As a premise/hypothesis checker: <br> The idea here is to compute a probability of the form \\( P(premise|hypothesis) \\) ```python from transformers import AutoModelForSequenceClassification, AutoTokenizer # load model and tokenizer nli_model = AutoModelForSequenceClassification.from_pretrained("BaptisteDoyen/camembert-base-xnli") tokenizer = AutoTokenizer.from_pretrained("BaptisteDoyen/camembert-base-xnli") # sequences premise = "le score pour les bleus est élevé" hypothesis = "L'équipe de France a fait un bon match" # tokenize and run through model x = tokenizer.encode(premise, hypothesis, return_tensors='pt') logits = nli_model(x)[0] # we throw away "neutral" (dim 1) and take the probability of # "entailment" (0) as the probability of the label being true entail_contradiction_logits = logits[:,::2] probs = entail_contradiction_logits.softmax(dim=1) prob_label_is_true = probs[:,0] prob_label_is_true[0].tolist() * 100 # outputs # 86.40775084495544 ``` ## Training data The training data is the French fold of the [XNLI](https://research.fb.com/publications/xnli-evaluating-cross-lingual-sentence-representations/) dataset released in 2018 by Facebook. <br> It is easily loaded using the ```datasets``` library: ```python from datasets import load_dataset dataset = load_dataset('xnli', 'fr') ``` ## Training/Fine-Tuning procedure The training procedure is fairly standard and was performed in the cloud on a single GPU. <br> Main training parameters: - ```lr = 2e-5``` with ```lr_scheduler_type = "linear"``` - ```num_train_epochs = 4``` - ```batch_size = 12``` (limited by GPU memory) - ```weight_decay = 0.01``` - ```metric_for_best_model = "eval_accuracy"``` ## Eval results We obtain the following results on the ```validation``` and ```test``` sets: | Set | Accuracy (%) | | ---------- |-------------| | validation | 81.4 | | test | 81.7 |
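The card above lists the fine-tuning hyperparameters but no training code. Below is a minimal, non-authoritative sketch of how a comparable run could be set up with the `transformers` Trainer on the French XNLI split, using the quoted parameters; the base checkpoint (`camembert-base`), the tokenization choices, and the metric wiring are assumptions, not the author's actual script.

```python
import numpy as np
from datasets import load_dataset, load_metric
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Assumption: fine-tuning starts from the public camembert-base checkpoint.
model_name = "camembert-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=3)

# French fold of XNLI, as referenced in the card.
dataset = load_dataset("xnli", "fr")
encoded = dataset.map(
    lambda ex: tokenizer(ex["premise"], ex["hypothesis"], truncation=True),
    batched=True,
)

metric = load_metric("accuracy")

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    return metric.compute(predictions=np.argmax(logits, axis=-1), references=labels)

# Hyperparameters taken from the model card above.
args = TrainingArguments(
    output_dir="camembert-base-xnli",
    learning_rate=2e-5,
    lr_scheduler_type="linear",
    num_train_epochs=4,
    per_device_train_batch_size=12,
    weight_decay=0.01,
    evaluation_strategy="epoch",
    save_strategy="epoch",
    load_best_model_at_end=True,
    metric_for_best_model="eval_accuracy",
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=encoded["train"],
    eval_dataset=encoded["validation"],
    tokenizer=tokenizer,
    compute_metrics=compute_metrics,
)
trainer.train()
```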
BeIR/query-gen-msmarco-t5-base-v1
2021-03-01T15:25:52.000Z
[ "pytorch", "t5", "seq2seq", "transformers", "text2text-generation" ]
text2text-generation
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "special_tokens_map.json", "spiece.model", "tokenizer_config.json" ]
BeIR
241
transformers
# Query Generation This model is the t5-base model from [docTTTTTquery](https://github.com/castorini/docTTTTTquery). The T5-base model was trained on the [MS MARCO Passage Dataset](https://github.com/microsoft/MSMARCO-Passage-Ranking), which consists of about 500k real search queries from Bing together with the relevant passage. The model can be used for query generation to learn semantic search models without requiring annotated training data: [Synthetic Query Generation](https://github.com/UKPLab/sentence-transformers/tree/master/examples/unsupervised_learning/query_generation). ## Usage ```python from transformers import T5Tokenizer, T5ForConditionalGeneration tokenizer = T5Tokenizer.from_pretrained('model-name') model = T5ForConditionalGeneration.from_pretrained('model-name') para = "Python is an interpreted, high-level and general-purpose programming language. Python's design philosophy emphasizes code readability with its notable use of significant whitespace. Its language constructs and object-oriented approach aim to help programmers write clear, logical code for small and large-scale projects." input_ids = tokenizer.encode(para, return_tensors='pt') outputs = model.generate( input_ids=input_ids, max_length=64, do_sample=True, top_p=0.95, num_return_sequences=3) print("Paragraph:") print(para) print("\nGenerated Queries:") for i in range(len(outputs)): query = tokenizer.decode(outputs[i], skip_special_tokens=True) print(f'{i + 1}: {query}') ```
BeIR/query-gen-msmarco-t5-large-v1
2021-03-01T15:27:56.000Z
[ "pytorch", "t5", "seq2seq", "transformers", "text2text-generation" ]
text2text-generation
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "special_tokens_map.json", "spiece.model", "tokenizer_config.json" ]
BeIR
260
transformers
# Query Generation This model is the t5-base model from [docTTTTTquery](https://github.com/castorini/docTTTTTquery). The T5-base model was trained on the [MS MARCO Passage Dataset](https://github.com/microsoft/MSMARCO-Passage-Ranking), which consists of about 500k real search queries from Bing together with the relevant passage. The model can be used for query generation to learn semantic search models without requiring annotated training data: [Synthetic Query Generation](https://github.com/UKPLab/sentence-transformers/tree/master/examples/unsupervised_learning/query_generation). ## Usage ```python from transformers import T5Tokenizer, T5ForConditionalGeneration tokenizer = T5Tokenizer.from_pretrained('model-name') model = T5ForConditionalGeneration.from_pretrained('model-name') para = "Python is an interpreted, high-level and general-purpose programming language. Python's design philosophy emphasizes code readability with its notable use of significant whitespace. Its language constructs and object-oriented approach aim to help programmers write clear, logical code for small and large-scale projects." input_ids = tokenizer.encode(para, return_tensors='pt') outputs = model.generate( input_ids=input_ids, max_length=64, do_sample=True, top_p=0.95, num_return_sequences=3) print("Paragraph:") print(para) print("\nGenerated Queries:") for i in range(len(outputs)): query = tokenizer.decode(outputs[i], skip_special_tokens=True) print(f'{i + 1}: {query}') ```
BeIR/sparta-msmarco-distilbert-base-v1
2021-04-20T14:54:42.000Z
[ "pytorch", "distilbert", "transformers" ]
[ ".gitattributes", "config.json", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "train_script.py", "vocab.txt" ]
BeIR
63
transformers
Belin/T5-Terms-and-Conditions
2021-06-10T15:22:15.000Z
[]
[ ".gitattributes" ]
Belin
0
BenDavis71/GPT-2-Finetuning-AIRaid
2021-05-21T09:29:22.000Z
[ "pytorch", "jax", "gpt2", "lm-head", "causal-lm", "transformers", "text-generation" ]
text-generation
[ ".gitattributes", "config.json", "flax_model.msgpack", "merges.txt", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "vocab.json" ]
BenDavis71
26
transformers
BenQLange/HF_bot
2021-02-12T17:40:17.000Z
[]
[ ".gitattributes" ]
BenQLange
0
BigBoy/model
2021-04-09T13:12:58.000Z
[]
[ ".gitattributes" ]
BigBoy
0
BigSalmon/BlankSlots
2021-03-27T18:50:29.000Z
[ "pytorch", "t5", "seq2seq", "transformers", "text2text-generation" ]
text2text-generation
[ ".gitattributes", "config.json", "pytorch_model.bin" ]
BigSalmon
14
transformers
BigSalmon/DaBlank
2021-03-20T03:53:42.000Z
[ "pytorch", "t5", "seq2seq", "transformers", "text2text-generation" ]
text2text-generation
[ ".gitattributes", "config.json", "pytorch_model.bin" ]
BigSalmon
8
transformers
BigSalmon/Flowberta
2021-06-12T01:20:12.000Z
[ "pytorch", "roberta", "masked-lm", "transformers", "fill-mask" ]
fill-mask
[ ".gitattributes", "config.json", "pytorch_model.bin", "training_args.bin" ]
BigSalmon
2,048
transformers
BigSalmon/GPT2HardArticleEasyArticle
2021-05-21T09:31:52.000Z
[ "pytorch", "jax", "tensorboard", "gpt2", "lm-head", "causal-lm", "transformers", "text-generation" ]
text-generation
[ ".gitattributes", "config.json", "events.out.tfevents.1619624233.d987fc993321.71.0", "flax_model.msgpack", "pytorch_model.bin", "training_args.bin", "1619624233.34817/events.out.tfevents.1619624233.d987fc993321.71.1" ]
BigSalmon
14
transformers
BigSalmon/Neo
2021-04-07T15:05:25.000Z
[ "pytorch", "gpt_neo", "causal-lm", "transformers", "text-generation" ]
text-generation
[ ".gitattributes", "config.json", "pytorch_model.bin", "training_args.bin" ]
BigSalmon
20
transformers
BigSalmon/Robertsy
2021-06-10T23:23:33.000Z
[ "pytorch", "roberta", "masked-lm", "transformers", "fill-mask" ]
fill-mask
[ ".gitattributes", "config.json", "pytorch_model.bin", "training_args.bin" ]
BigSalmon
15
transformers
BigSalmon/Rowerta
2021-06-11T01:07:05.000Z
[ "pytorch", "roberta", "masked-lm", "transformers", "fill-mask" ]
fill-mask
[ ".gitattributes", "config.json", "pytorch_model.bin", "training_args.bin" ]
BigSalmon
9
transformers
BigSalmon/T5Salmon
2021-03-12T07:18:37.000Z
[ "pytorch", "t5", "seq2seq", "transformers", "text2text-generation" ]
text2text-generation
[ ".gitattributes", "config.json", "pytorch_model.bin" ]
BigSalmon
8
transformers
BigSalmon/T5Salmon2
2021-03-15T23:17:03.000Z
[ "pytorch", "t5", "seq2seq", "transformers", "text2text-generation" ]
text2text-generation
[ ".gitattributes", "config.json", "pytorch_model.bin" ]
BigSalmon
9
transformers
Binbin/test
2021-03-19T10:17:22.000Z
[]
[ ".gitattributes" ]
Binbin
0
BinksSachary/DialoGPT-small-shaxx
2021-06-03T04:48:29.000Z
[ "pytorch", "gpt2", "lm-head", "causal-lm", "transformers", "conversational", "text-generation" ]
conversational
[ ".gitattributes", "README.md", "config.json", "merges.txt", "pytorch_model.bin", "special_tokens_map.json", "tokenizer.json", "tokenizer_config.json", "vocab.json" ]
BinksSachary
40
transformers
--- tags: - conversational --- # My Awesome Model
BinksSachary/ShaxxBot
2021-06-03T04:51:56.000Z
[ "pytorch", "gpt2", "lm-head", "causal-lm", "transformers", "conversational", "text-generation" ]
conversational
[ ".gitattributes", "README.md", "config.json", "merges.txt", "pytorch_model.bin", "special_tokens_map.json", "tokenizer.json", "tokenizer_config.json", "vocab.json" ]
BinksSachary
32
transformers
--- tags: - conversational --- # My Awesome Model
BinksSachary/ShaxxBot2
2021-06-03T04:37:46.000Z
[ "pytorch", "gpt2", "lm-head", "causal-lm", "transformers", "conversational", "text-generation" ]
conversational
[ ".gitattributes", "README.md", "config.json", "merges.txt", "pytorch_model.bin", "special_tokens_map.json", "tokenizer.json", "tokenizer_config.json", "vocab.json" ]
BinksSachary
45
transformers
--- tags: - conversational --- # My Awesome Model ```python import torch from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("r3dhummingbird/DialoGPT-medium-joshua") model = AutoModelWithLMHead.from_pretrained("r3dhummingbird/DialoGPT-medium-joshua") # Let's chat for 4 lines for step in range(4): # encode the new user input, add the eos_token and return a tensor in PyTorch new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt') # print(new_user_input_ids) # append the new user input tokens to the chat history bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids # generate a response while limiting the total chat history to 1000 tokens chat_history_ids = model.generate( bot_input_ids, max_length=200, pad_token_id=tokenizer.eos_token_id, no_repeat_ngram_size=3, do_sample=True, top_k=100, top_p=0.7, temperature=0.8 ) # pretty print last output tokens from bot print("JoshuaBot: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True))) ```
Blazeolmo/Scrabunzi
2021-06-12T17:05:19.000Z
[]
[ ".gitattributes" ]
Blazeolmo
0
BonjinKim/dst_kor_bert
2021-05-19T05:35:57.000Z
[ "pytorch", "jax", "bert", "pretraining", "transformers" ]
[ ".gitattributes", "README.md", "config.json", "flax_model.msgpack", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "training_args.bin", "vocab.txt" ]
BonjinKim
23
transformers
# Korean BERT base model for DST - This is a ConversationBERT built on dsksd/bert-ko-small-minimal (base module) and further pre-trained on 5 datasets - Use the dsksd/bert-ko-small-minimal tokenizer - 5 datasets - tweeter_dialogue : xlsx - speech : trn - office_dialogue : json - KETI_dialogue : txt - WOS_dataset : json ```python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("BonjinKim/dst_kor_bert") model = AutoModel.from_pretrained("BonjinKim/dst_kor_bert") ```
Boondong/Wandee
2021-03-18T11:13:33.000Z
[]
[ ".gitattributes" ]
Boondong
0
BrianTin/MTBERT
2021-05-18T17:08:50.000Z
[ "pytorch", "jax", "bert", "masked-lm", "transformers", "fill-mask" ]
fill-mask
[ ".DS_Store", ".gitattributes", "config.json", "flax_model.msgpack", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "vocab.txt" ]
BrianTin
27
transformers
CAMeL-Lab/bert-base-camelbert-ca
2021-05-18T17:09:46.000Z
[ "pytorch", "tf", "jax", "bert", "masked-lm", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0", "fill-mask" ]
fill-mask
[ ".gitattributes", "README.md", "config.json", "flax_model.msgpack", "pytorch_model.bin", "special_tokens_map.json", "tf_model.h5", "tokenizer_config.json", "vocab.txt" ]
CAMeL-Lab
72
transformers
--- language: - ar license: apache-2.0 widget: - text: "الهدف من الحياة هو [MASK] ." --- # CAMeLBERT: A collection of pre-trained models for Arabic NLP tasks ## Model description **CAMeLBERT** is a collection of BERT models pre-trained on Arabic texts with different sizes and variants. The details are described in the paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* We release eight models with different sizes and variants as follows: ||Model|Variant|Size|#Word| |-|-|:-:|-:|-:| ||`bert-base-camelbert-mix`|CA,DA,MSA|167GB|17.3B| |✔|`bert-base-camelbert-ca`|CA|6GB|847M| ||`bert-base-camelbert-da`|DA|54GB|5.8B| ||`bert-base-camelbert-msa`|MSA|107GB|12.6B| ||`bert-base-camelbert-msa-half`|MSA|53GB|6.3B| ||`bert-base-camelbert-msa-quarter`|MSA|27GB|3.1B| ||`bert-base-camelbert-msa-eighth`|MSA|14GB|1.6B| ||`bert-base-camelbert-msa-sixteenth`|MSA|6GB|746M| This model card describes **CAMeLBERT-CA** (`bert-base-camelbert-ca`), a model pre-trained on the CA dataset. ## Intended uses You can use the released model for either masked language modeling or next sentence prediction. However, it is mostly intended to be fine-tuned on an NLP task, such as NER, POS tagging, sentiment analysis, dialect identification, and poetry classification. We release our fine-tuninig code [here](https://github.com/CAMeL-Lab/CAMeLBERT). #### How to use You can use this model directly with a pipeline for masked language modeling: ```python >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='CAMeL-Lab/bert-base-camelbert-ca') >>> unmasker("الهدف من الحياة هو [MASK] .") [{'sequence': '[CLS] الهدف من الحياة هو الحياة. [SEP]', 'score': 0.11048116534948349, 'token': 3696, 'token_str': 'الحياة'}, {'sequence': '[CLS] الهدف من الحياة هو الإسلام. [SEP]', 'score': 0.03481195122003555, 'token': 4677, 'token_str': 'الإسلام'}, {'sequence': '[CLS] الهدف من الحياة هو الموت. [SEP]', 'score': 0.03402028977870941, 'token': 4295, 'token_str': 'الموت'}, {'sequence': '[CLS] الهدف من الحياة هو العلم. [SEP]', 'score': 0.027655426412820816, 'token': 2789, 'token_str': 'العلم'}, {'sequence': '[CLS] الهدف من الحياة هو هذا. [SEP]', 'score': 0.023059621453285217, 'token': 2085, 'token_str': 'هذا'}] ``` *Note*: to download our models, you would need `transformers>=3.5.0`. Otherwise, you could download the models manually. Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained('CAMeL-Lab/bert-base-camelbert-ca') model = AutoModel.from_pretrained('CAMeL-Lab/bert-base-camelbert-ca') text = "مرحبا يا عالم." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` and in TensorFlow: ```python from transformers import AutoTokenizer, TFAutoModel tokenizer = AutoTokenizer.from_pretrained('CAMeL-Lab/bert-base-camelbert-ca') model = TFAutoModel.from_pretrained('CAMeL-Lab/bert-base-camelbert-ca') text = "مرحبا يا عالم." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` ## Training data - CA - [OpenITI (Version 2020.1.2)](https://zenodo.org/record/3891466#.YEX4-F0zbzc) ## Training procedure We use [the original implementation](https://github.com/google-research/bert) released by Google for pre-training. We follow the original English BERT model's hyperparameters for pre-training, unless otherwise specified. 
### Preprocessing - After extracting the raw text from each corpus, we apply the following pre-processing. - We first remove invalid characters and normalize white spaces using the utilities provided by [the original BERT implementation](https://github.com/google-research/bert/blob/eedf5716ce1268e56f0a50264a88cafad334ac61/tokenization.py#L286-L297). - We also remove lines without any Arabic characters. - We then remove diacritics and kashida using [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools). - Finally, we split each line into sentences with a heuristics-based sentence segmenter. - We train a WordPiece tokenizer on the entire dataset (167 GB text) with a vocabulary size of 30,000 using [HuggingFace's tokenizers](https://github.com/huggingface/tokenizers). - We do not lowercase letters nor strip accents. ### Pre-training - The model was trained on a single cloud TPU (`v3-8`) for one million steps in total. - The first 90,000 steps were trained with a batch size of 1,024 and the rest was trained with a batch size of 256. - The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. - We use whole word masking and a duplicate factor of 10. - We set max predictions per sequence to 20 for the dataset with max sequence length of 128 tokens and 80 for the dataset with max sequence length of 512 tokens. - We use a random seed of 12345, masked language model probability of 0.15, and short sequence probability of 0.1. - The optimizer used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after. ## Evaluation results - We evaluate our pre-trained language models on five NLP tasks: NER, POS tagging, sentiment analysis, dialect identification, and poetry classification. - We fine-tune and evaluate the models using 12 dataset. - We used Hugging Face's transformers to fine-tune our CAMeLBERT models. - We used transformers `v3.1.0` along with PyTorch `v1.5.1`. - The fine-tuning was done by adding a fully connected linear layer to the last hidden state. - We use \\(F_{1}\\) score as a metric for all tasks. - Code used for fine-tuning is available [here](https://github.com/CAMeL-Lab/CAMeLBERT). 
### Results | Task | Dataset | Variant | Mix | CA | DA | MSA | MSA-1/2 | MSA-1/4 | MSA-1/8 | MSA-1/16 | | -------------------- | --------------- | ------- | ----- | ----- | ----- | ----- | ------- | ------- | ------- | -------- | | NER | ANERcorp | MSA | 80.2% | 66.2% | 74.2% | 82.4% | 82.3% | 82.0% | 82.3% | 80.5% | | POS | PATB (MSA) | MSA | 97.3% | 96.6% | 96.5% | 97.4% | 97.4% | 97.4% | 97.4% | 97.4% | | | ARZTB (EGY) | DA | 90.1% | 88.6% | 89.4% | 90.8% | 90.3% | 90.5% | 90.5% | 90.4% | | | Gumar (GLF) | DA | 97.3% | 96.5% | 97.0% | 97.1% | 97.0% | 97.0% | 97.1% | 97.0% | | SA | ASTD | MSA | 76.3% | 69.4% | 74.6% | 76.9% | 76.0% | 76.8% | 76.7% | 75.3% | | | ArSAS | MSA | 92.7% | 89.4% | 91.8% | 93.0% | 92.6% | 92.5% | 92.5% | 92.3% | | | SemEval | MSA | 69.0% | 58.5% | 68.4% | 72.1% | 70.7% | 72.8% | 71.6% | 71.2% | | DID | MADAR-26 | DA | 62.9% | 61.9% | 61.8% | 62.6% | 62.0% | 62.8% | 62.0% | 62.2% | | | MADAR-6 | DA | 92.5% | 91.5% | 92.2% | 91.9% | 91.8% | 92.2% | 92.1% | 92.0% | | | MADAR-Twitter-5 | MSA | 75.7% | 71.4% | 74.2% | 77.6% | 78.5% | 77.3% | 77.7% | 76.2% | | | NADI | DA | 24.7% | 17.3% | 20.1% | 24.9% | 24.6% | 24.6% | 24.9% | 23.8% | | Poetry | APCD | CA | 79.8% | 80.9% | 79.6% | 79.7% | 79.9% | 80.0% | 79.7% | 79.8% | ### Results (Average) | | Variant | Mix | CA | DA | MSA | MSA-1/2 | MSA-1/4 | MSA-1/8 | MSA-1/16 | | -------------------- | ------- | ----- | ----- | ----- | ----- | ------- | ------- | ------- | -------- | | Variant-wise-average<sup>[[1]](#footnote-1)</sup> | MSA | 81.9% | 75.3% | 79.9% | 83.2% | 82.9% | 83.1% | 83.0% | 82.1% | | | DA | 73.5% | 71.1% | 72.1% | 73.5% | 73.1% | 73.4% | 73.3% | 73.1% | | | CA | 79.8% | 80.9% | 79.6% | 79.7% | 79.9% | 80.0% | 79.7% | 79.8% | | Macro-Average | ALL | 78.2% | 74.0% | 76.6% | 78.9% | 78.6% | 78.8% | 78.7% | 78.2% | <a name="footnote-1">[1]</a>: Variant-wise-average refers to average over a group of tasks in the same language variant. ## Acknowledgements This research was supported with Cloud TPUs from Google’s TensorFlow Research Cloud (TFRC). ## Citation ```bibtex @inproceedings{inoue-etal-2021-interplay, title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models", author = "Inoue, Go and Alhafni, Bashar and Baimukan, Nurpeiis and Bouamor, Houda and Habash, Nizar", booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop", month = apr, year = "2021", address = "Kyiv, Ukraine (Online)", publisher = "Association for Computational Linguistics", abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.", } ```
CAMeL-Lab/bert-base-camelbert-da
2021-05-18T17:11:39.000Z
[ "pytorch", "tf", "jax", "bert", "masked-lm", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0", "fill-mask" ]
fill-mask
[ ".gitattributes", "README.md", "config.json", "flax_model.msgpack", "pytorch_model.bin", "special_tokens_map.json", "tf_model.h5", "tokenizer_config.json", "vocab.txt" ]
CAMeL-Lab
131
transformers
--- language: - ar license: apache-2.0 widget: - text: "الهدف من الحياة هو [MASK] ." --- # CAMeLBERT: A collection of pre-trained models for Arabic NLP tasks ## Model description **CAMeLBERT** is a collection of BERT models pre-trained on Arabic texts with different sizes and variants. The details are described in the paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* We release eight models with different sizes and variants as follows: ||Model|Variant|Size|#Word| |-|-|:-:|-:|-:| ||`bert-base-camelbert-mix`|CA,DA,MSA|167GB|17.3B| ||`bert-base-camelbert-ca`|CA|6GB|847M| |✔|`bert-base-camelbert-da`|DA|54GB|5.8B| ||`bert-base-camelbert-msa`|MSA|107GB|12.6B| ||`bert-base-camelbert-msa-half`|MSA|53GB|6.3B| ||`bert-base-camelbert-msa-quarter`|MSA|27GB|3.1B| ||`bert-base-camelbert-msa-eighth`|MSA|14GB|1.6B| ||`bert-base-camelbert-msa-sixteenth`|MSA|6GB|746M| This model card describes **CAMeLBERT-DA** (`bert-base-camelbert-da`), a model pre-trained on the DA dataset. ## Intended uses You can use the released model for either masked language modeling or next sentence prediction. However, it is mostly intended to be fine-tuned on an NLP task, such as NER, POS tagging, sentiment analysis, dialect identification, and poetry classification. We release our fine-tuninig code [here](https://github.com/CAMeL-Lab/CAMeLBERT). #### How to use You can use this model directly with a pipeline for masked language modeling: ```python >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='CAMeL-Lab/bert-base-camelbert-da') >>> unmasker("الهدف من الحياة هو [MASK] .") [{'sequence': '[CLS] الهدف من الحياة هو.. [SEP]', 'score': 0.062508225440979, 'token': 18, 'token_str': '.'}, {'sequence': '[CLS] الهدف من الحياة هو الموت. [SEP]', 'score': 0.033172328025102615, 'token': 4295, 'token_str': 'الموت'}, {'sequence': '[CLS] الهدف من الحياة هو الحياة. [SEP]', 'score': 0.029575437307357788, 'token': 3696, 'token_str': 'الحياة'}, {'sequence': '[CLS] الهدف من الحياة هو الرحيل. [SEP]', 'score': 0.02724040113389492, 'token': 11449, 'token_str': 'الرحيل'}, {'sequence': '[CLS] الهدف من الحياة هو الحب. [SEP]', 'score': 0.01564178802073002, 'token': 3088, 'token_str': 'الحب'}] ``` *Note*: to download our models, you would need `transformers>=3.5.0`. Otherwise, you could download the models manually. Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained('CAMeL-Lab/bert-base-camelbert-da') model = AutoModel.from_pretrained('CAMeL-Lab/bert-base-camelbert-da') text = "مرحبا يا عالم." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` and in TensorFlow: ```python from transformers import AutoTokenizer, TFAutoModel tokenizer = AutoTokenizer.from_pretrained('CAMeL-Lab/bert-base-camelbert-da') model = TFAutoModel.from_pretrained('CAMeL-Lab/bert-base-camelbert-da') text = "مرحبا يا عالم." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` ## Training data - DA - A collection of dialectal Arabic data described in [our paper](https://arxiv.org/abs/2103.06678). ## Training procedure We use [the original implementation](https://github.com/google-research/bert) released by Google for pre-training. We follow the original English BERT model's hyperparameters for pre-training, unless otherwise specified. 
### Preprocessing - After extracting the raw text from each corpus, we apply the following pre-processing. - We first remove invalid characters and normalize white spaces using the utilities provided by [the original BERT implementation](https://github.com/google-research/bert/blob/eedf5716ce1268e56f0a50264a88cafad334ac61/tokenization.py#L286-L297). - We also remove lines without any Arabic characters. - We then remove diacritics and kashida using [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools). - Finally, we split each line into sentences with a heuristics-based sentence segmenter. - We train a WordPiece tokenizer on the entire dataset (167 GB text) with a vocabulary size of 30,000 using [HuggingFace's tokenizers](https://github.com/huggingface/tokenizers). - We do not lowercase letters nor strip accents. ### Pre-training - The model was trained on a single cloud TPU (`v3-8`) for one million steps in total. - The first 90,000 steps were trained with a batch size of 1,024 and the rest was trained with a batch size of 256. - The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. - We use whole word masking and a duplicate factor of 10. - We set max predictions per sequence to 20 for the dataset with max sequence length of 128 tokens and 80 for the dataset with max sequence length of 512 tokens. - We use a random seed of 12345, masked language model probability of 0.15, and short sequence probability of 0.1. - The optimizer used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after. ## Evaluation results - We evaluate our pre-trained language models on five NLP tasks: NER, POS tagging, sentiment analysis, dialect identification, and poetry classification. - We fine-tune and evaluate the models using 12 dataset. - We used Hugging Face's transformers to fine-tune our CAMeLBERT models. - We used transformers `v3.1.0` along with PyTorch `v1.5.1`. - The fine-tuning was done by adding a fully connected linear layer to the last hidden state. - We use \\(F_{1}\\) score as a metric for all tasks. - Code used for fine-tuning is available [here](https://github.com/CAMeL-Lab/CAMeLBERT). 
### Results | Task | Dataset | Variant | Mix | CA | DA | MSA | MSA-1/2 | MSA-1/4 | MSA-1/8 | MSA-1/16 | | -------------------- | --------------- | ------- | ----- | ----- | ----- | ----- | ------- | ------- | ------- | -------- | | NER | ANERcorp | MSA | 80.2% | 66.2% | 74.2% | 82.4% | 82.3% | 82.0% | 82.3% | 80.5% | | POS | PATB (MSA) | MSA | 97.3% | 96.6% | 96.5% | 97.4% | 97.4% | 97.4% | 97.4% | 97.4% | | | ARZTB (EGY) | DA | 90.1% | 88.6% | 89.4% | 90.8% | 90.3% | 90.5% | 90.5% | 90.4% | | | Gumar (GLF) | DA | 97.3% | 96.5% | 97.0% | 97.1% | 97.0% | 97.0% | 97.1% | 97.0% | | SA | ASTD | MSA | 76.3% | 69.4% | 74.6% | 76.9% | 76.0% | 76.8% | 76.7% | 75.3% | | | ArSAS | MSA | 92.7% | 89.4% | 91.8% | 93.0% | 92.6% | 92.5% | 92.5% | 92.3% | | | SemEval | MSA | 69.0% | 58.5% | 68.4% | 72.1% | 70.7% | 72.8% | 71.6% | 71.2% | | DID | MADAR-26 | DA | 62.9% | 61.9% | 61.8% | 62.6% | 62.0% | 62.8% | 62.0% | 62.2% | | | MADAR-6 | DA | 92.5% | 91.5% | 92.2% | 91.9% | 91.8% | 92.2% | 92.1% | 92.0% | | | MADAR-Twitter-5 | MSA | 75.7% | 71.4% | 74.2% | 77.6% | 78.5% | 77.3% | 77.7% | 76.2% | | | NADI | DA | 24.7% | 17.3% | 20.1% | 24.9% | 24.6% | 24.6% | 24.9% | 23.8% | | Poetry | APCD | CA | 79.8% | 80.9% | 79.6% | 79.7% | 79.9% | 80.0% | 79.7% | 79.8% | ### Results (Average) | | Variant | Mix | CA | DA | MSA | MSA-1/2 | MSA-1/4 | MSA-1/8 | MSA-1/16 | | -------------------- | ------- | ----- | ----- | ----- | ----- | ------- | ------- | ------- | -------- | | Variant-wise-average<sup>[[1]](#footnote-1)</sup> | MSA | 81.9% | 75.3% | 79.9% | 83.2% | 82.9% | 83.1% | 83.0% | 82.1% | | | DA | 73.5% | 71.1% | 72.1% | 73.5% | 73.1% | 73.4% | 73.3% | 73.1% | | | CA | 79.8% | 80.9% | 79.6% | 79.7% | 79.9% | 80.0% | 79.7% | 79.8% | | Macro-Average | ALL | 78.2% | 74.0% | 76.6% | 78.9% | 78.6% | 78.8% | 78.7% | 78.2% | <a name="footnote-1">[1]</a>: Variant-wise-average refers to average over a group of tasks in the same language variant. ## Acknowledgements This research was supported with Cloud TPUs from Google’s TensorFlow Research Cloud (TFRC). ## Citation ```bibtex @inproceedings{inoue-etal-2021-interplay, title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models", author = "Inoue, Go and Alhafni, Bashar and Baimukan, Nurpeiis and Bouamor, Houda and Habash, Nizar", booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop", month = apr, year = "2021", address = "Kyiv, Ukraine (Online)", publisher = "Association for Computational Linguistics", abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.", } ```
CAMeL-Lab/bert-base-camelbert-mix
2021-05-18T17:14:22.000Z
[ "pytorch", "tf", "jax", "bert", "masked-lm", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0", "fill-mask" ]
fill-mask
[ ".gitattributes", "README.md", "config.json", "flax_model.msgpack", "pytorch_model.bin", "special_tokens_map.json", "tf_model.h5", "tokenizer_config.json", "vocab.txt" ]
CAMeL-Lab
1,283
transformers
--- language: - ar license: apache-2.0 widget: - text: "الهدف من الحياة هو [MASK] ." --- # CAMeLBERT: A collection of pre-trained models for Arabic NLP tasks ## Model description **CAMeLBERT** is a collection of BERT models pre-trained on Arabic texts with different sizes and variants. The details are described in the paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* We release eight models with different sizes and variants as follows: ||Model|Variant|Size|#Word| |-|-|:-:|-:|-:| |✔|`bert-base-camelbert-mix`|CA,DA,MSA|167GB|17.3B| ||`bert-base-camelbert-ca`|CA|6GB|847M| ||`bert-base-camelbert-da`|DA|54GB|5.8B| ||`bert-base-camelbert-msa`|MSA|107GB|12.6B| ||`bert-base-camelbert-msa-half`|MSA|53GB|6.3B| ||`bert-base-camelbert-msa-quarter`|MSA|27GB|3.1B| ||`bert-base-camelbert-msa-eighth`|MSA|14GB|1.6B| ||`bert-base-camelbert-msa-sixteenth`|MSA|6GB|746M| This model card describes **CAMeLBERT-Mix** (`bert-base-camelbert-mix`), a model pre-trained on a mixture of these variants: CA, DA, and MSA. ## Intended uses You can use the released model for either masked language modeling or next sentence prediction. However, it is mostly intended to be fine-tuned on an NLP task, such as NER, POS tagging, sentiment analysis, dialect identification, and poetry classification. We release our fine-tuninig code [here](https://github.com/CAMeL-Lab/CAMeLBERT). #### How to use You can use this model directly with a pipeline for masked language modeling: ```python >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='CAMeL-Lab/bert-base-camelbert-mix') >>> unmasker("الهدف من الحياة هو [MASK] .") [{'sequence': '[CLS] الهدف من الحياة هو النجاح. [SEP]', 'score': 0.10861027985811234, 'token': 6232, 'token_str': 'النجاح'}, {'sequence': '[CLS] الهدف من الحياة هو.. [SEP]', 'score': 0.07626965641975403, 'token': 18, 'token_str': '.'}, {'sequence': '[CLS] الهدف من الحياة هو الحياة. [SEP]', 'score': 0.05131986364722252, 'token': 3696, 'token_str': 'الحياة'}, {'sequence': '[CLS] الهدف من الحياة هو الموت. [SEP]', 'score': 0.03734956309199333, 'token': 4295, 'token_str': 'الموت'}, {'sequence': '[CLS] الهدف من الحياة هو العمل. [SEP]', 'score': 0.027189988642930984, 'token': 2854, 'token_str': 'العمل'}] ``` *Note*: to download our models, you would need `transformers>=3.5.0`. Otherwise, you could download the models manually. Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained('CAMeL-Lab/bert-base-camelbert-mix') model = AutoModel.from_pretrained('CAMeL-Lab/bert-base-camelbert-mix') text = "مرحبا يا عالم." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` and in TensorFlow: ```python from transformers import AutoTokenizer, TFAutoModel tokenizer = AutoTokenizer.from_pretrained('CAMeL-Lab/bert-base-camelbert-mix') model = TFAutoModel.from_pretrained('CAMeL-Lab/bert-base-camelbert-mix') text = "مرحبا يا عالم." 
encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` ## Training data - MSA - [The Arabic Gigaword Fifth Edition](https://catalog.ldc.upenn.edu/LDC2011T11) - [Abu El-Khair Corpus](http://www.abuelkhair.net/index.php/en/arabic/abu-el-khair-corpus) - [OSIAN corpus](https://vlo.clarin.eu/search;jsessionid=31066390B2C9E8C6304845BA79869AC1?1&q=osian) - [Arabic Wikipedia](https://archive.org/details/arwiki-20190201) - The unshuffled version of the Arabic [OSCAR corpus](https://oscar-corpus.com/) - DA - A collection of dialectal Arabic data described in [our paper](https://arxiv.org/abs/2103.06678). - CA - [OpenITI (Version 2020.1.2)](https://zenodo.org/record/3891466#.YEX4-F0zbzc) ## Training procedure We use [the original implementation](https://github.com/google-research/bert) released by Google for pre-training. We follow the original English BERT model's hyperparameters for pre-training, unless otherwise specified. ### Preprocessing - After extracting the raw text from each corpus, we apply the following pre-processing. - We first remove invalid characters and normalize white spaces using the utilities provided by [the original BERT implementation](https://github.com/google-research/bert/blob/eedf5716ce1268e56f0a50264a88cafad334ac61/tokenization.py#L286-L297). - We also remove lines without any Arabic characters. - We then remove diacritics and kashida using [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools). - Finally, we split each line into sentences with a heuristics-based sentence segmenter. - We train a WordPiece tokenizer on the entire dataset (167 GB text) with a vocabulary size of 30,000 using [HuggingFace's tokenizers](https://github.com/huggingface/tokenizers). - We do not lowercase letters nor strip accents. ### Pre-training - The model was trained on a single cloud TPU (`v3-8`) for one million steps in total. - The first 90,000 steps were trained with a batch size of 1,024 and the rest was trained with a batch size of 256. - The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. - We use whole word masking and a duplicate factor of 10. - We set max predictions per sequence to 20 for the dataset with max sequence length of 128 tokens and 80 for the dataset with max sequence length of 512 tokens. - We use a random seed of 12345, masked language model probability of 0.15, and short sequence probability of 0.1. - The optimizer used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after. ## Evaluation results - We evaluate our pre-trained language models on five NLP tasks: NER, POS tagging, sentiment analysis, dialect identification, and poetry classification. - We fine-tune and evaluate the models using 12 dataset. - We used Hugging Face's transformers to fine-tune our CAMeLBERT models. - We used transformers `v3.1.0` along with PyTorch `v1.5.1`. - The fine-tuning was done by adding a fully connected linear layer to the last hidden state. - We use \\(F_{1}\\) score as a metric for all tasks. - Code used for fine-tuning is available [here](https://github.com/CAMeL-Lab/CAMeLBERT). 
### Results | Task | Dataset | Variant | Mix | CA | DA | MSA | MSA-1/2 | MSA-1/4 | MSA-1/8 | MSA-1/16 | | -------------------- | --------------- | ------- | ----- | ----- | ----- | ----- | ------- | ------- | ------- | -------- | | NER | ANERcorp | MSA | 80.2% | 66.2% | 74.2% | 82.4% | 82.3% | 82.0% | 82.3% | 80.5% | | POS | PATB (MSA) | MSA | 97.3% | 96.6% | 96.5% | 97.4% | 97.4% | 97.4% | 97.4% | 97.4% | | | ARZTB (EGY) | DA | 90.1% | 88.6% | 89.4% | 90.8% | 90.3% | 90.5% | 90.5% | 90.4% | | | Gumar (GLF) | DA | 97.3% | 96.5% | 97.0% | 97.1% | 97.0% | 97.0% | 97.1% | 97.0% | | SA | ASTD | MSA | 76.3% | 69.4% | 74.6% | 76.9% | 76.0% | 76.8% | 76.7% | 75.3% | | | ArSAS | MSA | 92.7% | 89.4% | 91.8% | 93.0% | 92.6% | 92.5% | 92.5% | 92.3% | | | SemEval | MSA | 69.0% | 58.5% | 68.4% | 72.1% | 70.7% | 72.8% | 71.6% | 71.2% | | DID | MADAR-26 | DA | 62.9% | 61.9% | 61.8% | 62.6% | 62.0% | 62.8% | 62.0% | 62.2% | | | MADAR-6 | DA | 92.5% | 91.5% | 92.2% | 91.9% | 91.8% | 92.2% | 92.1% | 92.0% | | | MADAR-Twitter-5 | MSA | 75.7% | 71.4% | 74.2% | 77.6% | 78.5% | 77.3% | 77.7% | 76.2% | | | NADI | DA | 24.7% | 17.3% | 20.1% | 24.9% | 24.6% | 24.6% | 24.9% | 23.8% | | Poetry | APCD | CA | 79.8% | 80.9% | 79.6% | 79.7% | 79.9% | 80.0% | 79.7% | 79.8% | ### Results (Average) | | Variant | Mix | CA | DA | MSA | MSA-1/2 | MSA-1/4 | MSA-1/8 | MSA-1/16 | | -------------------- | ------- | ----- | ----- | ----- | ----- | ------- | ------- | ------- | -------- | | Variant-wise-average<sup>[[1]](#footnote-1)</sup> | MSA | 81.9% | 75.3% | 79.9% | 83.2% | 82.9% | 83.1% | 83.0% | 82.1% | | | DA | 73.5% | 71.1% | 72.1% | 73.5% | 73.1% | 73.4% | 73.3% | 73.1% | | | CA | 79.8% | 80.9% | 79.6% | 79.7% | 79.9% | 80.0% | 79.7% | 79.8% | | Macro-Average | ALL | 78.2% | 74.0% | 76.6% | 78.9% | 78.6% | 78.8% | 78.7% | 78.2% | <a name="footnote-1">[1]</a>: Variant-wise-average refers to average over a group of tasks in the same language variant. ## Acknowledgements This research was supported with Cloud TPUs from Google’s TensorFlow Research Cloud (TFRC). ## Citation ```bibtex @inproceedings{inoue-etal-2021-interplay, title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models", author = "Inoue, Go and Alhafni, Bashar and Baimukan, Nurpeiis and Bouamor, Houda and Habash, Nizar", booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop", month = apr, year = "2021", address = "Kyiv, Ukraine (Online)", publisher = "Association for Computational Linguistics", abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.", } ```
CAMeL-Lab/bert-base-camelbert-msa-eighth
2021-05-18T17:15:20.000Z
[ "pytorch", "tf", "jax", "bert", "masked-lm", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0", "fill-mask" ]
fill-mask
[ ".gitattributes", "README.md", "config.json", "flax_model.msgpack", "pytorch_model.bin", "special_tokens_map.json", "tf_model.h5", "tokenizer_config.json", "vocab.txt" ]
CAMeL-Lab
76
transformers
--- language: - ar license: apache-2.0 widget: - text: "الهدف من الحياة هو [MASK] ." --- # CAMeLBERT: A collection of pre-trained models for Arabic NLP tasks ## Model description **CAMeLBERT** is a collection of BERT models pre-trained on Arabic texts with different sizes and variants. The details are described in the paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* We release eight models with different sizes and variants as follows: ||Model|Variant|Size|#Word| |-|-|:-:|-:|-:| ||`bert-base-camelbert-mix`|CA,DA,MSA|167GB|17.3B| ||`bert-base-camelbert-ca`|CA|6GB|847M| ||`bert-base-camelbert-da`|DA|54GB|5.8B| ||`bert-base-camelbert-msa`|MSA|107GB|12.6B| ||`bert-base-camelbert-msa-half`|MSA|53GB|6.3B| ||`bert-base-camelbert-msa-quarter`|MSA|27GB|3.1B| |✔|`bert-base-camelbert-msa-eighth`|MSA|14GB|1.6B| ||`bert-base-camelbert-msa-sixteenth`|MSA|6GB|746M| This model card describes **CAMeLBERT-MSA-eighth** (`bert-base-camelbert-msa-eighth`), a model pre-trained on an eighth of the full MSA dataset. ## Intended uses You can use the released model for either masked language modeling or next sentence prediction. However, it is mostly intended to be fine-tuned on an NLP task, such as NER, POS tagging, sentiment analysis, dialect identification, and poetry classification. We release our fine-tuninig code [here](https://github.com/CAMeL-Lab/CAMeLBERT). #### How to use You can use this model directly with a pipeline for masked language modeling: ```python >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='CAMeL-Lab/bert-base-camelbert-msa-eighth') >>> unmasker("الهدف من الحياة هو [MASK] .") [{'sequence': '[CLS] الهدف من الحياة هو الحياة. [SEP]', 'score': 0.057812128216028214, 'token': 3696, 'token_str': 'الحياة'}, {'sequence': '[CLS] الهدف من الحياة هو النجاح. [SEP]', 'score': 0.05573025345802307, 'token': 6232, 'token_str': 'النجاح'}, {'sequence': '[CLS] الهدف من الحياة هو الكمال. [SEP]', 'score': 0.035942986607551575, 'token': 17188, 'token_str': 'الكمال'}, {'sequence': '[CLS] الهدف من الحياة هو التعلم. [SEP]', 'score': 0.03375256434082985, 'token': 12554, 'token_str': 'التعلم'}, {'sequence': '[CLS] الهدف من الحياة هو العمل. [SEP]', 'score': 0.030303971841931343, 'token': 2854, 'token_str': 'العمل'}] ``` *Note*: to download our models, you would need `transformers>=3.5.0`. Otherwise, you could download the models manually. Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained('CAMeL-Lab/bert-base-camelbert-msa-eighth') model = AutoModel.from_pretrained('CAMeL-Lab/bert-base-camelbert-msa-eighth') text = "مرحبا يا عالم." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` and in TensorFlow: ```python from transformers import AutoTokenizer, TFAutoModel tokenizer = AutoTokenizer.from_pretrained('CAMeL-Lab/bert-base-camelbert-msa-eighth') model = TFAutoModel.from_pretrained('CAMeL-Lab/bert-base-camelbert-msa-eighth') text = "مرحبا يا عالم." 
encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` ## Training data - MSA - [The Arabic Gigaword Fifth Edition](https://catalog.ldc.upenn.edu/LDC2011T11) - [Abu El-Khair Corpus](http://www.abuelkhair.net/index.php/en/arabic/abu-el-khair-corpus) - [OSIAN corpus](https://vlo.clarin.eu/search;jsessionid=31066390B2C9E8C6304845BA79869AC1?1&q=osian) - [Arabic Wikipedia](https://archive.org/details/arwiki-20190201) - The unshuffled version of the Arabic [OSCAR corpus](https://oscar-corpus.com/) ## Training procedure We use [the original implementation](https://github.com/google-research/bert) released by Google for pre-training. We follow the original English BERT model's hyperparameters for pre-training, unless otherwise specified. ### Preprocessing - After extracting the raw text from each corpus, we apply the following pre-processing. - We first remove invalid characters and normalize white spaces using the utilities provided by [the original BERT implementation](https://github.com/google-research/bert/blob/eedf5716ce1268e56f0a50264a88cafad334ac61/tokenization.py#L286-L297). - We also remove lines without any Arabic characters. - We then remove diacritics and kashida using [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools). - Finally, we split each line into sentences with a heuristics-based sentence segmenter. - We train a WordPiece tokenizer on the entire dataset (167 GB text) with a vocabulary size of 30,000 using [HuggingFace's tokenizers](https://github.com/huggingface/tokenizers). - We do not lowercase letters nor strip accents. ### Pre-training - The model was trained on a single cloud TPU (`v3-8`) for one million steps in total. - The first 90,000 steps were trained with a batch size of 1,024 and the rest was trained with a batch size of 256. - The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. - We use whole word masking and a duplicate factor of 10. - We set max predictions per sequence to 20 for the dataset with max sequence length of 128 tokens and 80 for the dataset with max sequence length of 512 tokens. - We use a random seed of 12345, masked language model probability of 0.15, and short sequence probability of 0.1. - The optimizer used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after. ## Evaluation results - We evaluate our pre-trained language models on five NLP tasks: NER, POS tagging, sentiment analysis, dialect identification, and poetry classification. - We fine-tune and evaluate the models using 12 dataset. - We used Hugging Face's transformers to fine-tune our CAMeLBERT models. - We used transformers `v3.1.0` along with PyTorch `v1.5.1`. - The fine-tuning was done by adding a fully connected linear layer to the last hidden state. - We use \\(F_{1}\\) score as a metric for all tasks. - Code used for fine-tuning is available [here](https://github.com/CAMeL-Lab/CAMeLBERT). 
### Results | Task | Dataset | Variant | Mix | CA | DA | MSA | MSA-1/2 | MSA-1/4 | MSA-1/8 | MSA-1/16 | | -------------------- | --------------- | ------- | ----- | ----- | ----- | ----- | ------- | ------- | ------- | -------- | | NER | ANERcorp | MSA | 80.2% | 66.2% | 74.2% | 82.4% | 82.3% | 82.0% | 82.3% | 80.5% | | POS | PATB (MSA) | MSA | 97.3% | 96.6% | 96.5% | 97.4% | 97.4% | 97.4% | 97.4% | 97.4% | | | ARZTB (EGY) | DA | 90.1% | 88.6% | 89.4% | 90.8% | 90.3% | 90.5% | 90.5% | 90.4% | | | Gumar (GLF) | DA | 97.3% | 96.5% | 97.0% | 97.1% | 97.0% | 97.0% | 97.1% | 97.0% | | SA | ASTD | MSA | 76.3% | 69.4% | 74.6% | 76.9% | 76.0% | 76.8% | 76.7% | 75.3% | | | ArSAS | MSA | 92.7% | 89.4% | 91.8% | 93.0% | 92.6% | 92.5% | 92.5% | 92.3% | | | SemEval | MSA | 69.0% | 58.5% | 68.4% | 72.1% | 70.7% | 72.8% | 71.6% | 71.2% | | DID | MADAR-26 | DA | 62.9% | 61.9% | 61.8% | 62.6% | 62.0% | 62.8% | 62.0% | 62.2% | | | MADAR-6 | DA | 92.5% | 91.5% | 92.2% | 91.9% | 91.8% | 92.2% | 92.1% | 92.0% | | | MADAR-Twitter-5 | MSA | 75.7% | 71.4% | 74.2% | 77.6% | 78.5% | 77.3% | 77.7% | 76.2% | | | NADI | DA | 24.7% | 17.3% | 20.1% | 24.9% | 24.6% | 24.6% | 24.9% | 23.8% | | Poetry | APCD | CA | 79.8% | 80.9% | 79.6% | 79.7% | 79.9% | 80.0% | 79.7% | 79.8% | ### Results (Average) | | Variant | Mix | CA | DA | MSA | MSA-1/2 | MSA-1/4 | MSA-1/8 | MSA-1/16 | | -------------------- | ------- | ----- | ----- | ----- | ----- | ------- | ------- | ------- | -------- | | Variant-wise-average<sup>[[1]](#footnote-1)</sup> | MSA | 81.9% | 75.3% | 79.9% | 83.2% | 82.9% | 83.1% | 83.0% | 82.1% | | | DA | 73.5% | 71.1% | 72.1% | 73.5% | 73.1% | 73.4% | 73.3% | 73.1% | | | CA | 79.8% | 80.9% | 79.6% | 79.7% | 79.9% | 80.0% | 79.7% | 79.8% | | Macro-Average | ALL | 78.2% | 74.0% | 76.6% | 78.9% | 78.6% | 78.8% | 78.7% | 78.2% | <a name="footnote-1">[1]</a>: Variant-wise-average refers to average over a group of tasks in the same language variant. ## Acknowledgements This research was supported with Cloud TPUs from Google’s TensorFlow Research Cloud (TFRC). ## Citation ```bibtex @inproceedings{inoue-etal-2021-interplay, title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models", author = "Inoue, Go and Alhafni, Bashar and Baimukan, Nurpeiis and Bouamor, Houda and Habash, Nizar", booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop", month = apr, year = "2021", address = "Kyiv, Ukraine (Online)", publisher = "Association for Computational Linguistics", abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.", } ```
CAMeL-Lab/bert-base-camelbert-msa-half
2021-05-18T17:16:22.000Z
[ "pytorch", "tf", "jax", "bert", "masked-lm", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0", "fill-mask" ]
fill-mask
[ ".gitattributes", "README.md", "config.json", "flax_model.msgpack", "pytorch_model.bin", "special_tokens_map.json", "tf_model.h5", "tokenizer_config.json", "vocab.txt" ]
CAMeL-Lab
18
transformers
--- language: - ar license: apache-2.0 widget: - text: "الهدف من الحياة هو [MASK] ." --- # CAMeLBERT: A collection of pre-trained models for Arabic NLP tasks ## Model description **CAMeLBERT** is a collection of BERT models pre-trained on Arabic texts with different sizes and variants. The details are described in the paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* We release eight models with different sizes and variants as follows: ||Model|Variant|Size|#Word| |-|-|:-:|-:|-:| ||`bert-base-camelbert-mix`|CA,DA,MSA|167GB|17.3B| ||`bert-base-camelbert-ca`|CA|6GB|847M| ||`bert-base-camelbert-da`|DA|54GB|5.8B| ||`bert-base-camelbert-msa`|MSA|107GB|12.6B| |✔|`bert-base-camelbert-msa-half`|MSA|53GB|6.3B| ||`bert-base-camelbert-msa-quarter`|MSA|27GB|3.1B| ||`bert-base-camelbert-msa-eighth`|MSA|14GB|1.6B| ||`bert-base-camelbert-msa-sixteenth`|MSA|6GB|746M| This model card describes **CAMeLBERT-MSA-half** (`bert-base-camelbert-msa-half`), a model pre-trained on a half of the full MSA dataset. ## Intended uses You can use the released model for either masked language modeling or next sentence prediction. However, it is mostly intended to be fine-tuned on an NLP task, such as NER, POS tagging, sentiment analysis, dialect identification, and poetry classification. We release our fine-tuninig code [here](https://github.com/CAMeL-Lab/CAMeLBERT). #### How to use You can use this model directly with a pipeline for masked language modeling: ```python >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='CAMeL-Lab/bert-base-camelbert-msa-half') >>> unmasker("الهدف من الحياة هو [MASK] .") [{'sequence': '[CLS] الهدف من الحياة هو الحياة. [SEP]', 'score': 0.09132730215787888, 'token': 3696, 'token_str': 'الحياة'}, {'sequence': '[CLS] الهدف من الحياة هو.. [SEP]', 'score': 0.08282623440027237, 'token': 18, 'token_str': '.'}, {'sequence': '[CLS] الهدف من الحياة هو البقاء. [SEP]', 'score': 0.04031957685947418, 'token': 9331, 'token_str': 'البقاء'}, {'sequence': '[CLS] الهدف من الحياة هو النجاح. [SEP]', 'score': 0.032019514590501785, 'token': 6232, 'token_str': 'النجاح'}, {'sequence': '[CLS] الهدف من الحياة هو الحب. [SEP]', 'score': 0.028731243684887886, 'token': 3088, 'token_str': 'الحب'}] ``` *Note*: to download our models, you would need `transformers>=3.5.0`. Otherwise, you could download the models manually. Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained('CAMeL-Lab/bert-base-camelbert-msa-half') model = AutoModel.from_pretrained('CAMeL-Lab/bert-base-camelbert-msa-half') text = "مرحبا يا عالم." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` and in TensorFlow: ```python from transformers import AutoTokenizer, TFAutoModel tokenizer = AutoTokenizer.from_pretrained('CAMeL-Lab/bert-base-camelbert-msa-half') model = TFAutoModel.from_pretrained('CAMeL-Lab/bert-base-camelbert-msa-half') text = "مرحبا يا عالم." 
encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` ## Training data - MSA - [The Arabic Gigaword Fifth Edition](https://catalog.ldc.upenn.edu/LDC2011T11) - [Abu El-Khair Corpus](http://www.abuelkhair.net/index.php/en/arabic/abu-el-khair-corpus) - [OSIAN corpus](https://vlo.clarin.eu/search;jsessionid=31066390B2C9E8C6304845BA79869AC1?1&q=osian) - [Arabic Wikipedia](https://archive.org/details/arwiki-20190201) - The unshuffled version of the Arabic [OSCAR corpus](https://oscar-corpus.com/) ## Training procedure We use [the original implementation](https://github.com/google-research/bert) released by Google for pre-training. We follow the original English BERT model's hyperparameters for pre-training, unless otherwise specified. ### Preprocessing - After extracting the raw text from each corpus, we apply the following pre-processing. - We first remove invalid characters and normalize white spaces using the utilities provided by [the original BERT implementation](https://github.com/google-research/bert/blob/eedf5716ce1268e56f0a50264a88cafad334ac61/tokenization.py#L286-L297). - We also remove lines without any Arabic characters. - We then remove diacritics and kashida using [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools). - Finally, we split each line into sentences with a heuristics-based sentence segmenter. - We train a WordPiece tokenizer on the entire dataset (167 GB text) with a vocabulary size of 30,000 using [HuggingFace's tokenizers](https://github.com/huggingface/tokenizers). - We do not lowercase letters nor strip accents. ### Pre-training - The model was trained on a single cloud TPU (`v3-8`) for one million steps in total. - The first 90,000 steps were trained with a batch size of 1,024 and the rest was trained with a batch size of 256. - The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. - We use whole word masking and a duplicate factor of 10. - We set max predictions per sequence to 20 for the dataset with max sequence length of 128 tokens and 80 for the dataset with max sequence length of 512 tokens. - We use a random seed of 12345, masked language model probability of 0.15, and short sequence probability of 0.1. - The optimizer used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after. ## Evaluation results - We evaluate our pre-trained language models on five NLP tasks: NER, POS tagging, sentiment analysis, dialect identification, and poetry classification. - We fine-tune and evaluate the models using 12 dataset. - We used Hugging Face's transformers to fine-tune our CAMeLBERT models. - We used transformers `v3.1.0` along with PyTorch `v1.5.1`. - The fine-tuning was done by adding a fully connected linear layer to the last hidden state. - We use \\(F_{1}\\) score as a metric for all tasks. - Code used for fine-tuning is available [here](https://github.com/CAMeL-Lab/CAMeLBERT). 
### Results | Task | Dataset | Variant | Mix | CA | DA | MSA | MSA-1/2 | MSA-1/4 | MSA-1/8 | MSA-1/16 | | -------------------- | --------------- | ------- | ----- | ----- | ----- | ----- | ------- | ------- | ------- | -------- | | NER | ANERcorp | MSA | 80.2% | 66.2% | 74.2% | 82.4% | 82.3% | 82.0% | 82.3% | 80.5% | | POS | PATB (MSA) | MSA | 97.3% | 96.6% | 96.5% | 97.4% | 97.4% | 97.4% | 97.4% | 97.4% | | | ARZTB (EGY) | DA | 90.1% | 88.6% | 89.4% | 90.8% | 90.3% | 90.5% | 90.5% | 90.4% | | | Gumar (GLF) | DA | 97.3% | 96.5% | 97.0% | 97.1% | 97.0% | 97.0% | 97.1% | 97.0% | | SA | ASTD | MSA | 76.3% | 69.4% | 74.6% | 76.9% | 76.0% | 76.8% | 76.7% | 75.3% | | | ArSAS | MSA | 92.7% | 89.4% | 91.8% | 93.0% | 92.6% | 92.5% | 92.5% | 92.3% | | | SemEval | MSA | 69.0% | 58.5% | 68.4% | 72.1% | 70.7% | 72.8% | 71.6% | 71.2% | | DID | MADAR-26 | DA | 62.9% | 61.9% | 61.8% | 62.6% | 62.0% | 62.8% | 62.0% | 62.2% | | | MADAR-6 | DA | 92.5% | 91.5% | 92.2% | 91.9% | 91.8% | 92.2% | 92.1% | 92.0% | | | MADAR-Twitter-5 | MSA | 75.7% | 71.4% | 74.2% | 77.6% | 78.5% | 77.3% | 77.7% | 76.2% | | | NADI | DA | 24.7% | 17.3% | 20.1% | 24.9% | 24.6% | 24.6% | 24.9% | 23.8% | | Poetry | APCD | CA | 79.8% | 80.9% | 79.6% | 79.7% | 79.9% | 80.0% | 79.7% | 79.8% | ### Results (Average) | | Variant | Mix | CA | DA | MSA | MSA-1/2 | MSA-1/4 | MSA-1/8 | MSA-1/16 | | -------------------- | ------- | ----- | ----- | ----- | ----- | ------- | ------- | ------- | -------- | | Variant-wise-average<sup>[[1]](#footnote-1)</sup> | MSA | 81.9% | 75.3% | 79.9% | 83.2% | 82.9% | 83.1% | 83.0% | 82.1% | | | DA | 73.5% | 71.1% | 72.1% | 73.5% | 73.1% | 73.4% | 73.3% | 73.1% | | | CA | 79.8% | 80.9% | 79.6% | 79.7% | 79.9% | 80.0% | 79.7% | 79.8% | | Macro-Average | ALL | 78.2% | 74.0% | 76.6% | 78.9% | 78.6% | 78.8% | 78.7% | 78.2% | <a name="footnote-1">[1]</a>: Variant-wise-average refers to average over a group of tasks in the same language variant. ## Acknowledgements This research was supported with Cloud TPUs from Google’s TensorFlow Research Cloud (TFRC). ## Citation ```bibtex @inproceedings{inoue-etal-2021-interplay, title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models", author = "Inoue, Go and Alhafni, Bashar and Baimukan, Nurpeiis and Bouamor, Houda and Habash, Nizar", booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop", month = apr, year = "2021", address = "Kyiv, Ukraine (Online)", publisher = "Association for Computational Linguistics", abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.", } ```
CAMeL-Lab/bert-base-camelbert-msa-quarter
2021-05-18T17:18:06.000Z
[ "pytorch", "tf", "jax", "bert", "masked-lm", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0", "fill-mask" ]
fill-mask
[ ".gitattributes", "README.md", "config.json", "flax_model.msgpack", "pytorch_model.bin", "special_tokens_map.json", "tf_model.h5", "tokenizer_config.json", "vocab.txt" ]
CAMeL-Lab
13
transformers
--- language: - ar license: apache-2.0 widget: - text: "الهدف من الحياة هو [MASK] ." --- # CAMeLBERT: A collection of pre-trained models for Arabic NLP tasks ## Model description **CAMeLBERT** is a collection of BERT models pre-trained on Arabic texts with different sizes and variants. The details are described in the paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* We release eight models with different sizes and variants as follows: ||Model|Variant|Size|#Word| |-|-|:-:|-:|-:| ||`bert-base-camelbert-mix`|CA,DA,MSA|167GB|17.3B| ||`bert-base-camelbert-ca`|CA|6GB|847M| ||`bert-base-camelbert-da`|DA|54GB|5.8B| ||`bert-base-camelbert-msa`|MSA|107GB|12.6B| ||`bert-base-camelbert-msa-half`|MSA|53GB|6.3B| |✔|`bert-base-camelbert-msa-quarter`|MSA|27GB|3.1B| ||`bert-base-camelbert-msa-eighth`|MSA|14GB|1.6B| ||`bert-base-camelbert-msa-sixteenth`|MSA|6GB|746M| This model card describes **CAMeLBERT-MSA-quarter** (`bert-base-camelbert-msa-quarter`), a model pre-trained on a quarter of the full MSA dataset. ## Intended uses You can use the released model for either masked language modeling or next sentence prediction. However, it is mostly intended to be fine-tuned on an NLP task, such as NER, POS tagging, sentiment analysis, dialect identification, and poetry classification. We release our fine-tuninig code [here](https://github.com/CAMeL-Lab/CAMeLBERT). #### How to use You can use this model directly with a pipeline for masked language modeling: ```python >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='CAMeL-Lab/bert-base-camelbert-msa-quarter') >>> unmasker("الهدف من الحياة هو [MASK] .") [{'sequence': '[CLS] الهدف من الحياة هو الحياة. [SEP]', 'score': 0.17437894642353058, 'token': 3696, 'token_str': 'الحياة'}, {'sequence': '[CLS] الهدف من الحياة هو النجاح. [SEP]', 'score': 0.042852893471717834, 'token': 6232, 'token_str': 'النجاح'}, {'sequence': '[CLS] الهدف من الحياة هو البقاء. [SEP]', 'score': 0.030925093218684196, 'token': 9331, 'token_str': 'البقاء'}, {'sequence': '[CLS] الهدف من الحياة هو الحب. [SEP]', 'score': 0.02964409440755844, 'token': 3088, 'token_str': 'الحب'}, {'sequence': '[CLS] الهدف من الحياة هو الكمال. [SEP]', 'score': 0.028030086308717728, 'token': 17188, 'token_str': 'الكمال'}] ``` *Note*: to download our models, you would need `transformers>=3.5.0`. Otherwise, you could download the models manually. Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained('CAMeL-Lab/bert-base-camelbert-msa-quarter') model = AutoModel.from_pretrained('CAMeL-Lab/bert-base-camelbert-msa-quarter') text = "مرحبا يا عالم." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` and in TensorFlow: ```python from transformers import AutoTokenizer, TFAutoModel tokenizer = AutoTokenizer.from_pretrained('CAMeL-Lab/bert-base-camelbert-msa-quarter') model = TFAutoModel.from_pretrained('CAMeL-Lab/bert-base-camelbert-msa-quarter') text = "مرحبا يا عالم." 
encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` ## Training data - MSA - [The Arabic Gigaword Fifth Edition](https://catalog.ldc.upenn.edu/LDC2011T11) - [Abu El-Khair Corpus](http://www.abuelkhair.net/index.php/en/arabic/abu-el-khair-corpus) - [OSIAN corpus](https://vlo.clarin.eu/search;jsessionid=31066390B2C9E8C6304845BA79869AC1?1&q=osian) - [Arabic Wikipedia](https://archive.org/details/arwiki-20190201) - The unshuffled version of the Arabic [OSCAR corpus](https://oscar-corpus.com/) ## Training procedure We use [the original implementation](https://github.com/google-research/bert) released by Google for pre-training. We follow the original English BERT model's hyperparameters for pre-training, unless otherwise specified. ### Preprocessing - After extracting the raw text from each corpus, we apply the following pre-processing. - We first remove invalid characters and normalize white spaces using the utilities provided by [the original BERT implementation](https://github.com/google-research/bert/blob/eedf5716ce1268e56f0a50264a88cafad334ac61/tokenization.py#L286-L297). - We also remove lines without any Arabic characters. - We then remove diacritics and kashida using [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools). - Finally, we split each line into sentences with a heuristics-based sentence segmenter. - We train a WordPiece tokenizer on the entire dataset (167 GB text) with a vocabulary size of 30,000 using [HuggingFace's tokenizers](https://github.com/huggingface/tokenizers). - We do not lowercase letters nor strip accents. ### Pre-training - The model was trained on a single cloud TPU (`v3-8`) for one million steps in total. - The first 90,000 steps were trained with a batch size of 1,024 and the rest was trained with a batch size of 256. - The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. - We use whole word masking and a duplicate factor of 10. - We set max predictions per sequence to 20 for the dataset with max sequence length of 128 tokens and 80 for the dataset with max sequence length of 512 tokens. - We use a random seed of 12345, masked language model probability of 0.15, and short sequence probability of 0.1. - The optimizer used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after. ## Evaluation results - We evaluate our pre-trained language models on five NLP tasks: NER, POS tagging, sentiment analysis, dialect identification, and poetry classification. - We fine-tune and evaluate the models using 12 dataset. - We used Hugging Face's transformers to fine-tune our CAMeLBERT models. - We used transformers `v3.1.0` along with PyTorch `v1.5.1`. - The fine-tuning was done by adding a fully connected linear layer to the last hidden state. - We use \\(F_{1}\\) score as a metric for all tasks. - Code used for fine-tuning is available [here](https://github.com/CAMeL-Lab/CAMeLBERT). 
### Results | Task | Dataset | Variant | Mix | CA | DA | MSA | MSA-1/2 | MSA-1/4 | MSA-1/8 | MSA-1/16 | | -------------------- | --------------- | ------- | ----- | ----- | ----- | ----- | ------- | ------- | ------- | -------- | | NER | ANERcorp | MSA | 80.2% | 66.2% | 74.2% | 82.4% | 82.3% | 82.0% | 82.3% | 80.5% | | POS | PATB (MSA) | MSA | 97.3% | 96.6% | 96.5% | 97.4% | 97.4% | 97.4% | 97.4% | 97.4% | | | ARZTB (EGY) | DA | 90.1% | 88.6% | 89.4% | 90.8% | 90.3% | 90.5% | 90.5% | 90.4% | | | Gumar (GLF) | DA | 97.3% | 96.5% | 97.0% | 97.1% | 97.0% | 97.0% | 97.1% | 97.0% | | SA | ASTD | MSA | 76.3% | 69.4% | 74.6% | 76.9% | 76.0% | 76.8% | 76.7% | 75.3% | | | ArSAS | MSA | 92.7% | 89.4% | 91.8% | 93.0% | 92.6% | 92.5% | 92.5% | 92.3% | | | SemEval | MSA | 69.0% | 58.5% | 68.4% | 72.1% | 70.7% | 72.8% | 71.6% | 71.2% | | DID | MADAR-26 | DA | 62.9% | 61.9% | 61.8% | 62.6% | 62.0% | 62.8% | 62.0% | 62.2% | | | MADAR-6 | DA | 92.5% | 91.5% | 92.2% | 91.9% | 91.8% | 92.2% | 92.1% | 92.0% | | | MADAR-Twitter-5 | MSA | 75.7% | 71.4% | 74.2% | 77.6% | 78.5% | 77.3% | 77.7% | 76.2% | | | NADI | DA | 24.7% | 17.3% | 20.1% | 24.9% | 24.6% | 24.6% | 24.9% | 23.8% | | Poetry | APCD | CA | 79.8% | 80.9% | 79.6% | 79.7% | 79.9% | 80.0% | 79.7% | 79.8% | ### Results (Average) | | Variant | Mix | CA | DA | MSA | MSA-1/2 | MSA-1/4 | MSA-1/8 | MSA-1/16 | | -------------------- | ------- | ----- | ----- | ----- | ----- | ------- | ------- | ------- | -------- | | Variant-wise-average<sup>[[1]](#footnote-1)</sup> | MSA | 81.9% | 75.3% | 79.9% | 83.2% | 82.9% | 83.1% | 83.0% | 82.1% | | | DA | 73.5% | 71.1% | 72.1% | 73.5% | 73.1% | 73.4% | 73.3% | 73.1% | | | CA | 79.8% | 80.9% | 79.6% | 79.7% | 79.9% | 80.0% | 79.7% | 79.8% | | Macro-Average | ALL | 78.2% | 74.0% | 76.6% | 78.9% | 78.6% | 78.8% | 78.7% | 78.2% | <a name="footnote-1">[1]</a>: Variant-wise-average refers to average over a group of tasks in the same language variant. ## Acknowledgements This research was supported with Cloud TPUs from Google’s TensorFlow Research Cloud (TFRC). ## Citation ```bibtex @inproceedings{inoue-etal-2021-interplay, title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models", author = "Inoue, Go and Alhafni, Bashar and Baimukan, Nurpeiis and Bouamor, Houda and Habash, Nizar", booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop", month = apr, year = "2021", address = "Kyiv, Ukraine (Online)", publisher = "Association for Computational Linguistics", abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.", } ```
CAMeL-Lab/bert-base-camelbert-msa-sixteenth
2021-05-18T17:19:03.000Z
[ "pytorch", "tf", "jax", "bert", "masked-lm", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0", "fill-mask" ]
fill-mask
[ ".gitattributes", "README.md", "config.json", "flax_model.msgpack", "pytorch_model.bin", "special_tokens_map.json", "tf_model.h5", "tokenizer_config.json", "vocab.txt" ]
CAMeL-Lab
18
transformers
--- language: - ar license: apache-2.0 widget: - text: "الهدف من الحياة هو [MASK] ." --- # CAMeLBERT: A collection of pre-trained models for Arabic NLP tasks ## Model description **CAMeLBERT** is a collection of BERT models pre-trained on Arabic texts with different sizes and variants. The details are described in the paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* We release eight models with different sizes and variants as follows: ||Model|Variant|Size|#Word| |-|-|:-:|-:|-:| ||`bert-base-camelbert-mix`|CA,DA,MSA|167GB|17.3B| ||`bert-base-camelbert-ca`|CA|6GB|847M| ||`bert-base-camelbert-da`|DA|54GB|5.8B| ||`bert-base-camelbert-msa`|MSA|107GB|12.6B| ||`bert-base-camelbert-msa-half`|MSA|53GB|6.3B| ||`bert-base-camelbert-msa-quarter`|MSA|27GB|3.1B| ||`bert-base-camelbert-msa-eighth`|MSA|14GB|1.6B| |✔|`bert-base-camelbert-msa-sixteenth`|MSA|6GB|746M| This model card describes **CAMeLBERT-MSA-sixteenth** (`bert-base-camelbert-msa-sixteenth`), a model pre-trained on a sixteenth of the full MSA dataset. ## Intended uses You can use the released model for either masked language modeling or next sentence prediction. However, it is mostly intended to be fine-tuned on an NLP task, such as NER, POS tagging, sentiment analysis, dialect identification, and poetry classification. We release our fine-tuninig code [here](https://github.com/CAMeL-Lab/CAMeLBERT). #### How to use You can use this model directly with a pipeline for masked language modeling: ```python >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='CAMeL-Lab/bert-base-camelbert-msa-sixteenth') >>> unmasker("الهدف من الحياة هو [MASK] .") [{'sequence': '[CLS] الهدف من الحياة هو التغيير. [SEP]', 'score': 0.08320745080709457, 'token': 7946, 'token_str': 'التغيير'}, {'sequence': '[CLS] الهدف من الحياة هو التعلم. [SEP]', 'score': 0.04305094853043556, 'token': 12554, 'token_str': 'التعلم'}, {'sequence': '[CLS] الهدف من الحياة هو العمل. [SEP]', 'score': 0.0417640283703804, 'token': 2854, 'token_str': 'العمل'}, {'sequence': '[CLS] الهدف من الحياة هو الحياة. [SEP]', 'score': 0.041371218860149384, 'token': 3696, 'token_str': 'الحياة'}, {'sequence': '[CLS] الهدف من الحياة هو المعرفة. [SEP]', 'score': 0.039794355630874634, 'token': 7344, 'token_str': 'المعرفة'}] ``` *Note*: to download our models, you would need `transformers>=3.5.0`. Otherwise, you could download the models manually. Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained('CAMeL-Lab/bert-base-camelbert-msa-sixteenth') model = AutoModel.from_pretrained('CAMeL-Lab/bert-base-camelbert-msa-sixteenth') text = "مرحبا يا عالم." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` and in TensorFlow: ```python from transformers import AutoTokenizer, TFAutoModel tokenizer = AutoTokenizer.from_pretrained('CAMeL-Lab/bert-base-camelbert-msa-sixteenth') model = TFAutoModel.from_pretrained('CAMeL-Lab/bert-base-camelbert-msa-sixteenth') text = "مرحبا يا عالم." 
encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` ## Training data - MSA - [The Arabic Gigaword Fifth Edition](https://catalog.ldc.upenn.edu/LDC2011T11) - [Abu El-Khair Corpus](http://www.abuelkhair.net/index.php/en/arabic/abu-el-khair-corpus) - [OSIAN corpus](https://vlo.clarin.eu/search;jsessionid=31066390B2C9E8C6304845BA79869AC1?1&q=osian) - [Arabic Wikipedia](https://archive.org/details/arwiki-20190201) - The unshuffled version of the Arabic [OSCAR corpus](https://oscar-corpus.com/) ## Training procedure We use [the original implementation](https://github.com/google-research/bert) released by Google for pre-training. We follow the original English BERT model's hyperparameters for pre-training, unless otherwise specified. ### Preprocessing - After extracting the raw text from each corpus, we apply the following pre-processing. - We first remove invalid characters and normalize white spaces using the utilities provided by [the original BERT implementation](https://github.com/google-research/bert/blob/eedf5716ce1268e56f0a50264a88cafad334ac61/tokenization.py#L286-L297). - We also remove lines without any Arabic characters. - We then remove diacritics and kashida using [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools). - Finally, we split each line into sentences with a heuristics-based sentence segmenter. - We train a WordPiece tokenizer on the entire dataset (167 GB text) with a vocabulary size of 30,000 using [HuggingFace's tokenizers](https://github.com/huggingface/tokenizers). - We do not lowercase letters nor strip accents. ### Pre-training - The model was trained on a single cloud TPU (`v3-8`) for one million steps in total. - The first 90,000 steps were trained with a batch size of 1,024 and the rest was trained with a batch size of 256. - The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. - We use whole word masking and a duplicate factor of 10. - We set max predictions per sequence to 20 for the dataset with max sequence length of 128 tokens and 80 for the dataset with max sequence length of 512 tokens. - We use a random seed of 12345, masked language model probability of 0.15, and short sequence probability of 0.1. - The optimizer used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after. ## Evaluation results - We evaluate our pre-trained language models on five NLP tasks: NER, POS tagging, sentiment analysis, dialect identification, and poetry classification. - We fine-tune and evaluate the models using 12 dataset. - We used Hugging Face's transformers to fine-tune our CAMeLBERT models. - We used transformers `v3.1.0` along with PyTorch `v1.5.1`. - The fine-tuning was done by adding a fully connected linear layer to the last hidden state. - We use \\(F_{1}\\) score as a metric for all tasks. - Code used for fine-tuning is available [here](https://github.com/CAMeL-Lab/CAMeLBERT). 
### Results | Task | Dataset | Variant | Mix | CA | DA | MSA | MSA-1/2 | MSA-1/4 | MSA-1/8 | MSA-1/16 | | -------------------- | --------------- | ------- | ----- | ----- | ----- | ----- | ------- | ------- | ------- | -------- | | NER | ANERcorp | MSA | 80.2% | 66.2% | 74.2% | 82.4% | 82.3% | 82.0% | 82.3% | 80.5% | | POS | PATB (MSA) | MSA | 97.3% | 96.6% | 96.5% | 97.4% | 97.4% | 97.4% | 97.4% | 97.4% | | | ARZTB (EGY) | DA | 90.1% | 88.6% | 89.4% | 90.8% | 90.3% | 90.5% | 90.5% | 90.4% | | | Gumar (GLF) | DA | 97.3% | 96.5% | 97.0% | 97.1% | 97.0% | 97.0% | 97.1% | 97.0% | | SA | ASTD | MSA | 76.3% | 69.4% | 74.6% | 76.9% | 76.0% | 76.8% | 76.7% | 75.3% | | | ArSAS | MSA | 92.7% | 89.4% | 91.8% | 93.0% | 92.6% | 92.5% | 92.5% | 92.3% | | | SemEval | MSA | 69.0% | 58.5% | 68.4% | 72.1% | 70.7% | 72.8% | 71.6% | 71.2% | | DID | MADAR-26 | DA | 62.9% | 61.9% | 61.8% | 62.6% | 62.0% | 62.8% | 62.0% | 62.2% | | | MADAR-6 | DA | 92.5% | 91.5% | 92.2% | 91.9% | 91.8% | 92.2% | 92.1% | 92.0% | | | MADAR-Twitter-5 | MSA | 75.7% | 71.4% | 74.2% | 77.6% | 78.5% | 77.3% | 77.7% | 76.2% | | | NADI | DA | 24.7% | 17.3% | 20.1% | 24.9% | 24.6% | 24.6% | 24.9% | 23.8% | | Poetry | APCD | CA | 79.8% | 80.9% | 79.6% | 79.7% | 79.9% | 80.0% | 79.7% | 79.8% | ### Results (Average) | | Variant | Mix | CA | DA | MSA | MSA-1/2 | MSA-1/4 | MSA-1/8 | MSA-1/16 | | -------------------- | ------- | ----- | ----- | ----- | ----- | ------- | ------- | ------- | -------- | | Variant-wise-average<sup>[[1]](#footnote-1)</sup> | MSA | 81.9% | 75.3% | 79.9% | 83.2% | 82.9% | 83.1% | 83.0% | 82.1% | | | DA | 73.5% | 71.1% | 72.1% | 73.5% | 73.1% | 73.4% | 73.3% | 73.1% | | | CA | 79.8% | 80.9% | 79.6% | 79.7% | 79.9% | 80.0% | 79.7% | 79.8% | | Macro-Average | ALL | 78.2% | 74.0% | 76.6% | 78.9% | 78.6% | 78.8% | 78.7% | 78.2% | <a name="footnote-1">[1]</a>: Variant-wise-average refers to average over a group of tasks in the same language variant. ## Acknowledgements This research was supported with Cloud TPUs from Google’s TensorFlow Research Cloud (TFRC). ## Citation ```bibtex @inproceedings{inoue-etal-2021-interplay, title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models", author = "Inoue, Go and Alhafni, Bashar and Baimukan, Nurpeiis and Bouamor, Houda and Habash, Nizar", booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop", month = apr, year = "2021", address = "Kyiv, Ukraine (Online)", publisher = "Association for Computational Linguistics", abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.", } ```
CAMeL-Lab/bert-base-camelbert-msa
2021-05-18T17:19:58.000Z
[ "pytorch", "tf", "jax", "bert", "masked-lm", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0", "fill-mask" ]
fill-mask
[ ".gitattributes", "README.md", "config.json", "flax_model.msgpack", "pytorch_model.bin", "special_tokens_map.json", "tf_model.h5", "tokenizer_config.json", "vocab.txt" ]
CAMeL-Lab
385
transformers
--- language: - ar license: apache-2.0 widget: - text: "الهدف من الحياة هو [MASK] ." --- # CAMeLBERT: A collection of pre-trained models for Arabic NLP tasks ## Model description **CAMeLBERT** is a collection of BERT models pre-trained on Arabic texts with different sizes and variants. The details are described in the paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* We release eight models with different sizes and variants as follows: ||Model|Variant|Size|#Word| |-|-|:-:|-:|-:| ||`bert-base-camelbert-mix`|CA,DA,MSA|167GB|17.3B| ||`bert-base-camelbert-ca`|CA|6GB|847M| ||`bert-base-camelbert-da`|DA|54GB|5.8B| |✔|`bert-base-camelbert-msa`|MSA|107GB|12.6B| ||`bert-base-camelbert-msa-half`|MSA|53GB|6.3B| ||`bert-base-camelbert-msa-quarter`|MSA|27GB|3.1B| ||`bert-base-camelbert-msa-eighth`|MSA|14GB|1.6B| ||`bert-base-camelbert-msa-sixteenth`|MSA|6GB|746M| This model card describes **CAMeLBERT-MSA** (`bert-base-camelbert-msa`), a model pre-trained on the entire MSA dataset. ## Intended uses You can use the released model for either masked language modeling or next sentence prediction. However, it is mostly intended to be fine-tuned on an NLP task, such as NER, POS tagging, sentiment analysis, dialect identification, and poetry classification. We release our fine-tuninig code [here](https://github.com/CAMeL-Lab/CAMeLBERT). #### How to use You can use this model directly with a pipeline for masked language modeling: ```python >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='CAMeL-Lab/bert-base-camelbert-msa') >>> unmasker("الهدف من الحياة هو [MASK] .") [{'sequence': '[CLS] الهدف من الحياة هو العمل. [SEP]', 'score': 0.08507660031318665, 'token': 2854, 'token_str': 'العمل'}, {'sequence': '[CLS] الهدف من الحياة هو الحياة. [SEP]', 'score': 0.058905381709337234, 'token': 3696, 'token_str': 'الحياة'}, {'sequence': '[CLS] الهدف من الحياة هو النجاح. [SEP]', 'score': 0.04660581797361374, 'token': 6232, 'token_str': 'النجاح'}, {'sequence': '[CLS] الهدف من الحياة هو الربح. [SEP]', 'score': 0.04156001657247543, 'token': 12413, 'token_str': 'الربح'}, {'sequence': '[CLS] الهدف من الحياة هو الحب. [SEP]', 'score': 0.03534102067351341, 'token': 3088, 'token_str': 'الحب'}] ``` *Note*: to download our models, you would need `transformers>=3.5.0`. Otherwise, you could download the models manually. Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained('CAMeL-Lab/bert-base-camelbert-msa') model = AutoModel.from_pretrained('CAMeL-Lab/bert-base-camelbert-msa') text = "مرحبا يا عالم." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` and in TensorFlow: ```python from transformers import AutoTokenizer, TFAutoModel tokenizer = AutoTokenizer.from_pretrained('CAMeL-Lab/bert-base-camelbert-msa') model = TFAutoModel.from_pretrained('CAMeL-Lab/bert-base-camelbert-msa') text = "مرحبا يا عالم." 
encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` ## Training data - MSA - [The Arabic Gigaword Fifth Edition](https://catalog.ldc.upenn.edu/LDC2011T11) - [Abu El-Khair Corpus](http://www.abuelkhair.net/index.php/en/arabic/abu-el-khair-corpus) - [OSIAN corpus](https://vlo.clarin.eu/search;jsessionid=31066390B2C9E8C6304845BA79869AC1?1&q=osian) - [Arabic Wikipedia](https://archive.org/details/arwiki-20190201) - The unshuffled version of the Arabic [OSCAR corpus](https://oscar-corpus.com/) ## Training procedure We use [the original implementation](https://github.com/google-research/bert) released by Google for pre-training. We follow the original English BERT model's hyperparameters for pre-training, unless otherwise specified. ### Preprocessing - After extracting the raw text from each corpus, we apply the following pre-processing. - We first remove invalid characters and normalize white spaces using the utilities provided by [the original BERT implementation](https://github.com/google-research/bert/blob/eedf5716ce1268e56f0a50264a88cafad334ac61/tokenization.py#L286-L297). - We also remove lines without any Arabic characters. - We then remove diacritics and kashida using [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools). - Finally, we split each line into sentences with a heuristics-based sentence segmenter. - We train a WordPiece tokenizer on the entire dataset (167 GB text) with a vocabulary size of 30,000 using [HuggingFace's tokenizers](https://github.com/huggingface/tokenizers). - We do not lowercase letters nor strip accents. ### Pre-training - The model was trained on a single cloud TPU (`v3-8`) for one million steps in total. - The first 90,000 steps were trained with a batch size of 1,024 and the rest was trained with a batch size of 256. - The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. - We use whole word masking and a duplicate factor of 10. - We set max predictions per sequence to 20 for the dataset with max sequence length of 128 tokens and 80 for the dataset with max sequence length of 512 tokens. - We use a random seed of 12345, masked language model probability of 0.15, and short sequence probability of 0.1. - The optimizer used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after. ## Evaluation results - We evaluate our pre-trained language models on five NLP tasks: NER, POS tagging, sentiment analysis, dialect identification, and poetry classification. - We fine-tune and evaluate the models using 12 dataset. - We used Hugging Face's transformers to fine-tune our CAMeLBERT models. - We used transformers `v3.1.0` along with PyTorch `v1.5.1`. - The fine-tuning was done by adding a fully connected linear layer to the last hidden state. - We use \\(F_{1}\\) score as a metric for all tasks. - Code used for fine-tuning is available [here](https://github.com/CAMeL-Lab/CAMeLBERT). 
### Results | Task | Dataset | Variant | Mix | CA | DA | MSA | MSA-1/2 | MSA-1/4 | MSA-1/8 | MSA-1/16 | | -------------------- | --------------- | ------- | ----- | ----- | ----- | ----- | ------- | ------- | ------- | -------- | | NER | ANERcorp | MSA | 80.2% | 66.2% | 74.2% | 82.4% | 82.3% | 82.0% | 82.3% | 80.5% | | POS | PATB (MSA) | MSA | 97.3% | 96.6% | 96.5% | 97.4% | 97.4% | 97.4% | 97.4% | 97.4% | | | ARZTB (EGY) | DA | 90.1% | 88.6% | 89.4% | 90.8% | 90.3% | 90.5% | 90.5% | 90.4% | | | Gumar (GLF) | DA | 97.3% | 96.5% | 97.0% | 97.1% | 97.0% | 97.0% | 97.1% | 97.0% | | SA | ASTD | MSA | 76.3% | 69.4% | 74.6% | 76.9% | 76.0% | 76.8% | 76.7% | 75.3% | | | ArSAS | MSA | 92.7% | 89.4% | 91.8% | 93.0% | 92.6% | 92.5% | 92.5% | 92.3% | | | SemEval | MSA | 69.0% | 58.5% | 68.4% | 72.1% | 70.7% | 72.8% | 71.6% | 71.2% | | DID | MADAR-26 | DA | 62.9% | 61.9% | 61.8% | 62.6% | 62.0% | 62.8% | 62.0% | 62.2% | | | MADAR-6 | DA | 92.5% | 91.5% | 92.2% | 91.9% | 91.8% | 92.2% | 92.1% | 92.0% | | | MADAR-Twitter-5 | MSA | 75.7% | 71.4% | 74.2% | 77.6% | 78.5% | 77.3% | 77.7% | 76.2% | | | NADI | DA | 24.7% | 17.3% | 20.1% | 24.9% | 24.6% | 24.6% | 24.9% | 23.8% | | Poetry | APCD | CA | 79.8% | 80.9% | 79.6% | 79.7% | 79.9% | 80.0% | 79.7% | 79.8% | ### Results (Average) | | Variant | Mix | CA | DA | MSA | MSA-1/2 | MSA-1/4 | MSA-1/8 | MSA-1/16 | | -------------------- | ------- | ----- | ----- | ----- | ----- | ------- | ------- | ------- | -------- | | Variant-wise-average<sup>[[1]](#footnote-1)</sup> | MSA | 81.9% | 75.3% | 79.9% | 83.2% | 82.9% | 83.1% | 83.0% | 82.1% | | | DA | 73.5% | 71.1% | 72.1% | 73.5% | 73.1% | 73.4% | 73.3% | 73.1% | | | CA | 79.8% | 80.9% | 79.6% | 79.7% | 79.9% | 80.0% | 79.7% | 79.8% | | Macro-Average | ALL | 78.2% | 74.0% | 76.6% | 78.9% | 78.6% | 78.8% | 78.7% | 78.2% | <a name="footnote-1">[1]</a>: Variant-wise-average refers to average over a group of tasks in the same language variant. ## Acknowledgements This research was supported with Cloud TPUs from Google’s TensorFlow Research Cloud (TFRC). ## Citation ```bibtex @inproceedings{inoue-etal-2021-interplay, title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models", author = "Inoue, Go and Alhafni, Bashar and Baimukan, Nurpeiis and Bouamor, Houda and Habash, Nizar", booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop", month = apr, year = "2021", address = "Kyiv, Ukraine (Online)", publisher = "Association for Computational Linguistics", abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.", } ```
CLEE/CLEE
2021-05-17T13:29:33.000Z
[]
[ ".gitattributes" ]
CLEE
0
CTBC/ATS
2020-12-12T15:10:21.000Z
[]
[ ".gitattributes" ]
CTBC
0
Callidior/bert2bert-base-arxiv-titlegen
2021-03-04T09:49:47.000Z
[ "pytorch", "encoder-decoder", "seq2seq", "en", "dataset:arxiv_dataset", "transformers", "summarization", "license:apache-2.0", "text2text-generation" ]
summarization
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "vocab.txt" ]
Callidior
103
transformers
--- language: - en tags: - summarization license: apache-2.0 datasets: - arxiv_dataset metrics: - rouge widget: - text: "The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data." --- # Paper Title Generator Generates titles for computer science papers given an abstract. The model is a BERT2BERT Encoder-Decoder using the official `bert-base-uncased` checkpoint as initialization for the encoder and decoder. It was fine-tuned on 318,500 computer science papers posted on arXiv.org between 2007 and 2020 and achieved a 26.3% Rouge2 F1-Score on held-out validation data. **Live Demo:** [https://paper-titles.ey.r.appspot.com/](https://paper-titles.ey.r.appspot.com/)
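As a minimal usage sketch, assuming the checkpoint works with the standard `summarization` pipeline (the abstract below is the widget example from this card, shortened to two sentences):

```python
from transformers import pipeline

# Load the BERT2BERT title generator from the Hub
title_generator = pipeline("summarization", model="Callidior/bert2bert-base-arxiv-titlegen")

abstract = (
    "The dominant sequence transduction models are based on complex recurrent or "
    "convolutional neural networks in an encoder-decoder configuration. The best "
    "performing models also connect the encoder and decoder through an attention mechanism."
)

# The generated "summary" is the proposed paper title
print(title_generator(abstract, max_length=32)[0]["summary_text"])
```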
CallumRai/HansardGPT2
2021-05-21T09:33:25.000Z
[ "pytorch", "jax", "gpt2", "lm-head", "causal-lm", "transformers", "text-generation" ]
text-generation
[ ".gitattributes", ".gitignore", "README.md", "added_tokens.json", "config.json", "flax_model.msgpack", "merges.txt", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "vocab.json" ]
CallumRai
18
transformers
A PyTorch GPT-2 model trained on Hansard (the official report of UK parliamentary debates) from 2019-01-01 to 2020-06-01. For more information, see: https://github.com/CallumRai/Hansard/
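As a rough usage sketch (the prompt string is an illustrative invention, not taken from the repository), the model can be loaded with the standard `text-generation` pipeline:

```python
from transformers import pipeline

# Load the Hansard-finetuned GPT-2 model from the Hub
generator = pipeline("text-generation", model="CallumRai/HansardGPT2")

# Generate a continuation of an example debate-style prompt
output = generator("The honourable member for", max_length=50, num_return_sequences=1)
print(output[0]["generated_text"])
```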
Cameron/BERT-Jigsaw
2021-05-18T17:21:10.000Z
[ "pytorch", "jax", "bert", "text-classification", "transformers" ]
text-classification
[ ".gitattributes", "config.json", "flax_model.msgpack", "optimizer.pt", "pytorch_model.bin", "scheduler.pt", "special_tokens_map.json", "tokenizer_config.json", "trainer_state.json", "training_args.bin", "vocab.txt" ]
Cameron
17
transformers
Cameron/BERT-SBIC-offensive
2021-05-18T17:22:32.000Z
[ "pytorch", "jax", "bert", "text-classification", "transformers" ]
text-classification
[ ".gitattributes", "config.json", "flax_model.msgpack", "optimizer.pt", "pytorch_model.bin", "scheduler.pt", "special_tokens_map.json", "tokenizer_config.json", "trainer_state.json", "training_args.bin", "vocab.txt" ]
Cameron
10
transformers
Cameron/BERT-SBIC-targetcategory
2021-05-18T17:23:42.000Z
[ "pytorch", "jax", "bert", "text-classification", "transformers" ]
text-classification
[ ".gitattributes", "config.json", "flax_model.msgpack", "optimizer.pt", "pytorch_model.bin", "scheduler.pt", "special_tokens_map.json", "tokenizer_config.json", "trainer_state.json", "training_args.bin", "vocab.txt" ]
Cameron
17
transformers
Cameron/BERT-eec-emotion
2021-05-18T17:25:51.000Z
[ "pytorch", "jax", "bert", "text-classification", "transformers" ]
text-classification
[ ".gitattributes", "config.json", "flax_model.msgpack", "optimizer.pt", "pytorch_model.bin", "scheduler.pt", "special_tokens_map.json", "tokenizer_config.json", "trainer_state.json", "training_args.bin", "vocab.txt" ]
Cameron
19
transformers
Cameron/BERT-jigsaw-identityhate
2021-05-18T17:27:44.000Z
[ "pytorch", "jax", "bert", "text-classification", "transformers" ]
text-classification
[ ".gitattributes", "config.json", "flax_model.msgpack", "optimizer.pt", "pytorch_model.bin", "scheduler.pt", "special_tokens_map.json", "tokenizer_config.json", "trainer_state.json", "training_args.bin", "vocab.txt" ]
Cameron
32
transformers
Cameron/BERT-jigsaw-severetoxic
2021-05-18T17:28:58.000Z
[ "pytorch", "jax", "bert", "text-classification", "transformers" ]
text-classification
[ ".gitattributes", "config.json", "flax_model.msgpack", "optimizer.pt", "pytorch_model.bin", "scheduler.pt", "special_tokens_map.json", "tokenizer_config.json", "trainer_state.json", "training_args.bin", "vocab.txt" ]
Cameron
15
transformers
Cameron/BERT-mdgender-convai-binary
2021-05-18T17:30:21.000Z
[ "pytorch", "jax", "bert", "text-classification", "transformers" ]
text-classification
[ ".gitattributes", "config.json", "flax_model.msgpack", "optimizer.pt", "pytorch_model.bin", "scheduler.pt", "special_tokens_map.json", "tokenizer_config.json", "trainer_state.json", "training_args.bin", "vocab.txt" ]
Cameron
11
transformers
Cameron/BERT-mdgender-convai-ternary
2021-05-18T17:31:21.000Z
[ "pytorch", "jax", "bert", "text-classification", "transformers" ]
text-classification
[ ".gitattributes", "config.json", "flax_model.msgpack", "optimizer.pt", "pytorch_model.bin", "scheduler.pt", "special_tokens_map.json", "tokenizer_config.json", "trainer_state.json", "training_args.bin", "vocab.txt" ]
Cameron
7
transformers
Cameron/BERT-mdgender-wizard
2021-05-18T17:33:48.000Z
[ "pytorch", "jax", "bert", "text-classification", "transformers" ]
text-classification
[ ".gitattributes", "config.json", "flax_model.msgpack", "optimizer.pt", "pytorch_model.bin", "scheduler.pt", "special_tokens_map.json", "tokenizer_config.json", "trainer_state.json", "training_args.bin", "vocab.txt" ]
Cameron
11
transformers
Cameron/BERT-rtgender-opgender-annotations
2021-05-18T17:34:57.000Z
[ "pytorch", "jax", "bert", "text-classification", "transformers" ]
text-classification
[ ".gitattributes", "config.json", "flax_model.msgpack", "optimizer.pt", "pytorch_model.bin", "scheduler.pt", "special_tokens_map.json", "tokenizer_config.json", "trainer_state.json", "training_args.bin", "vocab.txt" ]
Cameron
16
transformers
Capreolus/bert-base-msmarco
2021-05-18T17:35:58.000Z
[ "pytorch", "tf", "jax", "bert", "text-classification", "arxiv:2008.09093", "transformers" ]
text-classification
[ ".gitattributes", "README.md", "config.json", "flax_model.msgpack", "pytorch_model.bin", "special_tokens_map.json", "tf_model.h5", "tokenizer_config.json", "vocab.txt" ]
Capreolus
143
transformers
# capreolus/bert-base-msmarco ## Model description BERT-Base model (`google/bert_uncased_L-12_H-768_A-12`) fine-tuned on the MS MARCO passage classification task. It is intended to be used as a `ForSequenceClassification` model; see the [Capreolus BERT-MaxP implementation](https://github.com/capreolus-ir/capreolus/blob/master/capreolus/reranker/TFBERTMaxP.py) for a usage example. This corresponds to the BERT-Base model used to initialize BERT-MaxP and PARADE variants in [PARADE: Passage Representation Aggregation for Document Reranking](https://arxiv.org/abs/2008.09093) by Li et al. It was converted from the released [TFv1 checkpoint](https://zenodo.org/record/3974431/files/vanilla_bert_base_on_MSMARCO.tar.gz). Please cite the PARADE paper if you use these weights.
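As an illustrative sketch only (the query and passage strings are invented, and the exact input format and label order should be checked against the Capreolus implementation linked above), the checkpoint can be loaded as a query-passage reranker:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("Capreolus/bert-base-msmarco")
model = AutoModelForSequenceClassification.from_pretrained("Capreolus/bert-base-msmarco")

query = "what is the capital of france"                        # hypothetical query
passage = "Paris is the capital and largest city of France."   # hypothetical passage

# Encode the (query, passage) pair as a single BERT sequence pair
inputs = tokenizer(query, passage, return_tensors="pt", truncation=True, max_length=512)
with torch.no_grad():
    logits = model(**inputs).logits

# Relevance scores; which column means "relevant" depends on the fine-tuning setup
print(logits)
```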
Capreolus/birch-bert-large-car_mb
2021-05-18T17:38:06.000Z
[ "pytorch", "tf", "jax", "bert", "transformers" ]
[ ".gitattributes", "config.json", "flax_model.msgpack", "pytorch_model.bin", "special_tokens_map.json", "tf_model.h5", "tokenizer_config.json", "vocab.txt" ]
Capreolus
17
transformers
Capreolus/birch-bert-large-mb
2021-05-18T17:40:31.000Z
[ "pytorch", "tf", "jax", "bert", "transformers" ]
[ ".gitattributes", "config.json", "flax_model.msgpack", "pytorch_model.bin", "special_tokens_map.json", "tf_model.h5", "tokenizer_config.json", "vocab.txt" ]
Capreolus
14
transformers
Capreolus/birch-bert-large-msmarco_mb
2021-05-18T17:43:33.000Z
[ "pytorch", "tf", "jax", "bert", "transformers" ]
[ ".gitattributes", "config.json", "flax_model.msgpack", "pytorch_model.bin", "special_tokens_map.json", "tf_model.h5", "tokenizer_config.json", "vocab.txt" ]
Capreolus
75
transformers
Capreolus/electra-base-msmarco
2020-09-08T14:53:10.000Z
[ "pytorch", "tf", "electra", "text-classification", "arxiv:2008.09093", "transformers" ]
text-classification
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "special_tokens_map.json", "tf_model.h5", "tokenizer_config.json", "vocab.txt" ]
Capreolus
360
transformers
# capreolus/electra-base-msmarco ## Model description ELECTRA-Base model (`google/electra-base-discriminator`) fine-tuned on the MS MARCO passage classification task. It is intended to be used as a `ForSequenceClassification` model, but requires some modification since it contains a BERT classification head rather than the standard ELECTRA classification head. See the [TFElectraRelevanceHead](https://github.com/capreolus-ir/capreolus/blob/master/capreolus/reranker/TFBERTMaxP.py) in the Capreolus BERT-MaxP implementation for a usage example. This corresponds to the ELECTRA-Base model used to initialize PARADE (ELECTRA) in [PARADE: Passage Representation Aggregation for Document Reranking](https://arxiv.org/abs/2008.09093) by Li et al. It was converted from the released [TFv1 checkpoint](https://zenodo.org/record/3974431/files/vanilla_electra_base_on_MSMARCO.tar.gz). Please cite the PARADE paper if you use these weights.
Cat/Kitty
2020-12-21T15:44:34.000Z
[]
[ ".gitattributes" ]
Cat
0
Chaima/TunBerto
2021-04-01T12:56:56.000Z
[]
[ ".gitattributes" ]
Chaima
0
ChaitanyaU/FineTuneLM
2021-01-13T10:27:29.000Z
[]
[ ".gitattributes", "FineTuneLM/config.json", "FineTuneLM/pytorch_model.bin", "FineTuneLM/special_tokens_map.json", "FineTuneLM/tokenizer_config.json", "FineTuneLM/training_args.bin", "FineTuneLM/vocab.txt" ]
ChaitanyaU
0
Chakita/Friends
2021-06-04T10:36:40.000Z
[ "pytorch", "gpt2", "lm-head", "causal-lm", "transformers", "conversational", "text-generation" ]
conversational
[ ".gitattributes", "README.md", "config.json", "merges.txt", "pytorch_model.bin", "special_tokens_map.json", "tokenizer.json", "tokenizer_config.json", "vocab.json" ]
Chakita
67
transformers
---
tags:
- conversational
---

# Model trained on F.R.I.E.N.D.S dialogue
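A minimal chat sketch (not from the original card), assuming this checkpoint follows the usual DialoGPT convention of EOS-separated turns:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Chakita/Friends")
model = AutoModelForCausalLM.from_pretrained("Chakita/Friends")

# Single-turn exchange: user text + EOS token, then sample a reply
input_ids = tokenizer.encode("How you doin'?" + tokenizer.eos_token, return_tensors="pt")
reply_ids = model.generate(input_ids, max_length=100, pad_token_id=tokenizer.eos_token_id,
                           do_sample=True, top_p=0.9)
print(tokenizer.decode(reply_ids[:, input_ids.shape[-1]:][0], skip_special_tokens=True))
```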
Charlotte/text2dm_models
2021-04-28T15:42:33.000Z
[]
[ ".gitattributes" ]
Charlotte
0
ChristopherA08/IndoELECTRA
2021-02-04T06:23:59.000Z
[ "pytorch", "electra", "pretraining", "id", "dataset:oscar", "transformers" ]
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "vocab.txt" ]
ChristopherA08
180
transformers
---
language: id
datasets:
- oscar
---

# IndoELECTRA (Indonesian ELECTRA Model)

## Model description

ELECTRA is a new method for self-supervised language representation learning. This repository contains a pre-trained ELECTRA Base model (TensorFlow 1.15.0) trained on a large Indonesian corpus (~16GB of raw text | ~2B Indonesian words). IndoELECTRA is a pre-trained language model based on the ELECTRA architecture for the Indonesian language. This is the base version, which uses the electra-base config.

## Intended uses & limitations

#### How to use

```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("ChristopherA08/IndoELECTRA")
model = AutoModel.from_pretrained("ChristopherA08/IndoELECTRA")

tokenizer.encode("hai aku mau makan.")
# [2, 8078, 1785, 2318, 1946, 18, 4]
```

## Training procedure

The model was trained using Google's original TensorFlow code on an eight-core Google Cloud TPU v2. We used a Google Cloud Storage bucket for persistent storage of training data and models.
Cinnamon/electra-small-japanese-discriminator
2020-12-11T21:26:13.000Z
[ "pytorch", "electra", "pretraining", "ja", "transformers", "license:apache-2.0" ]
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "vocab.txt" ]
Cinnamon
189
transformers
---
language: ja
license: apache-2.0
---

## Japanese ELECTRA-small

We provide a Japanese **ELECTRA-Small** model, as described in [ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators](https://openreview.net/pdf?id=r1xMH1BtvB).

Our pretraining process employs subword units derived from the [Japanese Wikipedia](https://dumps.wikimedia.org/jawiki/latest), using the [Byte-Pair Encoding](https://www.aclweb.org/anthology/P16-1162.pdf) method and building on an initial tokenization with [mecab-ipadic-NEologd](https://github.com/neologd/mecab-ipadic-neologd). For optimal performance, please take care to set your MeCab dictionary appropriately.

## How to use the discriminator in `transformers`

```
from transformers import BertJapaneseTokenizer, ElectraForPreTraining

tokenizer = BertJapaneseTokenizer.from_pretrained('Cinnamon/electra-small-japanese-discriminator',
                                                  mecab_kwargs={"mecab_option": "-d /usr/lib/x86_64-linux-gnu/mecab/dic/mecab-ipadic-neologd"})
model = ElectraForPreTraining.from_pretrained('Cinnamon/electra-small-japanese-discriminator')
```
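A follow-up sketch (not from the original card) that runs the loaded discriminator, assuming the tokenizer and model from the snippet above loaded successfully; the example sentence and the zero threshold are illustrative assumptions:

```python
import torch

sentence = "京都大学で自然言語処理を勉強する。"
inputs = tokenizer(sentence, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # one logit per token; > 0 suggests "replaced"

tokens = tokenizer.convert_ids_to_tokens(inputs.input_ids[0])
for token, score in zip(tokens, logits[0].tolist()):
    print(token, "replaced" if score > 0 else "original")
```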
Cinnamon/electra-small-japanese-generator
2020-12-11T21:26:17.000Z
[ "pytorch", "electra", "masked-lm", "ja", "transformers", "fill-mask" ]
fill-mask
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "vocab.txt" ]
Cinnamon
435
transformers
---
language: ja
---

## Japanese ELECTRA-small

We provide a Japanese **ELECTRA-Small** model, as described in [ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators](https://openreview.net/pdf?id=r1xMH1BtvB).

Our pretraining process employs subword units derived from the [Japanese Wikipedia](https://dumps.wikimedia.org/jawiki/latest), using the [Byte-Pair Encoding](https://www.aclweb.org/anthology/P16-1162.pdf) method and building on an initial tokenization with [mecab-ipadic-NEologd](https://github.com/neologd/mecab-ipadic-neologd). For optimal performance, please take care to set your MeCab dictionary appropriately.

```
# ELECTRA-small generator usage
from transformers import BertJapaneseTokenizer, ElectraForMaskedLM

tokenizer = BertJapaneseTokenizer.from_pretrained('Cinnamon/electra-small-japanese-generator',
                                                  mecab_kwargs={"mecab_option": "-d /usr/lib/x86_64-linux-gnu/mecab/dic/mecab-ipadic-neologd"})
model = ElectraForMaskedLM.from_pretrained('Cinnamon/electra-small-japanese-generator')
```
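A follow-up fill-mask sketch (not from the original card), assuming the tokenizer and model from the snippet above loaded successfully; the example sentence is an illustrative assumption:

```python
import torch

text = "京都大学で自然言語処理を[MASK]する。"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Take the highest-scoring prediction at each [MASK] position
mask_positions = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
predicted_ids = logits[0, mask_positions].argmax(dim=-1)
print(tokenizer.decode(predicted_ids))
```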
CodeNinja1126/bert-p-encoder
2021-05-12T01:26:46.000Z
[ "pytorch" ]
[ ".gitattributes", "config.json", "pytorch_model.bin" ]
CodeNinja1126
6
CodeNinja1126/bert-q-encoder
2021-05-12T01:31:17.000Z
[ "pytorch" ]
[ ".gitattributes", "config.json", "pytorch_model.bin" ]
CodeNinja1126
5
CodeNinja1126/koelectra-model
2021-04-18T07:34:52.000Z
[]
[ ".gitattributes" ]
CodeNinja1126
0
CodeNinja1126/test-model
2021-05-18T17:45:32.000Z
[ "pytorch", "jax", "bert", "text-classification", "transformers" ]
text-classification
[ ".gitattributes", "config.json", "flax_model.msgpack", "optimizer.pt", "pytorch_model.bin", "scheduler.pt", "trainer_state.json", "training_args.bin" ]
CodeNinja1126
12
transformers
CodeNinja1126/xlm-roberta-large-kor-mrc
2021-05-19T06:11:31.000Z
[ "pytorch", "xlm-roberta", "question-answering", "transformers" ]
question-answering
[ ".gitattributes", "config.json", "pytorch_model.bin" ]
CodeNinja1126
35
transformers
CoderEFE/DialoGPT-marxbot
2021-06-07T01:24:25.000Z
[ "pytorch", "gpt2", "lm-head", "causal-lm", "transformers", "conversational", "text-generation" ]
conversational
[ ".gitattributes", "README.md", "config.json", "merges.txt", "pytorch_model.bin", "special_tokens_map.json", "tokenizer.json", "tokenizer_config.json", "vocab.json" ]
CoderEFE
125
transformers
---
tags:
- conversational
---

Chat with the model:

```python
import torch
from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained("r3dhummingbird/DialoGPT-marxbot")
model = AutoModelWithLMHead.from_pretrained("r3dhummingbird/DialoGPT-marxbot")

# Let's chat for 4 lines
for step in range(4):
    # encode the new user input, add the eos_token and return a tensor in PyTorch
    new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')

    # append the new user input tokens to the chat history
    bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids

    # generate a response while limiting the total chat history to 200 tokens
    chat_history_ids = model.generate(
        bot_input_ids, max_length=200,
        pad_token_id=tokenizer.eos_token_id,
        no_repeat_ngram_size=3,
        do_sample=True,
        top_k=100,
        top_p=0.7,
        temperature=0.8
    )

    # pretty print the last output tokens from the bot
    print("MarxBot: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
```
CoderEFE/DialoGPT-medium-marx
2021-06-05T07:08:34.000Z
[ "pytorch", "gpt2", "lm-head", "causal-lm", "transformers", "text-generation" ]
text-generation
[ ".gitattributes", "README.md", "TAGS.md", "config.json", "merges.txt", "pytorch_model.bin", "special_tokens_map.json", "tokenizer.json", "tokenizer_config.json", "vocab.json" ]
CoderEFE
19
transformers