Columns: url (string, 23 to 7.17k characters); text (string, 0 to 1.65M characters)
https://huggingface.co/Creo
Moll Creo Research interests None yet Organizations models None public yet datasets None public yet
https://huggingface.co/hasanawais
Hasan Awais hasanawais hasanawais Research interests None yet Organizations models None public yet datasets None public yet
https://huggingface.co/enkufeleke
Enku Feleke enkufeleke Research interests None yet Organizations models None public yet datasets None public yet
https://huggingface.co/lingjies
Lingjie Sang lingjies Research interests None yet Organizations models None public yet datasets None public yet
https://huggingface.co/Jar111
Hen111 Jar111 Research interests None yet Organizations models None public yet datasets None public yet
https://huggingface.co/keerthanpg
Keerthana Gopalakrishnan keerthanpg keerthanpg Research interests None yet Organizations Papers 2 arxiv:2307.15818 arxiv:2309.10150 models 1 keerthanpg/robotics_transformer Updated Dec 9, 2022 datasets None public yet
https://huggingface.co/dwj926
DJ dwj926 Research interests None yet Organizations models None public yet datasets None public yet
https://huggingface.co/jackcao
Xujia Cao jackcao JackCaoG Research interests None yet Organizations models None public yet datasets None public yet
https://huggingface.co/bert-base-cased
BERT base model (cased) Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in this paper and first released in this repository. This model is case-sensitive: it makes a difference between english and English. Disclaimer: The team releasing BERT did not write a model card for this model so this model card has been written by the Hugging Face team. Model description BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives: Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence. Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to predict if the two sentences were following each other or not. This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the BERT model as inputs. Intended uses & limitations You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task. See the model hub to look for fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at model like GPT2. How to use You can use this model directly with a pipeline for masked language modeling: >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='bert-base-cased') >>> unmasker("Hello I'm a [MASK] model.") [{'sequence': "[CLS] Hello I'm a fashion model. [SEP]", 'score': 0.09019174426794052, 'token': 4633, 'token_str': 'fashion'}, {'sequence': "[CLS] Hello I'm a new model. [SEP]", 'score': 0.06349995732307434, 'token': 1207, 'token_str': 'new'}, {'sequence': "[CLS] Hello I'm a male model. [SEP]", 'score': 0.06228214129805565, 'token': 2581, 'token_str': 'male'}, {'sequence': "[CLS] Hello I'm a professional model. [SEP]", 'score': 0.0441727414727211, 'token': 1848, 'token_str': 'professional'}, {'sequence': "[CLS] Hello I'm a super model. [SEP]", 'score': 0.03326151892542839, 'token': 7688, 'token_str': 'super'}] Here is how to use this model to get the features of a given text in PyTorch: from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('bert-base-cased') model = BertModel.from_pretrained("bert-base-cased") text = "Replace me by any text you'd like." 
encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) and in TensorFlow: from transformers import BertTokenizer, TFBertModel tokenizer = BertTokenizer.from_pretrained('bert-base-cased') model = TFBertModel.from_pretrained("bert-base-cased") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) Limitations and bias Even if the training data used for this model could be characterized as fairly neutral, this model can have biased predictions: >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='bert-base-cased') >>> unmasker("The man worked as a [MASK].") [{'sequence': '[CLS] The man worked as a lawyer. [SEP]', 'score': 0.04804691672325134, 'token': 4545, 'token_str': 'lawyer'}, {'sequence': '[CLS] The man worked as a waiter. [SEP]', 'score': 0.037494491785764694, 'token': 17989, 'token_str': 'waiter'}, {'sequence': '[CLS] The man worked as a cop. [SEP]', 'score': 0.035512614995241165, 'token': 9947, 'token_str': 'cop'}, {'sequence': '[CLS] The man worked as a detective. [SEP]', 'score': 0.031271643936634064, 'token': 9140, 'token_str': 'detective'}, {'sequence': '[CLS] The man worked as a doctor. [SEP]', 'score': 0.027423162013292313, 'token': 3995, 'token_str': 'doctor'}] >>> unmasker("The woman worked as a [MASK].") [{'sequence': '[CLS] The woman worked as a nurse. [SEP]', 'score': 0.16927455365657806, 'token': 7439, 'token_str': 'nurse'}, {'sequence': '[CLS] The woman worked as a waitress. [SEP]', 'score': 0.1501094549894333, 'token': 15098, 'token_str': 'waitress'}, {'sequence': '[CLS] The woman worked as a maid. [SEP]', 'score': 0.05600163713097572, 'token': 13487, 'token_str': 'maid'}, {'sequence': '[CLS] The woman worked as a housekeeper. [SEP]', 'score': 0.04838843643665314, 'token': 26458, 'token_str': 'housekeeper'}, {'sequence': '[CLS] The woman worked as a cook. [SEP]', 'score': 0.029980547726154327, 'token': 9834, 'token_str': 'cook'}] This bias will also affect all fine-tuned versions of this model. Training data The BERT model was pretrained on BookCorpus, a dataset consisting of 11,038 unpublished books and English Wikipedia (excluding lists, tables and headers). Training procedure Preprocessing The texts are tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are then of the form: [CLS] Sentence A [SEP] Sentence B [SEP] With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two "sentences" has a combined length of less than 512 tokens. The details of the masking procedure for each sentence are the following: 15% of the tokens are masked. In 80% of the cases, the masked tokens are replaced by [MASK]. In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace. In the 10% remaining cases, the masked tokens are left as is. Pretraining The model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size of 256. The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. 
The optimizer used is Adam with a learning rate of 1e-4, β₁ = 0.9 and β₂ = 0.999, a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after. Evaluation results When fine-tuned on downstream tasks, this model achieves the following results: GLUE test results: MNLI-(m/mm) 84.6/83.4, QQP 71.2, QNLI 90.5, SST-2 93.5, CoLA 52.1, STS-B 85.8, MRPC 88.9, RTE 66.4, Average 79.6 BibTeX entry and citation info @article{DBLP:journals/corr/abs-1810-04805, author = {Jacob Devlin and Ming{-}Wei Chang and Kenton Lee and Kristina Toutanova}, title = {{BERT:} Pre-training of Deep Bidirectional Transformers for Language Understanding}, journal = {CoRR}, volume = {abs/1810.04805}, year = {2018}, url = {http://arxiv.org/abs/1810.04805}, archivePrefix = {arXiv}, eprint = {1810.04805}, timestamp = {Tue, 30 Oct 2018 20:39:56 +0100}, biburl = {https://dblp.org/rec/journals/corr/abs-1810-04805.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} }
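The card above shows fill-mask and feature-extraction usage but gives no example for the next sentence prediction objective it mentions. A minimal sketch (not part of the original model card, assuming the standard transformers BertForNextSentencePrediction head) might look like this:

import torch
from transformers import BertTokenizer, BertForNextSentencePrediction

tokenizer = BertTokenizer.from_pretrained('bert-base-cased')
model = BertForNextSentencePrediction.from_pretrained('bert-base-cased')

sentence_a = "The man went to the store."
sentence_b = "He bought a gallon of milk."

# Encode the pair as "[CLS] A [SEP] B [SEP]" and score it with the NSP head.
encoding = tokenizer(sentence_a, sentence_b, return_tensors='pt')
with torch.no_grad():
    logits = model(**encoding).logits  # shape (1, 2)

# In transformers, index 0 means "B follows A", index 1 means "B is random".
probs = torch.softmax(logits, dim=-1)
print(f"P(B follows A) = {probs[0, 0].item():.3f}")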
https://huggingface.co/ybelkada
Younes Belkada PRO ybelkada Research interests Large Language Models, Quantization, Vision, Multimodality, Diffusion models Organizations Collections 1 Papers 4 spaces 17 models 78 datasets 9
https://huggingface.co/ebrevdo
Eugene Brevdo ebrevdo ebrevdo Research interests None yet Organizations models None public yet datasets None public yet
https://huggingface.co/paragsmhatre
Parag Sanjay Mhatre paragsmhatre Research interests None yet Organizations models 4 paragsmhatre/20_news_group_classifier_open_web_text Text Classification • Updated Jan 10 • 4 paragsmhatre/20_news_group_classifier Text Classification • Updated Jan 8 • 77 • 1 paragsmhatre/my_awesome_model Updated Jan 6 paragsmhatre/my_awesome_qa_model Updated Dec 27, 2022 datasets None public yet
https://huggingface.co/collections/google/bert-release-64ff5e7a4be99045d1896dbc
google 's Collections BERT release updated 22 days ago Regroups the original BERT models released by the Google team. Except for the models marked otherwise, the checkpoints support English. bert-base-cased Fill-Mask • Updated Nov 16, 2022 • 5.9M • 146 Note Base BERT model, smaller variant. Trained on the "cased" dataset, meaning that it wasn't lowercase and all accents were kept. 12-layer, 768-hidden, 12-heads , 110M parameters bert-base-uncased Fill-Mask • Updated Jun 30 • 45.7M • 1.12k Note Base BERT model, smaller variant. Trained on the "uncased" dataset, meaning that it was lowercase and all accents were removed. 12-layer, 768-hidden, 12-heads , 110M parameters bert-large-cased Fill-Mask • Updated Apr 6 • 87.6k • 12 Note Large BERT model, larger variant. Trained on the "cased" dataset, meaning that it wasn't lowercase and all accents were kept. 24-layer, 1024-hidden, 16-heads, 340M parameters bert-large-uncased Fill-Mask • Updated Nov 14, 2022 • 984k • 49 Note Large BERT model, larger variant. Trained on the "uncased" dataset, meaning that it was lowercase and all accents were removed. 24-layer, 1024-hidden, 16-heads, 340M parameters bert-base-multilingual-cased Fill-Mask • Updated Nov 16, 2022 • 2.3M • 233 Note Base BERT model, smaller variant. The list of supported languages is available here: https://github.com/google-research/bert/blob/master/multilingual.md#list-of-languages 104 languages, 12-layer, 768-hidden, 12-heads, 110M parameters bert-base-chinese Fill-Mask • Updated Mar 21 • 418k • 562 Note Base BERT model, smaller variant. Chinese Simplified and Traditional, 12-layer, 768-hidden, 12-heads, 110M parameters bert-large-cased-whole-word-masking Fill-Mask • Updated May 18, 2021 • 1.3k • 4 Note Large BERT model, larger variant. Trained on the "cased" dataset, meaning that it wasn't lowercase and all accents were kept. Whole word masking indicates a different preprocessing where entire words are masked rather than subwords. The BERT team reports better metrics with the wwm models. 24-layer, 1024-hidden, 16-heads, 340M parameters bert-large-uncased-whole-word-masking Fill-Mask • Updated Apr 6 • 75.9k • 10 Note Large BERT model, larger variant. Trained on the "uncased" dataset, meaning that it was lowercase and all accents were removed. Whole word masking indicates a different preprocessing where entire words are masked rather than subwords. The BERT team reports better metrics with the wwm models. 24-layer, 1024-hidden, 16-heads, 340M parameters
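All checkpoints in this collection share the same loading pattern; a minimal sketch (not part of the collection page) using the generic Auto classes, with bert-base-multilingual-cased picked arbitrarily from the list above:

from transformers import AutoTokenizer, AutoModelForMaskedLM

checkpoint = "bert-base-multilingual-cased"  # any checkpoint name listed above works
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForMaskedLM.from_pretrained(checkpoint)

inputs = tokenizer("Paris is the [MASK] of France.", return_tensors="pt")
logits = model(**inputs).logits  # (batch, sequence_length, vocab_size)
print(logits.shape)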
https://huggingface.co/kcaluwae
Ken kcaluwae Research interests None yet Organizations models None public yet datasets None public yet
https://huggingface.co/orlygonzalez
Orly Gonzalez orlygonzalez Research interests None yet Organizations models None public yet datasets None public yet
https://huggingface.co/errollw
Erroll errollw http://www.errollw.com errollw errollw Research interests None yet Organizations models None public yet datasets None public yet
https://huggingface.co/joewoodworth
Joe Woodworth joewoodworth Research interests None yet Organizations models 1 joewoodworth/ddpm-butterflies-128 Updated Jan 7 datasets None public yet
https://huggingface.co/chris113113
Christopher Pirillo chris113113 chris113113 Research interests LLM, Transformers, Inferencing Organizations models None public yet datasets None public yet
https://huggingface.co/dtheron
Danie Theron dtheron dptrsa-300 Research interests None yet Organizations models None public yet datasets None public yet
https://huggingface.co/liviutzu
Liviu Panait liviutzu Research interests None yet Organizations models None public yet datasets None public yet
https://huggingface.co/evolvedHow
Vish Ganapathy evolvedHow Research interests None yet Organizations models None public yet datasets None public yet
https://huggingface.co/BasilMustafa
Basil Mustafa BasilMustafa Research interests Computer vision, multimodality, generative modelling, uncertainty quantification, sparse mixtures of experts, conditional computation Organizations Papers 4
https://huggingface.co/FPRZ
François Pérez FPRZ Research interests None yet Organizations models None public yet datasets None public yet
https://huggingface.co/ivywang
Ivy Wang ivywang Research interests None yet Organizations models None public yet datasets None public yet
https://huggingface.co/angusl72
Angus Laird angusl72 Research interests None yet Organizations models None public yet datasets None public yet
https://huggingface.co/william-apollo
William Apollo william-apollo Research interests None yet Organizations models None public yet datasets None public yet
https://huggingface.co/qwertier
Aaron Ma qwertier Research interests None yet Organizations models None public yet datasets None public yet
https://huggingface.co/MarkusRabe
Markus Rabe MarkusRabe Research interests None yet Organizations models None public yet datasets None public yet
https://huggingface.co/gevang
Georgios Evangelopoulos gevang gevangel Research interests None yet Organizations models None public yet datasets None public yet
https://huggingface.co/atredi
Lewis Therin atredi Research interests None yet Organizations models None public yet datasets None public yet
https://huggingface.co/williamabernathy
Fengxiang LI williamabernathy Research interests None yet Organizations models None public yet datasets None public yet
https://huggingface.co/manutt
Manuel Tragut manutt Research interests None yet Organizations models None public yet datasets None public yet
https://huggingface.co/metrizable
Eric Johnson metrizable metrizable Research interests None yet Organizations models None public yet datasets None public yet
https://huggingface.co/becky520
Becky Zhang becky520 Research interests None yet Organizations models None public yet datasets None public yet
https://huggingface.co/bert-base-uncased
BERT base model (uncased) Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in this paper and first released in this repository. This model is uncased: it does not make a difference between english and English. Disclaimer: The team releasing BERT did not write a model card for this model so this model card has been written by the Hugging Face team. Model description BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labeling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives: Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally masks the future tokens. It allows the model to learn a bidirectional representation of the sentence. Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to predict if the two sentences were following each other or not. This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences, for instance, you can train a standard classifier using the features produced by the BERT model as inputs. Model variations BERT has originally been released in base and large variations, for cased and uncased input text. The uncased models also strips out an accent markers. Chinese and multilingual uncased and cased versions followed shortly after. Modified preprocessing with whole word masking has replaced subpiece masking in a following work, with the release of two models. Other 24 smaller models are released afterward. The detailed release history can be found on the google-research/bert readme on github. Intended uses & limitations You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task. See the model hub to look for fine-tuned versions of a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at model like GPT2. How to use You can use this model directly with a pipeline for masked language modeling: >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='bert-base-uncased') >>> unmasker("Hello I'm a [MASK] model.") [{'sequence': "[CLS] hello i'm a fashion model. [SEP]", 'score': 0.1073106899857521, 'token': 4827, 'token_str': 'fashion'}, {'sequence': "[CLS] hello i'm a role model. [SEP]", 'score': 0.08774490654468536, 'token': 2535, 'token_str': 'role'}, {'sequence': "[CLS] hello i'm a new model. 
[SEP]", 'score': 0.05338378623127937, 'token': 2047, 'token_str': 'new'}, {'sequence': "[CLS] hello i'm a super model. [SEP]", 'score': 0.04667217284440994, 'token': 3565, 'token_str': 'super'}, {'sequence': "[CLS] hello i'm a fine model. [SEP]", 'score': 0.027095865458250046, 'token': 2986, 'token_str': 'fine'}] Here is how to use this model to get the features of a given text in PyTorch: from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') model = BertModel.from_pretrained("bert-base-uncased") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) and in TensorFlow: from transformers import BertTokenizer, TFBertModel tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') model = TFBertModel.from_pretrained("bert-base-uncased") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) Limitations and bias Even if the training data used for this model could be characterized as fairly neutral, this model can have biased predictions: >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='bert-base-uncased') >>> unmasker("The man worked as a [MASK].") [{'sequence': '[CLS] the man worked as a carpenter. [SEP]', 'score': 0.09747550636529922, 'token': 10533, 'token_str': 'carpenter'}, {'sequence': '[CLS] the man worked as a waiter. [SEP]', 'score': 0.0523831807076931, 'token': 15610, 'token_str': 'waiter'}, {'sequence': '[CLS] the man worked as a barber. [SEP]', 'score': 0.04962705448269844, 'token': 13362, 'token_str': 'barber'}, {'sequence': '[CLS] the man worked as a mechanic. [SEP]', 'score': 0.03788609802722931, 'token': 15893, 'token_str': 'mechanic'}, {'sequence': '[CLS] the man worked as a salesman. [SEP]', 'score': 0.037680890411138535, 'token': 18968, 'token_str': 'salesman'}] >>> unmasker("The woman worked as a [MASK].") [{'sequence': '[CLS] the woman worked as a nurse. [SEP]', 'score': 0.21981462836265564, 'token': 6821, 'token_str': 'nurse'}, {'sequence': '[CLS] the woman worked as a waitress. [SEP]', 'score': 0.1597415804862976, 'token': 13877, 'token_str': 'waitress'}, {'sequence': '[CLS] the woman worked as a maid. [SEP]', 'score': 0.1154729500412941, 'token': 10850, 'token_str': 'maid'}, {'sequence': '[CLS] the woman worked as a prostitute. [SEP]', 'score': 0.037968918681144714, 'token': 19215, 'token_str': 'prostitute'}, {'sequence': '[CLS] the woman worked as a cook. [SEP]', 'score': 0.03042375110089779, 'token': 5660, 'token_str': 'cook'}] This bias will also affect all fine-tuned versions of this model. Training data The BERT model was pretrained on BookCorpus, a dataset consisting of 11,038 unpublished books and English Wikipedia (excluding lists, tables and headers). Training procedure Preprocessing The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are then of the form: [CLS] Sentence A [SEP] Sentence B [SEP] With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus, and in the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two "sentences" has a combined length of less than 512 tokens. 
The details of the masking procedure for each sentence are the following: 15% of the tokens are masked. In 80% of the cases, the masked tokens are replaced by [MASK]. In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace). In the 10% remaining cases, the masked tokens are left as is. Pretraining The model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size of 256. The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. The optimizer used is Adam with a learning rate of 1e-4, β₁ = 0.9 and β₂ = 0.999, a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after. Evaluation results When fine-tuned on downstream tasks, this model achieves the following results: GLUE test results: MNLI-(m/mm) 84.6/83.4, QQP 71.2, QNLI 90.5, SST-2 93.5, CoLA 52.1, STS-B 85.8, MRPC 88.9, RTE 66.4, Average 79.6 BibTeX entry and citation info @article{DBLP:journals/corr/abs-1810-04805, author = {Jacob Devlin and Ming{-}Wei Chang and Kenton Lee and Kristina Toutanova}, title = {{BERT:} Pre-training of Deep Bidirectional Transformers for Language Understanding}, journal = {CoRR}, volume = {abs/1810.04805}, year = {2018}, url = {http://arxiv.org/abs/1810.04805}, archivePrefix = {arXiv}, eprint = {1810.04805}, timestamp = {Tue, 30 Oct 2018 20:39:56 +0100}, biburl = {https://dblp.org/rec/journals/corr/abs-1810-04805.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} }
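Since the card recommends fine-tuning on a downstream task, here is a minimal sketch of a single supervised training step for sequence classification (not from the card; the two-example batch and hyperparameters are illustrative only):

import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertForSequenceClassification.from_pretrained('bert-base-uncased', num_labels=2)

# Toy batch: two sentences with binary sentiment labels.
batch = tokenizer(["great movie", "terrible movie"], padding=True, return_tensors='pt')
labels = torch.tensor([1, 0])

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
loss = model(**batch, labels=labels).loss  # cross-entropy over the 2 labels
loss.backward()
optimizer.step()
print(f"loss: {loss.item():.3f}")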
https://huggingface.co/albert-xlarge-v1
ALBERT XLarge v1 Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in this paper and first released in this repository. This model, as all ALBERT models, is uncased: it does not make a difference between english and English. Disclaimer: The team releasing ALBERT did not write a model card for this model so this model card has been written by the Hugging Face team. Model description ALBERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives: Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence. Sentence Ordering Prediction (SOP): ALBERT uses a pretraining loss based on predicting the ordering of two consecutive segments of text. This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the ALBERT model as inputs. ALBERT is particular in that it shares its layers across its Transformer. Therefore, all layers have the same weights. Using repeating layers results in a small memory footprint, however, the computational cost remains similar to a BERT-like architecture with the same number of hidden layers as it has to iterate through the same number of (repeating) layers. This is the first version of the xlarge model. Version 2 is different from version 1 due to different dropout rates, additional training data, and longer training. It has better results in nearly all downstream tasks. This model has the following configuration: 24 repeating layers 128 embedding dimension 2048 hidden dimension 16 attention heads 58M parameters Intended uses & limitations You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task. See the model hub to look for fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at model like GPT2. 
How to use You can use this model directly with a pipeline for masked language modeling: >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='albert-xlarge-v1') >>> unmasker("Hello I'm a [MASK] model.") [ { "sequence":"[CLS] hello i'm a modeling model.[SEP]", "score":0.05816134437918663, "token":12807, "token_str":"▁modeling" }, { "sequence":"[CLS] hello i'm a modelling model.[SEP]", "score":0.03748830780386925, "token":23089, "token_str":"▁modelling" }, { "sequence":"[CLS] hello i'm a model model.[SEP]", "score":0.033725276589393616, "token":1061, "token_str":"▁model" }, { "sequence":"[CLS] hello i'm a runway model.[SEP]", "score":0.017313428223133087, "token":8014, "token_str":"▁runway" }, { "sequence":"[CLS] hello i'm a lingerie model.[SEP]", "score":0.014405295252799988, "token":29104, "token_str":"▁lingerie" } ] Here is how to use this model to get the features of a given text in PyTorch: from transformers import AlbertTokenizer, AlbertModel tokenizer = AlbertTokenizer.from_pretrained('albert-xlarge-v1') model = AlbertModel.from_pretrained("albert-xlarge-v1") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) and in TensorFlow: from transformers import AlbertTokenizer, TFAlbertModel tokenizer = AlbertTokenizer.from_pretrained('albert-xlarge-v1') model = TFAlbertModel.from_pretrained("albert-xlarge-v1") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) Limitations and bias Even if the training data used for this model could be characterized as fairly neutral, this model can have biased predictions: >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='albert-xlarge-v1') >>> unmasker("The man worked as a [MASK].") [ { "sequence":"[CLS] the man worked as a chauffeur.[SEP]", "score":0.029577180743217468, "token":28744, "token_str":"▁chauffeur" }, { "sequence":"[CLS] the man worked as a janitor.[SEP]", "score":0.028865724802017212, "token":29477, "token_str":"▁janitor" }, { "sequence":"[CLS] the man worked as a shoemaker.[SEP]", "score":0.02581118606030941, "token":29024, "token_str":"▁shoemaker" }, { "sequence":"[CLS] the man worked as a blacksmith.[SEP]", "score":0.01849772222340107, "token":21238, "token_str":"▁blacksmith" }, { "sequence":"[CLS] the man worked as a lawyer.[SEP]", "score":0.01820771023631096, "token":3672, "token_str":"▁lawyer" } ] >>> unmasker("The woman worked as a [MASK].") [ { "sequence":"[CLS] the woman worked as a receptionist.[SEP]", "score":0.04604868218302727, "token":25331, "token_str":"▁receptionist" }, { "sequence":"[CLS] the woman worked as a janitor.[SEP]", "score":0.028220869600772858, "token":29477, "token_str":"▁janitor" }, { "sequence":"[CLS] the woman worked as a paramedic.[SEP]", "score":0.0261906236410141, "token":23386, "token_str":"▁paramedic" }, { "sequence":"[CLS] the woman worked as a chauffeur.[SEP]", "score":0.024797942489385605, "token":28744, "token_str":"▁chauffeur" }, { "sequence":"[CLS] the woman worked as a waitress.[SEP]", "score":0.024124596267938614, "token":13678, "token_str":"▁waitress" } ] This bias will also affect all fine-tuned versions of this model. Training data The ALBERT model was pretrained on BookCorpus, a dataset consisting of 11,038 unpublished books and English Wikipedia (excluding lists, tables and headers). 
Training procedure Preprocessing The texts are lowercased and tokenized using SentencePiece and a vocabulary size of 30,000. The inputs of the model are then of the form: [CLS] Sentence A [SEP] Sentence B [SEP] Training The ALBERT procedure follows the BERT setup. The details of the masking procedure for each sentence are the following: 15% of the tokens are masked. In 80% of the cases, the masked tokens are replaced by [MASK]. In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace. In the 10% remaining cases, the masked tokens are left as is. Evaluation results When fine-tuned on downstream tasks, the ALBERT models achieve the following results: Average SQuAD1.1 SQuAD2.0 MNLI SST-2 RACE V2 ALBERT-base 82.3 90.2/83.2 82.1/79.3 84.6 92.9 66.8 ALBERT-large 85.7 91.8/85.2 84.9/81.8 86.5 94.9 75.2 ALBERT-xlarge 87.9 92.9/86.4 87.9/84.1 87.9 95.4 80.7 ALBERT-xxlarge 90.9 94.6/89.1 89.8/86.9 90.6 96.8 86.8 V1 ALBERT-base 80.1 89.3/82.3 80.0/77.1 81.6 90.3 64.0 ALBERT-large 82.4 90.6/83.9 82.3/79.4 83.5 91.7 68.5 ALBERT-xlarge 85.5 92.5/86.1 86.1/83.1 86.4 92.4 74.8 ALBERT-xxlarge 91.0 94.8/89.3 90.2/87.4 90.8 96.9 86.5 BibTeX entry and citation info @article{DBLP:journals/corr/abs-1909-11942, author = {Zhenzhong Lan and Mingda Chen and Sebastian Goodman and Kevin Gimpel and Piyush Sharma and Radu Soricut}, title = {{ALBERT:} {A} Lite {BERT} for Self-supervised Learning of Language Representations}, journal = {CoRR}, volume = {abs/1909.11942}, year = {2019}, url = {http://arxiv.org/abs/1909.11942}, archivePrefix = {arXiv}, eprint = {1909.11942}, timestamp = {Fri, 27 Sep 2019 13:04:21 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-1909-11942.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} }
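To make the parameter-sharing point above concrete, a small sketch (not from the card) that loads the checkpoint and counts its parameters; the count should land near the 58M quoted above even though 24 layer applications happen at inference time:

from transformers import AlbertModel

model = AlbertModel.from_pretrained('albert-xlarge-v1')
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params / 1e6:.0f}M parameters, "
      f"{model.config.num_hidden_layers} (weight-shared) layers")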
https://huggingface.co/bert-large-uncased
BERT large model (uncased) Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in this paper and first released in this repository. This model is uncased: it does not make a difference between english and English. Disclaimer: The team releasing BERT did not write a model card for this model so this model card has been written by the Hugging Face team. Model description BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives: Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence. Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to predict if the two sentences were following each other or not. This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the BERT model as inputs. This model has the following configuration: 24-layer 1024 hidden dimension 16 attention heads 336M parameters. Intended uses & limitations You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task. See the model hub to look for fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at model like GPT2. How to use You can use this model directly with a pipeline for masked language modeling: >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='bert-large-uncased') >>> unmasker("Hello I'm a [MASK] model.") [{'sequence': "[CLS] hello i'm a fashion model. [SEP]", 'score': 0.1886913776397705, 'token': 4827, 'token_str': 'fashion'}, {'sequence': "[CLS] hello i'm a professional model. [SEP]", 'score': 0.07157472521066666, 'token': 2658, 'token_str': 'professional'}, {'sequence': "[CLS] hello i'm a male model. [SEP]", 'score': 0.04053466394543648, 'token': 3287, 'token_str': 'male'}, {'sequence': "[CLS] hello i'm a role model. [SEP]", 'score': 0.03891477733850479, 'token': 2535, 'token_str': 'role'}, {'sequence': "[CLS] hello i'm a fitness model. 
[SEP]", 'score': 0.03038121573626995, 'token': 10516, 'token_str': 'fitness'}] Here is how to use this model to get the features of a given text in PyTorch: from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('bert-large-uncased') model = BertModel.from_pretrained("bert-large-uncased") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) and in TensorFlow: from transformers import BertTokenizer, TFBertModel tokenizer = BertTokenizer.from_pretrained('bert-large-uncased') model = TFBertModel.from_pretrained("bert-large-uncased") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) Limitations and bias Even if the training data used for this model could be characterized as fairly neutral, this model can have biased predictions: >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='bert-large-uncased') >>> unmasker("The man worked as a [MASK].") [{'sequence': '[CLS] the man worked as a bartender. [SEP]', 'score': 0.10426565259695053, 'token': 15812, 'token_str': 'bartender'}, {'sequence': '[CLS] the man worked as a waiter. [SEP]', 'score': 0.10232779383659363, 'token': 15610, 'token_str': 'waiter'}, {'sequence': '[CLS] the man worked as a mechanic. [SEP]', 'score': 0.06281787157058716, 'token': 15893, 'token_str': 'mechanic'}, {'sequence': '[CLS] the man worked as a lawyer. [SEP]', 'score': 0.050936125218868256, 'token': 5160, 'token_str': 'lawyer'}, {'sequence': '[CLS] the man worked as a carpenter. [SEP]', 'score': 0.041034240275621414, 'token': 10533, 'token_str': 'carpenter'}] >>> unmasker("The woman worked as a [MASK].") [{'sequence': '[CLS] the woman worked as a waitress. [SEP]', 'score': 0.28473711013793945, 'token': 13877, 'token_str': 'waitress'}, {'sequence': '[CLS] the woman worked as a nurse. [SEP]', 'score': 0.11336520314216614, 'token': 6821, 'token_str': 'nurse'}, {'sequence': '[CLS] the woman worked as a bartender. [SEP]', 'score': 0.09574324637651443, 'token': 15812, 'token_str': 'bartender'}, {'sequence': '[CLS] the woman worked as a maid. [SEP]', 'score': 0.06351090222597122, 'token': 10850, 'token_str': 'maid'}, {'sequence': '[CLS] the woman worked as a secretary. [SEP]', 'score': 0.048970773816108704, 'token': 3187, 'token_str': 'secretary'}] This bias will also affect all fine-tuned versions of this model. Training data The BERT model was pretrained on BookCorpus, a dataset consisting of 11,038 unpublished books and English Wikipedia (excluding lists, tables and headers). Training procedure Preprocessing The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are then of the form: [CLS] Sentence A [SEP] Sentence B [SEP] With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two "sentences" has a combined length of less than 512 tokens. The details of the masking procedure for each sentence are the following: 15% of the tokens are masked. In 80% of the cases, the masked tokens are replaced by [MASK]. In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace. 
In the 10% remaining cases, the masked tokens are left as is. Pretraining The model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size of 256. The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. The optimizer used is Adam with a learning rate of 1e-4, β₁ = 0.9 and β₂ = 0.999, a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after. Evaluation results When fine-tuned on downstream tasks, this model achieves the following results: BERT-Large, Uncased (Original): SQuAD 1.1 F1/EM 91.0/84.3, Multi NLI Accuracy 86.05 BibTeX entry and citation info @article{DBLP:journals/corr/abs-1810-04805, author = {Jacob Devlin and Ming{-}Wei Chang and Kenton Lee and Kristina Toutanova}, title = {{BERT:} Pre-training of Deep Bidirectional Transformers for Language Understanding}, journal = {CoRR}, volume = {abs/1810.04805}, year = {2018}, url = {http://arxiv.org/abs/1810.04805}, archivePrefix = {arXiv}, eprint = {1810.04805}, timestamp = {Tue, 30 Oct 2018 20:39:56 +0100}, biburl = {https://dblp.org/rec/journals/corr/abs-1810-04805.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} }
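Beyond the raw hidden states shown above, the model also exposes a pooled [CLS] vector that is commonly fed to downstream classifiers; a short sketch (not part of the card):

from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained('bert-large-uncased')
model = BertModel.from_pretrained('bert-large-uncased')

inputs = tokenizer("BERT produces contextual embeddings.", return_tensors='pt')
outputs = model(**inputs)
print(outputs.pooler_output.shape)      # (1, 1024) pooled [CLS] representation
print(outputs.last_hidden_state.shape)  # (1, sequence_length, 1024) per-token features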
https://huggingface.co/kannappan
Kannappan Sirchabesan kannappan hashkanna Research interests None yet Organizations spaces 1 No application file 👀 Glacier models None public yet datasets None public yet
https://huggingface.co/albert-base-v1
ALBERT Base v1 Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in this paper and first released in this repository. This model, as all ALBERT models, is uncased: it does not make a difference between english and English. Disclaimer: The team releasing ALBERT did not write a model card for this model so this model card has been written by the Hugging Face team. Model description ALBERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives: Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence. Sentence Ordering Prediction (SOP): ALBERT uses a pretraining loss based on predicting the ordering of two consecutive segments of text. This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the ALBERT model as inputs. ALBERT is particular in that it shares its layers across its Transformer. Therefore, all layers have the same weights. Using repeating layers results in a small memory footprint, however, the computational cost remains similar to a BERT-like architecture with the same number of hidden layers as it has to iterate through the same number of (repeating) layers. This is the first version of the base model. Version 2 is different from version 1 due to different dropout rates, additional training data, and longer training. It has better results in nearly all downstream tasks. This model has the following configuration: 12 repeating layers 128 embedding dimension 768 hidden dimension 12 attention heads 11M parameters Intended uses & limitations You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task. See the model hub to look for fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at model like GPT2. 
How to use You can use this model directly with a pipeline for masked language modeling: >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='albert-base-v1') >>> unmasker("Hello I'm a [MASK] model.") [ { "sequence":"[CLS] hello i'm a modeling model.[SEP]", "score":0.05816134437918663, "token":12807, "token_str":"▁modeling" }, { "sequence":"[CLS] hello i'm a modelling model.[SEP]", "score":0.03748830780386925, "token":23089, "token_str":"▁modelling" }, { "sequence":"[CLS] hello i'm a model model.[SEP]", "score":0.033725276589393616, "token":1061, "token_str":"▁model" }, { "sequence":"[CLS] hello i'm a runway model.[SEP]", "score":0.017313428223133087, "token":8014, "token_str":"▁runway" }, { "sequence":"[CLS] hello i'm a lingerie model.[SEP]", "score":0.014405295252799988, "token":29104, "token_str":"▁lingerie" } ] Here is how to use this model to get the features of a given text in PyTorch: from transformers import AlbertTokenizer, AlbertModel tokenizer = AlbertTokenizer.from_pretrained('albert-base-v1') model = AlbertModel.from_pretrained("albert-base-v1") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) and in TensorFlow: from transformers import AlbertTokenizer, TFAlbertModel tokenizer = AlbertTokenizer.from_pretrained('albert-base-v1') model = TFAlbertModel.from_pretrained("albert-base-v1") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) Limitations and bias Even if the training data used for this model could be characterized as fairly neutral, this model can have biased predictions: >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='albert-base-v1') >>> unmasker("The man worked as a [MASK].") [ { "sequence":"[CLS] the man worked as a chauffeur.[SEP]", "score":0.029577180743217468, "token":28744, "token_str":"▁chauffeur" }, { "sequence":"[CLS] the man worked as a janitor.[SEP]", "score":0.028865724802017212, "token":29477, "token_str":"▁janitor" }, { "sequence":"[CLS] the man worked as a shoemaker.[SEP]", "score":0.02581118606030941, "token":29024, "token_str":"▁shoemaker" }, { "sequence":"[CLS] the man worked as a blacksmith.[SEP]", "score":0.01849772222340107, "token":21238, "token_str":"▁blacksmith" }, { "sequence":"[CLS] the man worked as a lawyer.[SEP]", "score":0.01820771023631096, "token":3672, "token_str":"▁lawyer" } ] >>> unmasker("The woman worked as a [MASK].") [ { "sequence":"[CLS] the woman worked as a receptionist.[SEP]", "score":0.04604868218302727, "token":25331, "token_str":"▁receptionist" }, { "sequence":"[CLS] the woman worked as a janitor.[SEP]", "score":0.028220869600772858, "token":29477, "token_str":"▁janitor" }, { "sequence":"[CLS] the woman worked as a paramedic.[SEP]", "score":0.0261906236410141, "token":23386, "token_str":"▁paramedic" }, { "sequence":"[CLS] the woman worked as a chauffeur.[SEP]", "score":0.024797942489385605, "token":28744, "token_str":"▁chauffeur" }, { "sequence":"[CLS] the woman worked as a waitress.[SEP]", "score":0.024124596267938614, "token":13678, "token_str":"▁waitress" } ] This bias will also affect all fine-tuned versions of this model. Training data The ALBERT model was pretrained on BookCorpus, a dataset consisting of 11,038 unpublished books and English Wikipedia (excluding lists, tables and headers). 
Training procedure Preprocessing The texts are lowercased and tokenized using SentencePiece and a vocabulary size of 30,000. The inputs of the model are then of the form: [CLS] Sentence A [SEP] Sentence B [SEP] Training The ALBERT procedure follows the BERT setup. The details of the masking procedure for each sentence are the following: 15% of the tokens are masked. In 80% of the cases, the masked tokens are replaced by [MASK]. In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace. In the 10% remaining cases, the masked tokens are left as is. Evaluation results When fine-tuned on downstream tasks, the ALBERT models achieve the following results: Average SQuAD1.1 SQuAD2.0 MNLI SST-2 RACE V2 ALBERT-base 82.3 90.2/83.2 82.1/79.3 84.6 92.9 66.8 ALBERT-large 85.7 91.8/85.2 84.9/81.8 86.5 94.9 75.2 ALBERT-xlarge 87.9 92.9/86.4 87.9/84.1 87.9 95.4 80.7 ALBERT-xxlarge 90.9 94.6/89.1 89.8/86.9 90.6 96.8 86.8 V1 ALBERT-base 80.1 89.3/82.3 80.0/77.1 81.6 90.3 64.0 ALBERT-large 82.4 90.6/83.9 82.3/79.4 83.5 91.7 68.5 ALBERT-xlarge 85.5 92.5/86.1 86.1/83.1 86.4 92.4 74.8 ALBERT-xxlarge 91.0 94.8/89.3 90.2/87.4 90.8 96.9 86.5 BibTeX entry and citation info @article{DBLP:journals/corr/abs-1909-11942, author = {Zhenzhong Lan and Mingda Chen and Sebastian Goodman and Kevin Gimpel and Piyush Sharma and Radu Soricut}, title = {{ALBERT:} {A} Lite {BERT} for Self-supervised Learning of Language Representations}, journal = {CoRR}, volume = {abs/1909.11942}, year = {2019}, url = {http://arxiv.org/abs/1909.11942}, archivePrefix = {arXiv}, eprint = {1909.11942}, timestamp = {Fri, 27 Sep 2019 13:04:21 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-1909-11942.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} }
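As a complement to the feature-extraction snippet above, one common way to turn the per-token outputs into a single sentence vector is attention-mask-aware mean pooling; a hedged sketch (not from the card):

import torch
from transformers import AlbertTokenizer, AlbertModel

tokenizer = AlbertTokenizer.from_pretrained('albert-base-v1')
model = AlbertModel.from_pretrained('albert-base-v1')

inputs = tokenizer("ALBERT shares weights across its layers.", return_tensors='pt')
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state   # (1, seq_len, 768)

mask = inputs['attention_mask'].unsqueeze(-1)    # (1, seq_len, 1)
embedding = (hidden * mask).sum(dim=1) / mask.sum(dim=1)  # (1, 768) mean over real tokens
print(embedding.shape)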
https://huggingface.co/google/mt5-small
Google's mT5 mT5 is pretrained on the mC4 corpus, covering 101 languages: Afrikaans, Albanian, Amharic, Arabic, Armenian, Azerbaijani, Basque, Belarusian, Bengali, Bulgarian, Burmese, Catalan, Cebuano, Chichewa, Chinese, Corsican, Czech, Danish, Dutch, English, Esperanto, Estonian, Filipino, Finnish, French, Galician, Georgian, German, Greek, Gujarati, Haitian Creole, Hausa, Hawaiian, Hebrew, Hindi, Hmong, Hungarian, Icelandic, Igbo, Indonesian, Irish, Italian, Japanese, Javanese, Kannada, Kazakh, Khmer, Korean, Kurdish, Kyrgyz, Lao, Latin, Latvian, Lithuanian, Luxembourgish, Macedonian, Malagasy, Malay, Malayalam, Maltese, Maori, Marathi, Mongolian, Nepali, Norwegian, Pashto, Persian, Polish, Portuguese, Punjabi, Romanian, Russian, Samoan, Scottish Gaelic, Serbian, Shona, Sindhi, Sinhala, Slovak, Slovenian, Somali, Sotho, Spanish, Sundanese, Swahili, Swedish, Tajik, Tamil, Telugu, Thai, Turkish, Ukrainian, Urdu, Uzbek, Vietnamese, Welsh, West Frisian, Xhosa, Yiddish, Yoruba, Zulu. Note: mT5 was only pre-trained on mC4 excluding any supervised training. Therefore, this model has to be fine-tuned before it is useable on a downstream task. Pretraining Dataset: mC4 Other Community Checkpoints: here Paper: mT5: A massively multilingual pre-trained text-to-text transformer Authors: Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, Colin Raffel Abstract The recent "Text-to-Text Transfer Transformer" (T5) leveraged a unified text-to-text format and scale to attain state-of-the-art results on a wide variety of English-language NLP tasks. In this paper, we introduce mT5, a multilingual variant of T5 that was pre-trained on a new Common Crawl-based dataset covering 101 languages. We describe the design and modified training of mT5 and demonstrate its state-of-the-art performance on many multilingual benchmarks. All of the code and model checkpoints used in this work are publicly available.
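Because mT5 ships without any supervised training, it has to be fine-tuned before use; a minimal sketch of one text-to-text training step (not from the card; the English-German pair is an arbitrary illustration):

from transformers import AutoTokenizer, MT5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("google/mt5-small")
model = MT5ForConditionalGeneration.from_pretrained("google/mt5-small")

inputs = tokenizer("The house is wonderful.", return_tensors="pt")
labels = tokenizer("Das Haus ist wunderbar.", return_tensors="pt").input_ids

loss = model(**inputs, labels=labels).loss  # standard seq2seq cross-entropy
loss.backward()
print(f"loss: {loss.item():.3f}")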
https://huggingface.co/google/t5_11b_trueteacher_and_anli
TrueTeacher This is a Factual Consistency Evaluation model, introduced in the TrueTeacher paper (Gekhman et al., 2023). Model Details The model is optimized for evaluating factual consistency in summarization. It is the main model from the paper (see "T5-11B w. ANLI + TrueTeacher full" in Table 1) which is based on a T5-11B (Raffel et al., 2020) fine-tuned with a mixture of the following datasets: TrueTeacher (Gekhman et al., 2023) ANLI (Nie et al., 2020) The TrueTeacher dataset contains model-generated summaries of articles from the train split of the CNN/DailyMail dataset (Hermann et al., 2015) which are annotated for factual consistency using FLAN-PaLM 540B (Chung et al., 2022). Summaries were generated using summarization models which were trained on the XSum dataset (Narayan et al., 2018). The input format for the model is: "premise: GROUNDING_DOCUMENT hypothesis: HYPOTHESIS_SUMMARY". To accommodate the input length of common summarization datasets we recommend setting max_length to 2048. The model predicts a binary label ('1' - Factually Consistent, '0' - Factually Inconsistent). Evaluation results This model achieves the following ROC AUC results on the summarization subset of the TRUE benchmark (Honovich et al., 2022): MNBM 78.1, QAGS-X 89.4, FRANK 93.6, SummEval 88.5, QAGS-C 89.4, Average 87.8 Intended Use This model is intended for research use (non-commercial) in English. The recommended use case is evaluating factual consistency in summarization. Out-of-scope use Any use cases which violate the cc-by-nc-4.0 license. Usage in languages other than English. Usage examples classification from transformers import T5ForConditionalGeneration from transformers import T5Tokenizer model_path = 'google/t5_11b_trueteacher_and_anli' tokenizer = T5Tokenizer.from_pretrained(model_path) model = T5ForConditionalGeneration.from_pretrained(model_path) premise = 'the sun is shining' for hypothesis, expected in [('the sun is out in the sky', '1'), ('the cat is shiny', '0')]: input_ids = tokenizer( f'premise: {premise} hypothesis: {hypothesis}', return_tensors='pt', truncation=True, max_length=2048).input_ids outputs = model.generate(input_ids) result = tokenizer.decode(outputs[0], skip_special_tokens=True) print(f'premise: {premise}') print(f'hypothesis: {hypothesis}') print(f'result: {result} (expected: {expected})\n') scoring from transformers import T5ForConditionalGeneration from transformers import T5Tokenizer import torch model_path = 'google/t5_11b_trueteacher_and_anli' tokenizer = T5Tokenizer.from_pretrained(model_path) model = T5ForConditionalGeneration.from_pretrained(model_path) premise = 'the sun is shining' for hypothesis, expected in [('the sun is out in the sky', '>> 0.5'), ('the cat is shiny', '<< 0.5')]: input_ids = tokenizer( f'premise: {premise} hypothesis: {hypothesis}', return_tensors='pt', truncation=True, max_length=2048).input_ids decoder_input_ids = torch.tensor([[tokenizer.pad_token_id]]) outputs = model(input_ids=input_ids, decoder_input_ids=decoder_input_ids) logits = outputs.logits probs = torch.softmax(logits[0], dim=-1) one_token_id = tokenizer('1').input_ids[0] entailment_prob = probs[0, one_token_id].item() print(f'premise: {premise}') print(f'hypothesis: {hypothesis}') print(f'score: {entailment_prob:.3f} (expected: {expected})\n') Citation If you use this model for a research publication, please cite the TrueTeacher paper (using the bibtex entry below), as well as the ANLI, CNN/DailyMail, XSum, T5 and FLAN papers mentioned above. 
@misc{gekhman2023trueteacher, title={TrueTeacher: Learning Factual Consistency Evaluation with Large Language Models}, author={Zorik Gekhman and Jonathan Herzig and Roee Aharoni and Chen Elkind and Idan Szpektor}, year={2023}, eprint={2305.11171}, archivePrefix={arXiv}, primaryClass={cs.CL} }
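For repeated use, the scoring recipe above can be wrapped into a helper; this is a convenience sketch, not part of the model card, and factual_consistency_score is a hypothetical name:

import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

model_path = 'google/t5_11b_trueteacher_and_anli'
tokenizer = T5Tokenizer.from_pretrained(model_path)
model = T5ForConditionalGeneration.from_pretrained(model_path)

def factual_consistency_score(premise: str, hypothesis: str) -> float:
    """Probability that the hypothesis is factually consistent with the premise."""
    input_ids = tokenizer(f'premise: {premise} hypothesis: {hypothesis}',
                          return_tensors='pt', truncation=True, max_length=2048).input_ids
    decoder_input_ids = torch.tensor([[tokenizer.pad_token_id]])
    with torch.no_grad():
        logits = model(input_ids=input_ids, decoder_input_ids=decoder_input_ids).logits
    probs = torch.softmax(logits[0, 0], dim=-1)          # distribution over the first decoded token
    return probs[tokenizer('1').input_ids[0]].item()     # probability mass on the '1' label token

print(factual_consistency_score('the sun is shining', 'the sun is out in the sky'))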
https://huggingface.co/bert-large-cased
BERT large model (cased) Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in this paper and first released in this repository. This model is cased: it makes a difference between english and English. Disclaimer: The team releasing BERT did not write a model card for this model so this model card has been written by the Hugging Face team. Model description BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives: Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence. Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to predict if the two sentences were following each other or not. This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the BERT model as inputs. This model has the following configuration: 24-layer 1024 hidden dimension 16 attention heads 336M parameters. Intended uses & limitations You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task. See the model hub to look for fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at model like GPT2. How to use You can use this model directly with a pipeline for masked language modeling: >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='bert-large-cased') >>> unmasker("Hello I'm a [MASK] model.") [ { "sequence":"[CLS] Hello I'm a male model. [SEP]", "score":0.22748498618602753, "token":2581, "token_str":"male" }, { "sequence":"[CLS] Hello I'm a fashion model. [SEP]", "score":0.09146175533533096, "token":4633, "token_str":"fashion" }, { "sequence":"[CLS] Hello I'm a new model. [SEP]", "score":0.05823173746466637, "token":1207, "token_str":"new" }, { "sequence":"[CLS] Hello I'm a super model. [SEP]", "score":0.04488750174641609, "token":7688, "token_str":"super" }, { "sequence":"[CLS] Hello I'm a famous model. 
Here is how to use this model to get the features of a given text in PyTorch: from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('bert-large-cased') model = BertModel.from_pretrained("bert-large-cased") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) and in TensorFlow: from transformers import BertTokenizer, TFBertModel tokenizer = BertTokenizer.from_pretrained('bert-large-cased') model = TFBertModel.from_pretrained("bert-large-cased") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) Limitations and bias Even if the training data used for this model could be characterized as fairly neutral, this model can have biased predictions: >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='bert-large-cased') >>> unmasker("The man worked as a [MASK].") [ { "sequence":"[CLS] The man worked as a doctor. [SEP]", "score":0.0645911768078804, "token":3995, "token_str":"doctor" }, { "sequence":"[CLS] The man worked as a cop. [SEP]", "score":0.057450827211141586, "token":9947, "token_str":"cop" }, { "sequence":"[CLS] The man worked as a mechanic. [SEP]", "score":0.04392256215214729, "token":19459, "token_str":"mechanic" }, { "sequence":"[CLS] The man worked as a waiter. [SEP]", "score":0.03755280375480652, "token":17989, "token_str":"waiter" }, { "sequence":"[CLS] The man worked as a teacher. [SEP]", "score":0.03458863124251366, "token":3218, "token_str":"teacher" } ] >>> unmasker("The woman worked as a [MASK].") [ { "sequence":"[CLS] The woman worked as a nurse. [SEP]", "score":0.2572779953479767, "token":7439, "token_str":"nurse" }, { "sequence":"[CLS] The woman worked as a waitress. [SEP]", "score":0.16706500947475433, "token":15098, "token_str":"waitress" }, { "sequence":"[CLS] The woman worked as a teacher. [SEP]", "score":0.04587847739458084, "token":3218, "token_str":"teacher" }, { "sequence":"[CLS] The woman worked as a secretary. [SEP]", "score":0.03577028587460518, "token":4848, "token_str":"secretary" }, { "sequence":"[CLS] The woman worked as a maid. [SEP]", "score":0.03298963978886604, "token":13487, "token_str":"maid" } ] This bias will also affect all fine-tuned versions of this model. Training data The BERT model was pretrained on BookCorpus, a dataset consisting of 11,038 unpublished books and English Wikipedia (excluding lists, tables and headers). Training procedure Preprocessing The texts are tokenized using WordPiece and a vocabulary size of 30,000 (as this model is cased, the texts are not lowercased). The inputs of the model are then of the form: [CLS] Sentence A [SEP] Sentence B [SEP] With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a consecutive span of text usually longer than a single sentence. The only constraint is that the two "sentences" have a combined length of less than 512 tokens. The details of the masking procedure for each sentence are the following: 15% of the tokens are masked. In 80% of the cases, the masked tokens are replaced by [MASK]. In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace). In the 10% remaining cases, the masked tokens are left as is.
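The 15% / 80-10-10 masking rule above is easy to misread, so here is a small illustrative sketch of it in plain Python. This is not the original BERT preprocessing code; the function name and the -100 label convention are assumptions borrowed from common PyTorch practice.

import random

def mask_tokens(token_ids, mask_id, vocab_size, mlm_prob=0.15):
    # Returns the corrupted input and the MLM labels (-100 = position ignored by the loss).
    masked, labels = list(token_ids), [-100] * len(token_ids)
    for i, tok in enumerate(token_ids):
        if random.random() < mlm_prob:          # select 15% of the tokens
            labels[i] = tok                     # the model must recover the original token
            r = random.random()
            if r < 0.8:                         # 80% of the time: replace with [MASK]
                masked[i] = mask_id
            elif r < 0.9:                       # 10% of the time: replace with a random token
                masked[i] = random.randrange(vocab_size)
            # remaining 10%: leave the token unchanged
    return masked, labels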
Pretraining The model was trained on 16 Cloud TPUs in Pod configuration (64 TPU chips total) for one million steps with a batch size of 256. The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. The optimizer used is Adam with a learning rate of 1e-4, β1 = 0.9 and β2 = 0.999, a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after. Evaluation results When fine-tuned on downstream tasks, this model achieves the following results:
Model | SQuAD 1.1 F1/EM | Multi NLI Accuracy
BERT-Large, Cased (Original) | 91.5/84.8 | 86.09
BibTeX entry and citation info @article{DBLP:journals/corr/abs-1810-04805, author = {Jacob Devlin and Ming{-}Wei Chang and Kenton Lee and Kristina Toutanova}, title = {{BERT:} Pre-training of Deep Bidirectional Transformers for Language Understanding}, journal = {CoRR}, volume = {abs/1810.04805}, year = {2018}, url = {http://arxiv.org/abs/1810.04805}, archivePrefix = {arXiv}, eprint = {1810.04805}, timestamp = {Tue, 30 Oct 2018 20:39:56 +0100}, biburl = {https://dblp.org/rec/journals/corr/abs-1810-04805.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} }
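To make the pretraining schedule just described concrete (Adam with a peak learning rate of 1e-4, 10,000 warmup steps, linear decay, weight decay 0.01), here is a minimal PyTorch sketch. The choice of AdamW, the model checkpoint, and the step counts are illustrative assumptions, not the original TPU training code.

import torch
from transformers import BertForMaskedLM, get_linear_schedule_with_warmup

# Minimal sketch of the optimizer and learning-rate schedule described above.
model = BertForMaskedLM.from_pretrained("bert-large-cased")
optimizer = torch.optim.AdamW(
    model.parameters(),
    lr=1e-4,                  # peak learning rate
    betas=(0.9, 0.999),       # beta1 / beta2 as in the card
    weight_decay=0.01,
)
scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=10_000,      # linear warmup
    num_training_steps=1_000_000, # then linear decay to zero
)
# Inside the training loop one would call:
# loss.backward(); optimizer.step(); scheduler.step(); optimizer.zero_grad()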
https://huggingface.co/collections/google/albert-release-64ff65ba18830fabea2f2cec
google 's Collections ALBERT release updated 22 days ago The ALBERT release was done in two steps, over 4 checkpoints of different sizes each time. The first version is noted as "v1", the second as "v2". albert-base-v1 Fill-Mask • Updated Apr 6 • 43.5k • 3 Note This model has the following configuration: - 12 repeating layers - 128 embedding dimension - 768 hidden dimension - 12 attention heads - 11M parameters Metrics: Average (80.1), Squad v1.1 (89.3/82.3), Squad v2 (80.0/77.1), MNLI (81.6) SST-2 (90.3) RACE(64.0) albert-large-v1 Fill-Mask • Updated Jan 13, 2021 • 1.3k Note This model has the following configuration: - 24 repeating layers - 128 embedding dimension - 1024 hidden dimension - 16 attention heads - 17M parameters Metrics: Average (82.4), Squad v1.1 (90.6/83.9), Squad v2 (82.3/79.4), MNLI (83.5) SST-2 (91.7) RACE(68.5) albert-xlarge-v1 Fill-Mask • Updated Aug 11 • 1.07k Note This model has the following configuration: - 24 repeating layers - 128 embedding dimension - 2048 hidden dimension - 16 attention heads - 58M parameters Metrics: Average (85.5), Squad v1.1 (92.5/86.1), Squad v2 (86.1/83.1), MNLI (86.4) SST-2 (92.4) RACE(74.8) albert-xxlarge-v1 Fill-Mask • Updated Jan 13, 2021 • 2.34k • 2 Note This model has the following configuration: - 12 repeating layers - 128 embedding dimension - 4096 hidden dimension - 64 attention heads - 223M parameters Metrics: Average (91.0), Squad v1.1 (94.8/89.3), Squad v2 (90.2/87.4), MNLI (90.8) SST-2 (96.9) RACE(86.5) albert-base-v2 Fill-Mask • Updated May 30 • 6.09M • 62 Note This model has the following configuration: - 12 repeating layers - 128 embedding dimension - 768 hidden dimension - 12 attention heads - 11M parameters Metrics: Average (82.3) Squad v1.1 (90.2/83.2) Squad v2 (82.1/79.3) MNLI (84.6) SST-2 (92.9) RACE (66.8) albert-large-v2 Fill-Mask • Updated Apr 6 • 10.3k • 12 Note This model has the following configuration: - 24 repeating layers - 128 embedding dimension - 1024 hidden dimension - 16 attention heads - 17M parameters Metrics: Average (85.7) Squad v1.1 (91.8/85.2) Squad v2 (84.9/81.8) MNLI (86.5) SST-2 (94.9) RACE (75.2) albert-xlarge-v2 Fill-Mask • Updated Jan 13, 2021 • 2.17k • 3 Note This model has the following configuration: - 24 repeating layers - 128 embedding dimension - 2048 hidden dimension - 16 attention heads - 58M parameters Metrics: Average (87.9) Squad v1.1 (92.9/86.4) Squad v2 (87.9/84.1) MNLI (87.9) SST-2 (95.4) RACE (80.7) albert-xxlarge-v2 Fill-Mask • Updated Apr 6 • 25.8k • 11 Note This model has the following configuration: - 12 repeating layers - 128 embedding dimension - 4096 hidden dimension - 64 attention heads - 223M parameters Metrics: Average (90.9) Squad v1.1 (94.6/89.1) Squad v2 (89.8/86.9) MNLI (90.6) SST-2 (96.8) RACE (86.8)
https://huggingface.co/albert-large-v1
ALBERT Large v1 Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in this paper and first released in this repository. This model, as all ALBERT models, is uncased: it does not make a difference between english and English. Disclaimer: The team releasing ALBERT did not write a model card for this model so this model card has been written by the Hugging Face team. Model description ALBERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives: Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence. Sentence Ordering Prediction (SOP): ALBERT uses a pretraining loss based on predicting the ordering of two consecutive segments of text. This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the ALBERT model as inputs. ALBERT is particular in that it shares its layers across its Transformer. Therefore, all layers have the same weights. Using repeating layers results in a small memory footprint; however, the computational cost remains similar to a BERT-like architecture with the same number of hidden layers as it has to iterate through the same number of (repeating) layers. This is the first version of the large model. Version 2 is different from version 1 due to different dropout rates, additional training data, and longer training. It has better results in nearly all downstream tasks. This model has the following configuration: 24 repeating layers, 128 embedding dimension, 1024 hidden dimension, 16 attention heads, 17M parameters. Intended uses & limitations You can use the raw model for either masked language modeling or sentence order prediction, but it's mostly intended to be fine-tuned on a downstream task. See the model hub to look for fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at models like GPT2.
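The cross-layer parameter sharing described above is what keeps the model at roughly 17M parameters even though 24 layers are applied at run time, and it is easy to verify from the Hugging Face configuration. The snippet below is an illustrative sketch, not part of the original card.

from transformers import AlbertConfig, AlbertModel

# Sketch: inspect the configuration to see the parameter sharing described above.
config = AlbertConfig.from_pretrained("albert-large-v1")
print(config.num_hidden_layers)   # 24 repeating layers applied at run time
print(config.num_hidden_groups)   # 1 -> a single set of layer weights is reused by every layer
print(config.embedding_size)      # 128 embedding dimension
print(config.hidden_size)         # 1024 hidden dimension

model = AlbertModel.from_pretrained("albert-large-v1")
print(f"{model.num_parameters() / 1e6:.1f}M parameters")  # roughly the 17M quoted above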
How to use You can use this model directly with a pipeline for masked language modeling: >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='albert-large-v1') >>> unmasker("Hello I'm a [MASK] model.") [ { "sequence":"[CLS] hello i'm a modeling model.[SEP]", "score":0.05816134437918663, "token":12807, "token_str":"▁modeling" }, { "sequence":"[CLS] hello i'm a modelling model.[SEP]", "score":0.03748830780386925, "token":23089, "token_str":"▁modelling" }, { "sequence":"[CLS] hello i'm a model model.[SEP]", "score":0.033725276589393616, "token":1061, "token_str":"▁model" }, { "sequence":"[CLS] hello i'm a runway model.[SEP]", "score":0.017313428223133087, "token":8014, "token_str":"▁runway" }, { "sequence":"[CLS] hello i'm a lingerie model.[SEP]", "score":0.014405295252799988, "token":29104, "token_str":"▁lingerie" } ] Here is how to use this model to get the features of a given text in PyTorch: from transformers import AlbertTokenizer, AlbertModel tokenizer = AlbertTokenizer.from_pretrained('albert-large-v1') model = AlbertModel.from_pretrained("albert-large-v1") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) and in TensorFlow: from transformers import AlbertTokenizer, TFAlbertModel tokenizer = AlbertTokenizer.from_pretrained('albert-large-v1') model = TFAlbertModel.from_pretrained("albert-large-v1") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) Limitations and bias Even if the training data used for this model could be characterized as fairly neutral, this model can have biased predictions: >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='albert-large-v1') >>> unmasker("The man worked as a [MASK].") [ { "sequence":"[CLS] the man worked as a chauffeur.[SEP]", "score":0.029577180743217468, "token":28744, "token_str":"▁chauffeur" }, { "sequence":"[CLS] the man worked as a janitor.[SEP]", "score":0.028865724802017212, "token":29477, "token_str":"▁janitor" }, { "sequence":"[CLS] the man worked as a shoemaker.[SEP]", "score":0.02581118606030941, "token":29024, "token_str":"▁shoemaker" }, { "sequence":"[CLS] the man worked as a blacksmith.[SEP]", "score":0.01849772222340107, "token":21238, "token_str":"▁blacksmith" }, { "sequence":"[CLS] the man worked as a lawyer.[SEP]", "score":0.01820771023631096, "token":3672, "token_str":"▁lawyer" } ] >>> unmasker("The woman worked as a [MASK].") [ { "sequence":"[CLS] the woman worked as a receptionist.[SEP]", "score":0.04604868218302727, "token":25331, "token_str":"▁receptionist" }, { "sequence":"[CLS] the woman worked as a janitor.[SEP]", "score":0.028220869600772858, "token":29477, "token_str":"▁janitor" }, { "sequence":"[CLS] the woman worked as a paramedic.[SEP]", "score":0.0261906236410141, "token":23386, "token_str":"▁paramedic" }, { "sequence":"[CLS] the woman worked as a chauffeur.[SEP]", "score":0.024797942489385605, "token":28744, "token_str":"▁chauffeur" }, { "sequence":"[CLS] the woman worked as a waitress.[SEP]", "score":0.024124596267938614, "token":13678, "token_str":"▁waitress" } ] This bias will also affect all fine-tuned versions of this model. Training data The ALBERT model was pretrained on BookCorpus, a dataset consisting of 11,038 unpublished books and English Wikipedia (excluding lists, tables and headers). 
Training procedure Preprocessing The texts are lowercased and tokenized using SentencePiece and a vocabulary size of 30,000. The inputs of the model are then of the form: [CLS] Sentence A [SEP] Sentence B [SEP] Training The ALBERT procedure follows the BERT setup. The details of the masking procedure for each sentence are the following: 15% of the tokens are masked. In 80% of the cases, the masked tokens are replaced by [MASK]. In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace). In the 10% remaining cases, the masked tokens are left as is. Evaluation results When fine-tuned on downstream tasks, the ALBERT models achieve the following results:
Model | Average | SQuAD1.1 | SQuAD2.0 | MNLI | SST-2 | RACE
V2:
ALBERT-base | 82.3 | 90.2/83.2 | 82.1/79.3 | 84.6 | 92.9 | 66.8
ALBERT-large | 85.7 | 91.8/85.2 | 84.9/81.8 | 86.5 | 94.9 | 75.2
ALBERT-xlarge | 87.9 | 92.9/86.4 | 87.9/84.1 | 87.9 | 95.4 | 80.7
ALBERT-xxlarge | 90.9 | 94.6/89.1 | 89.8/86.9 | 90.6 | 96.8 | 86.8
V1:
ALBERT-base | 80.1 | 89.3/82.3 | 80.0/77.1 | 81.6 | 90.3 | 64.0
ALBERT-large | 82.4 | 90.6/83.9 | 82.3/79.4 | 83.5 | 91.7 | 68.5
ALBERT-xlarge | 85.5 | 92.5/86.1 | 86.1/83.1 | 86.4 | 92.4 | 74.8
ALBERT-xxlarge | 91.0 | 94.8/89.3 | 90.2/87.4 | 90.8 | 96.9 | 86.5
BibTeX entry and citation info @article{DBLP:journals/corr/abs-1909-11942, author = {Zhenzhong Lan and Mingda Chen and Sebastian Goodman and Kevin Gimpel and Piyush Sharma and Radu Soricut}, title = {{ALBERT:} {A} Lite {BERT} for Self-supervised Learning of Language Representations}, journal = {CoRR}, volume = {abs/1909.11942}, year = {2019}, url = {http://arxiv.org/abs/1909.11942}, archivePrefix = {arXiv}, eprint = {1909.11942}, timestamp = {Fri, 27 Sep 2019 13:04:21 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-1909-11942.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} }
https://huggingface.co/google/long-t5-local-base
LongT5 (local attention, base-sized model) LongT5 model pre-trained on the English language. The model was introduced in the paper LongT5: Efficient Text-To-Text Transformer for Long Sequences by Guo et al. and first released in the LongT5 repository. The full model architecture and configuration can be found in the Flaxformer repository, which builds on another Google research project, T5x. Disclaimer: The team releasing LongT5 did not write a model card for this model so this model card has been written by the Hugging Face team. Model description The LongT5 model is an encoder-decoder transformer pre-trained in a text-to-text denoising generative setting (Pegasus-like generation pre-training). LongT5 is an extension of the T5 model, and it enables using one of two different efficient attention mechanisms: (1) Local attention, or (2) Transient-Global attention. These sparse attention patterns allow the model to efficiently handle long input sequences. LongT5 is particularly effective when fine-tuned for text generation tasks (summarization, question answering) that require handling long input sequences (up to 16,384 tokens). Intended uses & limitations The model is mostly meant to be fine-tuned on a supervised dataset. See the model hub to look for fine-tuned versions on a task that interests you. How to use from transformers import AutoTokenizer, LongT5Model tokenizer = AutoTokenizer.from_pretrained("google/long-t5-local-base") model = LongT5Model.from_pretrained("google/long-t5-local-base") inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") outputs = model(**inputs) last_hidden_states = outputs.last_hidden_state BibTeX entry and citation info @article{guo2021longt5, title={LongT5: Efficient Text-To-Text Transformer for Long Sequences}, author={Guo, Mandy and Ainslie, Joshua and Uthus, David and Ontanon, Santiago and Ni, Jianmo and Sung, Yun-Hsuan and Yang, Yinfei}, journal={arXiv preprint arXiv:2112.07916}, year={2021} }
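The snippet in the card above only extracts encoder-decoder hidden states. For the text-generation use cases the card mentions (summarization, question answering), one would typically load the conditional-generation head instead. The sketch below shows that flow under the assumption that the checkpoint has first been fine-tuned; on the raw pre-trained weights the output is not meaningful, and the input text is a placeholder.

from transformers import AutoTokenizer, LongT5ForConditionalGeneration

# Sketch: seq2seq generation with the conditional-generation head (assumes a fine-tuned checkpoint).
tokenizer = AutoTokenizer.from_pretrained("google/long-t5-local-base")
model = LongT5ForConditionalGeneration.from_pretrained("google/long-t5-local-base")

long_document = "Replace me by any long document you'd like."   # up to ~16,384 tokens in practice
inputs = tokenizer(long_document, max_length=16384, truncation=True, return_tensors="pt")
summary_ids = model.generate(**inputs, max_new_tokens=128)       # autoregressive decoding
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))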
https://huggingface.co/google/vit-base-patch16-384
Vision Transformer (base-sized model) Vision Transformer (ViT) model pre-trained on ImageNet-21k (14 million images, 21,843 classes) at resolution 224x224, and fine-tuned on ImageNet 2012 (1 million images, 1,000 classes) at resolution 384x384. It was introduced in the paper An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale by Dosovitskiy et al. and first released in this repository. However, the weights were converted from the timm repository by Ross Wightman, who already converted the weights from JAX to PyTorch. Credits go to him. Disclaimer: The team releasing ViT did not write a model card for this model so this model card has been written by the Hugging Face team. Model description The Vision Transformer (ViT) is a transformer encoder model (BERT-like) pretrained on a large collection of images in a supervised fashion, namely ImageNet-21k, at a resolution of 224x224 pixels. Next, the model was fine-tuned on ImageNet (also referred to as ILSVRC2012), a dataset comprising 1 million images and 1,000 classes, at a higher resolution of 384x384. Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. One also adds a [CLS] token to the beginning of a sequence to use it for classification tasks. One also adds absolute position embeddings before feeding the sequence to the layers of the Transformer encoder. By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image. Intended uses & limitations You can use the raw model for image classification. See the model hub to look for fine-tuned versions on a task that interests you. How to use Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes: from transformers import ViTFeatureExtractor, ViTForImageClassification from PIL import Image import requests url = 'http://images.cocodataset.org/val2017/000000039769.jpg' image = Image.open(requests.get(url, stream=True).raw) feature_extractor = ViTFeatureExtractor.from_pretrained('google/vit-base-patch16-384') model = ViTForImageClassification.from_pretrained('google/vit-base-patch16-384') inputs = feature_extractor(images=image, return_tensors="pt") outputs = model(**inputs) logits = outputs.logits # model predicts one of the 1000 ImageNet classes predicted_class_idx = logits.argmax(-1).item() print("Predicted class:", model.config.id2label[predicted_class_idx]) Currently, both the feature extractor and model support PyTorch. Tensorflow and JAX/FLAX are coming soon, and the API of ViTFeatureExtractor might change. Training data The ViT model was pretrained on ImageNet-21k, a dataset consisting of 14 million images and 21k classes, and fine-tuned on ImageNet, a dataset consisting of 1 million images and 1k classes. Training procedure Preprocessing The exact details of preprocessing of images during training/validation can be found here. Images are resized/rescaled to the same resolution (224x224 during pre-training, 384x384 during fine-tuning) and normalized across the RGB channels with mean (0.5, 0.5, 0.5) and standard deviation (0.5, 0.5, 0.5). 
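As a concrete illustration of the preprocessing just described, the following sketch reproduces it by hand with torchvision at the 384x384 fine-tuning resolution. It mirrors the documented defaults of the feature extractor but is not its exact implementation.

import requests
from PIL import Image
from torchvision import transforms

# Sketch: manual equivalent of the preprocessing described above
# (resize to 384x384, scale to [0, 1], normalize each RGB channel with mean/std 0.5).
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")

preprocess = transforms.Compose([
    transforms.Resize((384, 384)),                                    # fine-tuning resolution
    transforms.ToTensor(),                                            # HWC uint8 -> CHW float in [0, 1]
    transforms.Normalize(mean=(0.5, 0.5, 0.5), std=(0.5, 0.5, 0.5)),  # per-channel normalization
])
pixel_values = preprocess(image).unsqueeze(0)                         # shape (1, 3, 384, 384)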
Pretraining The model was trained on TPUv3 hardware (8 cores). All model variants are trained with a batch size of 4096 and learning rate warmup of 10k steps. For ImageNet, the authors found it beneficial to additionally apply gradient clipping at global norm 1. Pre-training resolution is 224. Evaluation results For evaluation results on several image classification benchmarks, we refer to tables 2 and 5 of the original paper. Note that for fine-tuning, the best results are obtained with a higher resolution (384x384). Of course, increasing the model size will result in better performance. BibTeX entry and citation info @misc{wu2020visual, title={Visual Transformers: Token-based Image Representation and Processing for Computer Vision}, author={Bichen Wu and Chenfeng Xu and Xiaoliang Dai and Alvin Wan and Peizhao Zhang and Zhicheng Yan and Masayoshi Tomizuka and Joseph Gonzalez and Kurt Keutzer and Peter Vajda}, year={2020}, eprint={2006.03677}, archivePrefix={arXiv}, primaryClass={cs.CV} } @inproceedings{deng2009imagenet, title={Imagenet: A large-scale hierarchical image database}, author={Deng, Jia and Dong, Wei and Socher, Richard and Li, Li-Jia and Li, Kai and Fei-Fei, Li}, booktitle={2009 IEEE conference on computer vision and pattern recognition}, pages={248--255}, year={2009}, organization={Ieee} }
https://huggingface.co/google/pix2struct-widget-captioning-base
Model card for Pix2Struct - Finetuned on Widget Captioning (Captioning a UI component on a screen) Table of Contents TL;DR Using the model Contribution Citation TL;DR Pix2Struct is an image encoder - text decoder model that is trained on image-text pairs for various tasks, including image captioning and visual question answering. The full list of available models can be found in Table 1 of the paper. The abstract of the paper states: Visually-situated language is ubiquitous—sources range from textbooks with diagrams to web pages with images and tables, to mobile apps with buttons and forms. Perhaps due to this diversity, previous work has typically relied on domain-specific recipes with limited sharing of the underlying data, model architectures, and objectives. We present Pix2Struct, a pretrained image-to-text model for purely visual language understanding, which can be finetuned on tasks containing visually-situated language. Pix2Struct is pretrained by learning to parse masked screenshots of web pages into simplified HTML. The web, with its richness of visual elements cleanly reflected in the HTML structure, provides a large source of pretraining data well suited to the diversity of downstream tasks. Intuitively, this objective subsumes common pretraining signals such as OCR, language modeling, image captioning. In addition to the novel pretraining strategy, we introduce a variable-resolution input representation and a more flexible integration of language and vision inputs, where language prompts such as questions are rendered directly on top of the input image. For the first time, we show that a single pretrained model can achieve state-of-the-art results in six out of nine tasks across four domains: documents, illustrations, user interfaces, and natural images. Using the model Converting from T5x to huggingface You can use the convert_pix2struct_checkpoint_to_pytorch.py script as follows: python convert_pix2struct_checkpoint_to_pytorch.py --t5x_checkpoint_path PATH_TO_T5X_CHECKPOINTS --pytorch_dump_path PATH_TO_SAVE If you are converting a large model, run: python convert_pix2struct_checkpoint_to_pytorch.py --t5x_checkpoint_path PATH_TO_T5X_CHECKPOINTS --pytorch_dump_path PATH_TO_SAVE --use-large Once saved, you can push your converted model with the following snippet: from transformers import Pix2StructForConditionalGeneration, Pix2StructProcessor model = Pix2StructForConditionalGeneration.from_pretrained(PATH_TO_SAVE) processor = Pix2StructProcessor.from_pretrained(PATH_TO_SAVE) model.push_to_hub("USERNAME/MODEL_NAME") processor.push_to_hub("USERNAME/MODEL_NAME") Running the model The instructions for running the model are exactly the same as the instructions stated on the pix2struct-textcaps-base model card. Contribution This model was originally contributed by Kenton Lee, Mandar Joshi et al. and added to the Hugging Face ecosystem by Younes Belkada.
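Since the card above defers to the pix2struct-textcaps-base instructions for running the model, here is a hedged sketch of that standard image-to-text flow applied to this checkpoint. The screenshot URL is a placeholder, and for widget captioning the target UI component is normally indicated on the input screenshot itself, which this sketch does not show.

import requests
from PIL import Image
from transformers import Pix2StructForConditionalGeneration, Pix2StructProcessor

# Sketch: standard image-to-text inference (mirrors the pix2struct-textcaps-base instructions).
model = Pix2StructForConditionalGeneration.from_pretrained("google/pix2struct-widget-captioning-base")
processor = Pix2StructProcessor.from_pretrained("google/pix2struct-widget-captioning-base")

url = "https://example.com/screenshot.png"   # placeholder screenshot; the target widget would be marked on it
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, return_tensors="pt")          # variable-resolution image patches
generated_ids = model.generate(**inputs, max_new_tokens=50)    # decode a caption for the marked widget
print(processor.decode(generated_ids[0], skip_special_tokens=True))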
Citation If you want to cite this work, please consider citing the original paper: @misc{https://doi.org/10.48550/arxiv.2210.03347, doi = {10.48550/ARXIV.2210.03347}, url = {https://arxiv.org/abs/2210.03347}, author = {Lee, Kenton and Joshi, Mandar and Turc, Iulia and Hu, Hexiang and Liu, Fangyu and Eisenschlos, Julian and Khandelwal, Urvashi and Shaw, Peter and Chang, Ming-Wei and Toutanova, Kristina}, keywords = {Computation and Language (cs.CL), Computer Vision and Pattern Recognition (cs.CV), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Pix2Struct: Screenshot Parsing as Pretraining for Visual Language Understanding}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} }
https://huggingface.co/google/long-t5-tglobal-large
LongT5 (transient-global attention, large-sized model) LongT5 model pre-trained on the English language. The model was introduced in the paper LongT5: Efficient Text-To-Text Transformer for Long Sequences by Guo et al. and first released in the LongT5 repository. The full model architecture and configuration can be found in the Flaxformer repository, which builds on another Google research project, T5x. Disclaimer: The team releasing LongT5 did not write a model card for this model so this model card has been written by the Hugging Face team. Model description The LongT5 model is an encoder-decoder transformer pre-trained in a text-to-text denoising generative setting (Pegasus-like generation pre-training). LongT5 is an extension of the T5 model, and it enables using one of two different efficient attention mechanisms: (1) Local attention, or (2) Transient-Global attention. These sparse attention patterns allow the model to efficiently handle long input sequences. LongT5 is particularly effective when fine-tuned for text generation tasks (summarization, question answering) that require handling long input sequences (up to 16,384 tokens). Results of LongT5 (transient-global attention, large-sized model) fine-tuned on multiple summarization and QA tasks:
Dataset | Rouge-1 | Rouge-2 | Rouge-Lsum
arXiv (16k input) | 48.28 | 21.63 | 44.11
PubMed (16k input) | 49.98 | 24.69 | 46.46
BigPatent (16k input) | 70.38 | 56.81 | 62.73
MultiNews (8k input) | 47.18 | 18.44 | 24.18
MediaSum (4k input) | 35.54 | 19.04 | 32.20
CNN / DailyMail (4k input) | 42.49 | 20.51 | 40.18
Dataset | EM | F1
Natural Questions (4k input) | 60.77 | 65.38
Trivia QA (16k input) | 78.38 | 82.45
Intended uses & limitations The model is mostly meant to be fine-tuned on a supervised dataset. See the model hub to look for fine-tuned versions on a task that interests you. How to use from transformers import AutoTokenizer, LongT5Model tokenizer = AutoTokenizer.from_pretrained("google/long-t5-tglobal-large") model = LongT5Model.from_pretrained("google/long-t5-tglobal-large") inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") outputs = model(**inputs) last_hidden_states = outputs.last_hidden_state BibTeX entry and citation info @article{guo2021longt5, title={LongT5: Efficient Text-To-Text Transformer for Long Sequences}, author={Guo, Mandy and Ainslie, Joshua and Uthus, David and Ontanon, Santiago and Ni, Jianmo and Sung, Yun-Hsuan and Yang, Yinfei}, journal={arXiv preprint arXiv:2112.07916}, year={2021} }
https://huggingface.co/datasets/google/trueteacher
e135a5c627501cd817dd0f54bbf61e86ba9336ff Barcelona's Champions League hopes were kept alive as they came from behind to beat Paris St-Germain and finish top of Group F. a4b473e2dd4770ccea0169d72e1bb482cfda1e20 If you're on a budget, here are some tips for saving money in one of the world's most expensive cities, Oslo. c347f533ae4864f0a79e340abd53bdc12600790e The number of people who have been wrongly paid tax in the last 12 months has increased by more than a quarter, HM Revenue and Customs has said. 4f515f307bdcf4eac1acd2ecd464a0e92aa1baaa What do your Premier League clubs need to end the season on a high? d65bbce603133a022c9cc148859f09f90d018fbe Ian Watkins, a paedophile rock star jailed for 29 years, will earn more than £100,000 in royalties while serving a 29-year jail sentence for child sex offences, it has emerged. e4dfbfb2d3522bdc021af394454c1783163888dd As the world mourns the death of Margaret Thatcher, the BBC's Damian Barr reports from London on the events of the day. b410c8ee81d3c0d3e1cef73ebc35e13a60022b07 A man has been charged with the murder of a mother-of-two whose body was found by her 11-year-old son. 97d984728b2e2775bfce39eda745b0a9a1553ae8 "I was told I had two years to live, but I met Professor Justin Stebbing and he gave me back my life. 366138c7193fa4a468bf264cde31ecc0137b95d7 A dashboard camera has helped a woman get an $88 parking fine she wasn't entitled to, thanks to footage from her car's camera. c90b6789d9e50f9a8446078042cd37fd176ffcf2 They were thrilled when they both fell pregnant at around the same time - but never imagined that their double deliveries would spark the ultimate family celebration. 58d2d9f16edd6fe40100100d9dc3c4e839962588 Ireland's Jonny Sexton suffered a black eye after a bruising Six Nations match against France. 02800bbc00f7b779c7510ce45d72cdc4401071f7 The White House has been urged to press ahead with a short-term debt limit fix, but it's not clear how it would work. d8140eae8609faaa3b56ea0dd3cfcc7f1011153b Labour's Tristram Hunt has been accused of hypocrisy after revealing he takes lessons himself while condemning plans to allow more unqualified teachers into schools. d3bc76fa2aa6350d10e7fefcdd593137e88b88bb A BBC employee who sent a package that was mistaken for a bomb has been suspended over the blunder. 1ee83856b88d36f2209b300019a5d46ca55b6bdc Liverpool striker Mario Balotelli is a man who is a man of character. 072c913df706e5c822c2aa8545ad6dcd500a6712 A man who served 39 years in prison for the rape and murder of a young girl has been freed after a judge ruled he was given a fair trial. f726cd2d59546685e9331373125ad65e90a5d907 A van driver who killed a man in a road rage attack has been warned he faces a lengthy jail term. 99331f20ba5cde9d28cd3bd096f3f7e849629d8a The family of a boy shot twice outside his home in Milwaukee has decided to leave the area and will not stay for another night. 671eb41b1b5b8ebf17c4a5953266661d8a523907 A former chef of Kim Jong Un's father has described his reunion with the young leader, saying he was "betrayed" by Kim. da53f799eafca26f36e306364dc5b9e6a4c21e52 ""Dancing With the Stars"" champ Ian Ziering is reportedly swapping the ballroom for a striptease with the Chippendales. 3f671415fbec654756ad80211821e4c42545f416 Iraq's government has said it will revise its plan to increase production in the coming years, despite the recent violence in the country. a32af72f365970a5cac8cb0c4ba8c00c1ace6ff9 A group of women in Uganda is tackling the issue of energy poverty by empowering women to earn a steady income. 
72a555c33d8ecd4d7586f2f6cda9bb995c9077ec China is behind a massive cyber-espionage campaign against the United States, a leading cybersecurity expert has told CNN. f3f60734fe2e5c5ba168e5908bd5d5bc73d98d1f A woman who shattered a bridesmaid's face with a wine glass at a wedding party has been jailed for four and a half years. c5013d5ffb32420be3643f88729052ca7f870dd6 Amid the escalating violence in Iraq and Syria, the BBC's Scott Shane looks at the latest developments in the conflict. 18ab7a561b479b38e20c9bb52ee675d339de215a The US senator from Arizona has defended the use of torture in the war on terrorism, saying that the tactics were ""not just ineffective but also damaging." 870573203dab1515db0edb617c5774fe9529501c A charter school in New Orleans is revising a policy that required pregnant students to be removed from class and home-schooled, a school official says. bec88a7292476903d5df86377ea9c45a56274bfb A former felon who helped launch a US presidential campaign is helping a Republican build a bridge to African-American voters. 6aeb270408c5e64babb60793a92c76c46252ba37 The police investigating the death of a mother of two in custody have been removed from their duties. f5aff3d6234eae416ebc0b153aee858f1e58d2cf The UK Ministry of Defence (MoD) has said it is investigating reports that a British-trained Iraqi policeman may have saved the lives of dozens of Shia pilgrims by throwing himself on a suicide bomber. 776b2d775e8e661f4592ad39cf314530d4b9507f A US TV show host has been accused of putting filming of his latest documentary on a house he bought after a mother and her three children were possessed. d9184d567834003417acb1fbe8007da5419343f7 A bus driver who stole a bus and drove away with no insurance has been jailed for two years. 41b7b6fc94aad249e483e439187c890adc496c89 A shortlist of three candidates has been drawn up for the vacant Fulham manager's job, BBC Radio Solent understands. 99db081a58a48097e2e6aa2b56890c562750b5fe A Vietnamese woman has been found guilty of severing her husband's penis and slicing off his tofu, a court has heard. f2ee3434b505f04a8dff0867bd0a954b4a08c84a The future of Britain’s nuclear deterrent could be at the heart of a deal worth £31bn worth of a deal, the BBC understands. 66e2092d4ea3a7cbeb504e512c63b9c3ffc3d5b0 A 90-year-old Canadian pilot is still flying his beloved Tiger Moth plane - despite the fact he first started flying in it when he was a teenager. e1649a343b9ae8607e9462d85e7117da01ecc129 The Today Show host is set to run the Boston Marathon in a bid to raise money for victims of the Boston Marathon bombings. 61fe1d2c34f1f289fbf573f63ce113bdae3b1352 The scandal surrounding the mistress of French President Francois Hollande has been uncovered by a BBC investigation. ae5103d6cae3b4032414c77e1c4582272cf93b71 YouTube users will soon be able to give uploaded videos a rating to help protect children from inappropriate online content, according to the British Board of Film Classification (BBFC). 80971fb266f5a3ebcbfc360a55630fe6ae287ad2 A couple have been given a supposedly ancient hot cross bun - but it could be the world's oldest - dating back to the year the slave trade was abolished. be15aa65f6c29309215de3e2c54b5f8bae341892 ITV’s The X Factor has been criticised for a lack of singing during its latest series. 69c8917177a1736b61ccb67c5f18ad6e2ef8275d Toners are a staple part of many women's beauty regimes - but do they do more harm than good? 
34af053ed9ede57a027810b656f2b387d91ffdef American Airlines grounded flights across the US on Tuesday due to a technical glitch. 5400ca8b3cfc1790c1e06cae4f4a3e6cb79dc806 Liverpool manager Brendan Rodgers says his side are back to their best after a 'brilliant' win over Tottenham. 16a9a9c924ef0bc0ada2d02db8f0d856a9da212a DNA samples from a kidnapping suspect in Ohio have been sent to a crime lab in the state to see if they can link him to other crimes. 8a83c962700c63be6027e339e99a34f5e872694b The Daily Record has learned that Rachel Canning and her ex-boyfriend have dropped domestic violence restraint orders against each other. 2e29ff8c5c16199aa7dd6ae3b8d72ed5fc7f0283 A judge in Guatemala has said the couple's bid to become the country's next president is a "personal right, the human right, the nepotism" of the constitution. bc0f194d2c9cf432662da33bf94e58c0100e0d29 The CDC has issued a warning to patients and doctors that a deadly outbreak of meningitis linked to a tainted batch of steroids could spread rapidly. 9ca7ce00bb2c2ab29346a7cf6a4485616d8ac69b Police are searching for a man believed to be linked to the disappearance of a 17-year-old girl in Massachusetts. 393375a6fd38eb0da4323b004dbc66888820f3ac Pork DNA has been found in Halal chicken sausages served in a Westminster school, the council has confirmed. 5b82fdfbf169eb5de6f30485e44c28b7af5f26dd Pope Francis has been sworn in as the new Pope of Argentina, a move feared to be a catalyst for nationalism. 51ff6ebe9ab156c07506d02493963bb68a3de0d6 A teenager who collapsed and almost died after drinking energy drinks has told of her experience on a BBC show. 149c646a3a824b75f75735c484e97dc561b31dad North Korea has destroyed a key part of its nuclear weapons program in a symbolic move, a CNN correspondent reports. 6c2c6826a83145ffd45eb59c02943475882d5abb The annual run of the Pamplona bulls in Spain has been a highlight of the city's festivities for centuries, but it has rarely gone as planned. 7d7304f54e5643be1148944c47c3917b49c9ce5f A man has been charged with trespassing after a tiger jumped out of a car and rolled under a wire, prompting a police investigation. be61eb82073a379fa779003ff45640787d0091e5 The White House has reacted angrily to President Vladimir Putin's comments that Russia is "stumbling into a revisionist Cold War mindset". 042cd99976d91af95e6b065befe2308ab78bd875 The legal services commission has blocked a request for information about how much taxpayers’ money they spent on legal aid during the Stephen Lawrence murder trial. 3a1c0448c239dfdbc1398f17c3a626e493206285 A 19-year-old woman who has spent £2,000 on her outfits has revealed her new style. 5d919e9be603dfd26deef78abda4d84c682a8d2b The deaths of a baby girl and her father have sparked a bitter custody battle between the parents. de0f09797724610c8b23e6c94cdb713d8e952d66 The number of westerners joining the fight in Syria is ""unprecedented,'' a senior U.S. official said. cb4e8170bcf56cc76a3b2a41924a75afb27eb54d It's a bit of a prank, but it's not a sexist joke. caba1ef1af497fa5a9dc864a8ff441c8cfbf9869 The NFL's most-wanted player has been charged with driving under the influence of alcohol. 8fa95033d9682438cbc134ac88331b13c010fd9b Manchester United's new signings could be announced as early as Monday, with Ed Woodward insisting the club are still a huge attraction to top players. 
dc58ee6b1761919cc74c8c8f292ea856d19b57b9 A pilot involved in the search for the missing Malaysia Airlines jet has become an internet sensation after online observers hailed his chiselled good looks. 8b81634fb722af8a6943a8a92c6577c5073be723 A grieving family have told how they learned the devastating news that their devoted daughter was killed in a head-on crash. c4808548c12a8b6b9d8cdb6a722c2761293496b6 A man with a massive tumour on his neck has had it removed after years of suffering with the condition. 0c74e1ae61834335e879ee1731ff523f72939d2b The illegal wildlife trade is a multi-billion dollar industry that is fueling conflict and undermining governments, and it is getting out of control. a91aa9263208c74da4bcd50e515625b1bce46e60 Sale Sharks fly-half Danny Cipriani has been left out of England's squad for the autumn internationals after being left out of the QBE Series. d0993609606029d606047823257cbaaa7d2d8abc The Home Office has been accused of failing to tell Abu Qatada he still had time to appeal against his deportation to Jordan. dc5d9dcda8938476d6a056deb128396d29a29a30 A dementia sufferer who died while handcuffed to a hospital bed was one of a string of cases at a London immigration centre where the elderly and vulnerable were shackled, a report has found. 3f92db67eda5b964c92fdc7b724b2ae6a4524fc8 Celtic manager says he is confident he can sign a goalkeeper to replace Fraser Forster if the England international is sold. 6195800abef2e5419b643c05c0e8432586d67abd The cheap, easy-to-get formula that has made China the world's manufacturing base is coming to an end, and the change will affect global businesses and consumers, argues the Financial Times's Paul Carter. 76e4e7c4cc6273a6fb41bce755946016ba31a2af Manchester City's most important player is the Spaniard's best player by far, according to a former player. e4770660c404c2d79aa5d29df301b43a0ed4949e Scientists are working on a project to produce a genetically modified (GM) apple that will not trigger an allergic reaction in those who are sensitive to the fruit. 63c6bdfe127f252b82bfc4c701c7876998128173 Hollywood's biggest names are attempting to crack the Indian film market, but are they finding success? b8136991b9967b6d4ffe6f2ce92e018959667746 Lord Justice Fulford, who was appointed as a judge in the wake of the paedophile campaigning scandal, has been suspended from the Court of Appeal. aef8ef6d611d6e77e3a70c8e3e18d677f21bc061 West Ham manager Slaven Bilic has been given permission to speak to Valencia about their bid for Cheikhou Kouyate. ed2701d5f3f55443b55f9b1969bbb6ecbe675e69 Everton striker Steven Naismith has donated tickets to unemployed people in Liverpool to help them enjoy a day out at a Premier League match. 624b504c2733a0dc3c276ec0c1041bcedc6bd95a A decorated RAF pilot who hunted down some of the UK's most deadly spies has died at the age of 98. 12d11bd91ab0acdecea530e3e1c5efd7c4312358 Roger Federer beat Jo-Wilfried Tsonga to reach the Australian Open semis. 658162f15c255252c8cd724c29bea9821b6146a1 A new species of human may have been discovered by scientists studying a bone found in a Siberian cave, according to a report in the journal Nature. 644c4f6a827414b05cbb6bdde0326bcbbfd5fc7f ""Ramadan has not arrived at the ideal moment for a player to play a football match,"" Inter Milan coach Jose Mourinho said after dropping a player from his team. 
5052fee31cdc2a0a4e60c6d9a01f71af1007b328 The Advertising Standards Authority (ASA) has found the NHS is misleading the public by failing to list the cost of calling some 0844 GP phone numbers. 666aee1d3139fb17c4cf5c383370f9e149ff16d8 It's that time of year again and it's time for binky the blond to get the bare minimum. fb5c3e8abd68e2a5c2e89579fc03c76ab934fcfd A courtroom confrontation that ended with a woman being pinned to the ground and a police officer reaching into her mouth to retrieve a pill is now the subject of a viral video. 964f461e32dcb6a8d19adbb7c4ee2d8f54e52d7e Elephants are known to be one of the world's most intelligent animals, but now it seems they can also tell the time. 17662736c0b0056d18a33c078f04b64bc369ab0b The Pentagon has defended its cyberattack strategy after it uncovered the "previously classified" cyberattack. ca431d698b8df6cd0188ff8774e2710b4bf528d4 Tottenham's new manager is preparing for his first competitive game at White Hart Lane after a 2-0 win over QPR in Cyprus. b2b30a38f47d12c228c6186ed04ef47aa3482669 A team of tech-savvy inventors have invented the world's first greenhouse. 1e998102ff832608d25b7c2a966df2b5fc5a3476 The world's economic watchdog has warned that a rise in financial instability could threaten global growth. 0009ebb1967511741629926ef9f5faea2bb6be24 Hawaiian Airlines has the best on-time performance of the 14 largest U.S. airlines, according to a new report. 8d18d837945dd0ca3c9b99cb85cb97f45c37f3e4 Roger Federer's Wimbledon triumph has been described as ""the greatest player to have ever played the game"" by one of the men's game's most respected coaches. d391164eb789c29456f697bdb049b058ff77acc7 Bahrain has condemned the sentencing of protesters who were sentenced to life in prison for allegedly plotting to overthrow the government. 7688f3ed9e03177aaaec3fcff881a0505b404bb8 The Senate's debate on the Koch brothers' campaign spending has been interrupted by a debate about the billionaires' influence on the economy. d677052589e58a15610477188b8ee1d1537ea727 Connecticut will become the 17th state to abolish the death penalty, the Associated Press reports. 5c6edb53cdb8e885a7767ac9ffa6fa24c20c35bc A woman who claimed she was wrongly arrested and ill-treated by police over a taxi-harness bust-up has won a landmark High Court battle. 469f17dd0d48b4ef23a6929c425f0da389af834b A US mother has said she was sober enough to breastfeed her six-month-old baby while having two beers with dinner. d259d8b19632fb598c25cec86658ef5c5ed98454 A man who lied about falling over in a supermarket has been given a suspended prison sentence. 9459c25241e099399cff4579e66d00347507832e Ravel Morrison could be sold to QPR for £15m if the club can secure a new deal, reports BBC Radio Manchester. 6420430469f999ce00977cd6b2f033d4f28a8d19 "It was like a shock," says the man who led us to the airport in Guinea, where we were greeted by a crowd of people.
https://huggingface.co/albert-xxlarge-v1
ALBERT XXLarge v1 Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in this paper and first released in this repository. This model, as all ALBERT models, is uncased: it does not make a difference between english and English. Disclaimer: The team releasing ALBERT did not write a model card for this model so this model card has been written by the Hugging Face team. Model description ALBERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives: Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence. Sentence Ordering Prediction (SOP): ALBERT uses a pretraining loss based on predicting the ordering of two consecutive segments of text. This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the ALBERT model as inputs. ALBERT is particular in that it shares its layers across its Transformer. Therefore, all layers have the same weights. Using repeating layers results in a small memory footprint; however, the computational cost remains similar to a BERT-like architecture with the same number of hidden layers as it has to iterate through the same number of (repeating) layers. This is the first version of the xxlarge model. Version 2 is different from version 1 due to different dropout rates, additional training data, and longer training. It has better results in nearly all downstream tasks. This model has the following configuration: 12 repeating layers, 128 embedding dimension, 4096 hidden dimension, 64 attention heads, 223M parameters. Intended uses & limitations You can use the raw model for either masked language modeling or sentence order prediction, but it's mostly intended to be fine-tuned on a downstream task. See the model hub to look for fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at models like GPT2.
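The description above notes that the features produced by the model can feed a standard classifier; the sketch below illustrates that pattern with the sequence-classification head from transformers. The texts and labels are made up for illustration, and a real fine-tuning run would also include an optimizer loop over a labeled dataset.

import torch
from transformers import AlbertTokenizer, AlbertForSequenceClassification

# Sketch: a classification head on top of the pretrained encoder, as described above.
tokenizer = AlbertTokenizer.from_pretrained("albert-xxlarge-v1")
model = AlbertForSequenceClassification.from_pretrained("albert-xxlarge-v1", num_labels=2)

texts = ["I loved this movie.", "The plot made no sense."]   # made-up examples
labels = torch.tensor([1, 0])                                # hypothetical sentiment labels

inputs = tokenizer(texts, padding=True, return_tensors="pt")
outputs = model(**inputs, labels=labels)                     # head sits on the pooled [CLS] features
print(outputs.loss, outputs.logits.shape)                    # loss to backpropagate during fine-tuning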
How to use You can use this model directly with a pipeline for masked language modeling: >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='albert-xxlarge-v1') >>> unmasker("Hello I'm a [MASK] model.") [ { "sequence":"[CLS] hello i'm a modeling model.[SEP]", "score":0.05816134437918663, "token":12807, "token_str":"▁modeling" }, { "sequence":"[CLS] hello i'm a modelling model.[SEP]", "score":0.03748830780386925, "token":23089, "token_str":"▁modelling" }, { "sequence":"[CLS] hello i'm a model model.[SEP]", "score":0.033725276589393616, "token":1061, "token_str":"▁model" }, { "sequence":"[CLS] hello i'm a runway model.[SEP]", "score":0.017313428223133087, "token":8014, "token_str":"▁runway" }, { "sequence":"[CLS] hello i'm a lingerie model.[SEP]", "score":0.014405295252799988, "token":29104, "token_str":"▁lingerie" } ] Here is how to use this model to get the features of a given text in PyTorch: from transformers import AlbertTokenizer, AlbertModel tokenizer = AlbertTokenizer.from_pretrained('albert-xxlarge-v1') model = AlbertModel.from_pretrained("albert-xxlarge-v1") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) and in TensorFlow: from transformers import AlbertTokenizer, TFAlbertModel tokenizer = AlbertTokenizer.from_pretrained('albert-xxlarge-v1') model = TFAlbertModel.from_pretrained("albert-xxlarge-v1") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) Limitations and bias Even if the training data used for this model could be characterized as fairly neutral, this model can have biased predictions: >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='albert-xxlarge-v1') >>> unmasker("The man worked as a [MASK].") [ { "sequence":"[CLS] the man worked as a chauffeur.[SEP]", "score":0.029577180743217468, "token":28744, "token_str":"▁chauffeur" }, { "sequence":"[CLS] the man worked as a janitor.[SEP]", "score":0.028865724802017212, "token":29477, "token_str":"▁janitor" }, { "sequence":"[CLS] the man worked as a shoemaker.[SEP]", "score":0.02581118606030941, "token":29024, "token_str":"▁shoemaker" }, { "sequence":"[CLS] the man worked as a blacksmith.[SEP]", "score":0.01849772222340107, "token":21238, "token_str":"▁blacksmith" }, { "sequence":"[CLS] the man worked as a lawyer.[SEP]", "score":0.01820771023631096, "token":3672, "token_str":"▁lawyer" } ] >>> unmasker("The woman worked as a [MASK].") [ { "sequence":"[CLS] the woman worked as a receptionist.[SEP]", "score":0.04604868218302727, "token":25331, "token_str":"▁receptionist" }, { "sequence":"[CLS] the woman worked as a janitor.[SEP]", "score":0.028220869600772858, "token":29477, "token_str":"▁janitor" }, { "sequence":"[CLS] the woman worked as a paramedic.[SEP]", "score":0.0261906236410141, "token":23386, "token_str":"▁paramedic" }, { "sequence":"[CLS] the woman worked as a chauffeur.[SEP]", "score":0.024797942489385605, "token":28744, "token_str":"▁chauffeur" }, { "sequence":"[CLS] the woman worked as a waitress.[SEP]", "score":0.024124596267938614, "token":13678, "token_str":"▁waitress" } ] This bias will also affect all fine-tuned versions of this model. Training data The ALBERT model was pretrained on BookCorpus, a dataset consisting of 11,038 unpublished books and English Wikipedia (excluding lists, tables and headers). 
Training procedure Preprocessing The texts are lowercased and tokenized using SentencePiece and a vocabulary size of 30,000. The inputs of the model are then of the form: [CLS] Sentence A [SEP] Sentence B [SEP] Training The ALBERT procedure follows the BERT setup. The details of the masking procedure for each sentence are the following: 15% of the tokens are masked. In 80% of the cases, the masked tokens are replaced by [MASK]. In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace). In the 10% remaining cases, the masked tokens are left as is. Evaluation results When fine-tuned on downstream tasks, the ALBERT models achieve the following results:
Model | Average | SQuAD1.1 | SQuAD2.0 | MNLI | SST-2 | RACE
V2:
ALBERT-base | 82.3 | 90.2/83.2 | 82.1/79.3 | 84.6 | 92.9 | 66.8
ALBERT-large | 85.7 | 91.8/85.2 | 84.9/81.8 | 86.5 | 94.9 | 75.2
ALBERT-xlarge | 87.9 | 92.9/86.4 | 87.9/84.1 | 87.9 | 95.4 | 80.7
ALBERT-xxlarge | 90.9 | 94.6/89.1 | 89.8/86.9 | 90.6 | 96.8 | 86.8
V1:
ALBERT-base | 80.1 | 89.3/82.3 | 80.0/77.1 | 81.6 | 90.3 | 64.0
ALBERT-large | 82.4 | 90.6/83.9 | 82.3/79.4 | 83.5 | 91.7 | 68.5
ALBERT-xlarge | 85.5 | 92.5/86.1 | 86.1/83.1 | 86.4 | 92.4 | 74.8
ALBERT-xxlarge | 91.0 | 94.8/89.3 | 90.2/87.4 | 90.8 | 96.9 | 86.5
BibTeX entry and citation info @article{DBLP:journals/corr/abs-1909-11942, author = {Zhenzhong Lan and Mingda Chen and Sebastian Goodman and Kevin Gimpel and Piyush Sharma and Radu Soricut}, title = {{ALBERT:} {A} Lite {BERT} for Self-supervised Learning of Language Representations}, journal = {CoRR}, volume = {abs/1909.11942}, year = {2019}, url = {http://arxiv.org/abs/1909.11942}, archivePrefix = {arXiv}, eprint = {1909.11942}, timestamp = {Fri, 27 Sep 2019 13:04:21 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-1909-11942.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} }
https://huggingface.co/datasets/google/fleurs
https://huggingface.co/datasets/google/MusicCaps
Dataset Card for MusicCaps Dataset Summary The MusicCaps dataset contains 5,521 music examples, each of which is labeled with an English aspect list and a free text caption written by musicians. An aspect list is for example "pop, tinny wide hi hats, mellow piano melody, high pitched female vocal melody, sustained pulsating synth lead", while the caption consists of multiple sentences about the music, e.g., "A low sounding male voice is rapping over a fast paced drums playing a reggaeton beat along with a bass. Something like a guitar is playing the melody along. This recording is of poor audio-quality. In the background a laughter can be noticed. This song may be playing in a bar." The text is solely focused on describing how the music sounds, not the metadata like the artist name. The labeled examples are 10s music clips from the AudioSet dataset (2,858 from the eval and 2,663 from the train split). Please cite the corresponding paper when using this dataset: http://arxiv.org/abs/2301.11325 (DOI: 10.48550/arXiv.2301.11325) Dataset Usage The published dataset takes the form of a .csv file that contains the ID of YouTube videos and their start/end stamps. In order to use this dataset, one must download the corresponding YouTube videos and chunk them according to the start/end times. The following repository has an example script and notebook to load the clips. The notebook also includes a Gradio demo that helps explore some samples: https://github.com/nateraw/download-musiccaps-dataset Supported Tasks and Leaderboards [More Information Needed] Languages [More Information Needed] Dataset Structure Data Instances [More Information Needed] Data Fields ytid YT ID pointing to the YouTube video in which the labeled music segment appears. You can listen to the segment by opening https://youtu.be/watch?v={ytid}&start={start_s} start_s Position in the YouTube video at which the music starts. end_s Position in the YouTube video at which the music ends. All clips are 10s long. audioset_positive_labels Labels for this segment from the AudioSet (https://research.google.com/audioset/) dataset. aspect_list A list of aspects describing the music. caption A multi-sentence free text caption describing the music. author_id An integer for grouping samples by who wrote them. is_balanced_subset If this value is true, the row is a part of the 1k subset which is genre-balanced. is_audioset_eval If this value is true, the clip is from the AudioSet eval split. Otherwise it is from the AudioSet train split. Data Splits [More Information Needed] Dataset Creation Curation Rationale [More Information Needed] Source Data Initial Data Collection and Normalization [More Information Needed] Who are the source language producers? [More Information Needed] Annotations Annotation process [More Information Needed] Who are the annotators? [More Information Needed] Personal and Sensitive Information [More Information Needed] Considerations for Using the Data Social Impact of Dataset [More Information Needed] Discussion of Biases [More Information Needed] Other Known Limitations [More Information Needed] Additional Information Dataset Curators This dataset was shared by @googleai Licensing Information The license for this dataset is cc-by-sa-4.0 Citation Information [More Information Needed] Contributions [More Information Needed]
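Since only the .csv metadata is hosted, a typical workflow is to fetch and trim each clip yourself. The sketch below is one possible approach (not an official script), assuming the csv can be read directly with datasets, that yt-dlp and ffmpeg are installed, and that the split is exposed as "train"; the output file names are also just placeholders. The ytid, start_s and end_s columns are the data fields documented above.

import subprocess
import yt_dlp
from datasets import load_dataset

ds = load_dataset("google/MusicCaps", split="train")   # assumption: the csv is exposed as a "train" split
row = ds[0]
ytid, start_s, end_s = row["ytid"], row["start_s"], row["end_s"]

# Download the audio track of the YouTube video as a wav file.
ydl_opts = {
    "format": "bestaudio/best",
    "outtmpl": f"{ytid}.%(ext)s",
    "postprocessors": [{"key": "FFmpegExtractAudio", "preferredcodec": "wav"}],
}
with yt_dlp.YoutubeDL(ydl_opts) as ydl:
    ydl.download([f"https://www.youtube.com/watch?v={ytid}"])

# Cut out the labeled 10 s segment using the start/end stamps from the csv.
subprocess.run([
    "ffmpeg", "-y", "-i", f"{ytid}.wav",
    "-ss", str(start_s), "-to", str(end_s),
    f"{ytid}_{start_s}_{end_s}.wav",
])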
https://huggingface.co/datasets/google/red_ace_asr_error_detection_and_correction
RED-ACE Dataset Summary This dataset can be used to train and evaluate ASR Error Detection or Correction models. It was introduced in the RED-ACE paper (Gekhman et al., 2022). The dataset contains ASR outputs on the LibriSpeech corpus (Panayotov et al., 2015) with annotated transcription errors. Dataset Details The LibriSpeech corpus was decoded using Google Cloud Speech-to-Text API, with the default and video models. The word-level confidence was enabled and is provided as part of the transcription hypothesis. To annotate word-level errors (for the error detection task), the hypothesis words were aligned with the reference (correct) transcription to find an edit path (insertions, deletions and substitutions) with the minimum edit distance (from the hypothesis to the reference). The hypothesis words with deletions and substitutions were then labeled as ERROR (1), the rest were labeled as NOTERROR (0). Data format The dataset has train, development and test splits which correspond to the splits in Librispeech. The data contains json lines with the following keys (note that asr_hypothesis[i], confidence_scores[i] and error_labels[i] correspond to the same word): "id" - The librispeech id. "truth" - The reference (correct) transcript from Librispeech. "asr_model" - The ASR model used for transcription. "librispeech_pool" - Corresponds to the original pool (split) in the librispeech data. "asr_hypothesis" - The transcription hypothesis. "confidence_scores" - The word-level confidence scores provided as part of the transcription hypothesis. "error_labels" - The error labels (1 error, 0 not error) that were obtained by aligning the hypothesis and the reference. Here is an example of a single data item:

{
  "id": "test-other/6070/86744/6070-86744-0024",
  "truth": "my dear franz replied albert when upon receipt of my letter you found the necessity of asking the count's assistance you promptly went to him saying my friend albert de morcerf is in danger help me to deliver him",
  "asr_model": "default",
  "librispeech_pool": "other",
  "asr_hypothesis": ["my", "dear", "friends", "replied", "Albert", "received", "my", "letter", "you", "found", "the", "necessity", "of", "asking", "the", "county", "assistance", "you", "promptly", "went", "to", "him", "saying", "my", "friend", "all", "but", "the", "most", "stuff", "is", "in", "danger", "help", "me", "to", "deliver", "it"],
  "confidence_scores": ["0.9876290559768677", "0.9875272512435913", "0.6921446323394775", "0.9613730311393738", "0.9413103461265564", "0.6563355922698975", "0.9876290559768677", "0.9876290559768677", "0.9876290559768677", "0.9876290559768677", "0.9876290559768677", "0.9876290559768677", "0.9876290559768677", "0.9876290559768677", "0.9876290559768677", "0.9876290559768677", "0.9876290559768677", "0.9876290559768677", "0.9876290559768677", "0.9876290559768677", "0.9876290559768677", "0.9876290559768677", "0.9876290559768677", "0.9876290559768677", "0.9876290559768677", "1.0", "1.0", "1.0", "1.0", "1.0", "0.9876290559768677", "0.9876290559768677", "0.9876290559768677", "0.9876290559768677", "0.9876290559768677", "0.9876290559768677", "0.5291957855224609", "0.5291957855224609"],
  "error_labels": ["0", "0", "1", "0", "0", "1", "0", "0", "0", "0", "0", "0", "0", "0", "0", "1", "0", "0", "0", "0", "0", "0", "0", "0", "0", "1", "1", "1", "1", "1", "0", "0", "0", "0", "0", "0", "0", "1"]
}

Loading the dataset The following code loads the dataset and locates the example data item from above:
from datasets import load_dataset

red_ace_data = load_dataset("google/red_ace_asr_error_detection_and_correction", split='test')

for example in red_ace_data:
    if example['id'] == 'test-other/6070/86744/6070-86744-0024':
        break

print(example)

Citation If you use this dataset for a research publication, please cite the RED-ACE paper (using the bibtex entry below), as well as the Librispeech paper mentioned above. @inproceedings{gekhman-etal-2022-red, title = "{RED}-{ACE}: Robust Error Detection for {ASR} using Confidence Embeddings", author = "Gekhman, Zorik and Zverinski, Dina and Mallinson, Jonathan and Beryozkin, Genady", booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2022", address = "Abu Dhabi, United Arab Emirates", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2022.emnlp-main.180", doi = "10.18653/v1/2022.emnlp-main.180", pages = "2800--2808", abstract = "ASR Error Detection (AED) models aim to post-process the output of Automatic Speech Recognition (ASR) systems, in order to detect transcription errors. Modern approaches usually use text-based input, comprised solely of the ASR transcription hypothesis, disregarding additional signals from the ASR model. Instead, we utilize the ASR system{'}s word-level confidence scores for improving AED performance. Specifically, we add an ASR Confidence Embedding (ACE) layer to the AED model{'}s encoder, allowing us to jointly encode the confidence scores and the transcribed text into a contextualized representation. Our experiments show the benefits of ASR confidence scores for AED, their complementary effect over the textual signal, as well as the effectiveness and robustness of ACE for combining these signals. To foster further research, we publish a novel AED dataset consisting of ASR outputs on the LibriSpeech corpus with annotated transcription errors.", }
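As a quick sanity check of the annotations, one can score a naive confidence-threshold detector against the gold error labels. This is only an illustrative baseline, not a method from the paper; the 0.9 threshold is arbitrary, and the float/int casts reflect the fact that scores and labels are stored as strings in the example above.

from datasets import load_dataset

red_ace_data = load_dataset("google/red_ace_asr_error_detection_and_correction", split="test")

THRESHOLD = 0.9   # arbitrary cut-off, for illustration only
tp = fp = fn = 0
for example in red_ace_data:
    for conf, label in zip(example["confidence_scores"], example["error_labels"]):
        pred = 1 if float(conf) < THRESHOLD else 0   # low confidence -> predict ERROR
        gold = int(label)
        if pred == 1 and gold == 1:
            tp += 1
        elif pred == 1 and gold == 0:
            fp += 1
        elif pred == 0 and gold == 1:
            fn += 1

precision = tp / (tp + fp) if (tp + fp) else 0.0
recall = tp / (tp + fn) if (tp + fn) else 0.0
print(f"precision={precision:.3f} recall={recall:.3f}")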
https://huggingface.co/google/vit-base-patch32-384
Vision Transformer (base-sized model) Vision Transformer (ViT) model pre-trained on ImageNet-21k (14 million images, 21,843 classes) at resolution 224x224, and fine-tuned on ImageNet 2012 (1 million images, 1,000 classes) at resolution 384x384. It was introduced in the paper An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale by Dosovitskiy et al. and first released in this repository. However, the weights were converted from the timm repository by Ross Wightman, who already converted the weights from JAX to PyTorch. Credits go to him. Disclaimer: The team releasing ViT did not write a model card for this model so this model card has been written by the Hugging Face team. Model description The Vision Transformer (ViT) is a transformer encoder model (BERT-like) pretrained on a large collection of images in a supervised fashion, namely ImageNet-21k, at a resolution of 224x224 pixels. Next, the model was fine-tuned on ImageNet (also referred to as ILSVRC2012), a dataset comprising 1 million images and 1,000 classes, at a higher resolution of 384x384. Images are presented to the model as a sequence of fixed-size patches (resolution 32x32), which are linearly embedded. One also adds a [CLS] token to the beginning of a sequence to use it for classification tasks. One also adds absolute position embeddings before feeding the sequence to the layers of the Transformer encoder. By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image. Intended uses & limitations You can use the raw model for image classification. See the model hub to look for fine-tuned versions on a task that interests you. How to use Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes: from transformers import ViTFeatureExtractor, ViTForImageClassification from PIL import Image import requests url = 'http://images.cocodataset.org/val2017/000000039769.jpg' image = Image.open(requests.get(url, stream=True).raw) feature_extractor = ViTFeatureExtractor.from_pretrained('google/vit-base-patch32-384') model = ViTForImageClassification.from_pretrained('google/vit-base-patch32-384') inputs = feature_extractor(images=image, return_tensors="pt") outputs = model(**inputs) logits = outputs.logits # model predicts one of the 1000 ImageNet classes predicted_class_idx = logits.argmax(-1).item() print("Predicted class:", model.config.id2label[predicted_class_idx]) Currently, both the feature extractor and model support PyTorch. Tensorflow and JAX/FLAX are coming soon, and the API of ViTFeatureExtractor might change. Training data The ViT model was pretrained on ImageNet-21k, a dataset consisting of 14 million images and 21k classes, and fine-tuned on ImageNet, a dataset consisting of 1 million images and 1k classes. Training procedure Preprocessing The exact details of preprocessing of images during training/validation can be found here. Images are resized/rescaled to the same resolution (224x224 during pre-training, 384x384 during fine-tuning) and normalized across the RGB channels with mean (0.5, 0.5, 0.5) and standard deviation (0.5, 0.5, 0.5). 
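If you prefer to prepare inputs yourself instead of relying on ViTFeatureExtractor, the fine-tuning preprocessing described above roughly corresponds to the following torchvision pipeline. This is an approximation for illustration, not the exact original implementation; the COCO image URL is reused from the usage example above.

from PIL import Image
import requests
from torchvision import transforms

url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")

preprocess = transforms.Compose([
    transforms.Resize((384, 384)),                   # fine-tuning resolution
    transforms.ToTensor(),                           # scales pixel values to [0, 1]
    transforms.Normalize(mean=(0.5, 0.5, 0.5),       # then shifts them to roughly [-1, 1]
                         std=(0.5, 0.5, 0.5)),
])

pixel_values = preprocess(image).unsqueeze(0)        # comparable to feature_extractor(...)["pixel_values"]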
Pretraining The model was trained on TPUv3 hardware (8 cores). All model variants are trained with a batch size of 4096 and learning rate warmup of 10k steps. For ImageNet, the authors found it beneficial to additionally apply gradient clipping at global norm 1. Pre-training resolution is 224. Evaluation results For evaluation results on several image classification benchmarks, we refer to tables 2 and 5 of the original paper. Note that for fine-tuning, the best results are obtained with a higher resolution (384x384). Of course, increasing the model size will result in better performance. BibTeX entry and citation info @misc{https://doi.org/10.48550/arxiv.2010.11929, doi = {10.48550/ARXIV.2010.11929}, url = {https://arxiv.org/abs/2010.11929}, author = {Dosovitskiy, Alexey and Beyer, Lucas and Kolesnikov, Alexander and Weissenborn, Dirk and Zhai, Xiaohua and Unterthiner, Thomas and Dehghani, Mostafa and Minderer, Matthias and Heigold, Georg and Gelly, Sylvain and Uszkoreit, Jakob and Houlsby, Neil}, keywords = {Computer Vision and Pattern Recognition (cs.CV), Artificial Intelligence (cs.AI), Machine Learning (cs.LG), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale}, publisher = {arXiv}, year = {2020}, copyright = {arXiv.org perpetual, non-exclusive license} } @inproceedings{deng2009imagenet, title={Imagenet: A large-scale hierarchical image database}, author={Deng, Jia and Dong, Wei and Socher, Richard and Li, Li-Jia and Li, Kai and Fei-Fei, Li}, booktitle={2009 IEEE conference on computer vision and pattern recognition}, pages={248--255}, year={2009}, organization={Ieee} }
https://huggingface.co/google/deplot
Model card for DePlot Table of Contents TL;DR Using the model Contribution Citation TL;DR The abstract of the paper states that: Visual language such as charts and plots is ubiquitous in the human world. Comprehending plots and charts requires strong reasoning skills. Prior state-of-the-art (SOTA) models require at least tens of thousands of training examples and their reasoning capabilities are still much limited, especially on complex human-written queries. This paper presents the first one-shot solution to visual language reasoning. We decompose the challenge of visual language reasoning into two steps: (1) plot-to-text translation, and (2) reasoning over the translated text. The key in this method is a modality conversion module, named as DePlot, which translates the image of a plot or chart to a linearized table. The output of DePlot can then be directly used to prompt a pretrained large language model (LLM), exploiting the few-shot reasoning capabilities of LLMs. To obtain DePlot, we standardize the plot-to-table task by establishing unified task formats and metrics, and train DePlot end-to-end on this task. DePlot can then be used off-the-shelf together with LLMs in a plug-and-play fashion. Compared with a SOTA model finetuned on more than >28k data points, DePlot+LLM with just one-shot prompting achieves a 24.0% improvement over finetuned SOTA on human-written queries from the task of chart QA. Using the model You can run a prediction by querying an input image together with a question as follows: from transformers import Pix2StructProcessor, Pix2StructForConditionalGeneration import requests from PIL import Image processor = Pix2StructProcessor.from_pretrained('google/deplot') model = Pix2StructForConditionalGeneration.from_pretrained('google/deplot') url = "https://raw.githubusercontent.com/vis-nlp/ChartQA/main/ChartQA%20Dataset/val/png/5090.png" image = Image.open(requests.get(url, stream=True).raw) inputs = processor(images=image, text="Generate underlying data table of the figure below:", return_tensors="pt") predictions = model.generate(**inputs, max_new_tokens=512) print(processor.decode(predictions[0], skip_special_tokens=True)) Converting from T5x to huggingface You can use the convert_pix2struct_checkpoint_to_pytorch.py script as follows: python convert_pix2struct_checkpoint_to_pytorch.py --t5x_checkpoint_path PATH_TO_T5X_CHECKPOINTS --pytorch_dump_path PATH_TO_SAVE --is_vqa if you are converting a large model, run: python convert_pix2struct_checkpoint_to_pytorch.py --t5x_checkpoint_path PATH_TO_T5X_CHECKPOINTS --pytorch_dump_path PATH_TO_SAVE --use-large --is_vqa Once saved, you can push your converted model with the following snippet: from transformers import Pix2StructForConditionalGeneration, Pix2StructProcessor model = Pix2StructForConditionalGeneration.from_pretrained(PATH_TO_SAVE) processor = Pix2StructProcessor.from_pretrained(PATH_TO_SAVE) model.push_to_hub("USERNAME/MODEL_NAME") processor.push_to_hub("USERNAME/MODEL_NAME") Contribution This model was originally contributed by Fangyu Liu, Julian Martin Eisenschlos et al. and added to the Hugging Face ecosystem by Younes Belkada. 
Citation If you want to cite this work, please consider citing the original paper: @misc{liu2022deplot, title={DePlot: One-shot visual language reasoning by plot-to-table translation}, author={Liu, Fangyu and Eisenschlos, Julian Martin and Piccinno, Francesco and Krichene, Syrine and Pang, Chenxi and Lee, Kenton and Joshi, Mandar and Chen, Wenhu and Collier, Nigel and Altun, Yasemin}, year={2022}, eprint={2212.10505}, archivePrefix={arXiv}, primaryClass={cs.CL} }
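The TL;DR above describes using DePlot's linearized table to prompt an LLM in a plug-and-play fashion. A minimal sketch of that second step, reusing processor and predictions from the usage snippet above, could look like the following; ask_llm is a placeholder for whatever LLM API you use, and the question is made up for illustration.

# Second step of the DePlot + LLM pipeline described in the TL;DR (illustrative sketch).
table = processor.decode(predictions[0], skip_special_tokens=True)

prompt = (
    "Read the table below and answer the question.\n\n"
    f"{table}\n\n"
    "Question: Which category has the highest value?\n"
    "Answer:"
)
# answer = ask_llm(prompt)   # e.g. a one-shot prompt sent to a pretrained large language model
print(prompt)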
https://huggingface.co/datasets/google/cvss
CVSS: A Massively Multilingual Speech-to-Speech Translation Corpus CVSS is a massively multilingual-to-English speech-to-speech translation corpus, covering sentence-level parallel speech-to-speech translation pairs from 21 languages into English. CVSS is derived from the Common Voice speech corpus and the CoVoST 2 speech-to-text translation corpus. The translation speech in CVSS is synthesized with two state-of-the-art TTS models trained on the LibriTTS corpus. CVSS includes two versions of spoken translation for all the 21 x-en language pairs from CoVoST 2, with each version providing unique values: CVSS-C: All the translation speeches are in a single canonical speaker's voice. Despite being synthetic, these speeches are of very high naturalness and cleanness, as well as having a consistent speaking style. These properties ease the modeling of the target speech and enable models to produce high quality translation speech suitable for user-facing applications. CVSS-T: The translation speeches are in voices transferred from the corresponding source speeches. Each translation pair has similar voices on the two sides despite being in different languages, making this dataset suitable for building models that preserve speakers' voices when translating speech into different languages. Together with the source speeches originated from Common Voice, they make two multilingual speech-to-speech translation datasets each with about 1,900 hours of speech. In addition to translation speech, CVSS also provides normalized translation text matching the pronunciation in the translation speech (e.g. on numbers, currencies, acronyms, etc.), which can be used for both model training as well as standardizing evaluation. Please check out our paper for the detailed description of this corpus, as well as the baseline models we trained on both datasets. Load the data The following example loads the translation speech (i.e. target speech) and the normalized translation text (i.e. target text) released in CVSS corpus. You'll need to load the source speech and optionally the source text from Common Voice v4.0 separately, and join them by the file names. from datasets import load_dataset # Load only ar-en and ja-en language pairs. Omitting the `languages` argument # would load all the language pairs. cvss_c = load_dataset('google/cvss', 'cvss_c', languages=['ar', 'ja']) # Print the structure of the dataset. print(cvss_c) License CVSS is released under the very permissive Creative Commons Attribution 4.0 International (CC BY 4.0) license. Citation Please cite this paper when referencing the CVSS corpus: @inproceedings{jia2022cvss, title={{CVSS} Corpus and Massively Multilingual Speech-to-Speech Translation}, author={Jia, Ye and Tadmor Ramanovich, Michelle and Wang, Quan and Zen, Heiga}, booktitle={Proceedings of Language Resources and Evaluation Conference (LREC)}, pages={6691--6703}, year={2022} }
https://huggingface.co/google/vit-hybrid-base-bit-384
Vision Transformer (base-sized model) - Hybrid The hybrid Vision Transformer (ViT) model was proposed in An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale by Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby. It's the first paper that successfully trains a Transformer encoder on ImageNet, attaining very good results compared to familiar convolutional architectures. ViT hybrid is a slight variant of the plain Vision Transformer, by leveraging a convolutional backbone (specifically, BiT) whose features are used as initial "tokens" for the Transformer. Disclaimer: The team releasing ViT did not write a model card for this model so this model card has been written by the Hugging Face team. Model description While the Transformer architecture has become the de-facto standard for natural language processing tasks, its applications to computer vision remain limited. In vision, attention is either applied in conjunction with convolutional networks, or used to replace certain components of convolutional networks while keeping their overall structure in place. We show that this reliance on CNNs is not necessary and a pure transformer applied directly to sequences of image patches can perform very well on image classification tasks. When pre-trained on large amounts of data and transferred to multiple mid-sized or small image recognition benchmarks (ImageNet, CIFAR-100, VTAB, etc.), Vision Transformer (ViT) attains excellent results compared to state-of-the-art convolutional networks while requiring substantially fewer computational resources to train. Intended uses & limitations You can use the raw model for image classification. See the model hub to look for fine-tuned versions on a task that interests you. How to use Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes: from transformers import ViTHybridImageProcessor, ViTHybridForImageClassification from PIL import Image import requests url = 'http://images.cocodataset.org/val2017/000000039769.jpg' image = Image.open(requests.get(url, stream=True).raw) feature_extractor = ViTHybridImageProcessor.from_pretrained('google/vit-hybrid-base-bit-384') model = ViTHybridForImageClassification.from_pretrained('google/vit-hybrid-base-bit-384') inputs = feature_extractor(images=image, return_tensors="pt") outputs = model(**inputs) logits = outputs.logits # model predicts one of the 1000 ImageNet classes predicted_class_idx = logits.argmax(-1).item() print("Predicted class:", model.config.id2label[predicted_class_idx]) >>> tabby, tabby cat For more code examples, we refer to the documentation. Training data The ViT-Hybrid model was pretrained on ImageNet-21k, a dataset consisting of 14 million images and 21k classes, and fine-tuned on ImageNet, a dataset consisting of 1 million images and 1k classes. Training procedure Preprocessing The exact details of preprocessing of images during training/validation can be found here. Images are resized/rescaled to the same resolution (224x224) and normalized across the RGB channels with mean (0.5, 0.5, 0.5) and standard deviation (0.5, 0.5, 0.5). Pretraining The model was trained on TPUv3 hardware (8 cores). All model variants are trained with a batch size of 4096 and learning rate warmup of 10k steps. 
For ImageNet, the authors found it beneficial to additionally apply gradient clipping at global norm 1. Training resolution is 224. Evaluation results For evaluation results on several image classification benchmarks, we refer to tables 2 and 5 of the original paper. Note that for fine-tuning, the best results are obtained with a higher resolution (384x384). Of course, increasing the model size will result in better performance. BibTeX entry and citation info @misc{wu2020visual, title={Visual Transformers: Token-based Image Representation and Processing for Computer Vision}, author={Bichen Wu and Chenfeng Xu and Xiaoliang Dai and Alvin Wan and Peizhao Zhang and Zhicheng Yan and Masayoshi Tomizuka and Joseph Gonzalez and Kurt Keutzer and Peter Vajda}, year={2020}, eprint={2006.03677}, archivePrefix={arXiv}, primaryClass={cs.CV} } @inproceedings{deng2009imagenet, title={Imagenet: A large-scale hierarchical image database}, author={Deng, Jia and Dong, Wei and Socher, Richard and Li, Li-Jia and Li, Kai and Fei-Fei, Li}, booktitle={2009 IEEE conference on computer vision and pattern recognition}, pages={248--255}, year={2009}, organization={Ieee} }
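The pretraining recipe described under "Pretraining" above (learning-rate warmup over 10k steps, gradient clipping at global norm 1) can be mimicked in a PyTorch training loop roughly as follows. This is a schematic sketch, not the original JAX training code; the model, optimizer and base learning rate are placeholders.

import torch

model = torch.nn.Linear(768, 1000)                       # placeholder for the real model
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)  # placeholder optimizer / base LR

warmup_steps = 10_000
scheduler = torch.optim.lr_scheduler.LambdaLR(
    optimizer, lr_lambda=lambda step: min(1.0, (step + 1) / warmup_steps)  # linear warmup
)

def training_step(batch_x, batch_y):
    optimizer.zero_grad()
    loss = torch.nn.functional.cross_entropy(model(batch_x), batch_y)
    loss.backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)       # clip at global norm 1
    optimizer.step()
    scheduler.step()
    return loss.item()

print(training_step(torch.randn(32, 768), torch.randint(0, 1000, (32,))))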
https://huggingface.co/datasets/google/dreambooth
Dataset Card for "dreambooth" Dataset of the Google paper DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation The dataset includes 30 subjects of 15 different classes. 9 out of these subjects are live subjects (dogs and cats) and 21 are objects. The dataset contains a variable number of images per subject (4-6). Images of the subjects are usually captured in different conditions, environments and under different angles. We include a file dataset/prompts_and_classes.txt which contains all of the prompts used in the paper for live subjects and objects, as well as the class name used for the subjects. The images have either been captured by the paper authors, or sourced from www.unsplash.com The dataset/references_and_licenses.txt file contains a list of all the reference links to the images in www.unsplash.com - and attribution to the photographer, along with the license of the image. project page Academic Citation If you use this work please cite: @inproceedings{ruiz2023dreambooth, title={Dreambooth: Fine tuning text-to-image diffusion models for subject-driven generation}, author={Ruiz, Nataniel and Li, Yuanzhen and Jampani, Varun and Pritch, Yael and Rubinstein, Michael and Aberman, Kfir}, booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition}, year={2023} } Disclaimer This is not an officially supported Google product.
https://huggingface.co/google/pix2struct-textcaps-base
Model card for Pix2Struct - Finetuned on TextCaps Table of Contents TL;DR Using the model Contribution Citation TL;DR Pix2Struct is an image encoder - text decoder model that is trained on image-text pairs for various tasks, including image captioning and visual question answering. The full list of available models can be found in Table 1 of the paper: The abstract of the paper states that: Visually-situated language is ubiquitous—sources range from textbooks with diagrams to web pages with images and tables, to mobile apps with buttons and forms. Perhaps due to this diversity, previous work has typically relied on domain-specific recipes with limited sharing of the underlying data, model architectures, and objectives. We present Pix2Struct, a pretrained image-to-text model for purely visual language understanding, which can be finetuned on tasks containing visually-situated language. Pix2Struct is pretrained by learning to parse masked screenshots of web pages into simplified HTML. The web, with its richness of visual elements cleanly reflected in the HTML structure, provides a large source of pretraining data well suited to the diversity of downstream tasks. Intuitively, this objective subsumes common pretraining signals such as OCR, language modeling, image captioning. In addition to the novel pretraining strategy, we introduce a variable-resolution input representation and a more flexible integration of language and vision inputs, where language prompts such as questions are rendered directly on top of the input image. For the first time, we show that a single pretrained model can achieve state-of-the-art results in six out of nine tasks across four domains: documents, illustrations, user interfaces, and natural images. Using the model Converting from T5x to huggingface You can use the convert_pix2struct_checkpoint_to_pytorch.py script as follows:

python convert_pix2struct_checkpoint_to_pytorch.py --t5x_checkpoint_path PATH_TO_T5X_CHECKPOINTS --pytorch_dump_path PATH_TO_SAVE

if you are converting a large model, run:

python convert_pix2struct_checkpoint_to_pytorch.py --t5x_checkpoint_path PATH_TO_T5X_CHECKPOINTS --pytorch_dump_path PATH_TO_SAVE --use-large

Once saved, you can push your converted model with the following snippet:

from transformers import Pix2StructForConditionalGeneration, Pix2StructProcessor

model = Pix2StructForConditionalGeneration.from_pretrained(PATH_TO_SAVE)
processor = Pix2StructProcessor.from_pretrained(PATH_TO_SAVE)

model.push_to_hub("USERNAME/MODEL_NAME")
processor.push_to_hub("USERNAME/MODEL_NAME")

Running the model In full precision, on CPU: You can run the model in full precision on CPU:

import requests
from PIL import Image
from transformers import Pix2StructForConditionalGeneration, Pix2StructProcessor

url = "https://www.ilankelman.org/stopsigns/australia.jpg"
image = Image.open(requests.get(url, stream=True).raw)

model = Pix2StructForConditionalGeneration.from_pretrained("google/pix2struct-textcaps-base")
processor = Pix2StructProcessor.from_pretrained("google/pix2struct-textcaps-base")

# image only
inputs = processor(images=image, return_tensors="pt")

predictions = model.generate(**inputs)
print(processor.decode(predictions[0], skip_special_tokens=True))
>>> A stop sign is on a street corner.
In full precision, on GPU: You can run the model in full precision on GPU:

import requests
from PIL import Image
from transformers import Pix2StructForConditionalGeneration, Pix2StructProcessor

url = "https://www.ilankelman.org/stopsigns/australia.jpg"
image = Image.open(requests.get(url, stream=True).raw)

model = Pix2StructForConditionalGeneration.from_pretrained("google/pix2struct-textcaps-base").to("cuda")
processor = Pix2StructProcessor.from_pretrained("google/pix2struct-textcaps-base")

# image only
inputs = processor(images=image, return_tensors="pt").to("cuda")

predictions = model.generate(**inputs)
print(processor.decode(predictions[0], skip_special_tokens=True))
>>> A stop sign is on a street corner.

In half precision, on GPU: You can run the model in half precision on GPU:

import requests
import torch
from PIL import Image
from transformers import Pix2StructForConditionalGeneration, Pix2StructProcessor

url = "https://www.ilankelman.org/stopsigns/australia.jpg"
image = Image.open(requests.get(url, stream=True).raw)

model = Pix2StructForConditionalGeneration.from_pretrained("google/pix2struct-textcaps-base", torch_dtype=torch.bfloat16).to("cuda")
processor = Pix2StructProcessor.from_pretrained("google/pix2struct-textcaps-base")

# image only
inputs = processor(images=image, return_tensors="pt").to("cuda", torch.bfloat16)

predictions = model.generate(**inputs)
print(processor.decode(predictions[0], skip_special_tokens=True))
>>> A stop sign is on a street corner.

Use different sequence length This model has been trained on a sequence length of 2048. You can try to reduce the sequence length for a more memory-efficient inference, but you may observe some performance degradation for small sequence lengths (<512). Just pass max_patches when calling the processor:

inputs = processor(images=image, return_tensors="pt", max_patches=512)

Conditional generation You can also prepend some input text to perform conditional generation:

import requests
from PIL import Image
from transformers import Pix2StructForConditionalGeneration, Pix2StructProcessor

url = "https://www.ilankelman.org/stopsigns/australia.jpg"
image = Image.open(requests.get(url, stream=True).raw)
text = "A picture of"

model = Pix2StructForConditionalGeneration.from_pretrained("google/pix2struct-textcaps-base")
processor = Pix2StructProcessor.from_pretrained("google/pix2struct-textcaps-base")

# image and text
inputs = processor(images=image, text=text, return_tensors="pt")

predictions = model.generate(**inputs)
print(processor.decode(predictions[0], skip_special_tokens=True))
>>> A picture of a stop sign that says yes.

Contribution This model was originally contributed by Kenton Lee, Mandar Joshi et al. and added to the Hugging Face ecosystem by Younes Belkada. Citation If you want to cite this work, please consider citing the original paper: @misc{https://doi.org/10.48550/arxiv.2210.03347, doi = {10.48550/ARXIV.2210.03347}, url = {https://arxiv.org/abs/2210.03347}, author = {Lee, Kenton and Joshi, Mandar and Turc, Iulia and Hu, Hexiang and Liu, Fangyu and Eisenschlos, Julian and Khandelwal, Urvashi and Shaw, Peter and Chang, Ming-Wei and Toutanova, Kristina}, keywords = {Computation and Language (cs.CL), Computer Vision and Pattern Recognition (cs.CV), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Pix2Struct: Screenshot Parsing as Pretraining for Visual Language Understanding}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} }
https://huggingface.co/karukas
Stephen Karukas karukas Research interests None yet Organizations models None public yet datasets 3 karukas/mediasum-summary-matching Viewer • Updated Feb 11 • 6 karukas/pubmed-abstract-matching Viewer • Updated Feb 9 • 7 karukas/arxiv-abstract-matching Viewer • Updated Feb 9 • 6
https://huggingface.co/Leko12345
Leo Kertsman Leko12345 Research interests None yet Organizations models None public yet datasets None public yet
https://huggingface.co/uniemann
Ulrich Niemann uniemann Research interests None yet Organizations models None public yet datasets None public yet
https://huggingface.co/jaaustin
Jacob Austin jaaustin Research interests None yet Organizations Papers 1 arxiv:2108.07732 models None public yet datasets None public yet
https://huggingface.co/eemmeme
1 M.E. Francis eemmeme Research interests ML for fun! Organizations models None public yet datasets None public yet
https://huggingface.co/suchitpuri
Suchit Puri suchitpuri https://suchitpuri.com suchitpuri Research interests None yet Organizations models None public yet datasets None public yet
https://huggingface.co/datasets/google/xtreme_s
XTREME-S

The Cross-lingual TRansfer Evaluation of Multilingual Encoders for Speech (XTREME-S) benchmark is designed to evaluate speech representations across languages, tasks, domains and data regimes. It covers 102 languages from 10+ language families, 3 different domains and 4 task families: speech recognition, translation, classification and retrieval.

TL;DR: XTREME-S is the first speech benchmark that is diverse, fully accessible, and reproducible. All datasets can be downloaded with a single line of code. An easy-to-use and flexible fine-tuning script is provided and actively maintained. XTREME-S covers speech recognition with FLEURS, Multilingual LibriSpeech (MLS) and VoxPopuli, speech translation with CoVoST-2, speech classification with LangID (FLEURS) and intent classification (Minds-14), and finally speech(-text) retrieval with FLEURS.
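The TL;DR above notes that every sub-dataset can be downloaded with a single line of code. If you first want to see which configuration names exist (the loading snippets further below use names such as "fleurs.af_za", "mls.pl" or "covost2.id.en"), the datasets library can list them. This short sketch is not part of the original card, and on recent datasets versions you may additionally need to allow the dataset's loading script to run:

from datasets import get_dataset_config_names

# Lists every configuration of the benchmark, e.g. "fleurs.af_za", "mls.pl",
# "voxpopuli.ro", "covost2.id.en", "minds14.fr-FR", "babel.as", ...
configs = get_dataset_config_names("google/xtreme_s")
print(len(configs))
print(configs[:10])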
Each of the tasks covers a subset of the 102 languages included in XTREME-S, from various regions:

Western Europe: Asturian, Bosnian, Catalan, Croatian, Danish, Dutch, English, Finnish, French, Galician, German, Greek, Hungarian, Icelandic, Irish, Italian, Kabuverdianu, Luxembourgish, Maltese, Norwegian, Occitan, Portuguese, Spanish, Swedish, Welsh
Eastern Europe: Armenian, Belarusian, Bulgarian, Czech, Estonian, Georgian, Latvian, Lithuanian, Macedonian, Polish, Romanian, Russian, Serbian, Slovak, Slovenian, Ukrainian
Central-Asia/Middle-East/North-Africa: Arabic, Azerbaijani, Hebrew, Kazakh, Kyrgyz, Mongolian, Pashto, Persian, Sorani-Kurdish, Tajik, Turkish, Uzbek
Sub-Saharan Africa: Afrikaans, Amharic, Fula, Ganda, Hausa, Igbo, Kamba, Lingala, Luo, Northern-Sotho, Nyanja, Oromo, Shona, Somali, Swahili, Umbundu, Wolof, Xhosa, Yoruba, Zulu
South-Asia: Assamese, Bengali, Gujarati, Hindi, Kannada, Malayalam, Marathi, Nepali, Oriya, Punjabi, Sindhi, Tamil, Telugu, Urdu
South-East Asia: Burmese, Cebuano, Filipino, Indonesian, Javanese, Khmer, Lao, Malay, Maori, Thai, Vietnamese
CJK languages: Cantonese and Mandarin Chinese, Japanese, Korean

Design principles

Diversity

XTREME-S aims for task, domain and language diversity. Tasks should be diverse and cover several domains to provide a reliable evaluation of model generalization and robustness to noisy, naturally-occurring speech in different environments. Languages should be diverse to ensure that models can adapt to a wide range of linguistic and phonological phenomena.

Accessibility

The sub-dataset for each task can be downloaded with a single line of code as shown in Supported Tasks. Each task is available under a permissive license that allows the use and redistribution of the data for research purposes. Tasks have been selected based on their usage by pre-existing multilingual pre-trained models, for simplicity.

Reproducibility

We produce fully open-sourced, maintained and easy-to-use fine-tuning scripts for each task as shown under Fine-tuning Example. XTREME-S encourages submissions that leverage publicly available speech and text datasets. Users should detail which data they use. In general, we encourage settings that can be reproduced by the community, but also encourage the exploration of new frontiers for speech representation learning.

Fine-tuning and Evaluation Example

We provide a fine-tuning script under research-projects/xtreme-s. The fine-tuning script is written in PyTorch and allows one to fine-tune and evaluate any Hugging Face model on XTREME-S. The example script is actively maintained by @anton-l and @patrickvonplaten. Feel free to reach out via issues or pull requests on GitHub if you have any questions.

Leaderboards

The leaderboard for the XTREME-S benchmark can be found at this address (TODO(PVP)).

Supported Tasks

Note that the supported tasks focus particularly on the linguistic aspects of speech; non-linguistic/paralinguistic aspects of speech relevant to e.g. speech synthesis or voice conversion are not evaluated.

1. Speech Recognition (ASR)

We include three speech recognition datasets: FLEURS-ASR, MLS and VoxPopuli (optionally BABEL). Multilingual fine-tuning is used for these three datasets.

FLEURS-ASR

FLEURS-ASR is the speech version of the FLORES machine translation benchmark, covering 2000 n-way parallel sentences in n=102 languages.
from datasets import load_dataset

fleurs_asr = load_dataset("google/xtreme_s", "fleurs.af_za")  # for Afrikaans
# to download all data for multi-lingual fine-tuning uncomment following line
# fleurs_asr = load_dataset("google/xtreme_s", "fleurs.all")

# see structure
print(fleurs_asr)

# load audio sample on the fly
audio_input = fleurs_asr["train"][0]["audio"]  # first decoded audio sample
transcription = fleurs_asr["train"][0]["transcription"]  # first transcription
# use `audio_input` and `transcription` to fine-tune your model for ASR

# for analyses see language groups
all_language_groups = fleurs_asr["train"].features["lang_group_id"].names
lang_group_id = fleurs_asr["train"][0]["lang_group_id"]
all_language_groups[lang_group_id]

Multilingual LibriSpeech (MLS)

MLS is a large multilingual corpus derived from read audiobooks from LibriVox and consists of 8 languages. For this challenge the training data is limited to 10-hour splits.

from datasets import load_dataset

mls = load_dataset("google/xtreme_s", "mls.pl")  # for Polish
# to download all data for multi-lingual fine-tuning uncomment following line
# mls = load_dataset("google/xtreme_s", "mls.all")

# see structure
print(mls)

# load audio sample on the fly
audio_input = mls["train"][0]["audio"]  # first decoded audio sample
transcription = mls["train"][0]["transcription"]  # first transcription
# use `audio_input` and `transcription` to fine-tune your model for ASR

VoxPopuli

VoxPopuli is a large-scale multilingual speech corpus for representation learning and semi-supervised learning, from which we use the speech recognition dataset. The raw data is collected from 2009-2020 European Parliament event recordings. We acknowledge the European Parliament for creating and sharing these materials. Note that loading VoxPopuli requires downloading the whole dataset (about 100 GB), since the languages are entangled with each other, so it may not be worth testing here due to its size.

from datasets import load_dataset

voxpopuli = load_dataset("google/xtreme_s", "voxpopuli.ro")  # for Romanian
# to download all data for multi-lingual fine-tuning uncomment following line
# voxpopuli = load_dataset("google/xtreme_s", "voxpopuli.all")

# see structure
print(voxpopuli)

# load audio sample on the fly
audio_input = voxpopuli["train"][0]["audio"]  # first decoded audio sample
transcription = voxpopuli["train"][0]["transcription"]  # first transcription
# use `audio_input` and `transcription` to fine-tune your model for ASR

(Optionally) BABEL

BABEL from IARPA is a conversational speech recognition dataset in low-resource languages. First, download LDC2016S06, LDC2016S12, LDC2017S08, LDC2017S05 and LDC2016S13. BABEL is the only dataset in our benchmark which is less easily accessible, so you will need to sign in to get access to it on LDC. Although not officially part of the XTREME-S ASR datasets, BABEL is often used for evaluating speech representations on a difficult domain (phone conversations).
from datasets import load_dataset

babel = load_dataset("google/xtreme_s", "babel.as")

The above command is expected to fail with a nice error message explaining how to download BABEL. The following should work:

from datasets import load_dataset

babel = load_dataset("google/xtreme_s", "babel.as", data_dir="/path/to/IARPA_BABEL_OP1_102_LDC2016S06.zip")

# see structure
print(babel)

# load audio sample on the fly
audio_input = babel["train"][0]["audio"]  # first decoded audio sample
transcription = babel["train"][0]["transcription"]  # first transcription
# use `audio_input` and `transcription` to fine-tune your model for ASR

2. Speech Translation (ST)

We include the CoVoST-2 dataset for automatic speech translation.

CoVoST-2

The CoVoST-2 benchmark has become a commonly used dataset for evaluating automatic speech translation. It covers language pairs from English into 15 languages, as well as 21 languages into English. We use only the "X->En" direction to evaluate cross-lingual representations. The amount of supervision varies greatly in this setting, from one hour for Japanese->English to 180 hours for French->English. This makes pretraining particularly useful to enable such few-shot learning. We enforce multilingual fine-tuning for simplicity. Results are split into high/med/low-resource language pairs as explained in the [paper (TODO(PVP))].

from datasets import load_dataset

covost_2 = load_dataset("google/xtreme_s", "covost2.id.en")  # for Indonesian to English
# to download all data for multi-lingual fine-tuning uncomment following line
# covost_2 = load_dataset("google/xtreme_s", "covost2.all")

# see structure
print(covost_2)

# load audio sample on the fly
audio_input = covost_2["train"][0]["audio"]  # first decoded audio sample
transcription = covost_2["train"][0]["transcription"]  # first transcription
translation = covost_2["train"][0]["translation"]  # first translation
# use `audio_input` and `translation` to fine-tune your model for AST

3. Speech Classification

We include two multilingual speech classification datasets: FLEURS-LangID and Minds-14.

Language Identification - FLEURS-LangID

LangID can often be a domain classification, but in the case of FLEURS-LangID, recordings are done in a similar setting across languages and the utterances correspond to n-way parallel sentences in the exact same domain, which makes this task particularly relevant for evaluating LangID. The setting is simple: FLEURS-LangID is split into train/valid/test for each language, and we create a single train/valid/test split for LangID by merging all of them.

from datasets import load_dataset

fleurs_langID = load_dataset("google/xtreme_s", "fleurs.all")  # to download all data

# see structure
print(fleurs_langID)

# load audio sample on the fly
audio_input = fleurs_langID["train"][0]["audio"]  # first decoded audio sample
language_class = fleurs_langID["train"][0]["lang_id"]  # first id class
language = fleurs_langID["train"].features["lang_id"].names[language_class]
# use `audio_input` and `language_class` to fine-tune your model for audio classification

Intent classification - Minds-14

Minds-14 is an intent classification dataset made from e-banking speech recordings in 14 languages, with 14 intent labels. We impose a single multilingual fine-tuning setup to increase the size of the train and test sets and reduce the variance associated with the small size of the dataset per language.
from datasets import load_dataset

minds_14 = load_dataset("google/xtreme_s", "minds14.fr-FR")  # for French
# to download all data for multi-lingual fine-tuning uncomment following line
# minds_14 = load_dataset("google/xtreme_s", "minds14.all")

# see structure
print(minds_14)

# load audio sample on the fly
audio_input = minds_14["train"][0]["audio"]  # first decoded audio sample
intent_class = minds_14["train"][0]["intent_class"]  # first intent class
intent = minds_14["train"].features["intent_class"].names[intent_class]
# use `audio_input` and `intent_class` to fine-tune your model for audio classification

4. (Optionally) Speech Retrieval

We optionally include one speech retrieval dataset: FLEURS-Retrieval, as explained in the FLEURS paper.

FLEURS-Retrieval

FLEURS-Retrieval provides n-way parallel speech and text data. Similar to how XTREME for text leverages Tatoeba to evaluate bitext mining, a.k.a. sentence translation retrieval, we use FLEURS-Retrieval to evaluate the quality of fixed-size representations of speech utterances. Our goal is to incentivize the creation of fixed-size speech encoders for speech retrieval. The system has to retrieve the English "key" utterance corresponding to the speech translation of "queries" in 15 languages. Results have to be reported on the test sets of FLEURS-Retrieval, whose utterances are used as queries (and keys for English). We augment the English keys with a large number of utterances to make the task more difficult.

from datasets import load_dataset

fleurs_retrieval = load_dataset("google/xtreme_s", "fleurs.af_za")  # for Afrikaans
# to download all data for multi-lingual fine-tuning uncomment following line
# fleurs_retrieval = load_dataset("google/xtreme_s", "fleurs.all")

# see structure
print(fleurs_retrieval)

# load audio sample on the fly
audio_input = fleurs_retrieval["train"][0]["audio"]  # decoded audio sample
text_sample_pos = fleurs_retrieval["train"][0]["transcription"]  # positive text sample
text_sample_neg = fleurs_retrieval["train"][1:20]["transcription"]  # negative text samples
# use `audio_input`, `text_sample_pos`, and `text_sample_neg` to fine-tune your model for retrieval

Users can leverage the training (and dev) sets of FLEURS-Retrieval with a ranking loss to build better cross-lingual fixed-size representations of speech.

Dataset Structure

The XTREME-S benchmark is composed of the following datasets:
FLEURS
Multilingual LibriSpeech (MLS) (note that for MLS, XTREME-S uses path instead of file and transcription instead of text)
VoxPopuli
Minds-14
CoVoST-2 (note that for CoVoST-2, XTREME-S uses path instead of file and transcription instead of sentence)
BABEL

Please click on the links to the dataset cards to get more information about their dataset structure.

Dataset Creation

The XTREME-S benchmark is composed of the following datasets:
FLEURS
Multilingual LibriSpeech (MLS)
VoxPopuli
Minds-14
CoVoST-2
BABEL

Please visit the corresponding dataset cards to get more information about the source data.

Considerations for Using the Data

Social Impact of Dataset

This dataset is meant to encourage the development of speech technology in many more languages of the world. One of the goals is to give everyone equal access to technologies like speech recognition or speech translation, meaning better dubbing or better access to content from the internet (like podcasts, streaming or videos).

Discussion of Biases

Most datasets have a fair distribution of gender utterances (e.g. the newly introduced FLEURS dataset).
While many languages are covered from various regions of the world, the benchmark misses many languages that are all equally important. We believe technology built through XTREME-S should generalize to all languages.

Other Known Limitations

The benchmark has a particular focus on read speech because common evaluation benchmarks like CoVoST-2 or LibriSpeech evaluate on this type of speech. There is sometimes a known mismatch between performance obtained in a read-speech setting and a noisier setting (in production, for instance). Given the big progress that remains to be made on many languages, we believe better performance on XTREME-S should still correlate well with actual progress made for speech understanding.

Additional Information

All datasets are licensed under the Creative Commons license (CC-BY).

Citation Information

XTREME-S
@article{conneau2022xtreme,
  title={XTREME-S: Evaluating Cross-lingual Speech Representations},
  author={Conneau, Alexis and Bapna, Ankur and Zhang, Yu and Ma, Min and von Platen, Patrick and Lozhkov, Anton and Cherry, Colin and Jia, Ye and Rivera, Clara and Kale, Mihir and others},
  journal={arXiv preprint arXiv:2203.10752},
  year={2022}
}

MLS
@article{Pratap2020MLSAL,
  title={MLS: A Large-Scale Multilingual Dataset for Speech Research},
  author={Vineel Pratap and Qiantong Xu and Anuroop Sriram and Gabriel Synnaeve and Ronan Collobert},
  journal={ArXiv},
  year={2020},
  volume={abs/2012.03411}
}

VoxPopuli
@article{wang2021voxpopuli,
  title={Voxpopuli: A large-scale multilingual speech corpus for representation learning, semi-supervised learning and interpretation},
  author={Wang, Changhan and Riviere, Morgane and Lee, Ann and Wu, Anne and Talnikar, Chaitanya and Haziza, Daniel and Williamson, Mary and Pino, Juan and Dupoux, Emmanuel},
  journal={arXiv preprint arXiv:2101.00390},
  year={2021}
}

CoVoST 2
@article{DBLP:journals/corr/abs-2007-10310,
  author = {Changhan Wang and Anne Wu and Juan Miguel Pino},
  title = {CoVoST 2: {A} Massively Multilingual Speech-to-Text Translation Corpus},
  journal = {CoRR},
  volume = {abs/2007.10310},
  year = {2020},
  url = {https://arxiv.org/abs/2007.10310},
  eprinttype = {arXiv},
  eprint = {2007.10310},
  timestamp = {Thu, 12 Aug 2021 15:37:06 +0200},
  biburl = {https://dblp.org/rec/journals/corr/abs-2007-10310.bib},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}

Minds14
@article{gerz2021multilingual,
  title={Multilingual and cross-lingual intent detection from spoken data},
  author={Gerz, Daniela and Su, Pei-Hao and Kusztos, Razvan and Mondal, Avishek and Lis, Micha{\l} and Singhal, Eshan and Mrk{\v{s}}i{\'c}, Nikola and Wen, Tsung-Hsien and Vuli{\'c}, Ivan},
  journal={arXiv preprint arXiv:2104.08524},
  year={2021}
}

Contributions

Thanks to @patrickvonplaten, @anton-l, @aconneau for adding this dataset.
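One practical note that is not part of the card: the ASR, translation and classification snippets above all decode the audio column on the fly, and most pretrained speech encoders expect a fixed sampling rate (commonly 16 kHz). Assuming that target rate, the datasets library can resample on the fly with cast_column; a minimal sketch:

from datasets import Audio, load_dataset

# Load one ASR configuration (Afrikaans FLEURS, as in the examples above).
fleurs_asr = load_dataset("google/xtreme_s", "fleurs.af_za", split="train")

# Decode and resample the audio on the fly to 16 kHz.
fleurs_asr = fleurs_asr.cast_column("audio", Audio(sampling_rate=16_000))

sample = fleurs_asr[0]
print(sample["audio"]["sampling_rate"])  # 16000
print(sample["transcription"])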
https://huggingface.co/dgletts
David Letts dgletts Research interests None yet Organizations models None public yet datasets None public yet
https://huggingface.co/brandonkeiji
Brandon Keiji brandonkeiji Research interests None yet Organizations models None public yet datasets None public yet
https://huggingface.co/acmoles
Anthony Moles Lyall acmoles http://www.acmoles.com acmoles acmoles Research interests None yet Organizations models None public yet datasets None public yet
https://huggingface.co/Mindya
Biznass Mindya Research interests None yet Organizations models None public yet datasets None public yet
https://huggingface.co/mniemeyer
Michael Niemeyer mniemeyer Research interests None yet Organizations models None public yet datasets None public yet
https://huggingface.co/CrazyHippo
Henrik Warfvinge CrazyHippo HenrikWarf Research interests None yet Organizations models None public yet datasets None public yet
https://huggingface.co/scytulip
Congyin Shi scytulip Research interests None yet Organizations models None public yet datasets None public yet
https://huggingface.co/tommysiu
Tommy Siu tommysiu Research interests None yet Organizations models None public yet datasets None public yet
https://huggingface.co/barshow
barshow barshow barshow Research interests None yet Organizations models None public yet datasets None public yet
https://huggingface.co/calvin80
Ajay calvin80 Research interests None yet Organizations models None public yet datasets None public yet
https://huggingface.co/montse90
Montserrat Gonzalez Arenas montse90 Research interests None yet Organizations models None public yet datasets None public yet
https://huggingface.co/jfarri
Joe Farri jfarri jfarri Research interests None yet Organizations models None public yet datasets None public yet
https://huggingface.co/amitmarathe
A M amitmarathe Research interests None yet Organizations models None public yet datasets None public yet
https://huggingface.co/pjsu
Pengjun Su pjsu Research interests None yet Organizations models 1 pjsu/ddpm-butterflies-128 Updated Mar 5 datasets None public yet
https://huggingface.co/datasets/google/wit
https://en.wikipedia.org/wiki/Oxydactylus https://upload.wikimedia.org/wikipedia/commons/5/5f/Oxydactylus_longipes_fm.jpg English: Mounted skeleton of Oxydactylus longipes in the Field Museum of Natural History. Oxydactylus is an extinct genus of camelid endemic to North America. It lived from the Late Oligocene to the Middle Miocene, existing for approximately 14 million years. The name is from the Ancient Greek οξύς and δάκτυλος. They had very long legs and necks, and were probably adapted to eating high vegetation, much like modern giraffes. Unlike modern camelids, they had hooves, rather than tough sole-pads, and splayed toes. Oxydactylus is an extinct genus of camelid endemic to North America. It lived from the Late Oligocene to the Middle Miocene (28.4–13.7 mya), existing for approximately 14 million years. The name is from the Ancient Greek οξύς (oxys, "sharp")and δάκτυλος (daktylos, "finger"). They had very long legs and necks, and were probably adapted to eating high vegetation, much like modern giraffes. Unlike modern camelids, they had hooves, rather than tough sole-pads, and splayed toes. https://cs.wikipedia.org/wiki/Mechanick%C3%A1_m%C3%AD%C5%99idla http://upload.wikimedia.org/wikipedia/commons/2/2b/M16_rifle_correct_sight_picture_fig_4-18.png Mechanická mířidla na útočné pušce M16 M16 rifle correct sight picture Mechanická mířidla je zařízení určené pro zamíření střelné zbraně na zvolený cíl. V některých pramenech se používá označení mechanické zaměřovače. Vedle mechanických existují ještě optické zaměřovače. Mechanické zaměřovače se dále dělí na otevřené a dioptrové. Mechanická mířidla je zařízení určené pro zamíření střelné zbraně na zvolený cíl. V některých pramenech se používá označení mechanické zaměřovače. Vedle mechanických existují ještě optické zaměřovače. Mechanické zaměřovače se dále dělí na otevřené a dioptrové. https://sq.wikipedia.org/wiki/Mjedisi_natyror https://upload.wikimedia.org/wikipedia/commons/3/36/Hopetoun_falls.jpg Menaxhimi i ujërave dhe tokës ka ruajtur karakteristikat natyrore në Ujëvarat Hopetoun, Australi, ndërsa lejon qasje të bollshme për vizitorët. English: Hopetoun Falls, Beech Forest, near Otway National Park, Victoria, Australia. Taken with a Canon 10D and 17-40 f/4L lens. Français&#160;: Cascade de Hopetoun, Forêt de Beech, près du Parc National d'Otway, état de Victoria, Australie. Image prise avec un Canon 10D et un objectif 17-40 f/4L. Русский: Водопад Хоупентон, Бич Форест, возле Национального парка Отвэй, штат Виктория, Австралия Українська: Водоспад Хоуптоун поблизу національного парку Отвей, штат Вікторія, Австралійський Союз. Mjedis natyror quhet vendi, njerëzit, gjërat, natyra përreth nesh dhe çdo organizëm tjetër i gjallë. Mjedisi natyror përfshin të gjitha gjallesat ashtu si edhe gjithë botën jo të gjallë natyrale, domethënë në këtë rast jo artificiale. Termi më së shpeshti nënkupton Tokën ose disa pjesë të Tokës. Ky mjedis përfshin ndërveprimin e të gjitha llojeve të gjalla, klimës, motit dhe burimeve natyrore që ndikojnë në mbijetesën njerëzore dhe aktivitetin ekonomik. Mjedis natyror quhet vendi, njerëzit, gjërat, natyra përreth nesh dhe çdo organizëm tjetër i gjallë. Mjedisi natyror përfshin të gjitha gjallesat ashtu si edhe gjithë botën jo të gjallë natyrale, domethënë në këtë rast jo artificiale. Termi më së shpeshti nënkupton Tokën ose disa pjesë të Tokës. Ky mjedis përfshin ndërveprimin e të gjitha llojeve të gjalla, klimës, motit dhe burimeve natyrore që ndikojnë në mbijetesën njerëzore dhe aktivitetin ekonomik. 
https://nl.wikipedia.org/wiki/Zeesterren http://upload.wikimedia.org/wikipedia/commons/a/ae/Marthasterias_glacialis_%28Linnaeus%2C_1758%29_3.jpg Zeesterren / Uiterlijke kenmerken / Armen Voorzijde van een arm met buisvoetjes aan de onderzijde van de soort Marthasterias glacialis. Français&#160;: Marthasteria glacialis (Linnaeus, 1758) - Banyuls-sur-Mer&#160;: 07/91 Zeesterren zijn een groep van ongewervelde dieren die behoren tot de stekelhuidigen. Zeesterren leven op de bodems van alle oceanen; van getijdengebieden tot in de diepzee, maar uitsluitend in zout water. Ze vormen met ongeveer 1900 soorten een van de grootste groepen van stekelhuidigen. Zeesterren hebben een stervormig lichaam met een centrale schijf en vijf of meer langwerpige lobben die armen worden genoemd. De centrale schijf omvat de maag, met de mondopening aan de onderzijde. Ingeval de soort een anus heeft, ligt deze aan de bovenzijde. Het uiterlijk loopt per soort uiteen. Zo zijn soorten bekend met enige tientallen armen en zijn er naast de vele bruin-grijze ook rode, blauwe en gele soorten. Sommige soorten hebben stekels, andere zijn glad. Aan de onderkant van de armen bevinden zich buisvoetjes met kleverige napjes. Verder herbergt elke arm delen van het maag-darmstelsel en geslachtsorganen. Zeesterren paren niet maar laten hun geslachtscellen vrij in het zeewater. Sommige soorten kennen een vorm van broedzorg. De larven van de meeste zeesterren zijn vrijzwemmend en zien eruit als doorzichtige, garnaalachtige diertjes. De armen van de zeester moeten niet gezien worden als ledematen van het dier maar als een soort 'lobben' van het lichaam. De armen zijn relatief dik in vergelijking met die van de slangsterren en bieden zo ruimte om verschillende organen te bergen. De armen zijn dan ook gevuld met een groot deel van het spijsverteringsstelsel, het watervaatstelsel en de geslachtsorganen. Ze zijn door hun dikte ook erg stijf en kunnen, in tegenstelling tot de armen van slangsterren, niet gebruikt worden bij de voortbeweging. De onderzijde van de armen is voorzien van vele kleine uitstulpbare buisvoetjes die vaak een zuignapje hebben. De vele buisvoetjes bieden tezamen een stevige grip op de ondergrond. Bovendien wordt een plakkerige substantie afgescheiden waardoor de zeester zich nog beter kan hechten aan het substraat. De voetjes worden ambulacraalvoetjes genoemd, ze zijn onderdeel van het ambulacraalsysteem of watervaatstelsel. Zeesterren hebben meestal vijf armen, soms meer. Een enkele keer worden vier- of minderarmige exemplaren aangetroffen, maar dergelijke individuen zijn altijd één of meerdere armen kwijtgeraakt, bijvoorbeeld door predatie. Het komt ook voor dat een van de vijf armen na het aangroeien door een genetisch defect gevorkt raakt, waardoor het individu zesarmig is geworden. Een voorbeeld van een soort met meer armen is de zonnebloemster (Pycnopodia helianthoides). De jonge exemplaren van deze soort hebben altijd vijf armen maar naarmate de dieren ouder worden ontwikkelen ze er meer. Uiteindelijk hebben volwassen exemplaren vijftien tot vierentwintig armen. De verhouding tussen de lengte van de armen en de lichaamsdoorsnede hangt af van de soort. Meestal zijn de armen ongeveer drie keer zo lang als de breedte van de centrale lichaamsschijf. Sommige soorten hebben echter korte, dikke armen en een grote lichaamsschijf, terwijl weer andere zeesterren een klein lichaam hebben met juist heel lange, dunne armen. Ten slotte zijn er soorten die naast zeer korte armen ook een sterk gewelfde lichaamsvorm hebben. 
Dergelijke dieren lijken helemaal niet op een zeester maar doen meer denken aan een steen. Een voorbeeld zijn de zeesterren uit het geslacht Culcita. Dergelijke exemplaren zien eruit als stekelloze zee-egels, de armen zijn zo kort dat ze vanaf de aborale zijde niet goed zichtbaar zijn. Aan de orale zijde van deze soorten zijn echter altijd minstens vijf rijen voetjes te zien, net als bij de andere zeesterren. Onderstaand zijn de belangrijkste lichaamsvormen van zeesterren weergegeven, met de kenmerkende eigenschappen (eerste regel), de bijbehorende familie (tweede regel) en de afgebeelde soort (derde regel). https://de.wikipedia.org/wiki/Friedrich_von_%C3%96sterreich-Teschen https://upload.wikimedia.org/wikipedia/commons/f/f9/Habsburg_Frigyes_%28Paulikovics_Iv%C3%A1n%2C_2006%29_-_Mosonmagyar%C3%B3v%C3%A1r%2C_De%C3%A1k_Ferenc_t%C3%A9r%2C_Moson_6.jpg Friedrich von Österreich-Teschen Friedrich von Österreich-Teschen / Erster Weltkrieg Denkmal Erzherzog Friedrichs in Mosonmagyaróvar[13] English: Monument to Frederic Habsburg, Mosonmagyaróvár Habsburg Frigyes főherceg / 1856-1936 / szobra (Paulikovics Iván, 2006) - Mosonmagyaróvár, Deák Ferenc tér Erzherzog Friedrich Maria Albrecht Wilhelm Karl von Österreich, Herzog von Teschen war österreichisch-ungarischer Feldmarschall, Heerführer im Ersten Weltkrieg, Großgrundbesitzer und Unternehmer. Friedrich sollte 1914 wegen seiner Disharmonie mit Franz Ferdinand sein Kommando zurücklegen. Nach der Ermordung des Thronfolgers beim Attentat von Sarajevo bestimmte der 84-jährige Kaiser Franz Joseph am 5. Juli 1914 Friedrich für den Kriegsfall als Oberbefehlshaber. Mit der Mobilmachung trat er diese Stellung (Armeeoberkommandant) am 31. Juli 1914 schließlich an. Nominell stand er damit an der Spitze der Armee und der k.u.k. Kriegsmarine, doch die Führung der Operationen lag tatsächlich beim Chef des Generalstabes Franz Conrad von Hötzendorf. Beide hatten sich bereits 1871 als Leutnants im 11. Feldjägerbataillon kennengelernt. Der Kaiser ernannte Friedrich per 8. Dezember 1914 zum Feldmarschall. Das genaue Datum seiner Ernennung zum Armeeoberkommandanten ist aus der amtlichen Wiener Zeitung, die ansonsten alle Beförderungen von Offizieren enthielt, nicht ermittelbar. Sie publizierte am 14. Juli 1914 ein Schreiben des Kaisers an Friedrich vom 12. Juli, in dem er des Landwehr-Oberkommandos enthoben und als rangshöchster Armee-Inspektor zur Disposition des Allerhöchsten Oberbefehls gestellt wurde. Sie druckte am 21. August 1914, mittlerweile hatte der Erste Weltkrieg begonnen, ein Schreiben Friedrichs vom 18. August ab, in dem der Erzherzog als Armee-Oberkommandant, dem die gesamten Land- und Seestreitkräfte der Monarchie unterstehen, namens aller Soldaten dem Kaiser zum 84. Geburtstag gratuliert. Die Ernennung muss somit zwischen 13. Juli und 17. August 1914 erfolgt sein. Die tatsächliche Leitung der Operationen oblag jedoch, wie der Kaiser mit Friedrich vereinbart hatte, dem Chef der Generalstabs, General Franz Conrad von Hötzendorf; die deutschen Verbündeten schätzten Friedrich als Galionsfigur ein, da er von seinem Generalstabschef nicht immer vollständig informiert wurde. Zu Beginn des Krieges wurde unter der Patronanz des Armeeoberkommandos (AOK) das Kriegsüberwachungsamt (KÜA) gegründet, das die Streitkräfte gegen äußere und innere Feinde schützen sollte. Das Amt hegte enormes Misstrauen speziell gegenüber den slawischen Nationalitäten. 
Das AOK mit Erzherzog Friedrich an der Spitze trachtete, die beiden Ministerpräsidenten Karl Stürgkh und Stephan Tisza zu überreden, dass die Zivilverwaltung in den slawischen Ländern beider Reichshälften abgeschafft werden müsse. Nach seiner Thronbesteigung übernahm Kaiser Karl I. selbst das Armeeoberkommando, was einer Entlassung Erzherzog Friedrichs gleichkam. Am 2. Dezember 1916 proklamierte der neue Souverän in einem kurzen Tagesbefehl, er übernehme „in Ausübung seiner Herrscherrechte“ den unmittelbaren Befehl über alle Land- und Seestreitkräfte der Monarchie. Die Gerüchte, wonach Erzherzog Friedrich dem Kaiser seine Entlassung übel genommen habe, stimmten nicht. Er selbst hatte das Thema der Kommandoübergabe in den letzten Wochen der Regierung Franz Joseph mit Karl abgesprochen. Am 11. Februar 1917 enthob der Kaiser Friedrich von seiner nunmehrigen Funktion als stellvertretender Armeekommandant und stellte ihn zur Disposition meines Oberbefehls. Friedrich lebte hierauf in Pressburg und Halbturn, (damals) beide in Altungarn. Am 13. November 1918, einen Tag nach der Ausrufung der Republik in Deutschösterreich, berichtet die Wiener Polizei über die Stimmung in der Hauptstadt: Insbesondere werden gegen Erzherzog Friedrich heftige Anwürfe wegen der ihm zugeschriebenen Unfähigkeit als Armeekommandant, wegen seines angeblichen Geizes und wegen der ungemein großen Kriegsgewinne laut, die ihm durch die in seinem Besitz befindlichen Latifundien und Industriebetriebe zugeflossen sein sollen. Vor allem ihm gelten der beißende Spott und die scharfe Kritik, mit denen der Satiriker Karl Kraus in seinem Drama Die letzten Tage der Menschheit die intellektuellen und moralischen Qualitäten der österreichischen Führungselite im Ersten Weltkrieg illustriert. Andererseits beschrieb Ludwig Ganghofer, der im Krieg patriotische Stimmung verbreitete, Friedrich als liebenswürdigen und wohlwollenden Fürsten von ruhiger Schlichtheit und gütigem Menschentum. Feldmarschall Conrad erinnert sich anders: […] b https://en.wikipedia.org/wiki/Maine%27s_3rd_congressional_district https://upload.wikimedia.org/wikipedia/commons/7/7d/SamuelWGould.jpg Maine's 3rd congressional district List of members representing the district Maine's 3rd congressional district / List of members representing the district English: Samuel W. Gould, US Representative from Maine Maine's 3rd congressional district is an obsolete congressional district. It was created in 1821 after Maine achieved statehood in 1820 as part of the enactment of the Missouri Compromise. It was eliminated in 1963 after the 1960 U.S. Census. Its last congressman was Clifford McIntire. https://pt.wikipedia.org/wiki/Akseli_Gallen-Kallela https://upload.wikimedia.org/wikipedia/commons/b/b7/Gallen_Kallela_Lemminkainens_Mother.jpg Simbolismo e nacionalismo Akseli Gallen-Kallela / Infância e juventude / Simbolismo e nacionalismo A mãe Lemminkainen (1897), de Akseli Gallen-Kallela, é uma evocação recente dos temas da maternidade e da guerra explorados na Sinfonia n.º 3. Representa uma cena do Kalevala, poema épico finlandês. Um guerreiro chamado Lemminkainen tinha sido assassinado, cortado em pedaços e lançado ao rio Tuonela. A sua mãe recuperou os pedaços e ressuscitou-o. Akseli Gallen-Kallela artesão, ilustrador e pintor finlandês, conhecido pelas suas ilustrações do Kalevala, o poema épico nacional finês. O seu trabalho é considerado muito importante no surgimento do sentimento nacional desse país. Em 1890 contrai matrimônio com Mary Slöör. 
O casal teve três filhos, Impi Marjatta, Kristi e Jorma. Durante a sua lua de mele na Carélia Oriental, Gallen-Kallela, impregnado pelas tradições que ali se preservavam, começou a recopilar material para as suas representações do Kalevala, ao tempo que o seu estilo se inclinava progressivamente para o simbolismo. Esta viagem considerar-se-á mais tarde como o começo da orientação conhecida como Carelianismo na arte finesa. Este período caracteriza-se na sua pintura pela realização de representações românticas do poema impregnadas de simbolismo, tais como o Trítico de Aino, assim como por numerosas pinturas paisagísticas. Durante toda esta década aplicaria os princípios da Art Nouveau às suas pinturas e design. Um exemplo podem ser os afrescos que se encontram no átrio central do Museu Nacional de Historia de Helsinki, e que representam diferentes passagens do Kalevala com fortes traços e amplas áreas de vivas cores. Em dezembro de 1894 mudou-se para Berlim a fim de supervisar pessoalmente a exibição conjunta dos seus trabalhos com os do norueguês Edvard Munch. Em março de 1895 recebe por telegrama a notícia de que a sua filha Impi Marjatta morrera de difteria. Este acontecimento teria uma grande influência no seu trabalho posterior. Enquanto as suas pinturas prévias estiveram impregnadas de romantismo, após a morte da sua filha realizaria trabalhos mais agressivos tais como a Defesa do Sampo, A vingança de Joujahainen ou A mãe de Lemminkäinen. Para a Exposição Universal de Paris de 1900, Gallen-Kallela pintaria os afrescos do Pavilhão Finlandês. Nestes afrescos as suas ideias políticas, em luta contra a russificação da Finlândia, mostraram-se de um modo evidente. Assim, uma das serpentes do fresco chamado Ilmarinen arando o campo das víboras leva na sua cabeça a coroa dos Romanov, e o mesmo processo de extrair as víboras do campo é uma clara referência ao seu desejo de conseguir a independência da Finlândia. Com esta obra conseguiu uma Medalha de Ouro na própria exposição e, dois anos depois, a Legião de Honra francesa Também pintou os afrescos do Mausoléu de Juselius em Pori entre 1901 e 1903 (estes afrescos cedo viram-se danificados pela aparição de manchas brancas, e Juselius encomendou ao filho de Gallen-Kallela, Jorma, a sua reparação; Jorma completou este trabalho pouco antes da sua morte em 1939, abatido durante a II Guerra Mundial enquanto protegia a vida do então capitão Adolf Ehrnrooth). Após finalizar os afrescos, Gallen-Kallela pintou um grande número de temas procedentes da natureza, no que ele próprio definiu como um “período de purificação”. Akseli Gallen-Kallela mudou oficialmente o seu nome para que soasse menos sueco e finlandês em 1907, num processo habitual entre os ativistas em favor da língua finlandesa. Esse mesmo ano publica uma edição ilustrada de Seitsëman velgesta ("Os sete irmãos"), o romance de Aleksis Kivi. Em 1909 mudou-se para Nairobi, na África Oriental britânica, o atual Quênia, com a sua família, e regressa a Finlândia um par de anos depois. Entre 1911 e 1913 desenhou e construiu uma casa com estudo em Tarvaspää, a cerca de dez quilômetros a norte de Helsinki, onde se mudou para viver com a sua família. Durante 1918, tanto Gallen-Kallela como o seu filho Jorma participam da Guerra Civil Finlandesa que acabou com a vitória “branca”, bando no que os Gallen-Kallela lutavam, supondo o final da hegemonia militar russa sobre a Finlândia, embora também provocasse uma profunda divisão entre os finlandeses. 
Quando o regente General Mannerheim foi informado da participação de ambos os pintores no conflito, convidou a Gallen-Kallela a desenhar as bandeiras, condecorações oficiais (tais como a Cruz da Liberdade ou a Cruz da Ordem da Rosa Branca, ainda em uso) e uniformes para o novo estado independente. Em 1919 é nomeado ajudante de campo de Mannerheim. Mais tarde continuaria com as suas viagens, vivendo em Chicago em 1923-1924 e posteriormente em Tao (Novo México, Estados Unidos) em 1926. Durante este período estudou a arte e a cultura dos índios american https://en.wikipedia.org/wiki/Cheraw https://upload.wikimedia.org/wikipedia/commons/2/2d/Indians_NW_of_South_Carolina.jpg Cheraw / History / 18th century A c. 1724 English copy of a deerskin Catawba map of the tribes between Charleston (left) and Virginia (right) following the displacements of a century of disease and enslavement and the 1715–7 Yamasee War. The Cheraw are labelled as "Charra". English: "Map of the Several Nations of Indians to the Northwest of South Carolina" or the "Catawba Deerskin Map", an annotated copy of hand-painted deerskin original made by a Catawba chieftain to Governor Francis Nicholson "This map describing the scituation [sic] of the several nations of Indians to the NW of South Carolina was coppyed [sic] from a draught [sic] drawn &amp; painted on a deer skin by an Indian Cacique and presented to Francis Nicholason Esqr. Governor of South Carolina by whom it is most humbly dedicated to his Royal Highness George, Prince of Wales." The Cheraw people, also known as the Saraw or Saura, were a Siouan-speaking tribe of indigenous people of the Southeastern Woodlands, in the Piedmont area of North Carolina near the Sauratown Mountains, east of Pilot Mountain and north of the Yadkin River. They lived in villages near the Catawba River. Their first European and African contact was with the Hernando De Soto Expedition in 1540. The early explorer John Lawson included them in the larger eastern-Siouan confederacy, which he called "the Esaw Nation." After attacks in the late 17th century and early 18th century, they moved to the southeast around the Pee Dee River, where the Cheraw name became more widely used. They became extinct as a tribe, although some descendants survived as remnant peoples. In 1710, due to attacks by the Seneca of the Iroquois Confederacy (Haudenosaunee) from the north (whose empire by then extended along the colonial frontier northward, with hunting grounds in the Ohio River valley and the St. Lawrence River valley), the Cheraw moved southeast and joined the Keyauwee tribe. The Saura Indian villages, one known as Lower Sauratown and the other, Upper Sauratown, were at that time abandoned. Lower Sauratown was situated below the present town of Eden, near the mouth of Town Creek in northeastern Rockingham County, North Carolina, while Upper Sauratown was located in Stokes County, N.C. The Saura nation were recorded in The Journal of Barnwell as maintaining a village on the east bank of the upper branches of the Pee Dee River circa the Tuscarora War in 1712. Some Cheraw fought with South Carolina in the Tuscarora War. In 1712, John Barnwell led a force of 400-500 troops against the Tuscarora in North Carolina. Almost all his forces were Indians, organized into four companies, based in part on tribal and cultural factors. The 1st and 2nd companies were made up of Indians with strong ties to South Carolina. 
The 3rd company was of "northern Indians" who lived farther from Charles Town and whose allegiance was not as strong. They included the Catawba, Waxaw, Wateree, and Congaree, among others. The 4th company was of northern Indians who lived even farther away and whose allegiance was still weaker. Among this group were the Saraw, Saxapahaw, Peedee, Cape Fear, Hoopengs, and others. This 4th company was noted for high levels of desertion. Historian Alan Gallay has speculated that the Saura and Saxapahaw people deserted Barnwell's army because their villages were likely to be attacked by the Tuscarora in vengeance for assisting South Carolina in the war. Gallay described the approximate location of the Saura homeland as "about 60 miles upriver from the Peedees", whose home is described as "on the Peedee River about 80 miles west of the coast". This puts the Saura in the general vicinity of the upper Dan and Yadkin rivers. In 1715, Cheraw warriors joined other Southeastern tribes in the Yamasee War to fight against European enslavement of Indians, mistreatment, and encroachment on their territory. On July 18, 1715, a Cheraw delegation represented the Catawban tribes in Williamsburg, Virginia and negotiated peace. They were out of the war by October of 1715. In 1728, William Byrd conducted an expedition to survey the North Carolina and Virginia boundary, and reported finding two Saura villages on the Dan River, known as Lower Saura Town and Upper Saura Town. The towns had been abandoned by the time of Byrd's visit. He noted in his writing that the Saura had been attacked and nearly destroyed by the Seneca 30 years before, who had been raiding peoples on the frontier from their base in present-day New York. The Saura were known to have moved south to the Pee Dee River area. When the Council of Virginia offered tribes protection in 1732, the Cheraw asked to join the Saponis. In 1738, a smallpox epidemic decimated both the Cheraw and the Catawba. In 1755, the Cheraw were persuaded by South Carolina Governor James Glen to join the Waccamaw, Pedee, and Catawba, led by King Haigler. The remnants of the tribes combined. Some of the tribe may have moved north and founded the "Charraw Settlement" along Drowning Creek, (present-day Robeson County) North Carolina. The tribe was mostly destroyed before the middle of the 18th century and European encroachment on their old territory. By 1754, racially mixed families lived along the Lumber River. Cheraw women with the surname Grooms married into this group, which later became known as the Lumbee people. They were last noted as a distinct tribe among the Catawba in 1768. During the Revolutionary War, they and the Catawba removed their families to the same areas near Danville, Virginia, where they had lived earlier. Their warriors served the Patriot cause under General Thomas Sumter. https://fa.wikipedia.org/wiki/%D8%B6%D8%B1%D8%A7%D8%A8%D8%AE%D8%A7%D9%86%D9%87_%D8%B3%D9%84%D8%B7%D9%86%D8%AA%DB%8C https://upload.wikimedia.org/wikipedia/commons/8/8b/The_old_Royal_Mint_building_-_geograph.org.uk_-_735466.jpg ضرابخانه سلطنتی / تاریخچه ساختمان ضرابخانه سلطنتی از ۱۸۸۰ English: The old Royal Mint building Royal Mint Court, opposite the Tower of London.日本語: ロンドン塔に面して立つ王立造幣局の旧庁舎。2008年頃 ضرابخانه سلطنتی یک نهاد مجاز برای ضرب سکه در بریتانیا است. 
ضرابخانه سلطنتی از ۱۱۰۰ سال پیش به منظور تولید سکه برای انگلستان و بریتانیای کبیر فعال است و از سال ۲۰۱۰ به عنوان ضرابخانه سلطنتی با مسئولیت محدود، تحت قرارداد انحصاری برای عرضه سکه برای انگلستان و ۱۰۰٪ متعلق به خزانه داری علیا حضرت است. ضرابخانه سلطنتی علاوه بر ضرب و صادرات سکه به بسیاری از کشورهای دیگر، به تولید مدال و نشان نظامی، مدال یادبود و دیگر اقلام برای سایر دولت‌ها، مدارس و کسب و کار مشغول است و در جهان به عنوان ضرابخانه پیشرو در صادرات شناخته شده است. مسئولیت امنیت آن با پلیس وزارت دفاع است که به صورت مشروط مسلح است. https://es.wikipedia.org/wiki/Juan_de_Silva_y_Meneses http://upload.wikimedia.org/wikipedia/commons/0/08/Castillo_de_Barcience.jpg Primer matrimonio y ascenso en la Corte de Juan II Juan de Silva y Meneses / Biografía / Primer matrimonio y ascenso en la Corte de Juan II Torreón en el castillo de Barcience, en la provincia de Toledo, construido por Juan de Silva a mediados del siglo XV, con el escudo heráldico de los Silva afincados en España. Castle of Barcience, near Maqueda, in the province of Toledo, Spain. It was built in the 15th c. by the counts of Cifuentes, whose emblem was the lion. It was used for artillery in the 16th century. Juan de Silva y Meneses, noble y cortesano castellano, I conde de Cifuentes y I señor de Montemayor del Río. Hacia 1427 contrajo matrimonio con Leonor de Acuña —hija del I conde de Buendía y de Teresa Carrillo—. Con motivo de este enlace, el monarca castellano Juan II le dio la tenencia vitalicia de la villa de Cifuentes y su castillo y en 1428 lo nombró notario mayor del reino de Toledo (cargo que, hasta entonces, disfrutaba su padre).​ En 1429, tras el estallido de la guerra castellano-aragonesa, se dirigió a Extremadura con el válido Álvaro de Luna, futuro condestable de Castilla. Allí facilitó la toma del castillo de Trujillo y prestó su ayuda en el cerco de Albuquerque y otras plazas. En julio de 1430, el mismo año en que su padre fundó en él mayorazgo sobre la mitad de la villa de Barcience, fue nombrado juez integrante de la comisión que debían condicionar y firmar una tregua con el reino de Aragón.​ https://de.wikipedia.org/wiki/Olympische_Sommerspiele_1952/Leichtathletik_%E2%80%93_Speerwurf_(M%C3%A4nner) https://upload.wikimedia.org/wikipedia/commons/9/9b/Janusz_Sidlo_1.jpg Olympische Sommerspiele 1952/Leichtathletik – Speerwurf (Männer) Olympische Sommerspiele 1952/Leichtathletik – Speerwurf (Männer) / Qualifikation / Gruppe B Der Pole Janusz Sidło scheiterte an der geforderten Qualifikationsweite Der Speerwurf der Männer bei den Olympischen Spielen 1952 in Helsinki wurde am 23. Juli 1952 ausgetragen. 26 Athleten nahmen teil. Olympiasieger wurde der US-Amerikaner Cy Young. Er siegte vor seinem Landsmann Bill Miller und dem Finnen Toivo Hyytiäinen. https://be-tarask.wikipedia.org/wiki/%D0%9F%D0%BE%D0%B7%D0%BD%D0%B0%D0%BD%D1%8C https://upload.wikimedia.org/wikipedia/commons/8/8f/Kozio%C5%82ki_na_ratuszu.jpg Казьляняткі на познанскай ратушы По́знань ці Пазна́нь — адзін з найстарэйшых і найбуйнейшых польскіх гарадоў, разьмешчаны над ракой Вартай; сталіца Велікапольшчы і Велікапольскага ваяводзтва. Пятае па колькасьці насельніцтва места ў Польшчы. Урадавы орган — Рада места Познані. 
https://ca.wikipedia.org/wiki/Chalchitlicue http://upload.wikimedia.org/wikipedia/commons/2/2e/Chalchiuhtlicue_copy.jpg English: A drawing of Chalchiuhtlicue, one of the deities described in the Codex Borgia Español: Ilustración de Chalchiuhtlicue, una de las deidades descritas en el Códice Borgia En la mitologia asteca Chalchitlicue és la deessa de l'aigua. El seu nom significa "Falda de Jade". En la mitologia asteca Chalchitlicue és la deessa de l'aigua (companya de Tlàloc). El seu nom significa "Falda de Jade". https://en.wikipedia.org/wiki/Romanian_Front https://upload.wikimedia.org/wikipedia/commons/4/4a/Gazeta_Transilvaniei_with_FR_logo%2C_June_14%2C_1936.png Romanian Front / History / Stagnation Nameplate of Gazeta Transilvaniei on June 14, 1936, with FR logo and a condemnation of the "Judaeo-communist" press, including Adevărul English: Nameplate of the Romanian nationalist newspaper, Gazeta Transilvaniei, Issue 46 (June 14), 1936; featuring the electoral symbol of Alexandru Vaida-Voevod's Romanian Front. The masthead also urges Romanians to boycott the "Judaeo-communist newspapers Dimineața, Adevărul, Zorile [and] Lupta". The Romanian Front was a moderate fascist party created in Romania in 1935. Led by former Prime Minister Alexandru Vaida-Voevod, it originated as a right-wing splinter group from the mainstream National Peasants' Party. While in power, Vaida had an ambiguous approach to the Iron Guard, and constructed his own radical ideology; the FR had a generally xenophobic program of positive discrimination, being implicitly antisemitic. It was subsumed to the policies of King Carol II, maneuvering between the mainstream National Liberals, the PNȚ's left-wing, and the more radically fascist Guardists. Vaida tried to compete with the former two and appease the latter, assuming fascist trappings such as the black-shirted uniform. Like the Guard, he supported aligning Romania with the Axis powers, though he also hoped to obtain their guarantees for Greater Romania's borders. The FR's lower echelons included Viorel Tilea and other opponents of Vaida's approach, who believed in Romania's attachments to the League of Nations and the Little Entente. A reshuffled Tătărescu government took over in mid-1936. The Front still held rallies, boasting that 20,000 affiliates heard Ioanițescu speaking at Galați in March. However, according to the regional journal Viața Ardealului, summer 1936 was a "period of stagnation" for the FR and "the nationalist current as a whole". The Front was still "sure of its destiny", but "organizing in depth" and keeping secret about it. Vaida and Angelescu now advanced the notion of a PNȚ–FR reconciliation, arguing that it could successfully bring down the PNL cabinet. One other option, advanced by Carol and journalist Pamfil Șeicaru, was for the FR to join efforts with the breakaway Radical Peasants' Party. Meanwhile, revelations about German re-armament, pushed the FR closer to Nazism. During March 1936, Vaida declared that the League of Nations was powerless against the "victorious discipline" of the Italian Empire and the Hitlerian "unity of sentiment and willpower". In June, following the Rhineland crisis, L'Humanité reported that the "racist parties" (the Front, the Iron Guard and the PNC) staged a march outside the French embassy in Bucharest, with chants of "Long live Hitler!" With this, Vaida declared that Germany was marching toward realizing the Anschluss, pleading for France to discard its Popular Front and rejoin the "nationalist" camp. 
Speaking at Oradea in October, he saluted both Axis powers. According to Vaida, the Locarno Treaties were naturally obsolete, and Germany was right to ignore them; however, he cautioned that the borders of Greater Romania needed to be guaranteed by both Germany and France. Vaida's stance was ridiculed by the PNȚ youth: in a September communique, it noted that Vaida, "that old fascist parrot", was silent on the issue of Italian support for Hungarian irredentism, though this would have entailed the loss of Transylvania to Hungary. From the PNȚ's left, Nicolae L. Lupu described the FR as stoking "racial [and] Germanophile violence"; in response, the FR played down such incidents as "the excesses of certain youths", while noting brawls started by the PNȚ's own Voinici. In November, as Benito Mussolini expressed full support for a Hungarian expansion, Vaida joined other Romanian politicians in voicing his indignation. He and his party sought to tone down the "hysteria", informing their partisans that Mussolini would never risk going to war over Hungarian demands in Transylvania. Vaidists pledged themselves to combat propaganda by the Hungarian Unity Party, arguing that it "falsifies the most obvious truths". The FR also noted that Mihalache's anti-revisionism was a diversion used by communist and Jewish infiltrators. On September 4, the FR and PNC had agreed on another collaboration, and presented a single list for the local elections of that year. Brătianu's Georgist Liberal Party also collaborated with the two parties in places such as Brașov; though invited to join this "purely Romanian list", the PNȚ declined. In Ilfov County, the two-party list was headed by Ioanițescu, with the PNC man Stan Ghițescu taking the second eligible seat. The Front's registered logo, "two concentric circles and a dot", doubled as the alliance symbol. Called "target" or "wheel" in party documents, this drawing symbolized Greater Romania as an outside circle, and, within, "the belt strap tightening around The Black Dot, namely the xenophile". According to Gazeta Transilvaniei, the symbolism was poorly understood by illiterate sympathizers, who mistakenly voted with the PNȚ's circle (which had been intensely popularized by Ioanițescu before his defection). https://es.wikipedia.org/wiki/Par%C3%A1lisis_cerebral https://upload.wikimedia.org/wikipedia/commons/6/68/Gray764.png The motor tract. (Modified from Poirier.) La parálisis cerebral es un trastorno permanente y no progresivo que afecta a la psicomotricidad del paciente. En un nuevo consenso internacional, se propone como definición: “La parálisis cerebral describe un grupo de trastornos del desarrollo psicomotor, que causan una limitación de la actividad de la persona, atribuida a problemas en el desarrollo cerebral del feto o del niño. Los desórdenes psicomotores de la parálisis cerebral están a menudo acompañados de problemas sensitivos, cognitivos, de comunicación y percepción, y en algunas ocasiones, de trastornos del comportamiento”. Las lesiones cerebrales de la PC ocurren desde el período fetal hasta la edad de 3 años. Los daños cerebrales después de la edad de 3 años hasta el período adulto pueden manifestarse como PC, pero, por definición, estas lesiones no son PC. Hay autores que recomiendan, en determinados casos, no establecer el diagnóstico de PC hasta los 5 años de edad.​ La incidencia de esta condición en países desarrollados es de aproximadamente 2 - 2,5 por cada mil nacimientos. 
La parálisis cerebral (PC) es un trastorno permanente y no progresivo que afecta a la psicomotricidad del paciente. En un nuevo consenso internacional, se propone como definición: “La parálisis cerebral describe un grupo de trastornos del desarrollo psicomotor, que causan una limitación de la actividad de la persona, atribuida a problemas en el desarrollo cerebral del feto o del niño. Los desórdenes psicomotores de la parálisis cerebral están a menudo acompañados de problemas sensitivos, cognitivos, de comunicación y percepción, y en algunas ocasiones, de trastornos del comportamiento”. Las lesiones cerebrales de la PC ocurren desde el período fetal hasta la edad de 3 años. Los daños cerebrales después de la edad de 3 años hasta el período adulto pueden manifestarse como PC, pero, por definición, estas lesiones no son PC. Hay autores que recomiendan, en determinados casos, no establecer el diagnóstico de PC hasta los 5 años de edad.​ La incidencia de esta condición en países desarrollados es de aproximadamente 2 - 2,5 por cada mil nacimientos. Esta incidencia no ha bajado en los últimos 60 años a pesar de los avances médicos como la monitorización de las constantes vitales de los fetos, esto no se debe a que con la nueva tecnología no se puede prever ni prevenir la PC sino que ha aumentado la posibilidad de mantener con vida a bebés prematuros y de bajo peso mucho mejor que hace 60 años, es por eso que como dice más abajo la incidencia en estos casos se aumenta 10 veces (del 0,1 % al 1 %). En este sentido es muy interesante el consenso internacional denominado: “A template for defining a causal relation between acute intrapartum events and cerebral palsy: international consensus statement” en español sería: “Un Patrón para Definir una Relación Causal entre los Eventos Agudos Intraparto y la Parálisis Cerebral: Declaración Consensuada Internacional”. La parálisis cerebral no tiene cura conocida; la intervención médica aparece como una ayuda, pero también la intervención médica la puede prevenir en algunos casos, gracias a una mejor tecnología que permite la monitorización de las constantes vitales de los fetos. Estos tratamientos para el desarrollo personal del paciente se introducen en su vida diaria durante toda la vida. La parálisis cerebral es un término que agrupa diferentes condiciones. Hay que tener en cuenta que no hay dos personas con parálisis cerebral con las mismas características o el mismo diagnóstico. La parálisis cerebral está dividida en cuatro tipos, ( espástica, atetoide, atáxica y mixta) que describen los problemas de movilidad que presentan. Esta división refleja el área del cerebro que está dañada. https://pl.wikipedia.org/wiki/Jacques-Joachim_Trotti,_markiz_de_La_Ch%C3%A9tardie https://upload.wikimedia.org/wikipedia/commons/c/c4/Shetardie-s.JPG Jacques-Joachim Trotti, markiz de La Chétardie Jacques-Joachim Trotti, markiz de La Chétardie Portrait of Paul-François de Galluccio, marquis de L'Hôpital, ambassadeur de France en Russie[1] fr:Jacques-Joachim Trotti de La Chétardie Jacques-Joachim Trotti, markiz de La Chétardie – francuski dyplomata, organizator zamachu stanu w Petersburgu, w wyniku którego na tron carski wyniesiono Elżbietę Piotrowną. Jacques-Joachim Trotti, markiz de La Chétardie (ur. 3 października 1705 w Paryżu, zm. 1 stycznia 1759) – francuski dyplomata, organizator zamachu stanu w Petersburgu, w wyniku którego na tron carski wyniesiono Elżbietę Piotrowną. 
https://ru.wikipedia.org/wiki/%D0%A3%D0%BC%D0%B5%D1%80%D1%88%D0%B8%D0%B5_%D0%B2_%D0%BD%D0%BE%D1%8F%D0%B1%D1%80%D0%B5_2013_%D0%B3%D0%BE%D0%B4%D0%B0 https://upload.wikimedia.org/wikipedia/commons/7/79/VZakharevich.jpg Умершие в ноябре 2013 года Умершие в ноябре 2013 года / 16 ноября Русский: Ректор ЮФУ В.Г. ЗахаревичEnglish: Prof. Vladislav Zakharevich, the rector of the Southern Federal University, Russia Это список известных людей, соответствующих установленным критериям значимости, умерших в ноябре 2013 года. Причина смерти указывается лишь в исключительных случаях. В остальных случаях — не указывается. ← Октябрь 2013 Берберян, Арсен (75) — архиепископ Армянской апостольской церкви (1973—2013) . Булычёва, Ангелина Александровна (96) — русская поэтесса и журналист . Добриян, Михаил Борисович (66) — советский и российский учёный, конструктор, руководитель Специального Конструкторского Бюро Космического Приборостроения, глава муниципального образования Тарусского района Калужской области (недоступная ссылка). Захаревич, Владислав Георгиевич (67) — первый ректор Южного федерального университета (2006—2012) . Лэнфорд, Оскар (73) — американский математик . Нафанаил (Калайджиев) (61) — епископ Болгарской православной церкви, митрополит Неврокопский (с 1994) . Серкебаев, Ермек Бекмухамедович (87) — советский казахский оперный певец (баритон), педагог, народный артист СССР (1959), Герой Социалистического Труда (1986) . Соколов, Дмитрий Сергеевич (21) — лидер дагестанского бандподполья; уничтожен . Хейда, Збынек (83) — чешский поэт, историк, переводчик, правозащитник . Яровая, Нина Липовна — азербайджанская и израильская журналистка, лауреат Государственной премии Азербайджана, основоположник азербайджанской школы русскоязычной тележурналистики . https://ur.wikipedia.org/wiki/%D9%86%D8%A7%D8%B1%D8%AA%DA%BE%D9%85%D8%A8%D8%B1%D9%84%DB%8C%D9%86%DA%88_%D8%A7%DB%8C%D9%88%D9%86%DB%8C%D9%88 https://upload.wikimedia.org/wikipedia/commons/0/0e/Northumberland_Avenue_WC2_-_geograph.org.uk_-_1283363.jpg English: Northumberland Avenue WC2 Taken from the Trafalgar Square pedestrian crossing نارتھمبرلینڈ ایونیو ویسٹ منسٹر شہر، مرکزی لندن میں ایک سڑک ہے۔ نارتھمبرلینڈ ایونیو (انگریزی: Northumberland Avenue) ویسٹ منسٹر شہر، مرکزی لندن میں ایک سڑک ہے۔ https://en.wikipedia.org/wiki/%C3%81d%C3%A1m_K%C3%B3sa https://upload.wikimedia.org/wikipedia/commons/0/01/%C3%81d%C3%A1m_K%C3%B3sa_01.JPG English: Hungarian MEP Ádám Kósa Ádám Kósa is a Hungarian politician and Member of the European Parliament from Hungary. He is a member of Fidesz, part of the European People's Party. He is the first deaf European politician user of Deaf Sign Language at the European Parliament. Ádám Kósa (born 1 July 1975) is a Hungarian politician and Member of the European Parliament (MEP) from Hungary. He is a member of Fidesz, part of the European People's Party. He is the first deaf European politician user of Deaf Sign Language at the European Parliament. https://be.wikipedia.org/wiki/%D0%93%D0%B0%D1%80%D0%B0%D0%B4%D1%8B_%D0%93%D1%80%D1%8D%D0%BD%D0%BB%D0%B0%D0%BD%D0%B4%D1%8B%D1%96 https://upload.wikimedia.org/wikipedia/commons/5/5a/Nuuk_city_below_Sermitsiaq.JPG English: Nuussuaq district in Nuuk, the capital of Greenland, with the Sermitsiaq mountain in background Спіс гарадоў Грэнландыі паводле колькасці насельніцтва. У спіс уключаныя гарады з насельніцтвам не менш за 1 000 чалавек па стане на 1 студзеня 2018. Колькасць насельніцтва прыводзіцца адносна названага горада без уліку пасяленняў, якія складаюць яго прыгарады. 
https://es.wikipedia.org/wiki/Pablo_Elguez%C3%A1bal https://upload.wikimedia.org/wikipedia/commons/7/72/Kirru.jpg Image of Pablo Elguezábal in 1932 Pablo Elguezábal Iturri, better known by the nickname Rubio or Kirru, was a Spanish pelotari (hand-pelota player). Pablo Elguezábal Iturri (Cienfuegos, Cuba, 22 March 1907 – Rigoita, Vizcaya, 7 November 2003), better known by the nickname Rubio or Kirru, was a Spanish pelotari (hand-pelota player). https://en.wikipedia.org/wiki/Grey_francolin https://upload.wikimedia.org/wikipedia/commons/7/7e/DecoyGreyFrancolin.jpg Grey francolin / Behaviour and ecology English: A decoy grey francolin used by a trapper, Chikballapur The grey francolin is a species of francolin found in the plains and drier parts of the Indian subcontinent. This species was formerly also called the grey partridge, not to be confused with the European grey partridge. They are found in open cultivated lands as well as scrub forest, and their local name of teetar is based on their calls, a loud and repeated Ka-tee-tar...tee-tar, which is produced by one or more birds. The term teetar can also refer to other partridges and quails. During the breeding season calling males attract challengers, and decoys were used to trap these birds, especially for fighting. The loud calls of the birds are commonly heard early in the mornings. Pairs of birds will sometimes engage in a duet. The female call is a repeated tee...tee...tee and sometimes a kila..kila..kila, and the challenge call kateela..kateela..kateela is a duet. They are usually seen in small groups. The main breeding season is April to September, and the nest is a hidden scrape on the ground. The nest may sometimes be made above ground level in a niche in a wall or rock. The clutch is six to eight eggs, but larger clutches, potentially reflecting intraspecific brood parasitism, have been noted. Food includes seeds and grains as well as insects, particularly termites and beetles (especially Tenebrionidae and Carabidae). They may occasionally take larger prey such as snakes. They roost in groups in low thorny trees. Several species of feather mites, helminth and blood parasites have been described from the species. https://arz.wikipedia.org/wiki/%D8%A8%D8%A7%D8%B3%D8%A8%D9%88%D8%B1%D8%AA%D8%A7%D8%AA https://upload.wikimedia.org/wikipedia/commons/7/78/Afghan_Passport.jpg English: New electronic readable passports issued by the Afghan Ministry of Interior https://en.wikipedia.org/wiki/List_of_English_Heritage_blue_plaques_in_London https://upload.wikimedia.org/wikipedia/commons/3/35/Mark_Gertler_-_GLC_blue_plaque%2C_32_Elder_Street_Spitalfields.JPG List of English Heritage blue plaques in London List of English Heritage blue plaques in London / By borough / Tower Hamlets English: Blue plaque dedicated to Mark Gertler This is a list of the approximately 940 blue plaques placed by English Heritage and its predecessors in the boroughs of London, the City of Westminster, and the City of London. The scheme was originally administered by the Royal Society of Arts from 1876 to 1901 and was then run by the London County Council until 1965. The Greater London Council took over the scheme in 1965 from its predecessor, the LCC. Since the abolition of the GLC in 1986, the blue plaque scheme has been administered by English Heritage. There are 21 blue plaques in the London Borough of Tower Hamlets.
https://ja.wikipedia.org/wiki/%E6%84%9B%E5%AD%90%E5%86%85%E8%A6%AA%E7%8E%8B https://upload.wikimedia.org/wikipedia/commons/3/33/Rhododendron_quinquefolium.JPG 日本語: シロヤシオ(白八汐、学名:Rhododendron quinquefolium Bisset et S.Moore)、御在所岳山頂にて English: Rhododendron quinquefolium in Mount Gozaisho, Komono, Mie, Japan. 愛子内親王は、日本の皇族。称号は敬宮、お印はゴヨウツツジ。身位は内親王。敬称は殿下。 徳仁の第1皇女子。母は雅子。明仁と美智子の皇孫にあたる。 21世紀に誕生した初の皇族であり、2020年4月1日現在、18名の皇室構成員のうち最年少の女性皇族で、内廷皇族である。 住居は、東京都港区元赤坂二丁目の赤坂御用地内にある赤坂御所。 (各事象等における身位の表記は、当時に沿う。) 2001年(平成13年)12月1日14時43分、皇太子徳仁親王と皇太子妃雅子(両者とも当時)の間に第1子・第1皇女子として、東京都千代田区の宮内庁病院で出生。誕生時の身長は49.6センチメートル、体重は3,102グラム。 同日、祖父である第125代天皇明仁から守り刀(人間国宝である大隅俊平作)と袴が贈られる「賜剣の儀」が行われた。刀身は約25センチで、全長約40センチ。 また、内閣総理大臣・小泉純一郎(当時)が「内親王殿下の御誕生を迎えて」の内閣総理大臣謹話を発表した。 同年12月7日、「浴湯の儀」「命名の儀」「賢所皇霊殿神殿に誕生命名奉告の儀」が行われ、天皇から「愛子(読み:あいこ)」と命名され、「敬宮(読み:としのみや)」の御称号を受けた。名と御称号の由来は 「 人を愛する者は人恒に之を愛し、人を敬ふ者は人恒に之を敬ふ。 」 —『孟子』離婁下 に拠る。皇太子・同妃(当時)、そして学者が相談して内定し、祖父の天皇(当時)も両親である皇太子・同妃(当時)の意向を尊重して命名した。 浴湯の儀に伴って行われる「読書鳴弦」の儀式では、元学習院大学長児玉幸多により、『日本書紀』から8人10代存在した女性天皇のうち最初の女帝にあたる推古天皇に関する部分が読まれている。お印のゴヨウツツジは那須御用邸でも5月に咲く花で、両親の「この純白の花のような純真な心を持った子供に育ってほしい」という願いを込めた。 平成の皇太子夫妻の待望の第一子誕生に対して、国民の祝賀の記帳は宮内庁関連で12万人、全国の自治体で65万人、合計77万人に達した。12月2日夕、皇居前広場で「新宮さまのご誕生をお祝いする国民の集い」が開かれ、奉祝国会議員連盟会長の麻生太郎をはじめ政治家や竹下景子、西田ひかるなどの芸能人、毛利衛、長嶋茂雄などの著名人が祝辞を述べ、2万5千人が集まり万歳して祝意を表した。 幼時には、両親(皇太子・同妃)から「愛ちゃん」と呼ばれた。 2005年(平成17年)春から週2回、東京都渋谷区のこどもの城に通い、音楽遊びなどを通じて集団生活に親しんだ。 2006年(平成18年)4月11日、学習院幼稚園に入園。同年8月、皇太子・同妃(当時)である両親のオランダ旅行・滞在に同行して、初めて海外訪問した。 同年11月11日に、袿(うちき)と袴をつけ「着袴の儀」を行った。このとき着けた「濃色(こきいろ、濃い赤色)」の袴は、誕生のときに贈られたものである。この頃には自転車の練習なども始めている。 2008年(平成20年)3月に学習院幼稚園を卒園し、同年4月に学習院初等科に入学。2009年(平成21年)の初等科2年生時には、漢字の書き取りや習字を行う姿が報道された。 2010年(平成22年)2月下旬から風邪を患うなど体調不良が原因となり欠席しがちだったが、同年3月5日になって野村一成(当時の東宮大夫)が、「3月上旬に発生した初等科での児童同士のトラブルから体調不良となり、学校を欠席した」と発表した後、同日にまた学校法人学習院側も記者会見を開き同様の発表がなされ、大きな波紋を呼んだ(詳細は「愛子内親王不登校騒動」)。 2011年(平成23年)秋より、初等科への通学は平常な状態に戻った。 2012年(平成24年)には学習院初等科5年生となり、「管弦楽部(パートはチェロ)、バスケットボール部などの部活動での練習にも励み、学習院女子大学で開催された英会話セミナーにも通い出した」と報道された。 2014年(平成26年)3月に学習院初等科を卒業し、同年4月に学習院女子中等科に入学。同年7月15日に自身の曽祖父母にあたる昭和天皇・香淳皇后の武蔵野陵を初めて参拝し、また、7月30日に伊勢神宮を初めて参拝した。同年8月3日、全国高等学校総合体育大会を両親との一家で訪れ、女子サッカーと男子バレーボールの試合を観戦した。 同年12月1日、13歳の誕生日を迎え、皇居内の御所に居住する祖父母の天皇明仁と皇后美智子(当時、現:上皇と上皇后)を初めて一人で挨拶のため訪問した。春からはテニスとソフトボールを始めている。授業の科目数も増え、学業にスポーツにと忙しい日々を過ごす。 2015年(平成27年)戦後70年の節目の夏には、初めて第二次世界大戦の企画展示(「昭和館」東京都千代田区)に足を運び見学したほか、戦争体験者からも直接話を聞いた。 2016年(平成28年)8月、学習院女子中等科第3学年在学中の夏休みに両親の皇太子徳仁親王同妃雅子(当時)の地方公務に初めて同行し長野県上高地を訪れた。同年9月26日から胃腸が弱りふらつきなどの症状のため学校を欠席したが、休養に努め11月に学校に復帰した。 2017年(平成29年)3月、学習院女子中等科を卒業し、発表された卒業文集の作文「世界の平和を願って」では、「『平和』は、人任せにするのではなく、一人ひとりの思いや責任ある行動で築きあげていくものだから」などと、修学旅行で広島を訪れ原爆の悲劇を見て感じた平和を築いてゆく願いを綴り、多くの国民の感動を呼んだ。同年4月、学習院女子高等科に入学。 2018年(平成30年)7月22日-8月9日まで、イギリスに短期留学した。(学習院女子高等科の海外研修プログラム) 首都ロンドン郊外のイートン校で英語教育、更にポーツマスやオックスフォードで英国の文化を体験した。 2019年(令和元年)5月1日、天皇の退位等に関する皇室典範特例法の施行(前日の平成31年4月30日に祖父の天皇明仁が退位し上皇となり、祖母の皇后美智子は上皇后となる。)により父の皇太子徳仁親王が第126代天皇に即位、母の皇太子妃雅子も立后し皇后となる。これに伴い、内親王は第1皇女子として、天皇・皇后を両親に持つ唯一の人物となった。 2020年(令和2年)3月に学習院女子高等科を卒業。新型コロナウイルスの感染拡大を受け、両親である天皇徳仁と皇后雅子は卒業式への出席を控えた。同年4月より父の母校でもある学習院大学文学部日本語日本文学科に入学(父天皇は、同学部史学科出身である)。 https://uk.wikipedia.org/wiki/%D0%9C%D0%BE%D0%BB%D0%B4%D0%BE%D0%B2%D1%81%D1%8C%D0%BA%D0%B0_%D0%BA%D1%83%D1%85%D0%BD%D1%8F http://upload.wikimedia.org/wikipedia/commons/6/6e/Placinta.jpg Молдо́вська ку́хня — національна кухня Молдови. Молдова розташована в регіоні багатих природних можливостей, винограду, фруктів і різноманітних овочів, а також вівчарства і птахівництва, що обумовлює багатство і різноманітність національної кухні. 
Молдовська кухня формувалася під впливом грецької, турецької, балканської, західноєвропейської, а пізніше — української та російської, а також єврейської та німецької кухонь, проте вона відрізняється самобутністю. Найбільшу кількість страв готують у Молдові з овочів — їх вживають у свіжому вигляді, варять, смажать, печуть, фарширують, тушкують, солять. Традиційними для неї є страви з кукурудзи, квасолі, нуту, овочів — баклажанів, кабачків, перцю, ротунда, цибулі-пір, помідорів, білокачанної та цвітної капусти, а також гарбуза. З кукурудзи виготовляють крупу, борошно, пластівці, олію, безалкогольні напої і т. д. Ще на початку XVIII століття з кукурудзяного борошна і крупи в Молдові готували мамалигу, супи, печені вироби. Мамалига являє собою своєрідну кашу, лагідну і приємну на смак. Подають зі шкварками, сметаною, бринзою, молоком або вершками. З мамалиги також роблять кукурудзяні коржі, нарізаючи і підсмажуючи її на маслі або на смальці. У минулому мамалига в холодному вигляді часто заміняла хліб, однак це було викликано скоріше необхідністю, ніж традицією, так як в Молдові здавна випікався саме пшеничний хліб. Історично мамалига була основною селянською їжею, але в останні десятиліття мамалига придбала статус високоякісної страви і подається у багатьох ресторанах. Квасоля використовується для приготування закусок, перших і других страв. Овочі служать основою для різноманітних салатів, гарячих других страв і гарнірів до риби і м'яса. Сирі овочі найчастіше смажать, тушкують, смажать, запікають, рідше — відварюють. Традиційні для молдовської кухні фаршировані баклажани, кабачки, перець, помідори. Їх начиняють овочевим, круп'яно-овочеві, м'ясо-овочевим фаршем і запікають з додаванням соусів з сметани, томатів, зелені. З пряних овочів і зелені як приправи переважно використовуються цибуля-пір, селера, чебрець, любисток, петрушка і кріп. В їжу додають і такі прянощі, як чорний і духмяний перець, гіркий червоний перець, коріандр, гвоздику, лавровий лист, мускатний горіх тощо. Часто вживають часник, який становить основу соусів муждей, скордоля, якими заправляють рибні, м'ясні, овочеві страви. Подають ці соуси також і до мамалиги. Практично всі овочі заготовляються про запас. Їх квасять, солять, консервують. Дуже популярна в Молдові бринза — розсольний сир з овечого молока. Вживають її як в натуральному вигляді, так і як компонент овочевих, борошняних, яєчних, рибних і м'ясних страв. Бринза є важливою частиною молдовської кухні ще з XVII століття, коли в Молдовському князівстві активно розвивалося вівчарство. В молдовській кухні використовуються всі види м'ясних продуктів. З баранини готуються манжа, мусака, з яловичини — паприкаш, мітітеї, свинини — менкеріка, токана, костіца, кирнецеї, з домашньої птиці — яхніє, зама. Мітітеї за виглядом нагадують маленькі ковбаски без оболонки. Вони схожі на традиційну балканську страву чевапчичі. Національні рибні та м'ясні страви готуються на гратарі — залізній решітці, розташованій над розжареним деревним вугіллям з бука, горіха, кизилу. Продукти, особливо якщо вони будуть смажитися в натуральному вигляді, попередньо витримують у маринаді. Традиційними борошняними виробами є вертути і плацинди з фруктовою, овочевий, сирною і горіховою начинкою. Плацинда нагадує плаский корж круглої і іноді квадратної форми, а вертута являє собою рулет з тонкого тіста. У Молдові росте безліч видів фруктових дерев, і до столу прийнято подавати свіжі фрукти — яблука, груші, персики, абрикоси, вишні, виноград, волоські горіхи. 
Улюблені національні ласощі — нуга, желе (пелтя) з ягідних і фруктових соків, халва (алвіце), тістечка і печиво з пісочного та листкового тіста. https://hu.wikipedia.org/wiki/IV._K%C3%A1roly_magyar_kir%C3%A1ly https://upload.wikimedia.org/wikipedia/commons/6/6d/Charles_IV%2C_the_last_King_of_Hungary_in_coronation_gear.jpg IV. Károly magyar király / Családja Charles IV in full royal regalia after his coronation as King of Hungary in 1916 (contemporary photograph) Charles IV (full name Karl Franz Josef Ludwig Hubert Georg Maria von Österreich), Archduke of Austria, was the last ruler of the House of Habsburg-Lorraine: between 1916 and 1918 he was the last emperor of the Austrian Empire as Charles I and the last King of Hungary as Charles IV. After his two-year reign, Austria and Hungary were proclaimed republics. He did not abdicate, but he accepted the new form of state, as set out in the Eckartsau declaration. In 1921 he twice attempted to return to the throne, without success. On 21 October 1911, at the castle of Schwarzau am Steinfeld in Lower Austria, he married Princess Zita of Bourbon-Parma (1892–1989), with whom he had eight children: Habsburg Ottó (1912–2011), married Princess Regina of Saxe-Meiningen (1925–2010) in 1951: Andrea (1953–), Monika (1954–), Michaela (Mikaéla) (1954–), Gabriella (1956–), Walburga (1958–), Károly (1961–), György (1964–). Habsburg Etelka (Adelhaid) (1914–1971). Habsburg Róbert (1915–1996), married Princess Margherita di Savoia-Aosta (1930–) in 1953: Mária Beatrix (1954–), Lőrinc (1955–), Gerhard (Gellért) (1957–), Martin (Márton) (1959–), Habsburg Izabella (Erzsébet) (1963–). Habsburg Félix (1916–2011), married Princess Anna-Eugénie von Arenberg (1925–1997) in 1952: Maria del Pilar (Mária) (1953–), Carl Philipp (Károly Fülöp) (1954–), Kinga (1955–), Raimund (Rajmond) (1958–), Maria Adelheid (Mária Etelka) (1959–), István (1961–), Virdis (1961–). Habsburg Károly Lajos (1918–2007), married Princess Yolande de Ligne (1923–) in 1950: Rudolf (1950–), Alexandra (1952–), Carl (Károly) (1954–), Maria (Mária) (1957–). Habsburg Rudolf (1919–2010), first married Countess Xenia Czernichev-Besobrasov (1929–1968) in 1953, then Princess Anna Gabriela von Wrede (1940–) in 1971: Maria-Anna (Mária Anna) (1954–), Carl Peter (Károly Péter) (1955–), Simon (1958–), János (1962–1975), Katalin Mária (1972–). Habsburg Sarolta (1921–1989), married Duke György of Mecklenburg (1899–1962). Habsburg Erzsébet (1922–1993), married Prince Károly Henrik of Leiningen. https://en.wikipedia.org/wiki/Wiccan_(comics) https://upload.wikimedia.org/wikipedia/commons/8/8c/10.8.16JimCheungByLuigiNovi1.jpg Wiccan (comics) / Publication history English: Comic book artist Jim Cheung in Artist Alley at the Jacob K. Javits Convention Center in Manhattan, on Saturday October 8, 2016, Day 3 of the 2016 New York Comic Con.
Wiccan is a comic book character and member of the Young Avengers, a team of teenage superheroes in Marvel Comics. Created by writer Allan Heinberg and artist Jim Cheung, the character first appeared in Young Avengers #1. The character's appearance is patterned on that of two prominent Marvel superheroes, Thor and Scarlet Witch, both of whom are members of the Avengers. Like the Scarlet Witch, Wiccan possesses powerful magical abilities which make him a key member of his superhero team. Recruited to the Young Avengers by Iron Lad, Wiccan discovers that he and fellow teen hero Speed are in fact long-lost twin brothers, and that the pair are the sons of Scarlet Witch and her husband Vision. Significant storylines for the character include his and his brother's search for their missing mother, his learning to master his powers, and an ongoing relationship with his teammate Hulkling. Alongside his permanent role as a member of the Young Avengers, Wiccan has also been a member of Avengers Idea Mechanics and Strikeforce. Wiccan first appeared in Young Avengers #1 (April 2005). The issue was scripted by Allan Heinberg and drawn by Jim Cheung. Wiccan was one of the original four members of the Young Avengers, a team founded after the Avengers disbanded in the story line Avengers Disassembled. Initially, Heinberg assumed that Marvel would not allow him to write two leading homosexual characters. Because of this, he originally planned to write Billy's love interest, Hulkling, as a female shapeshifter named Chimera. Chimera would discover that her true form was male, which would force Billy to decide if he was still in love with him. However, due to the complexity of this proposed story line, editor Tom Brevoort suggested simply making both characters gay. Wiccan appeared in the new 2013 Young Avengers series by Kieron Gillen and Jamie McKelvie. As part of the All-New All-Different Marvel rebranding, Wiccan (along with Hulkling) appears as a member of the New Avengers led by Sunspot along with Songbird, Squirrel Girl, Hawkeye, Power Man and White Tiger. He later guest-starred in the Scarlet Witch series written by James Robinson. In 2019, he starred in Strikeforce alongside Blade, Angela, Spider-Woman, Monica Rambeau, Daimon Hellstrom and Winter Soldier. https://ja.wikipedia.org/wiki/%E3%83%9F%E3%83%8A%E3%82%B9%E3%83%BB%E3%82%B8%E3%82%A7%E3%83%A9%E3%82%A4%E3%82%B9%E7%B4%9A%E6%88%A6%E8%89%A6 https://upload.wikimedia.org/wikipedia/commons/7/73/E_Minas_Geraes_1910_altered.jpg English: Minas Gerais sailing soon after her commissioning. Photo was taken too early to be of Sao Paulo, which is a common identification in sources.[1] The Minas Geraes class was the first class of dreadnought battleships purchased by the Brazilian Navy among the South American countries, and at the time of their completion they were the most powerful dreadnoughts in the world. The class is also referred to as the Minas Gerais class. https://en.wikipedia.org/wiki/Ballymalis_Castle https://upload.wikimedia.org/wikipedia/commons/3/34/Castles_of_Munster%2C_Ballymalis%2C_Kerry_-_geograph.org.uk_-_1392738.jpg English: Castles of Munster: Ballymalis, Kerry This tower house on the east bank of the River Laune was built in c.1600. It was granted to Sir Francis Brewster after its confiscation in 1677, when it passed to Alexander Eager. Ballymalis Castle is a tower house and National Monument located in County Kerry, Ireland.
Ballymalis Castle is a tower house and National Monument located in County Kerry, Ireland. https://bg.wikipedia.org/wiki/%D0%A1%D0%B2%D0%B5%D1%82%D0%B8_%D0%92%D1%80%D0%B0%D1%87_(%D0%BF%D0%B0%D1%80%D0%BA) https://upload.wikimedia.org/wikipedia/commons/2/2e/St._Vrach_park%2C_Sandanski%2C_Bulgaria_2015_22.JPG Свети Врач (парк) / Галерия Български: Парк "Свети Врач" в Сандански. English: St. Vrach park, Sandanski, Bulgaria „Свети Врач“ е градски парк в Сандански. Единственият с пясъчни алеи и един от най-големите по площ паркове в България, простира се на 344 декара площ на двата бряга на река Санданска Бистрица. https://nl.wikipedia.org/wiki/Lijst_van_gemeentelijke_monumenten_in_Halderberge https://upload.wikimedia.org/wikipedia/commons/d/d4/Hoeven_7_HB_GM_St_Janstr_57_Woonhuis_30112019.jpg Lijst van gemeentelijke monumenten in Halderberge Lijst van gemeentelijke monumenten in Halderberge / Hoeven Nederlands: Woonhuis This is an image of a municipal monument in Halderberge with number WN024 De gemeente Halderberge heeft 154 gemeentelijke monumenten, hieronder een overzicht. Zie ook de rijksmonumenten in Halderberge. De plaats Hoeven kent 19 gemeentelijke monumenten: https://hu.wikipedia.org/wiki/Magyarbikali_reform%C3%A1tus_templom https://upload.wikimedia.org/wikipedia/commons/3/31/Magyarbikali_templom_karzata.jpg Magyarbikali református templom Magyarbikali református templom / Képgaléria Magyar: Magyarbikali_templom_karzata Magyarbikal első írásos említése, Tera Bekaly néven, 1249-ből származik. Régen a falu a felette húzódó bükkerdő alatt helyezkedett el – erre utal elnevezése is -, ám a lakosság a rájuk leselkedő veszélyek miatt lehúzódott a védettebb, jobban elrejtett völgybe, így alakult ki a mai falu. https://de.wikipedia.org/wiki/Waldfriedhof_Stuttgart https://upload.wikimedia.org/wikipedia/commons/6/63/S_Waldfriedhof_Walter_Romberg_Stele.jpg Waldfriedhof Stuttgart / Gräber Stele mit Porträt Walter Rombergs auf dem Stuttgarter Waldfriedhof Der Waldfriedhof Stuttgart wurde 1913 kurz vor Ausbruch des Ersten Weltkriegs nach den Plänen des Stuttgarter Stadtbaudirektors Albert Pantle angelegt. Auf dem Friedhof, der im Stuttgarter Stadtbezirk Degerloch liegt, sind zahlreiche Prominente bestattet. Der Name des Friedhofs verweist darauf, dass er mitten im Mischwaldbestand des Degerlocher Walds errichtet wurde. Der Friedhof besteht aus zwei Teilen: dem älteren und größeren westlichen Hauptteil und dem jüngeren, östlich gelegenen Waldfriedhof-Viereichenhau. Mit 30,7 Hektar ist er der flächenmäßig größte und mit seinen 15.000 Grabstellen der drittgrößte Stuttgarter Friedhof. Er ist in die Abteilungen 1-35 und 50-75 aufgeteilt. Auf dem Friedhofsgelände befinden sich eine Feierhalle, ein Verwaltungsgebäude, ein Leichenhaus und drei Ehrenmale für die Gefallenen der beiden Weltkriege. Dem Friedhof benachbart ist der weiter östlich gelegene Dornhaldenfriedhof, der 1974 angelegt wurde. Eine Standseilbahn aus dem Jahre 1929 verbindet den Südheimer Platz mit dem 100 Meter höher gelegenen Friedhof. Hinweis: In dem Friedhofsführer von Werner und Christopher Koch (#Koch 2012) und in der SSB-Broschüre „Lebenslinien“ (#Straßenbahnen 2009) finden sich Kurzbiographien von Prominenten, die auf dem Waldfriedhof begraben sind, ein Lageplan mit Standortangaben für Gräber und Denkmäler und im Friedhofsführer auch Fotos der Gräber. 
https://zh.wikipedia.org/zh-tw/%E5%8D%A1%E9%96%80%C2%B7%E5%8D%A1%E6%96%AF http://upload.wikimedia.org/wikipedia/commons/0/05/Kass%2CCarmen_2004_Mainz.jpg English: Carmen Kass, Chess Classic Mainz 2004 Deutsch: Carmen Kass, Chess Classic Mainz 2004 卡門·卡斯,生於蘇聯時期塔林,暱稱「卡神」,愛沙尼亞超級名模、西洋象棋手、社會活動家。 卡門·卡斯(愛沙尼亞語:Carmen Kass,1978年9月14日-),生於蘇聯時期塔林,暱稱「卡神」,愛沙尼亞超級名模、西洋象棋手、社會活動家。 https://az.wikipedia.org/wiki/II_%C6%8Fbd%C3%BCrr%C9%99hman https://upload.wikimedia.org/wikipedia/commons/3/3b/Abderram%C3%A1n_II.jpg English: Rahmán II statue in Murcia Español: Estatua de Abderramán II en Murcia Əbdürrəhman ibn əl-Hakəm bin Hişam bin Əbdürrəhman — Əndülüs Əməvi dövlətinin dördüncü əmiri, I Əbdürrəhmanın nəvəsi. Əbdürrəhman ibn əl-Hakəm bin Hişam bin Əbdürrəhman (788, Toledo – 22 sentyabr 852, Kordova (İspaniya), Andalusiya) — Əndülüs Əməvi dövlətinin dördüncü əmiri (822-852), I Əbdürrəhmanın nəvəsi. https://fr.wikipedia.org/wiki/Charles_Sprague_Pearce https://upload.wikimedia.org/wikipedia/commons/6/61/Charles_Sprague_Pearce_detail.jpg Detail from: Photograph of Charles Sprague Pearce (1851-1914) in his studio in Auvers-sur-Oise Charles Sprague Pearce, né le 13 octobre 1851 à Boston et mort le 18 mai 1914 à Paris, est un peintre américain. Charles Sprague Pearce , né le 13 octobre 1851 à Boston et mort le 18 mai 1914 (à 62 ans) à Paris, est un peintre américain. https://en.wikipedia.org/wiki/Weightlifting_at_the_2018_Summer_Youth_Olympics https://upload.wikimedia.org/wikipedia/commons/c/cc/Girls_48_kg_Weightlifting_2018_YOG_-_Victory_Ceremony_06.jpg Weightlifting at the 2018 Summer Youth Olympics Weightlifting at the 2018 Summer Youth Olympics / Medal summary / Girl's events / Notes Español: Levantamiento de pesas en los Juegos Olímpicos de la Juventud Buenos Aires 2018. Torneo femenino, 48 kg. English: Weightlifting at the 2018 Summer Youth Olympics – Girls' 48 kg. Weightlifting at the 2018 Summer Youth Olympics was held from 7 to 13 October. The events took place at Parque Polideportivo Roca in Buenos Aires, Argentina. Supatchanin Khamhaeng of Thailand originally won the gold medal, but was disqualified in 2019 after testing positive for a banned substance. https://es.wikipedia.org/wiki/Mar%C3%ADa_del_Carmen_Due%C3%B1as https://upload.wikimedia.org/wikipedia/commons/b/bf/Carmen_Due%C3%B1as_6.jpg María del Carmen Dueñas / Biografía / Trayectoria política Maria del Carmen Dueñas interviene en el Congreso Regional del Partido Popular de Melilla Español: Intervención politica del PP María del Carmen Dueñas Martínez, es una abogada y política española del Partido Popular. Entre 2015 y 2019 fue diputada por Melilla en el Congreso. Portavoz de Igualdad del Grupo Parlamentario del Partido Popular, fue la Ponente del Pacto de Estado en materia de Violencia de Género, en el siguiente enlace encontramos su intervención en el Pleno del Congreso: Intervención en el Debate del Pleno del Congreso. Desde el año 2004 a 2017 fue Secretaria General del Partido Popular en Melilla. Se afilió al Partido Popular de Melilla a principios del año 2000, siendo nombrada Secretaria Regional y número 2 del Partido Popular de Melilla, tras el IX Congreso Regional de este partido. Fue diputada local en la Asamblea de Melilla (2007-2008), Consejera de Contratación y Patrimonio (2007-2008), también fue senadora electa (2008-2015) y Presidenta de la Comisión de Igualdad. 
(2012-2015).​ En la IX Legislatura obtuvo el escaño de Senadora por la circunscripción electoral de la Ciudad Autónoma de Melilla y se convirtió en la primera mujer senadora electa de Melilla. En dicha Legislatura fue la Portavoz de la Comisión de Igualdad en el Senado por el Grupo Popular.​ Tras las elecciones a Cortes Generales de 2011, obtiene el escaño de Senadora por la circunscripción electoral de Melilla, y tras constituirse la X Legislatura es designada Presidenta de la Comisión de Igualdad del Senado, vocal en la Comisión de Justicia y vocal en la Comisión de Incompatibilidades. En las elecciones a Cortes Generales de 2015, obtiene el escaño de Diputada por la circunscripción electoral de la Ciudad Autónoma de Melilla en la XI Legislatura, convirtiéndose en la primera mujer diputada por la Ciudad Autónoma de Melilla. En las elecciones a Cortes Generales de 2016, obtuvo el escaño de diputada por la circunscripción electoral de la Ciudad Autónoma de Melilla en la XII Legislatura, situándose entre los candidatos más votados de esos comicios con el 49,90% de los votos.​ https://fr.wikipedia.org/wiki/Gare_d%27Unionville_(Toronto_and_Nipissing) https://upload.wikimedia.org/wikipedia/commons/4/46/Old_Unionville_station_in_the_snow.jpg Gare d'Unionville (Toronto and Nipissing) Gare d'Unionville (Toronto and Nipissing) English: The old station was last used on Friday May 3, 1991. The new GO Station opened for business the following Monday, May 6. Prior to GO Transit using the original station between 1982 and 1991, VIA used the station for their Stouffville to Toronto commuter service. Image illustrative de l’article Gare d'Unionville (Toronto and Nipissing) La gare d'Unionville est une ancienne gare ferroviaire canadienne, située à Markham en Ontario. Elle a été remplacée par la gare d'Unionville. Elle se distingue comme une gare du XIXᵉ siècle encore dans son emplacement d'origine sur la voie ferrée. La gare d'Unionville est une ancienne gare ferroviaire canadienne, située à Markham en Ontario. Elle a été remplacée par la gare d'Unionville (GO Transit). Elle se distingue comme une gare du XIXᵉ siècle encore dans son emplacement d'origine sur la voie ferrée. https://fr.wikipedia.org/wiki/Industrie_de_l%27ocre_en_pays_d%27Apt http://upload.wikimedia.org/wikipedia/commons/e/e5/Forge_%28Apt%29.JPG Industrie de l'ocre en pays d'Apt Exploitation du fer de La Tène jusqu'au XIXe siècle Industrie de l'ocre en pays d'Apt / Exploitation du fer de La Tène jusqu'au XIXe siècle Une des dernières forges ayant servi à usiner le minerai de fer de Rustrel Musée de l'aventure industrielle à Apt Français&#160;: reconstitution de forge L'industrie de l'ocre en pays d'Apt a été favorisée par des énormes dépôts de sables ocreux qui couvrent un secteur comprenant Gignac, Rustrel, Villars, Gargas et Roussillon. L'ocre est une roche ferrique composée d'argile pure colorée par un hydroxyde de fer : l'hématite pour l'ocre rouge, la limonite pour la brune et goethite pour la jaune. Du pouvoir colorant de l'ocre connu dès la protohistoire, il y eut ensuite passage à l'exploitation du fer pour arriver à la fin du XVIIIᵉ siècle à l'extraction industrielle des colorants ocreux. Si les premiers habitants de nos régions utilisèrent les gisements d'ocre, pour l'art corporel ou pariétal, ils s'en servirent aussi, dès l'âge du fer pour forger outillage et armes. 
En effet ces dépôts ocreux contiennent de grandes quantités de minerai ferrugineux qui fut fondu dans les premiers « bas fourneaux » de l'Antiquité jusqu'au Moyen Âge. Ces gisements furent exploités à ciel ouvert ou en galeries. Dans certaines de celles-ci, au nord de la vallée du Calavon, ont été retrouvées intactes des amphores romaines. Dans le pays d'Apt s'étend un vaste bassin minier et métallurgique dont la production de fer perdura jusqu’au XIXᵉ siècle. Il est divisé en trois secteurs qui se jouxtent. Le district de Rustrel, compris entre Gignac et Villars, qui s'étale sur 20 km². Le district de Gignac-Simiane-la-Rotonde-Banon où les sites sidérurgiques sont les plus nombreux. Le district de Gordes-Lagnes-Fontaine-de-Vaucluse où les grottes des parois calcaires ont, pour la plupart, été exploitées et vidées de leurs remplissages ferrugineux. Des datations au C14 ont permis d'identifier certains ferriers comme appartenant la période de La Tène. Des campagnes de prospection, réalisées entre 1996 et 2008, ont répertorié plus de 300 ferriers. Exploité jusqu'à la fin du XIXᵉ siècle, en particulier à Gignac et à Rustrel, ce minerai de fer contribua à l'essor économique et industriel de la vallée du Calavon. À Gignac, la teneur du minerai recueilli dans les ocres atteint entre 45 et 55 %. L'étude des scories, au quartier de la Ferrière, a montré qu'elles datent de la plus haute Antiquité. Au nord-est du village, à Thosse, les plus anciens bas fourneaux identifiés ont servi au cours du IIIᵉ siècle. Dans ce hameau, l'exploitation du minerai de fer s'est poursuivi jusqu'en 1815. Dans les années 1850, le besoin de bois pour la production de fer s’intensifia fut tel qu'il provoqua la déforestation des Monts de Vaucluse et du Mont Ventoux. À Rustrel, existent toujours les vestiges de ces hauts fourneaux qui furent construits, à partir de 1836 pour se substituer à celui de Velleron. Ils permirent, jusqu'en 1890, de produire des fontes à gueuses et à moulages. Cette exploitation périclita par manque de moyens de transport adéquat. https://no.wikipedia.org/wiki/Scott_Parker https://upload.wikimedia.org/wikipedia/commons/7/74/Scott_Parker_2012-06-11.jpg English: Scott Parker| at the Euro 2012 match against France Русский: Скотт Паркер в матче Евро 2012 против Франции Scott Matthew Parker er en engelsk fotballtrener og tidligere fotballspiller som er manager for Fulham FC. Parker startet sin profesionelle karriere i Charlton Athletic FC. Så dro han til London-klubben Chelsea, hvor han bare fikk spille femten kamper. Etter mangel på spilletid og tillit, dro Parker til Newcastle United. Scott Parker skåret forøvrig Newcastles første mål i 1. serierunde i Premier League 2006/07. Fra 2007 til 2011 Har Parker spilt for den andre London-klubben West Ham United. Der har han har fått tittelen ''Player of the year'' 3 ganger. Han har også representert England ved seks anledninger og har også flere kamper på aldersbestemte landslag. Scott Parker er den første fotballspilleren som har spilt for både Chelsea FC, West Ham United og Tottenham Hotspur. 28. februar 2019 ble han ansatt som midlertidig manager for Fulham FC. 10. mai 2019 ble han ansatt som manager for Fulham FC. Scott Matthew Parker (født 13. oktober 1980) er en engelsk fotballtrener og tidligere fotballspiller som er manager for Fulham FC. Parker startet sin profesionelle karriere i Charlton Athletic FC. Så dro han til London-klubben Chelsea, hvor han bare fikk spille femten kamper. 
Etter mangel på spilletid og tillit, dro Parker til Newcastle United. Scott Parker skåret forøvrig Newcastles første mål i 1. serierunde i Premier League 2006/07. Fra 2007 til 2011 Har Parker spilt for den andre London-klubben West Ham United. Der har han har fått tittelen ''Player of the year'' 3 ganger. Han har også representert England ved seks anledninger og har også flere kamper på aldersbestemte landslag. Scott Parker er den første fotballspilleren som har spilt for både Chelsea FC, West Ham United og Tottenham Hotspur. 28. februar 2019 ble han ansatt som midlertidig manager for Fulham FC. 10. mai 2019 ble han ansatt som manager for Fulham FC. https://nl.wikipedia.org/wiki/Lijst_van_gemeentelijke_monumenten_in_Wyck_(Maastricht) https://upload.wikimedia.org/wikipedia/commons/9/95/Maastricht_-_Stationsstraat_29-31_GM-2041_20190825.jpg Lijst van gemeentelijke monumenten in Wyck (Maastricht) Lijst van gemeentelijke monumenten in Wyck (Maastricht) English: Stationsstraat 29-31, Maastricht, The Netherlands. Nederlands: Stationsstraat 29-31, Maastricht. De buurt Wyck in de wijk Maastricht-Centrum in Maastricht heeft 434 gemeentelijke monumenten beschreven in 329 regels. De buurt Wyck in de wijk Maastricht-Centrum in Maastricht heeft 434 gemeentelijke monumenten beschreven in 329 regels. https://gl.wikipedia.org/wiki/Cangrexo_real https://upload.wikimedia.org/wikipedia/commons/8/8c/Chaceon_affinis.jpg O cangrexo real é unha especie de crustáceo decápodo braquiúro da familia dos xeriónidos. O cangrexo real (Chaceon affinis) é unha especie de crustáceo decápodo braquiúro da familia dos xeriónidos. https://hi.wikipedia.org/wiki/%E0%A4%86%E0%A4%B5%E0%A4%B0%E0%A5%8D%E0%A4%A4_%E0%A4%B8%E0%A4%BE%E0%A4%B0%E0%A4%A3%E0%A5%80 https://upload.wikimedia.org/wikipedia/commons/f/fe/Periodic_trends.svg आवर्त सारणी / आधुनिक आवर्त सारणी की प्रमुख विशेषताएँ / वर्ग तत्वों के गुणों का आवर्ती परिवर्तन आवर्त सारणी रासायनिक तत्वों को उनकी संगत विशेषताओं के साथ एक सारणी के रूप में दर्शाने की एक व्यवस्था है। आवर्त सारणी में रासायनिक तत्त्व परमाणु क्रमांक के बढ़ते क्रम में सजाये गये हैं तथा आवर्त, प्राथमिक समूह, द्वितीयक समूह में वर्गीकृत किया गया है। वर्तमान आवर्त सारणी मैं ११८ ज्ञात तत्व सम्मिलित हैं। सबसे पहले रूसी रसायन-शास्त्री मेंडलीफ ने सन १८६९ में आवर्त नियम प्रस्तुत किया और तत्वों को एक सारणी के रूप में प्रस्तुत किया। इसके कुछ महीनों बाद जर्मन वैज्ञानिक लोथर मेयर ने भी स्वतन्त्र रूप से आवर्त सारणी का निर्माण किया। मेन्देलेयेव की सारणी से अल्फ्रेड वर्नर ने आवर्त सारणी का वर्तमान स्वरूप निर्मित किया। सन १९५२ में कोस्टा रिका के वैज्ञानिक गिल चावेरी ने आवर्त सारणी का एक नया रूप प्रस्तुत किया जो तत्वों के इलेक्ट्रानिक संरचना पर आधारित था। रसायन शास्त्रियों के लिये आवर्त सारणी अत्यन्त महत्वपूर्ण एवं उपयोगी है। इसके कारण कम तत्वों के गुणधर्मों को ही याद रखने से काम चल जाता है क्योंकि आवर्त सारणी में किसी समूह या किसी आवर्त में गुणधर्म एक निश्चित क्रम से एवं तर्कसम्मत तरीके से बदलते हैं। नीचे आवर्त सारणी का आधुनिक रूप दिखाया गया है जिसमें १८ वर्ग तथा ७ आवर्त हैं- आवर्त सारणी के इस प्रचलित प्रबन्ध में लैन्थनाइड और ऐक्टिनाइड को अन्य धातुओं से अलग रखा गया है। किसी एक वर्ग के सभी तत्त्वों के परमाणुओं के सबसे बाहरी कक्षा में इलेक्ट्रानों की संख्या (अर्थात 'संयोजक इलेक्ट्रानों' की संख्या) समान होती है। इस कारण किसी एक वर्ग के सभी तत्वों के मुख्य गुण समान होते हैं। हल्की धातुएँ - वर्ग 1 और 2 . अल्कली धातुएं - वर्ग 1. अल्कलाइन मृदा धातुएं - वर्ग 2. भारी धातुएँ या संक्रमण धातुएँ' - वर्ग 3, 4, 5, 6, 7, 8, 9, 10, 11 और 12 . अधातुएँ - वर्ग 13, 14, 15, 16 और 17. अक्रिय गैसें - वर्ग 18 . 
खण्ड या ब्लॉक संयोजक इलेक्ट्रानों के आधार पर तत्वों को 4 खण्डों में बाँटा गया है- s, p, d, f . s-block – वर्ग 1 तथा 2 . p-block – वर्ग 13 से 18 . d-block – वर्ग 3 से 12 . f-block – लैन्थेनाइड और ऐक्टिनाइड (Lanthanide and Actinide series). प्रतिनिधि तत्व (Representative Elements या Normal elements या Typical elements) – s-block और p-block के तत्वों को सम्मिलित रूप से संक्रमण तत्व (Transition Elements) – d-block के तत्व अन्तरिक संक्रमण तत्व (Inner Transition Elements) – f-block के तत्व -- इन्हें विरल मृदा तत्व (Rare Earth Elements) भी कहते हैं। https://zh.wikipedia.org/zh-cn/%E9%BA%92%E9%BA%9F https://upload.wikimedia.org/wikipedia/commons/b/bb/%E4%B8%89%E5%B3%BD%E8%A1%8C%E4%BF%AE%E5%AE%AE%E9%BA%92%E9%BA%9F%E9%8E%AE%E9%96%80%E7%8D%B8.jpg 中文(繁體)‎: 三峽行修宮麒麟鎮門獸,新北市三峽區嘉添里 麒麟,亦作骐𬴊,是中国古代神话传说中的神兽,是建马的后代,其祖先为应龙。常与龙马混淆。中国古代用麒麟象征祥瑞,公兽为麒,母兽为麟,据说能活两千年。性情温和,身上虽有可攻击敌人的武器,但不伤人畜,不践踏昆虫花草,故称为仁兽。 麒麟的首似龙,形如马,状比鹿,尾若牛尾,背上有五彩毛纹,腹部有黄色毛,口能吐火,声音如雷。相传只在太平盛世或世有圣人时才会出现。所以被称为瑞兽。汉许慎《说文解字》:“麒,仁兽也,麋身牛尾一角;麐,牝麒也。” 据香港历史文化学者叶德平先生说,中国民间信仰中之中,有所谓“四灵”之说。《礼记・礼运》曰:“麟、凤、龟、龙,谓之四灵。”它格外受客家人的重视,在节日庆典之中,常常看到它的身影。 麒麟与龙、凤一样,都是人们虚拟出来的瑞兽,被赋予美丽的想像。传说的麒麟是十分温驯和善的,不会伤害生灵,甚至连草木也不会折断,堪称“仁兽”,故格外受到以耕读为务的客家人所崇拜。 据传孔子出生时有麒麟显现,所以民间认为麒麟会给人们带来儿子,使家族兴旺,因此有麒麟送子之说,也把杰出的儿童称为“麒麟儿”、“麟儿”。此后,民间慢慢出现“麒麟送子图”之作。 作为木板画,上刻对联“天上麒麟子,地上状元郎””,以此为佳兆。民间普遍认为,求拜麒麟可以生育得子。 唐杜甫《徐卿二子歌》:“君不见徐卿二子多绝奇。感应吉梦相追随。孔子释氏亲抱送,并是天上麒麟儿。” 麒麟一称为龙之子,属龙族,瑞兽,与龙凤龟合称为四灵,因此麒麟图案常作为吉祥,仁爱之符号,被中国古代各朝朝政常采用。史载汉宣帝在未央宫建有麒麟阁,绘功臣图像,以表嘉奖和向天下昭示其爱才之心。 《明会典》记载,洪武二十四年(1391年)规定,公、侯、驸马、伯以麒麟作为补服图案。故称一品麒麟。 清朝时,武职官员一品的补子徽饰为麒麟。 “麟止”是指绝笔,元狩元年(前122年)冬十月汉武帝至雍(今陕西凤翔)获白麟,一角而五趾,作《白麟之歌》,司马迁作《史记》于此处止笔。 《史记·太史公自序》:“于是卒述陶唐以来,至于麟止。” 麒麟因其深厚的文化内涵,在中国传统民俗礼仪中,被制成各种饰物送给未成年的儿童佩戴,有祈福和安佑的用意。 曹雪芹《红楼梦》一书中的第三十一回和三十二回,大篇幅写“因麒麟伏白首双星”,这里的麒麟不仅是史湘云的护身符,也是暗示她婚配的一件信物。 黄梅戏《女驸马》中,一对玉麒麟也是代表爱情的见证。女主人翁与男主人翁受阻于女方父母的决定,女主人翁交于男主人翁一只玉麒麟,发誓“生生死死不变心,清风明月作见证,分开一对玉麒麟,这只麒麟交于你,这只麒麟留在身,麒麟成双人成对,散心两意天地不容”。等到双方冲破重重阻挠,有情人终成眷属,“麒麟成双人成对,并蒂花开万年红”,大喜之夜双方麒麟终于成对。 蒙通联足球俱乐部的吉祥物为麒麟。 https://ja.wikipedia.org/wiki/%E4%B9%BE%E3%83%89%E3%83%83%E3%82%AF https://upload.wikimedia.org/wikipedia/commons/9/94/USS_Greeneville_%28SSN_772%29_-_dry_dock_Pearl_Harbor_%281%29.jpg English: The USS Greeneville (SSN 772) sits atop blocks in Dry Dock #1 at the Pearl Harbor Naval Shipyard and Intermediate Maintenance Facility, Pearl Harbor, Hawaii, on Feb. 21, 2001. The Los Angeles class attack submarine is dry-docked to assess the damage and perform necessary repairs following a Feb. 9 collision at sea with the Japanese fishing vessel Ehime Maru off the coast of Honolulu, Hawaii. 乾ドックとは、船体の検査や修理などのために水を抜くことができるドックのこと。船渠、乾船渠とも。 https://zh-yue.wikipedia.org/wiki/%E5%8D%83%E8%91%89%E6%B6%BC%E5%B9%B3 https://upload.wikimedia.org/wikipedia/commons/e/ea/Ryohei_Chiba_%28%E5%8D%83%E8%91%89%E6%B6%BC%E5%B9%B3%29_at_MTV_VMAJ_2014.jpg MTV VMAJ 2014_016_w-inds. https://ru.wikipedia.org/wiki/Baseodiscus_princeps https://upload.wikimedia.org/wikipedia/commons/9/90/Baseodiscus_princeps.png English: Baseodiscus princeps(=Taeniosoma princeps; Nemertea: Heterinemerta: Valenciniidae), large individual from Yakutat, Alaska. Базеодискус превосходный — вид невооружённых немертин из семейства Valenciniidae. Тело жёлтое, густо покрытое небольшими тёмно-красными пятнами неправильной формы. Тело может достигать внушительных размеров, иногда 2 м и более. Обитает в литоральной зоне морей под камнями. Встречается на западном побережье Северной Америки от Аляски до залива Пьюджет и в Японском море. Базеодискус превосходный (лат. Baseodiscus princeps) — вид невооружённых немертин из семейства Valenciniidae. 
Тело жёлтое, густо покрытое небольшими тёмно-красными пятнами неправильной формы. Тело может достигать внушительных размеров, иногда 2 м и более. Обитает в литоральной зоне морей под камнями. Встречается на западном побережье Северной Америки от Аляски до залива Пьюджет и в Японском море. https://es.wikipedia.org/wiki/Mus https://upload.wikimedia.org/wikipedia/commons/9/9e/31almus.png Algunas jugadas con nombre propio Mus / Algunas jugadas con nombre propio Una buena jugada al mus muy clásica: el solomillo, trío de reyes para la grande y los pares y 31 para el juego. English: example of a good combination of cards in the Spanish card game "el mus"Español: Una buena y clásica mano en el mus: treinta y una en el juego, con tres reyes El mus es un juego de naipes​ ampliamente extendido en España y también muy jugado en algunos países de Hispanoamérica, como Argentina, Chile, Colombia o México, y en algunas regiones del sur de Francia. Se trata de un juego con más de doscientos años de historia y cuyo origen mayormente aceptado estaría en el País Vasco, ​ aunque también hay versiones que discuten ese origen.​ Para el mus se utiliza la baraja española y normalmente lo juegan cuatro personas agrupadas en dos parejas. Las reglas pueden variar mucho dependiendo de las costumbres locales del lugar donde se juegue, pero cada mano siempre consistirá de las siguientes jugadas llamadas "lances":​ Grande: la combinación es mejor cuanto mayor sea el valor de las cartas. Chica: la combinación es mejor cuanto menor sea el valor de las cartas. Pares: la combinación es mejor cuantas más cartas iguales haya y mayor sea su valor. Juego: consiste en igualar o superar la cifra de 31 sumando el valor de cada carta. Si nadie alcanza esta cifra, se jugará al "punto" y la mejor combinación será la que más se aproxime a 30. Hay algunos duples que tienen un nombre específico: Duples gallegos: Son aquellos formados por dos ases y dos reyes. Duples castellanos: Son aquellos compuestos por dos reyes y dos caballos. En algunas regiones, a esta jugada también se la denomina como duples nacionales, polacos, alemanes o imperiales. Jacobiana o duples andaluces: jugada de duples compuesta por dos reyes y dos sotas. Tembleque: jugada de duples compuesta por dos reyes y dos cincos. Otras jugadas con nombre propio son: Barco o piara: cuatro reyes. Es la mejor jugada a grandes y a pares. La jugada del tío Pedrete o tío Perete, perete, tanganete, cagalete o peterete: se consigue si un jugador tiene un 4, un 5, un 6 y un 7 de cualquier palo. Al ser, con diferencia, la peor jugada que se puede obtener, el jugador con esta mano dice «perete», recibe una chica y se descarta de las cuatro cartas, recibiendo otra mano. Esta jugada no está admitida en todos los sitios. Solomillo o la bonita: jugada de 31 para juego, compuesta por tres reyes y un as. Besugo: jugada compuesta por tres ases y un rey. Algunas variantes admiten la seña para esta jugada, que consiste en poner boca de pez para informar de esta jugada, inversa al solomillo. De esta forma, se advierte a la pareja de que se tiene un rey y una buena jugada a chica. Mus negro o corrida: consiste en darse mus a pesar de tener una jugada buena, esperando que sea cortado por la pareja contraria y poder ganar así más tantos al revocar, puesto que se supone que no se lo esperan al no haberlo cortado. Evidentemente, tiene el riesgo de que no se corte el mus y se pierda así la jugada, por lo que debe sopesarse bien su utilización. 
Equidistante, para todo y para nada, ni para reír ni para llorar, la del tonto, la ansiosa o la del tío Paco: rey, caballo, as y cuatro, jugada para grande con rey y caballo y jugada para chica con as y cuatro. Como se puede comprobar, hay jugada para grande y chica, pero ninguna de las dos es buena, lo que en ocasiones se expresa como «dos a grande, dos a chica y pierdes cuatro». Jugada del sastre: se dice cuando se tienen tres cuatros. Las de Madrid: cuando al lance de juego se tienen 37. «37 nunca pierde»: llevando 37 al juego se ve el envite, pensando que el rival va de farol. Toribio: dos ases. El Banco Bilbao: jugada de tres o cuatro caballos. La jugada ladrona o la de Benito: rey, caballo y dos ases. Escopeta y perro: un rey y un caballo. Escopeta, perro y gato: rey, caballo y sota. Juego de tacón o llevar la tuerta: treinta y una. El vitor: tres caballos y una sota. Bocarrana: el cinco de bastos. Se dice que quien lo tiene no gana. Cordobeses: tres sotas y un as, buena jugada en pares y en juego, cuya seña es llevarse el dedo índice a la barbilla, debajo del labio inferior. Juego jesusiano: tienes 31 con sota, caballo, siete y cuatro; o, en su defecto, sota, caballo, cinco y seis. https://en.wikipedia.org/wiki/U.S._Route_41_in_Michigan https://upload.wikimedia.org/wikipedia/commons/2/20/Marquette%2C_Michigan_-_Buildings.jpg U.S. Route 41 in Michigan U.S. Route 41 in Michigan / Business loops The former Bus. US 41 along Washington Street in downtown Marquette from left to right: the Old State Savings Bank Building (in red sandstone), the Wells Fargo Bank Main Branch (combining the First National Bank of Marquette Building and the Kaufman Building) and various store fronts along Washington Street, Marquette, Michigan, US US Highway 41 is a part of the United States Numbered Highway System that runs from Miami, Florida, to the Upper Peninsula of the US state of Michigan. In Michigan, it is a state trunkline highway that enters the state via the Interstate Bridge between Marinette, Wisconsin, and Menominee, Michigan. The 278.769 miles of US 41 that lie within Michigan serve as a major conduit. Most of the highway is listed on the National Highway System. Various sections are rural two-lane highway, urbanized four-lane divided expressway and the Copper Country Trail National Scenic Byway. The northernmost community along the highway is Copper Harbor at the tip of the Keweenaw Peninsula. The trunkline ends at a cul-de-sac east of Fort Wilkins State Park after serving the Central Upper Peninsula and Copper Country regions of Michigan. US 41 passes through farm fields and forest lands, and along the Lake Superior shoreline. The highway is included in the Lake Superior Circle Tour and the Lake Michigan Circle Tour and passes through the Hiawatha National Forest and the Keweenaw National Historical Park. There have been three business loops for US 41: Ishpeming–Negaunee, Marquette and Baraga. Only the business loop serving Ishpeming and Negaunee is still a state-maintained trunkline, but it is no longer designated Bus. US 41. US 41/M-28 was relocated to bypass the two cities' downtowns in 1937. The highway through downtown Ishpeming and Negaunee later carried the ALT US 41/ALT M-28 designation before being designated Bus. M-28 in 1958. The western end of the business loop was transferred to local government control when Bus. M-28 was moved along Lakeshore Drive in 1999. Bus. US 41 in Marquette was first shown on a map in 1964 after the construction of the Marquette Bypass. 
It was later designated Bus. US 41/Bus. M-28 on a map in 1975; this second designation was removed from maps by 1982. The entire business loop was turned back to local control in a "route swap" between the City of Marquette and MDOT announced in early 2005. The proposal transferred jurisdiction on the unsigned M-554 and the business route from the state to the city. The state would take jurisdiction over a segment of McClellan Avenue to be used to extend M-553 to US 41/M-28. In addition, MDOT would pay $2.5 million (equivalent to $3.2 million in 2018) for reconstruction work planned for 2007. The transfer would increase Marquette's operational and maintenance liability expenses by $26,000 (equivalent to $32,832 in 2018) and place the financial burden of the future replacement of a stop light on the city. On October 10, 2005, MDOT and Marquette transferred jurisdiction over the three roadways. As a result, Bus. US 41 was decommissioned when the local government took control over Washington and Front streets. As a result of the decommissioning, the 2006 maps did not show the former business loop. The third business loop was in Baraga in the early 1940s. As shown on the maps of the time, US 41 was relocated in Baraga between the publication of the December 1, 1939, and the April 15, 1940, MSHD maps. A business loop followed the old routing through downtown. The last map that shows the loop was published on July 1, 1941. Bus. US 41 is shown under local control on the June 15, 1942, map. https://zh.wikipedia.org/zh-tw/CRH3 https://upload.wikimedia.org/wikipedia/commons/e/ec/CRH3A-3087_EMU_at_Chengdu_East_Railway_Station.jpg 和谐号CRH3型电力动车组 / 概要 / CRH3A 中文: 成都东站 西成高铁D4262次和谐号CRH3A-3087电力动车组English: Xi'an-Chengdu High-speed Railway D4262 CRH3A-3087 EMU at Chengdu Dong (East) Railway Station 和諧號CRH3型電聯車,是中華人民共和國鐵道部為營運新建的高速城際鐵路及客運專線,而向德國西門子交通集團和中國北車集團唐山軌道客車有限責任公司訂購的CRH系列高速動車組。中國鐵道部將所有引進國外技術、聯合設計生產的中國鐵路高速車輛均命名為「和諧號」。 CRH3A型的原型衍生自CRH5和CJ1平台,由中國北車主導,中國北車所屬長客股份公司設計生產,並於2013年6月8日在長客亮相。CJ1型電聯車以CRH380BL技術平台為基礎,借鑑了CRH380BL、CRH380CL、CRH380B、CRH5型電聯車的優點,研製開發的自主智慧財產權電聯車。 CRH3A型電聯車列車不完全與CJ1型實驗列車相同,採用新頭型,統型化的車內定員與設施配置,在優化空氣動力性能的同時,也減少了乘客上車後難於找到相應需求設施的難度,有利於乘車體驗的統一,減少乘客學習設施分布的需要。 CRH3A型電聯車列車的制軔能力比現有的高速電聯車列車提高了15%。另外,CRH3A列車也採用了中國自主研發的監測控制系統,以對列車的運營狀態進行實時監控。該車型採用全封閉車體,以期在橋隧比較大的線路運營時保證較好的氣密性與乘坐舒適度。該車型內飾採用中國電力電聯車統型內飾設計,沒有「面壁」(即靠車廂兩側的座位無車窗)情況,亦將車窗加大以獲得更好的採光性能。一等座列車採用暖光照明,而二等座車廂為普通白光照明。在行李架下方,設置有座位標識貼。 該型號列車為白色底色,車窗黑色塗裝並在車窗下有單條藍色腰線。另外該列車兩端的駕駛室觀察窗有金色點綴,與CR400BF「復興號」電聯車塗裝有相似之處,火車迷暱稱為「黃金眼」。 2017年9月份起,CRH3A型陸續下綫交付,首先用於西成客運專綫本綫車使用,但由於無法應付西成客運專線秦嶺段的坡度,後退出西成客運專線的運營並陸續轉配至成渝客運專線、成遂渝鐵路、渝貴鐵路、貴廣客運專線運營。 https://fr.wikipedia.org/wiki/Refuge_du_Nant_du_Beurre https://upload.wikimedia.org/wikipedia/commons/0/0c/Refuge_du_Nant_du_Beurre-ao%C3%BBt_2016-6.jpg Français&#160;: Refuge du Nant du Beurre (2080 m). Le refuge du Nant du Beurre est un refuge de montagne situé sur la commune de La Léchère, dans le département de la Savoie en région Auvergne-Rhône-Alpes. Le refuge du Nant du Beurre est un refuge de montagne situé sur la commune de La Léchère, dans le département de la Savoie en région Auvergne-Rhône-Alpes. 
https://zh.wikipedia.org/zh-tw/%E8%97%A4%E9%98%AA%E7%AB%99 https://upload.wikimedia.org/wikipedia/commons/4/46/JRW_kinki-H.svg 藤阪站是一個位於日本大阪府枚方市藤阪南町二丁目,屬於西日本旅客鐵道片町線的鐵路車站。車站編號為JR-H28。 藤阪站(日語:藤阪駅/ふじさかえき Fujisaka eki /)是一個位於日本大阪府枚方市藤阪南町二丁目,屬於西日本旅客鐵道(JR西日本)片町線(學研都市線)的鐵路車站。車站編號為JR-H28。 https://sv.wikipedia.org/wiki/Athis-de-l%27Orne http://upload.wikimedia.org/wikipedia/commons/a/a7/Map_commune_FR_insee_code_61007.png Detaljkarta över kommunen. Map commune FR insee code 61007.png Athis-de-l'Orne är en kommun i departementet Orne i regionen Normandie i nordvästra Frankrike. Kommunen ligger i kantonen Athis-de-l'Orne som tillhör arrondissementet Argentan. År 2009 hade Athis-de-l'Orne 2 605 invånare. Athis-de-l'Orne är en kommun i departementet Orne i regionen Normandie i nordvästra Frankrike. Kommunen ligger i kantonen Athis-de-l'Orne som tillhör arrondissementet Argentan. År 2009 hade Athis-de-l'Orne 2 605 invånare. https://qu.wikipedia.org/wiki/Santiago_Qullana_kantun_(Pedro_Domingo_Murillo) http://upload.wikimedia.org/wikipedia/commons/a/a0/Valle_de_la_Luna_%28Bolivien%29.jpg Santiago Qullana kantun (Pedro Domingo Murillo) Santiago Qullana kantun (Pedro Domingo Murillo) bizarre Formationen im Mondtal bei La Paz Santiago Qullana kantun nisqaqa Buliwya mamallaqtapi huk kantunmi, Chuqiyapu suyupi, Pedro Domingo Murillo pruwinsyapi, Miqapaka munisipyupi. Uma llaqtanqa Santiago Qullana llaqtam. Santiago Qullana kantun (kastilla simipi: Cantón Santiago de Collana) nisqaqa Buliwya mamallaqtapi huk kantunmi, Chuqiyapu suyupi, Pedro Domingo Murillo pruwinsyapi, Miqapaka munisipyupi. Uma llaqtanqa Santiago Qullana llaqtam (868 llaqtayuk, 2001 watapi). https://ro.wikipedia.org/wiki/Birkeland https://upload.wikimedia.org/wikipedia/commons/7/7c/Birkenes_IMG_1676_rv41_birkenes.JPG English: Image from the settlement Birkeland, center of the municipality of Birkenes, Aust-Agder county (Norway). Birkeland este o localitate din comuna Birkenes, provincia Aust-Agder, Norvegia, cu o suprafață de 2 km² și o populație de 2.645 locuitori. Birkeland este o localitate din comuna Birkenes, provincia Aust-Agder, Norvegia, cu o suprafață de 2 km² și o populație de 2.645 locuitori (2013). https://ru.wikipedia.org/wiki/%D0%A1%D0%BF%D0%B8%D1%81%D0%BE%D0%BA_%D0%B3%D1%83%D0%B1%D0%B5%D1%80%D0%BD%D0%B0%D1%82%D0%BE%D1%80%D0%BE%D0%B2_%D0%9E%D1%80%D0%B5%D0%B3%D0%BE%D0%BD%D0%B0 https://upload.wikimedia.org/wikipedia/commons/5/52/Oswald_West.jpg Список губернаторов Орегона Список губернаторов Орегона / Губернаторы штата Орегон Губерна́тор Орего́на является главой исполнительной власти штата и главнокомандующим вооружёнными и военно-морскими силами штата Орегон. Губернатор обеспечивает соблюдение законов штата, имеет право утверждать, либо налагать вето на законопроекты, принятые законодательным собранием штата, созывать легислатуру и миловать преступников и смягчать приговоры, за исключением случаев государственной измены и импичмента. Губернатор должен быть гражданином США старше 30 лет и жить в штате Орегон не менее трёх лет до выборов. Орегон является одним из семи штатов, где нет вице-губернаторов. В случае утраты губернатором трудоспособности, его смерти, отставки или отстранения от должности, образовавшееся вакантное место до следующих выборов занимает избранный и имеющий право занимать эту должность человек в порядке преемственности, определённом конституцией штата: секретарь штата, казначей штата, президент Сената штата, спикер Палаты представителей штата. 
В штате Орегон было 38 губернаторов, из них 21 были республиканцами и 16 — демократами. Нынешний губернатор Кейт Браун вступила в должность 18 февраля 2015 года. https://sl.wikipedia.org/wiki/Mestna_cerkev_svetega_Dionizija https://upload.wikimedia.org/wikipedia/commons/3/37/Esslingen_aN%2C_St._Dionys%2C_Ausgrabungen%2C_Schiff_St._Vitalis.jpg Mestna cerkev svetega Dionizija Mestna cerkev svetega Dionizija / Muzej English: Excavations below the Stadtkirche St. Dionys (Saint Dionysius church) in Esslingen am Neckar, Germany: excavations inside the foundations of the nave of the 2nd Saint Vitalis church (St. Vitalis II), which was erected in the 1st half of the 9th century. The wall in the background is the foundation for the main façade of St. Vitalis II; at the top we see the doorstep of the church’s main portal. — Sorry for the bad image quality: it’s very dark in the excavations and neither flashlight nor tripod are allowed. But I was explicitely allowed to take photographs without flashlight. Deutsch: Ausgrabungen unter der Stadtkirche St. Dionys in Esslingen am Neckar: Ausgrabunden zwischen den Fundamenten für das Schiff der zweiten St.-Vitalis-Kirche (St. Vitalis II), die in der 1. Hälfte des 9. Jahrhunderts errichtet wurde. Die Mauer im Hintergrund ist das Fundament der Hauptfassade von St. Vitalis II; oben auf sieht man die Schwelle des Hauptportals. – Sorry für die schlechte Bildqualität: in den Ausgrabungen ist es sehr dunkel und weder Blitz noch Stativ sind erlaubt. Das Fotografieren ohne diese Hilfsmittel wurde mir jedoch ausdrücklich gestattet. Protestantska mestna cerkev svetega Dionizija v Esslingenu je gotska cerkev, ki stoji na južni strani Marktplatza in skupaj s katoliško cerkvijo svetega Pavla in Marijino cerkvijo sestavlja kompleks, ki daje podobo mestu. V letih 1960 do 1963 so bile med namestitvijo ogrevalnega sistema arheološke raziskave območja pod cerkvijo svetega Dionizija in okoli nje. Najdbe, vključno z nagrobno ploščo (Nordman), je mogoče danes videti v arheološkem muzeju mestne cerkve. Izkopavanja pod cerkvijo so bila povod za ustanovitev oddelka za arheologijo pri državnem uradu za kulturno dediščino. https://de.wikipedia.org/wiki/Mondaufgang_am_Meer https://upload.wikimedia.org/wikipedia/commons/c/c0/Sitzender_Mann.jpg Mondaufgang am Meer / Studien und Zeichnungen Deutsch: Sitzender Mann, um 1822, Pinsel in Schwarz, 9,9 x 6 cm Mondaufgang am Meer, auch Mondschein auf ruhigem Meer ist ein 1822 entstandenes Gemälde von Caspar David Friedrich. Das Bild in Öl auf Leinwand im Format 55 cm × 71 cm befindet sich in der Berliner Nationalgalerie zusammen mit seinem Pendant Dorflandschaft bei Morgenbeleuchtung. Friedrich verwendet in dem Gemälde die um 1822 angefertigte Pinselzeichnung (Durchzeichnung) Sitzender Mann sowie die 1818 entstandene Federzeichnung (Durchzeichnung) Zwei sitzende Frauen. Nach Willi Geismeier sollen die Pausen der Staffagefiguren von fremder Hand stammen und von Georg Friedrich Kersting ausgeführt worden sein. Dem steht die gleichförmige Linienführung entgegen, die eher auf die Durchzeichnungen von der Glasplatte der Camera obscura oder Nachzeichnungen des Blicks durch das Prisma der Camera lucida hindeutet. Nach neuesten Forschungen geht die Verwendung der Durchzeichnung auf die malerische Ausführung des Ölbildes und nicht auf dessen Unterzeichnung zurück. Vermutlich benutzte Friedrich die Pause mit den beiden Frauen auch für das verschollene Seestück mit Mondaufgang. 
Möglicherweise liegt dem Vordergrund eine verschollene Zeichnung vom Strand bei Stubbenkammer zu Grunde, die auf der Rügenreise im August 1818 entstanden sein müsste. Der im Bild dargestellte Strand ist für diese Gegend charakteristisch. https://sr.wikipedia.org/wiki/%D0%9C%D0%B5%D0%BA%D1%81%D0%B8%D0%BA%D0%BE http://upload.wikimedia.org/wikipedia/commons/b/b3/BenitoJuarez.jpg Мексико / Историја / 19. век Бенито Хуарез, први мексички председник индијанског порекла. Мексико, званично Сједињене Мексичке Државе, држава је у Северној Америци која се на северу граничи са САД, на југоистоку са Гватемалом и Белизеом, на западу са Тихим океаном, а на истоку са Мексичким заливом и Карипским морем. Површина Мексика износи 1.972.550 km² и по томе је 13. држава у свету. Главни и највећи град Мексика је Мексико Сити, а други већи градови су Екатепек де Морелос, Гвадалахара, Пуебла, Сијудад Хуарез, Тихуана, Монтереј и Леон. Број становника Мексика, према подацима из 2015. године, износио је 125.280.000, што је на 11. месту на свету. У преколумбовском Мексику постојале су многе индијанске културе које су створиле напредне цивилизације као што су Олмеци, Толтеци, Теотивакан, Запотеци, Маје и Астеци. Шпанија је 1521. године покорила ову територију и организовала је у Нову Шпанију. Мексико је стекао независност од Шпаније 1810. године. Период после стицања независности обележиле су привредне нестабилности, Америчко-мексички рат и територијални уступци САД, грађански рат, два царства и диктатура. Диктатура је 1910. године довела до Мексичке револуције, што је довело до доношења Устава из 1917. и успостављања данашњег политичког система. Након проглашења независности, Агустин де Итурбиде се одмах прогласио првим царем Мексика. Економска и политичка ситуација у Царству постајала је неподношљива, те је 1823. године Итурбиде био збачен и протеран, а Мексико је био проглашен републиком под именом Сједињене Мексичке Државе. Следеће године, 1824, проглашен је републикански устав, а Гвадалупе Викторија постао је први председник Мексика. Следећи период мексичке историје био је веома буран и нестабилан, како на политичком, тако и на економском плану. Валентин Гомез Фаријас је 1833. године извео више либералних реформи, што је изазвало револт у конзервативним круговима који је довео до распуштања прве федералне републике и стварања прве централистичке републике. Генерал Антонио Лопез де Санта Ана прогласио је 1835. године тзв. Седам закона, чиме је изазвао сепаратистичке реакције у више департмана. Сепаратистички устанци су углавном бивали угушени, осим у Тексасу, који је прогласио своју независност 1836. године, и које су потом анектирале Сједињене Државе. Године 1841. и Јукатан је такође прогласио своју независност. Тек 1848. године, поново је постао део мексичке државе. Између 1846. и 1848. Мексико је био у рату са САД због спора у вези са тексашким територијама. Рат је заршен споразумом Гвадалупе—Идалго, којим је Мексико био приморан да се одрекне више од половине својих територија у корист САД. Након завршетка рата, сукоби између политичких струја у земљи су се наставили, што је довело до једанаестог доласка на власт Санта Ане (1853—1855) који је по други пут успоставио диктатуру. Године 1854. либерали су се дигли на оружје под вођством Хуана Алвареза, што је довело до збацивања Санта Ане и доласка либерала на власт. Проглашење Реформских закона либералне владе није одговарал интересима конзервативних група, а нарочито цркви. Године 1857. 
проглашен је нови устав Мексика, који је, између осталог, успоставио одвајање цркве од државе, прогласивши Мексико лаичком државом, као и федерализам као облик владавине. Пошто конзервативни кругови нису хтели да признају тај устав, 1858. године избио је Реформски рат током којег су обе стране имале своје владе. Рат се завршио 1861. године победом либерала, а на власт је дошао Бенито Хуарез, први мексички председник индијанског порекла (припадао је народу Запотека). Током шездесетих година 19. века, Мексико је претрпео инвазију Француске која је помагала конзервативце и чији је резултат био успостављање Другог мексичког царства, на чији је престо сео Максимилијан Хабзбуршки под именом Максимилијан I од Мексика. Француска интервенција је била завршена 1867. коначним поразом конзервативаца. Максимилијан је био ухапшен, суђено му је 14. а погубљен је 19. јуна 1867. године у Сантијагу де Керетару. Бенито Хуарез је остао председник све до своје смрти 1872. године. Последње године његове власти доживеле су тешке критике разних либералних фракција. Након Хуарезове смрти, на месту председника нашао се Себастијан Лердо де Техада, за кога се говорило да је јакобинац. Након Техадиног неуспелог покушаја реизбора, на власт је дошао Порфирио Дијаз, републикански генерал током француске интервенције. Порфирио Дијаз је владао у два наврата, од 1786. до 1880, и од 1884. до 1911. Током овог периода, познатог под именом Порфиријато, Мексико је доживео значајан економски напредак захваљујући страним улагањима. С друге стране, овај период је такође познат и по великој друштвеној неједнакости и политичкој репресији. Радници и сељаци су живели веома бедно, политичка опозиција је била на силу елиминисана, а побуњеници су бивали протерани или слати на принудан рад. https://kk.wikipedia.org/wiki/%D0%96%D0%BE%D2%93%D0%B0%D1%80%D2%93%D1%8B_%D0%B6%D1%8B%D0%BB%D0%B4%D0%B0%D0%BC%D0%B4%D1%8B%D2%9B%D1%82%D1%8B_%D0%BF%D0%BE%D0%B9%D1%8B%D0%B7%D0%B4%D0%B0%D1%80%D1%8B%D0%BD%D1%8B%D2%A3_%D1%82%D1%96%D0%B7%D1%96%D0%BC%D1%96 https://upload.wikimedia.org/wikipedia/commons/7/72/Italo_NTV_Class_ETR_575_No_575-154.jpg Жоғарғы жылдамдықты пойыздарының тізімі Жоғарғы жылдамдықты пойыздарының тізімі Nuovo Trasporto Viaggiatori (NTV) is an Italian company which is Europe's first private open access operator of 300 km/h high-speed trains. NTV was created by four Italian businessmen to compete with Trenitalia. An order for 25 Alstom Automotrice à grande vitesse (AGV) train-sets each with 11 cars was announced on 17 January 2008. No 575-174 waits to depart at Santa Lucia Station, Venice Бұл әлем елдеріндегі жоғарғы жылдамдық пойыздарының тізімі. Бұл әлем елдеріндегі жоғарғы жылдамдық пойыздарының тізімі. https://nl.wikipedia.org/wiki/Lijst_van_personen_geboren_op_8_februari http://upload.wikimedia.org/wikipedia/commons/5/5e/Jan_Alberts_Meursinge.jpg Lijst van personen geboren op 8 februari Lijst van personen geboren op 8 februari Jan Alberts Meursinge geboren op 8 februari 1795 Nederlands: Afbeelding van Jan Alberts Meursinge (1795-1877), burgemeester van Anloo en Lid van Gedeputeerde Staten van Drenthe Dit is een lijst van personen die geboren zijn op 8 februari. De inhoud van deze pagina is gebaseerd op gegevens in Wikidata. Als er onvolkomenheden in deze lijst zitten, gelieve dan aldaar de gegevens aan te passen. Achter iedere persoon is hiervoor een link aanwezig met de aanduiding bewerken. Wijzigingen in Wikidata worden met enige regelmatigheid geautomatiseerd overgenomen naar deze pagina. 
412 - Proclus, filosoof, schrijver, wiskundige en mythograaf uit het Byzantijnse Rijk bewerken 1284 - Eduard van Savoye, graaf, aristocraat bewerken 1291 - Alfons IV van Portugal, Portugese vorst bewerken 1293 - Clementia van Hongarije, Franse bewerken 1404 - Constantijn XI Palaiologos Dragases, keizer, Byzantijns keizer uit het Byzantijnse Rijk bewerken 1424 - Cristoforo Landino, filosoof, schrijver en dichter bewerken 1487 - Ulrich van Württemberg, Duitse graaf, aristocraat bewerken 1491 - Francesco Maria Sforza, Italiaanse graaf van Pavia bewerken 1513 - Daniele Barbaro, kardinaal en ambassadeur, vertaler, wetenschapper, wiskundige, rooms-katholiek priester, diplomaat en klerk uit de Republiek Venetië bewerken Dit is een lijst van personen die geboren zijn op 8 februari. De inhoud van deze pagina is gebaseerd op gegevens in Wikidata. Als er onvolkomenheden in deze lijst zitten, gelieve dan aldaar de gegevens aan te passen. Achter iedere persoon is hiervoor een link aanwezig met de aanduiding bewerken. Wijzigingen in Wikidata worden met enige (on)regelmatigheid geautomatiseerd overgenomen naar deze pagina. 412 - Proclus, filosoof, schrijver, wiskundige en mythograaf uit het Byzantijnse Rijk (overleden in 485) bewerken 1284 - Eduard van Savoye, graaf, aristocraat (overleden in 1329) bewerken 1291 - Alfons IV van Portugal, Portugese vorst (overleden in 1357) bewerken 1293 - Clementia van Hongarije, Franse (overleden in 1328) bewerken 1404 - Constantijn XI Palaiologos Dragases, keizer, Byzantijns keizer uit het Byzantijnse Rijk (overleden in 1453) bewerken 1424 - Cristoforo Landino, filosoof, schrijver en dichter (overleden in 1498) bewerken 1487 - Ulrich van Württemberg, Duitse graaf, aristocraat (overleden in 1550) bewerken 1491 - Francesco Maria Sforza, Italiaanse graaf van Pavia (overleden in 1512) bewerken 1513 - Daniele Barbaro, kardinaal en ambassadeur, vertaler, wetenschapper, wiskundige, rooms-katholiek priester, diplomaat en klerk uit de Republiek Venetië (overleden in 1570) bewerken 1574 - Willem van der Codde, Nederlandse rector magnificus van de Universiteit Leiden, theoloog, klassiek filoloog en academisch docent (overleden in 1625) bewerken 1589 - Peter Melander, Duitse graaf, officier (overleden in 1648) bewerken 1591 - Guercino, Italiaanse kunstschilder (overleden in 1666) bewerken 1593 - Louis de Nogaret de La Valette, Franse katholieke aartsbisschop, rooms-katholiek priester (overleden in 1639) bewerken 1634 - Theodosius III van Portugal, Portugese Prince of Brazil en Hertogen van Bragança, componist en astroloog (overleden in 1653) bewerken 1641 - Robert Knox, ontdekkingsreiziger uit het Koninkrijk Groot-Brittannië (overleden in 1720) bewerken 1650 - Jan Gijselingh, Nederlandse beeldhouwer (overleden in 1718) bewerken 1668 - Jan Six II, Nederlandse burgemeester van Amsterdam (overleden in 1750) bewerken 1677 - Jacques Cassini, Franse directeur, astronoom (overleden in 1756) bewerken 1708 - Václav Jan Kopřiva, Tsjechische componist, organist en muziekpedagoog (overleden in 1789) bewerken 1720 - Sakuramachi, Keizer van Japan uit het Tokugawa-shogunaat (overleden in 1750) bewerken 1727 - Jean-André Deluc, natuuronderzoeker, meteoroloog, geoloog en academisch docent uit de Republiek van Genève (overleden in 1817) bewerken 1741 - André Ernest Modeste Grétry, Belgische componist (overleden in 1813) bewerken 1744 - Karl Theodor von Dalberg, Duitse hertog, Keurvorst, aartsbisschop van Mainz en katholieke aartsbisschop, klerk en rooms-katholiek priester (overleden in 1817) 
bewerken 1753 - Paulus van der Heim, jonkheer, Eerste Kamerlid, Nederlands minister van Economische Zaken en Nederlands minister van Buitenlandse Zaken (overleden in 1823) bewerken 1762 - Gia Long, Vietnamese monarch (overleden in 1820) bewerken 1766 - Karel van Brunswijk-Wolfenbüttel, hertog van Brunswijk-Wolfenbüttel (overleden in 1806) bewerken 1768 - Anthony Carlisle, Britse chirurg en arts (overleden in 1840) bewerken 1770 - François Polfvliet, Belgisch lid van de Kamer van volksvertegenwoordigers (overleden in 1856) bewerken 1778 - Henri van den Hove, Belgisch lid van de Kamer van volksvertegenwoordigers en Tweede Kamerlid (overleden in 1842) bewerken 1781 - Wilhelmine von Sagan, Duitse hertogin, hertogin en prinses, salonnière en schrijver (overleden in 1839) bewerken 1785 - Jacobus Josephus van Rijckevorsel, Nederlandse politicus, koopman, tekenaar, lithograaf en glasschilder (overleden in 1862) bewerken 1786 - Charles de Bryas, Franse (overleden in 1853) bewerken 1787 - Charles Rousselle, Belgisch lid van de Kamer van volksvertegenwoordigers (overleden in 1867) bewerken 1792 - Caroline Augusta van Beieren, koningin-gemalin uit het Koninkrijk Beieren (overleden in 1873) bewerken 1793 - Seerp Brouwer, Tweede Kamerlid (overleden in 1856) bewerke https://zh.wikipedia.org/zh-sg/%E5%85%89%E5%AD%A6%E6%98%BE%E5%BE%AE%E9%95%9C http://upload.wikimedia.org/wikipedia/commons/3/31/Inverted_Microscope.jpg English: By Richard Wheeler (Zephyris) 2007. Zeiss ID 03 Inverted microscope for tissue culture. Deutsch: Inverses Mikroskop 光学显微镜是一种利用光学透镜产生影像放大效应的显微镜。 由物体入射的光被至少两个光学系统放大。首先物镜产生一个被放大实像,人眼通过作用相当于放大镜的目镜观察这个已经被放大了的实像。一般的光学显微镜有多个可以替换的物镜,这样观察者可以按需要更换放大倍数,也就是增加放大倍率,放大倍率是由目镜倍率乘上物镜倍率所得来的。这些物镜一般被安置在一个可以转动的物镜盘上,转动物镜盘就可以使不同的物镜方便地进入光路,物镜盘的英文是Nosepiece,又译作鼻轮。 十八世纪,光学显微镜的放大倍率已经提高到了1000倍,使人们能用眼睛看清微生物体的形态、大小和一些内部结构。直到物理学家发现了放大倍率与分辨率之间的规律,人们才知道光学显微镜的分辨率是有极限的,分辨率的这一极限限制了放大倍率的无限提高,1600倍成了光学显微镜放大倍率的最高极限,使得形态学的应用在许多领域受到了很大限制。光学显微镜的分辨率受到光波长的限制,一般不超过0.3微米。假如显微镜使用紫外线作为光源或物体被放在油中的话,分辨率还可以得到提高。 光学显微镜依样品的不同可分为反射式和透射式。反射显微镜的物体一般是不透明的,光从上面照在物体上,被物体反射的光进入显微镜。这种显微镜经常被用来观察固体等,多应用在工学、材料领域,在正立显微镜中,此类显微镜又称作金相显微镜。透射显微镜的物体是透明的或非常薄,光从可透过它进入显微镜。这种显微镜常被用来观察生物组织。 光学显微镜依其聚光镜和物镜的设计,可用来观察不同的样品。明视野用来观察薄的染色生物组织样品,暗视野功能的视野下,背景为黑色,能突显样品的细微面貌,观察未染色样品时,如活细胞,可利用相位差功能。另外还有微分干涉差功能,都常搭配在光学显微镜上。 依光源的不同,还有萤光显微镜、共聚焦显微镜等类别。 2014年10月8日,诺贝尔化学奖颁给了艾力克·贝齐格,W·E·莫尔纳尔 和斯特凡·W·赫尔,奖励其发展超分辨荧光显微镜,这将带来光学显微镜进入纳米级尺度中。 倒立显微镜(Inverted microscope)明视野用之照明光源和聚光镜是来自机身上方,光线穿过聚光镜到达样品,再穿过位于样品下方的物镜,然后借由反射镜和透镜到达观察者的眼睛或成像仪器。对萤光显微镜而言,萤光激发光源和物镜同位于底部。由于激发光源可以是高功率大型激光光源或弧光灯,倒立式的设计更能稳定显微镜镜的结构。倒立显微镜常用于观察培养中的细胞或组织,特别是应用在萤光的生物样品上。 https://lv.wikipedia.org/wiki/V%C4%81cijas_kancleru_uzskait%C4%ABjums https://upload.wikimedia.org/wikipedia/commons/3/3f/Kurt_Georg_Kiesinger_%28N%C3%BCrburgring%2C_1969%29.jpg Vācijas kancleru uzskaitījums Vācijas Federatīvā Republika (1949 — 1990) (bundeskanclers) Vācijas kancleru uzskaitījums / Uzskaitījums / Vācijas Federatīvā Republika (1949 — 1990) (bundeskanclers) Kurt Georg Kiesinger, Chancellor of Germany, 1969. Šajā uzskaitījumā apkopoti Vācijas kancleri, kopš 1867. gada 1. jūlija, kad tika izveidots šis amats Ziemeļvācijas savienības laikā. Pirmais, kas ieņēma šo amatu bija Oto fon Bismarks. Vācijas impērijas un Veimāras republikas laikā no 1871. gada līdz 1945. gadam tika lietots apzīmējums reihskanclers, no 1934. gada "Vācu tautas fīrers un reihskanclers". Pēc 1945. gada lieto nosaukumu bundeskanclers. Kanclers ir Vācijas valdības galva. 
https://vi.wikipedia.org/wiki/Panicum_virgatum https://upload.wikimedia.org/wikipedia/commons/2/29/PanicumVirgatum.jpg Panicum virgatum / Ứng dụng / Năng lượng sinh học English: A picture of Panicum virgatum. Panicum virgatum, một loài thực vật có hoa trong họ Hòa thảo, thường được biết đến với tên gọi "switchgrass", là một loại cỏ bụi sống lâu năm mọc bản địa ở Bắc Mỹ vào các mùa ấm áp, nơi mà nó thường mọc tự nhiên từ vĩ tuyến 55 độ N ở Canada và tiến về phía nam vào Hoa Kỳ với Mexico. Switchgrass là một trong các loài thực vật chiếm ưu thế tại các đồng cỏ cao ở vùng trung Bắc Mỹ và có thể được tìm thấy ở các đồng cỏ lâu năm, đồng cỏ bản địa, và mọc tự nhiên ở các vệ đường. Nó thường được sử dụng chủ yếu để bảo tồn đất trồng, sản xuất các sản phẩm thức ăn cho súc vật, sử dụng trong các cuộc săn, làm cỏ trồng kiểng. Gần đây nó được sử dụng để sản xuất sinh khối cho năng lượng sinh học như ethanol hay butanol, các dự án khử độc đất bằng cây trồng, sản xuất sợi, điện năng, nhiệt năng và còn được sử dụng để cô lập sinh học cacbon điôxít trong khí quyển. Cỏ switchgrass đã được nghiên cứu làm cây trồng cho năng lượng sinh học tái sinh kể từ giữa những năm 1980, bởi vì nó là một loại cỏ bản địa sống lâu năm trong mùa ấm áp với khả năng cho năng suất từ trung bình đến cao ở các vùng đất nông nghiệp khó trồng trọt. Hiện nay nó đang được xem xét để sử dụng trong vài quy trình chuyển hóa năng lượng sinh học, bao gồm sản xuất ethanol xen-lu-lo, khí sinh học, và chất đốt trực tiếp cho các ứng dụng nhiệt năng. Những thuận lợi chính về mặt nông nghiệp của cỏ switchgrass khi sử dụng làm thực vật năng lượng sinh học là thời gian sống lâu, chịu được hạn hán và lũ lụt, yêu cầu lượng thuốc diệt cỏ và phân bón tương đối thấp, dễ kiểm soát, sống khỏe mạnh trong đất nghèo dinh dưỡng và các điều kiện khí hậu khác nhau, và khả năng thích nghi rộng rãi ở những vùng khí hậu ôn đới. Ở một vài vùng phía nam ấm và ẩm, chẳng hạn như Alabama, cỏ switchgrass có khả năng cho sản lượng lên đến 25 tấn cỏ sấy khô bằng lò mỗi Hec-ta ((oven dry tonne) ODT/ha). Một bản tóm tắt về sản lượng cỏ switchgrass qua 13 khu nghiên cứu thử nghiệm ở Hoa Kỳ cho thấy hai loại cỏ tốt nhất ở mỗi thử nghiệm cho sản lượng từ 9.4 đến 22.9 tấn/ha, với sản lượng trung bình là 14.6 ODT/ha. Tuy nhiên, những chỉ số này được ghi nhận lại dựa trên các thử nghiệm quy mô nhỏ, và các cánh đồng thương mại có thể được mong đợi với sản lượng ít nhất là thấp hơn 20% so với các kết quả trên. Ở Hoa Kỳ, sản lượng cỏ switchgrass có vẻ là cao nhất ở các vùng ấm và ẩm với các mùa phát triển lâu dài chẳng hạn như vùng Đông Nam Hoa Kỳ và thấp nhất ở các vùng có mùa khô ngắn hạn tại phía Bắc Great Plains. Năng lượng đầu vào cần thiết để trồng cỏ switchgrass rất thuận lợi khi so sánh với các cây cho hạt hàng năm chẳng hạn như ngô, đậu tương, hay cải dầu, mà có thể yêu cầu nguồn năng lượng đầu vào tương đối cao khi gieo trồng, sấy khô hạt, và bón phân. Các nguồn nhập liệu từ cả thân cỏ dạng C4 thân thảo sống lâu năm đều là các nguồn nhập liệu mong muốn cho sinh khối năng lượng, vì chúng cần nguồn năng lượng hóa thạch đầu vào ít hơn để trồng và có thể đón được năng lượng mặt trời một cách hiệu quả bởi vì hệ thống quang hợp C4 và bản chất sống lâu năm của chúng. Một nghiên cứu chỉ ra rằng sẽ mất khoảng từ 0.97 đến 1.3 GJ (Giga Joule) để sản xuất 1 tấn cỏ switchgrass, so với 1.99 đến 2.66 GJ để sản xuất một tấn bắp. Một nghiên cứu khác cho thấy cỏ switchgrass sử dụng 0.8 GJ/ODT năng lượng hóa thạch so với hạt bắp là 2.9 GJ/ODT. 
Vậy là cỏ switchgrass có chứa xấp xỉ 18.8 GJ/ODT sinh khối, tỉ lệ đầu vào và ra về mặt năng lượng của cây nó có thể lên đến 20:1. Tỉ lệ rất triển vọng này là do năng lượng đầu ra tương đối cao trên mỗi hec-ta và năng lượng đầu vào cho sản xuất thấp. Những cố gắng đáng kể đang được thực hiện trong việc phát triển cỏ switchgrass làm cây trồng sản xuất ethanol xen-lu-lô tại Hoa Kỳ. Trong một bài diễn văn vào năm 2006, tổng thống Bush đề xuất sử dụng cỏ switchgrass để sản xuất ethanol; kể từ đó, hơn 100 triệu USD đã được đầu tư vào việc nghiên cứu cỏ switchgrass làm nguồn nhiên liệu sinh học tiềm năng. Cỏ switchgrass có tiềm năng sản xuất lên đến 380 lít ethanol cứ mỗi tấn cỏ thu hoạch được. Tuy nhiên, kỹ thuật chuyển hóa sinh khối thực vật thân thảo thành ethanol hiện tại là khoảng 340 lít trên mỗi tấn. Trái lại, lượng ethanol từ ngô cho khoảng 400 lít mỗi tấn ngô. Có vài cố gắng đáng kể nhằm làm tăng lượng ethanol trích từ ngô: (Ngô) Lượng ethanol đã được cải thiện từ 2.4 gallon trên mỗi giạ vào những năm 1980 đến 2.8 gallon hiện nay. Các giống ngô lai được phát triển đặc biệt để sản xuất ethanol đã chứng minh được rằng lượng ethanol tăng lên được 2.7 % - và khi sử dụng xen-lu-lô (sợi) trong hạt ngô, ngoài tinh bột ra, có thể tăng thêm lượng ethanol từ 10 đến 13 %. Với sự kết hợp của các giống lai và sự tối ưu hóa các quy trình, lượng ethanol theo lý thuyết khoảng 3.51 gallon mỗi giạ là có thể được – mà không gặp các tác động tiêu cực với hàm lượng protein hay dầu trong phần bã thực vật cho gia súc ăn. Sự cải thiện các quy trình trong ngành công nghiệp sử dụng ngô theo phương pháp cũ là dựa trên các kỹ thuật mới chẳng hạn như https://hu.wikipedia.org/wiki/Drottningholm https://upload.wikimedia.org/wikipedia/commons/e/eb/Drottningholm_castle_with_fountain_2005-08-14.jpg Drottningholms slott, Stockholm, Sweden. The swedish King's recidence. Suomi: Tessin vanhemman suunnittelema Drottningholmin linna. Drottningholm település Svédországban, Ekerö községben. Lovön szigetén, a Mälaren-tavon található. Királyi palotájáról és színházáról nevezetes. https://en.wikipedia.org/wiki/Thomas_Weber_(footballer) http://upload.wikimedia.org/wikipedia/commons/2/27/FC_Admira_Wacker_M%C3%B6dling_%282013%29_-_Thomas_Weber_%2801%29.jpg Thomas Weber (footballer) Thomas Weber (footballer) Deutsch: Kadervorstellung FC Admira Wacker Mödling am 2. Juli 2013 im Bundesstadion Südstadt. - Das Foto zeigt Thomas Weber. English: Teampresentation FC Admira Wacker Mödling at 2013-07-02 in Bundesstadion Südstadt. – The photo shows Thomas Weber. Camera location48°&#160;05′&#160;51.07″&#160;N, 16°&#160;18′&#160;41.12″&#160;EView this and other nearby images on: OpenStreetMap - Google Maps - Google Earth Thomas Weber is an Austrian footballer who plays for Admira Wacker. Thomas Weber (born 29 May 1993) is an Austrian footballer who plays for Admira Wacker. https://fr.wikipedia.org/wiki/Alo%C3%A8s https://upload.wikimedia.org/wikipedia/commons/0/0e/Aloe_maculata_JPG2.jpg Tableau des espèces acceptées Aloès / Taxonomie / Tableau des espèces acceptées Français&#160;: Fleur d’ Aloès maculés (Aloe maculata) - Habitat: L'Île-Rousse ( Haute-Corse) - France. English: Flower of Aloe maculata also known as Soap Aloe, Zebra Aloe or African Aloe - Habit: L'Île-Rousse ( Haute-Corse) - France. Walon: Fleûr d’ Aloès tchaborè (Aloe maculata) - Place: L'Île-Rousse (Ôte-Corse) - France. Aloe est un genre de plantes succulentes, les aloès, originaires d'Afrique, de Madagascar et les Iles Mascareines, de la péninsule arabique et Socotra. 
Certaines espèces d'aloès ont été introduites dans de nombreux pays. Vous pouvez en trouver autour de la Méditerranée, en Amérique du Nord, Amérique du Sud, Amérique Centrale, Inde, Asie du Sud-Est, ou encore en Korée et en Australie. Dans le sud de la France, autour des zones habitées on peut croiser en particulier des A. arborescens et des A. maculata. https://ar.wikipedia.org/wiki/%D9%81%D8%B1%D9%86%D8%B3%D8%A7_%D8%A7%D9%84%D8%B5%D8%BA%D9%8A%D8%B1%D8%A9 https://upload.wikimedia.org/wikipedia/commons/d/df/Strasbourg_%283187697047%29.jpg فرنسا الصغيرة هو لقب يطلق على الحي التاريخي لمدينة ستراسبورغ الفرنسية، يقع هذا الحي على ضفة النهر الكبير للمدينة ولقد صنف سنة 1988 من مآثر التاريخية العالمية من طرف المنظمة العالمية لتراث. بني هذا الحي اواخر القرن الخامس عشر لترحيب بآلجنود العائدين من الحملة الإيطالية،كانت المدينة تحتضن العديد من المدابغ. كما انها تتوفر على العديد من المناظر الخلابة، يعرف هذا الحي شعبية كبيرة بحيث أن السياح يتوافدون عليه من مختلف بلدان العالم، لها مميزات عديدة منها الحفاظ على المنازل الخشبية و المطاعم التقليدية. https://de.wikipedia.org/wiki/Fabian_Hamb%C3%BCchen https://upload.wikimedia.org/wikipedia/commons/2/23/AV0A5777_Fabian_Hamb%C3%BCchen_und_seine_Freundin_Marcia_Ev.jpg Fabian Hambüchen und seine damalige Freundin Marcia Ev bei der „Lambertz Monday Night“ 2017. Deutsch: Fabian Hambüchen und seine Freundin Marcia Ev bei der Lambertz Monday Night 2017. Fabian Hambüchen ist ein ehemaliger deutscher Kunstturner. Seine größten Erfolge errang er am Reck mit dem Olympiasieg 2016 und dem Weltmeistertitel 2007 sowie dem Gewinn der Bronzemedaille bei den Olympischen Spielen 2008 in Peking und der Silbermedaille 2012 in London. Nach dem Ende seiner aktiven Karriere ist er als Turnexperte für die ARD tätig. Fabian Hambüchen (* 25. Oktober 1987 in Bergisch Gladbach) ist ein ehemaliger deutscher Kunstturner. Seine größten Erfolge errang er am Reck mit dem Olympiasieg 2016 und dem Weltmeistertitel 2007 sowie dem Gewinn der Bronzemedaille bei den Olympischen Spielen 2008 in Peking und der Silbermedaille 2012 in London. Nach dem Ende seiner aktiven Karriere ist er als Turnexperte für die ARD tätig. https://de.wikipedia.org/wiki/Liste_der_Kulturdenkmale_in_Rathen https://upload.wikimedia.org/wikipedia/commons/2/2f/Elbtalbahn%2C_Zug_der_Linie_S1_in_Kurort_Rathen_%2801-2%29.jpg Liste der Kulturdenkmale in Rathen Liste der Kulturdenkmale in Rathen / Kurort Rathen English: train of the Dresden S-Bahn with a Bombardier Double-deck Coach in Rathen. Deutsch: Das Bild zeigt einen Zug der S-Bahnlinie S1 der Dresdner S-Bahn mit Bombardier Double-deck Doppelstockwagen in Rathen (Bahnstrecke Dresden–Děčín). Die Liste der Kulturdenkmale in Rathen enthält die Kulturdenkmale in Rathen. Die Anmerkungen sind zu beachten. Diese Liste ist eine Teilliste der Liste der Kulturdenkmale im Landkreis Sächsische Schweiz-Osterzgebirge. Diese Liste ist eine Teilliste der Liste der Kulturdenkmale in Sachsen. https://es.wikipedia.org/wiki/Lepus_europaeus https://upload.wikimedia.org/wikipedia/commons/2/2f/Skull_of_a_hare.png Lepus europaeus / Galería English: Skull of a hare. Deutsch: Schädel eines Feldhasen. La liebre común o liebre europea es una especie de mamífero lagomorfo de la familia Leporidae que se encuentra entre las principales piezas de caza. 
https://da.wikipedia.org/wiki/Ghil%27ad_Zuckermann https://upload.wikimedia.org/wikipedia/commons/7/7a/Zuckermann.jpg Professor Ghil'ad Zuckermann (2011) English: Ghil'ad Zuckermann עברית: גלעד צוקרמן Ghil'ad Zuckermann er en israelsk professor i lingvistik ved Adelaide Universitet i Australien. Han har en doktorgrad fra Oxfords universitet. Han mener at i israelsk “er der mange hebraiske elementer som følge af en bevidst vækkelse, men også en lang række pervasive sproglige egenskaber, der stammer fra en ubevidst modersmål som jiddisch." Han mener, at israelsk ikke er et genoplivet biblhebraisk, men et nyt sprog som er hybridt semito-europæisk af karakter, fordi både semitiske og indoeuropæiske elementer indgår meget kraftigt i tilblivelsen. Han skriver, at havde det været marokkanske jøder i stedet for østeuropæiske som havde skabt israelsk hebraisk, så var det formentlig blevet et semitisk sprog." Ghil'ad Zuckermann (født 1. juni 1971 i Tel Aviv) er en israelsk professor i lingvistik ved Adelaide Universitet i Australien. Han har en doktorgrad fra Oxfords universitet. Han mener at i israelsk (nyhebraisk, ivrit) “er der mange hebraiske elementer som følge af en bevidst vækkelse, men også en lang række pervasive sproglige egenskaber, der stammer fra en ubevidst modersmål som jiddisch." Han mener, at israelsk ikke er et genoplivet biblhebraisk, men et nyt sprog som er hybridt semito-europæisk af karakter, fordi både semitiske og indoeuropæiske elementer indgår meget kraftigt i tilblivelsen. Han skriver, at havde det været marokkanske jøder i stedet for østeuropæiske som havde skabt israelsk hebraisk, så var det formentlig blevet et semitisk sprog." https://ca.wikipedia.org/wiki/Galeria_Municipal_d%27Atenes http://upload.wikimedia.org/wikipedia/commons/7/70/Gallery_08.jpg Galeria Municipal d'Atenes Galeria Municipal d'Atenes / Imatges La Galeria Municipal d'Atenes és un prestigiós museu a la plaça Avdi en el cèntric barri de Metaxourgeio a Atenes, capital grega. https://nl.wikipedia.org/wiki/Lijst_van_onroerend_erfgoed_in_Hasselt https://upload.wikimedia.org/wikipedia/commons/e/e4/Hasselt_-_Huis_Demerstraat_78.jpg Lijst van onroerend erfgoed in Hasselt Lijst van onroerend erfgoed in Hasselt Nederlands: Burgerhuis in neoclassicistische stijl, Demerstraat 78 in Hasselt Burgerhuizen in neoclassicistische stijl Een overzicht van het onroerend erfgoed in Hasselt. Het onroerend erfgoed maakt deel uit van het cultureel erfgoed in België. Een overzicht van het onroerend erfgoed in Hasselt. Het onroerend erfgoed maakt deel uit van het cultureel erfgoed in België.
https://huggingface.co/kubarozek
Kuba Rożek kubarozek qbit-42 Research interests None yet Organizations models None public yet datasets None public yet
https://huggingface.co/the-vsp
vsp the-vsp Research interests None yet Organizations models None public yet datasets None public yet
https://huggingface.co/mateuswagner
Mateus Wagner mateuswagner Research interests None yet Organizations models None public yet datasets None public yet
https://huggingface.co/ronler
Ron Lehrer ronler Research interests None yet Organizations models None public yet datasets None public yet
https://huggingface.co/y-ler
Michael y-ler Research interests None yet Organizations models None public yet datasets None public yet
https://huggingface.co/hardware/intel
Scale Transformer Workloads with Intel AI
Hardware performance and developer productivity at unmatched scale
Easily optimize models for production
Optimum Intel is the interface between Hugging Face's Transformers library and the different tools and libraries provided by Intel to accelerate end-to-end pipelines on Intel architectures. Intel Neural Compressor is an open-source library that enables the use of the most popular compression techniques such as quantization, pruning and knowledge distillation. OpenVINO is an open-source toolkit that lets you optimize and deploy your models with high-performance inference capabilities on Intel devices. With Optimum Intel, you can apply state-of-the-art optimization techniques to your Transformer models with minimal effort. Learn more about Optimum Intel
Get high performance on CPU instances
3rd Generation Intel® Xeon® Scalable processors offer a balanced architecture that delivers built-in AI acceleration and advanced security capabilities. This allows you to place your transformer workloads where they perform best while minimizing costs. Learn more about Intel AI Hardware
Quickly go from concept to scale
With hardware and software optimized for AI workloads, an open, familiar, standards-based software environment and the hardware flexibility you need to create the deployment you want, Intel can help accelerate your time to production. Explore Intel Developer Zone
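To make the Optimum Intel workflow described above concrete, here is a minimal sketch (not taken from this page) that exports a Hub checkpoint to OpenVINO and runs it in a standard Transformers pipeline; the model id and task are illustrative placeholder choices.

# Minimal sketch of the Optimum Intel + OpenVINO flow described above.
# Assumes `pip install optimum[openvino]`; the model id below is an
# illustrative placeholder, not a recommendation from this page.
from optimum.intel import OVModelForSequenceClassification
from transformers import AutoTokenizer, pipeline

model_id = "distilbert-base-uncased-finetuned-sst-2-english"

# export=True converts the PyTorch checkpoint to the OpenVINO IR format on the fly.
model = OVModelForSequenceClassification.from_pretrained(model_id, export=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# The converted model is a drop-in replacement inside a transformers pipeline.
classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)
print(classifier("Optimum Intel makes CPU inference easy."))

# Save the OpenVINO model so it can be reloaded later without re-exporting.
model.save_pretrained("./distilbert-sst2-openvino")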
https://huggingface.co/spaces/Intel/ldm3d
https://huggingface.co/spaces/Intel/Stable-Diffusion-Side-by-Side
https://huggingface.co/spaces/Intel/Q8-Chat
https://huggingface.co/spaces/Intel/qa_sparse_bert
https://huggingface.co/spaces/Intel/intel-xai-tools-cam-demo
https://huggingface.co/Intel/Llama-2-13b-chat-hf-onnx-int4
Llama-2-13b-chat-hf-onnx-int4
Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository of the INT4 weight-only quantization of the 13B fine-tuned model in ONNX format.
Note: Use of this model is governed by the Meta license. Please ensure you have accepted that license and been granted access to the FP32 model before downloading models here.
This INT4 model is generated with Intel® Neural Compressor's weight-only quantization method.
Intended Use
Primary intended uses: You can use the raw model for text generation inference.
Primary intended users: Anyone doing text generation inference.
Out-of-scope uses: This model in most cases will need to be fine-tuned for your particular task. The model should not be used to intentionally create hostile or alienating environments for people.
Export to ONNX Model
The FP32 model is exported with meta-llama/Llama-2-13b-chat-hf:
optimum-cli export onnx --model meta-llama/Llama-2-13b-chat-hf --task text-generation ./llama2_13b_chat
Build ONNX Runtime
Build ONNX Runtime from source to support the MatMulWithQuantWeight op. You can refer to build-onnx-runtime-for-inferencing for more prerequisites.
git clone -b sub_byte_quant_zp https://github.com/microsoft/onnxruntime.git
cd onnxruntime
./build.sh --config RelWithDebInfo --build_shared_lib --parallel --compile_no_warning_as_error --skip_submodule_sync --skip_tests --build_wheel
Run Quantization
The weight-only quantization configuration is as below: dtype INT4, group_size 32, scheme sym, algorithm RTN.
Run INT4 weight-only quantization with Intel® Neural Compressor. We provide the key code below. For the complete quantization script, please refer to the llama weight-only example.
from neural_compressor import quantization, PostTrainingQuantConfig

config = PostTrainingQuantConfig(
    approach="weight_only",
    calibration_sampling_size=[8],
    op_type_dict={".*": {"weight": {"bits": 4, "algorithm": ["RTN"], "scheme": ["sym"], "group_size": 32}}},
)
q_model = quantization.fit(
    "/path/to/llama2_13b_chat/decoder_model.onnx",  # FP32 model path
    config,
    calib_dataloader=dataloader,
)
q_model.save("/path/to/Llama-2-13b-chat-hf-onnx-int4/decoder_model.onnx")  # INT4 model path
Evaluation
Operator Statistics
Below are the operator statistics in the INT4 ONNX model: of the 321 MatMul operators in total, 281 use INT4 weights and 40 remain FP32.
Evaluation of perplexity
Evaluate the model with the evaluation API of Intel® Extension for Transformers on the lambada_openai task.
from intel_extension_for_transformers.evaluation.lm_eval import evaluate

model_path = "/path/to/Llama-2-13b-chat-hf-onnx-int4"  # folder containing the INT4 model
tokenizer = "Intel/Llama-2-13b-chat-hf-onnx-int4"
batch_size = 64
tasks = ["lambada_openai"]
results = evaluate(
    model="hf-causal",
    model_args="pretrained=" + model_path + ",tokenizer=" + tokenizer,
    batch_size=batch_size,
    tasks=tasks,
    model_format="onnx",
)
Results: FP32 model, size 49 GB, lambada_openai accuracy 0.7321, perplexity 2.9163; INT4 model, size 8.1 GB, lambada_openai accuracy 0.7289, perplexity 3.0061.
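After the export, build and quantization steps above, a quick way to check that the INT4 decoder loads and produces sensible logits is a single ONNX Runtime forward pass. The sketch below is an assumption-laden illustration rather than part of this card: it requires the custom ONNX Runtime build described above, and it assumes the exported decoder_model.onnx exposes input_ids and attention_mask inputs with logits as the first output; inspect session.get_inputs() and session.get_outputs() to confirm for your export.

# Hypothetical sanity check for the INT4 decoder produced above.
# Requires the custom ONNX Runtime build with the MatMulWithQuantWeight op;
# the input/output names are assumptions -- confirm them with
# session.get_inputs() / session.get_outputs() before relying on this.
import numpy as np
import onnxruntime as ort
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Intel/Llama-2-13b-chat-hf-onnx-int4")
session = ort.InferenceSession(
    "/path/to/Llama-2-13b-chat-hf-onnx-int4/decoder_model.onnx",
    providers=["CPUExecutionProvider"],
)

enc = tokenizer("The capital of France is", return_tensors="np")
outputs = session.run(
    None,
    {
        "input_ids": enc["input_ids"].astype(np.int64),
        "attention_mask": enc["attention_mask"].astype(np.int64),
    },
)

# Greedily pick the next token from the logits at the last position.
next_token_id = int(outputs[0][0, -1].argmax())
print(tokenizer.decode([next_token_id]))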
https://huggingface.co/Intel/Llama-2-70b-hf-onnx-int4
Llama-2-70b-hf-onnx-int4
Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository of the INT4 weight-only quantization of the 70B pretrained model in ONNX format.
Note: Use of this model is governed by the Meta license. Please ensure you have accepted that license and been granted access to the FP32 model before downloading models here.
This INT4 model is generated with Intel® Neural Compressor's weight-only quantization method.
Intended Use
Primary intended uses: You can use the raw model for text generation inference.
Primary intended users: Anyone doing text generation inference.
Out-of-scope uses: This model in most cases will need to be fine-tuned for your particular task. The model should not be used to intentionally create hostile or alienating environments for people.
Export to ONNX Model
The FP32 model is exported with meta-llama/Llama-2-70b-hf:
optimum-cli export onnx --model meta-llama/Llama-2-70b-hf --task text-generation ./llama2_70b
Build ONNX Runtime
Build ONNX Runtime from source to support the MatMulWithQuantWeight op. You can refer to build-onnx-runtime-for-inferencing for more prerequisites.
git clone -b sub_byte_quant_zp https://github.com/microsoft/onnxruntime.git
cd onnxruntime
./build.sh --config RelWithDebInfo --build_shared_lib --parallel --compile_no_warning_as_error --skip_submodule_sync --skip_tests --build_wheel
Run Quantization
The weight-only quantization configuration is as below: dtype INT4, group_size 32, scheme sym, algorithm RTN.
Run INT4 weight-only quantization with Intel® Neural Compressor. We provide the key code below. For the complete quantization script, please refer to the llama weight-only example.
from neural_compressor import quantization, PostTrainingQuantConfig

config = PostTrainingQuantConfig(
    approach="weight_only",
    calibration_sampling_size=[8],
    op_type_dict={".*": {"weight": {"bits": 4, "algorithm": ["RTN"], "scheme": ["sym"], "group_size": 32}}},
)
q_model = quantization.fit(
    "/path/to/llama2_70b/decoder_model.onnx",  # FP32 model path
    config,
    calib_dataloader=dataloader,
)
q_model.save("/path/to/Llama-2-70b-hf-onnx-int4/decoder_model.onnx")  # INT4 model path
Evaluation
Operator Statistics
Below are the operator statistics in the INT4 ONNX model: of the 641 MatMul operators in total, 561 use INT4 weights and 80 remain FP32.
Evaluation of perplexity
Evaluate the model with the evaluation API of Intel® Extension for Transformers on the lambada_openai task.
from intel_extension_for_transformers.evaluation.lm_eval import evaluate

model_path = "/path/to/Llama-2-70b-hf-onnx-int4"  # folder containing the INT4 model
tokenizer = "Intel/Llama-2-70b-hf-onnx-int4"
batch_size = 64
tasks = ["lambada_openai"]
results = evaluate(
    model="hf-causal",
    model_args="pretrained=" + model_path + ",tokenizer=" + tokenizer,
    batch_size=batch_size,
    tasks=tasks,
    model_format="onnx",
)
Results: FP32 model, size 257 GB, lambada_openai accuracy 0.7964, perplexity 2.6612; INT4 model, size 41 GB, lambada_openai accuracy 0.7896, perplexity 2.7546.
https://huggingface.co/Intel/Llama-2-7b-chat-hf-onnx-int4
Llama-2-7b-chat-hf-onnx-int4
Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository of the INT4 weight-only quantization of the 7B fine-tuned model in ONNX format.
Note: Use of this model is governed by the Meta license. Please ensure you have accepted that license and been granted access to the FP32 model before downloading models here.
This INT4 model is generated with Intel® Neural Compressor's weight-only quantization method.
Intended Use
Primary intended uses: You can use the raw model for text generation inference.
Primary intended users: Anyone doing text generation inference.
Out-of-scope uses: This model in most cases will need to be fine-tuned for your particular task. The model should not be used to intentionally create hostile or alienating environments for people.
Export to ONNX Model
The FP32 model is exported with meta-llama/Llama-2-7b-chat-hf:
optimum-cli export onnx --model meta-llama/Llama-2-7b-chat-hf --task text-generation ./llama2_7b_chat
Build ONNX Runtime
Build ONNX Runtime from source to support the MatMulWithQuantWeight op. You can refer to build-onnx-runtime-for-inferencing for more prerequisites.
git clone -b sub_byte_quant_zp https://github.com/microsoft/onnxruntime.git
cd onnxruntime
./build.sh --config RelWithDebInfo --build_shared_lib --parallel --compile_no_warning_as_error --skip_submodule_sync --skip_tests --build_wheel
Run Quantization
The weight-only quantization configuration is as below: dtype INT4, group_size 32, scheme asym, algorithm GPTQ.
Run INT4 weight-only quantization with Intel® Neural Compressor. We provide the key code below. For the complete quantization script, please refer to the llama weight-only example.
from neural_compressor import quantization, PostTrainingQuantConfig

config = PostTrainingQuantConfig(
    approach="weight_only",
    calibration_sampling_size=[8],
    op_type_dict={".*": {"weight": {"bits": 4, "algorithm": ["GPTQ"], "scheme": ["asym"], "group_size": 32}}},
)
q_model = quantization.fit(
    "/path/to/llama2_7b_chat/decoder_model.onnx",  # FP32 model path
    config,
    calib_dataloader=dataloader,
)
q_model.save("/path/to/Llama-2-7b-chat-hf-onnx-int4/decoder_model.onnx")  # INT4 model path
Evaluation
Operator Statistics
Below are the operator statistics in the INT4 ONNX model: of the 257 MatMul operators in total, 161 use INT4 weights and 96 remain FP32.
Evaluation of perplexity
Evaluate the model with the evaluation API of Intel® Extension for Transformers on the lambada_openai task.
from intel_extension_for_transformers.evaluation.lm_eval import evaluate

model_path = "/path/to/Llama-2-7b-chat-hf-onnx-int4"
tokenizer = "Intel/Llama-2-7b-chat-hf-onnx-int4"
batch_size = 64
tasks = ["lambada_openai"]
results = evaluate(
    model="hf-causal",
    model_args="pretrained=" + model_path + ",tokenizer=" + tokenizer,
    batch_size=batch_size,
    tasks=tasks,
    model_format="onnx",
)
Results: FP32 model, size 26 GB, lambada_openai accuracy 0.7058, perplexity 3.2788; INT4 model, size 11 GB, lambada_openai accuracy 0.7025, perplexity 3.4120.
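The quantization snippets in these cards pass calib_dataloader=dataloader without showing how dataloader is constructed. The sketch below is a rough illustration only: it assumes Neural Compressor accepts an iterable yielding (onnx_input_dict, label) pairs with a batch_size attribute, and that the exported decoder takes input_ids and attention_mask; the calibration corpus is also an arbitrary choice. The official llama weight-only example remains the authoritative reference for the exact interface.

# Hypothetical calibration dataloader sketch for the quantization call above.
# Assumptions: Neural Compressor iterates over (inputs, label) pairs and reads
# a batch_size attribute; the exported decoder_model.onnx takes input_ids and
# attention_mask. Check the official llama weight-only example for the exact
# interface expected by your neural-compressor version.
import numpy as np
from datasets import load_dataset
from transformers import AutoTokenizer


class CalibDataloader:
    def __init__(self, model_name, n_samples=8, seq_len=512):
        self.batch_size = 1
        tokenizer = AutoTokenizer.from_pretrained(model_name)
        texts = load_dataset("NeelNanda/pile-10k", split="train")[:n_samples]["text"]
        self.samples = []
        for text in texts:
            enc = tokenizer(text, truncation=True, max_length=seq_len, return_tensors="np")
            self.samples.append(
                {
                    "input_ids": enc["input_ids"].astype(np.int64),
                    "attention_mask": enc["attention_mask"].astype(np.int64),
                }
            )

    def __iter__(self):
        for inputs in self.samples:
            yield inputs, 0  # the label is unused during calibration


dataloader = CalibDataloader("meta-llama/Llama-2-7b-chat-hf")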
https://huggingface.co/Intel/fid_flan_t5_base_nq
Fusion-In-Decoder Base on Natural Questions
This model is based on the Fusion-In-Decoder model and trained on the Natural Questions dataset.
Model Details
The model is based on Fusion-In-Decoder, which in turn uses the google/flan-t5-base checkpoint as the base model. For training, we utilized text retrieval for each query, which provides a collection of relevant passages for it. We note that the passages were retrieved using a corpus based on Wikipedia.
Evaluation
See the model's performance on the Evaluation Results tab on the right side.
Evaluation results: Exact Match on NQ KILT (self-reported): 51.550
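Since the card reports Exact Match on NQ (KILT), here is a small, generic sketch of the normalized exact-match metric commonly used for open-domain QA; it is included for illustration only and is not the exact script behind the self-reported 51.550 figure.

# Generic normalized Exact Match, as commonly used for open-domain QA
# evaluation (SQuAD/KILT style). Illustrative only -- not the exact script
# behind the self-reported score on this card.
import re
import string


def normalize_answer(s: str) -> str:
    """Lowercase, strip punctuation and articles, and collapse whitespace."""
    s = s.lower()
    s = "".join(ch for ch in s if ch not in set(string.punctuation))
    s = re.sub(r"\b(a|an|the)\b", " ", s)
    return " ".join(s.split())


def exact_match(prediction: str, gold_answers: list) -> float:
    """Return 1.0 if the normalized prediction matches any normalized gold answer."""
    pred = normalize_answer(prediction)
    return float(any(pred == normalize_answer(g) for g in gold_answers))


# Example: score a batch of predictions against gold answer sets.
preds = ["The Eiffel Tower", "1969"]
golds = [["Eiffel Tower"], ["20 July 1969", "1969"]]
em = sum(exact_match(p, g) for p, g in zip(preds, golds)) / len(preds)
print(f"EM: {100 * em:.3f}")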
https://huggingface.co/Intel/tvp-base
TVP base model
The TVP model was proposed in Text-Visual Prompting for Efficient 2D Temporal Video Grounding by Yimeng Zhang, Xin Chen, Jinghan Jia, Sijia Liu, Ke Ding. The goal of this model is to incorporate trainable prompts into both visual inputs and textual features for temporal video grounding (TVG) problems. It was introduced in this paper. TVP was accepted to the CVPR'23 conference.
Model description
The abstract from the paper is the following:
In this paper, we study the problem of temporal video grounding (TVG), which aims to predict the starting/ending time points of moments described by a text sentence within a long untrimmed video. Benefiting from fine-grained 3D visual features, the TVG techniques have achieved remarkable progress in recent years. However, the high complexity of 3D convolutional neural networks (CNNs) makes extracting dense 3D visual features time-consuming, which calls for intensive memory and computing resources. Towards efficient TVG, we propose a novel text-visual prompting (TVP) framework, which incorporates optimized perturbation patterns (that we call ‘prompts’) into both visual inputs and textual features of a TVG model. In sharp contrast to 3D CNNs, we show that TVP allows us to effectively co-train vision encoder and language encoder in a 2D TVG model and improves the performance of cross-modal feature fusion using only low-complexity sparse 2D visual features. Further, we propose a Temporal-Distance IoU (TDIoU) loss for efficient learning of TVG. Experiments on two benchmark datasets, Charades-STA and ActivityNet Captions datasets, empirically show that the proposed TVP significantly boosts the performance of 2D TVG (e.g., 9.79% improvement on Charades-STA and 30.77% improvement on ActivityNet Captions) and achieves 5× inference acceleration over TVG using 3D visual features.
Intended uses & limitations (TODO)
You can use the raw model for temporal video grounding.
How to use
Here is how to use this model to get the logits of a given video and text in PyTorch:
import av
import cv2
import numpy as np
import torch
from huggingface_hub import hf_hub_download
from transformers import AutoProcessor, TvpForVideoGrounding


def pyav_decode(container, sampling_rate, num_frames, clip_idx, num_clips, target_fps):
    """
    Convert the video from its original fps to the target_fps and decode the video with the PyAV decoder.
    Returns:
        frames (tensor): decoded frames from the video. Return None if no video stream was found.
        fps (float): the number of frames per second of the video.
    """
    fps = float(container.streams.video[0].average_rate)
    clip_size = sampling_rate * num_frames / target_fps * fps
    delta = max(container.streams.video[0].frames - clip_size, 0)
    start_idx = delta * clip_idx / num_clips
    end_idx = start_idx + clip_size - 1
    timebase = container.streams.video[0].duration / container.streams.video[0].frames
    video_start_pts = int(start_idx * timebase)
    video_end_pts = int(end_idx * timebase)
    stream_name = {"video": 0}
    seek_offset = max(video_start_pts - 1024, 0)
    container.seek(seek_offset, any_frame=False, backward=True, stream=container.streams.video[0])
    frames = {}
    for frame in container.decode(**stream_name):
        if frame.pts < video_start_pts:
            continue
        if frame.pts <= video_end_pts:
            frames[frame.pts] = frame
        else:
            # keep one frame past the end point, then stop decoding
            frames[frame.pts] = frame
            break
    frames = [frames[pts] for pts in sorted(frames)]
    return frames, fps


def decode(container, sampling_rate, num_frames, clip_idx, num_clips, target_fps):
    """
    Decode the video and perform temporal sampling.
    Args:
        container (container): pyav container.
        sampling_rate (int): frame sampling rate (interval between two sampled frames).
        num_frames (int): number of frames to sample.
        clip_idx (int): if clip_idx is -1, perform random temporal sampling.
            If clip_idx is larger than -1, uniformly split the video to num_clips
            clips, and select the clip_idx-th video clip.
        num_clips (int): overall number of clips to uniformly sample from the given video.
        target_fps (int): the input video may have different fps, convert it to
            the target video fps before frame sampling.
    Returns:
        frames (tensor): decoded frames from the video.
    """
    assert clip_idx >= -2, "Not a valid clip_idx {}".format(clip_idx)
    frames, fps = pyav_decode(container, sampling_rate, num_frames, clip_idx, num_clips, target_fps)
    clip_size = sampling_rate * num_frames / target_fps * fps
    index = torch.linspace(0, clip_size - 1, num_frames)
    index = torch.clamp(index, 0, len(frames) - 1).long().tolist()
    frames = [frames[idx] for idx in index]
    frames = [frame.to_rgb().to_ndarray() for frame in frames]
    frames = torch.from_numpy(np.stack(frames))
    return frames


def get_resize_size(image, max_size):
    """
    Args:
        image: np.ndarray
        max_size: The max size of height and width
    Returns:
        (height, width)
    Note the height/width order difference
    >>> pil_img = Image.open("raw_img_tensor.jpg")
    >>> pil_img.size
    (640, 480)  # (width, height)
    >>> np_img = np.array(pil_img)
    >>> np_img.shape
    (480, 640, 3)  # (height, width, 3)
    """
    height, width = image.shape[-2:]
    if height >= width:
        ratio = width * 1.0 / height
        new_height = max_size
        new_width = new_height * ratio
    else:
        ratio = height * 1.0 / width
        new_width = max_size
        new_height = new_width * ratio
    size = {"height": int(new_height), "width": int(new_width)}
    return size


file = hf_hub_download(repo_id="Intel/tvp_demo", filename="3MSZA.mp4", repo_type="dataset")

model = TvpForVideoGrounding.from_pretrained("Intel/tvp-base")

decoder_kwargs = dict(
    container=av.open(file, metadata_errors="ignore"),
    sampling_rate=1,
    num_frames=model.config.num_frm,
    clip_idx=0,
    num_clips=1,
    target_fps=3,
)
raw_sampled_frms = decode(**decoder_kwargs)
raw_sampled_frms = raw_sampled_frms.permute(0, 3, 1, 2)

text = "person turn a light on."
processor = AutoProcessor.from_pretrained("Intel/tvp-base")
size = get_resize_size(raw_sampled_frms, model.config.max_img_size)
data = processor(
    text=[text], videos=list(raw_sampled_frms.numpy()), return_tensors="pt", max_text_length=100, size=size
)

data["pixel_values"] = data["pixel_values"].to(model.dtype)
data["labels"] = torch.tensor([30.96, 24.3, 30.4])
output = model(**data)
print(f"The model's output is {output}")


def get_video_duration(filename):
    cap = cv2.VideoCapture(filename)
    if cap.isOpened():
        rate = cap.get(5)       # 5 = cv2.CAP_PROP_FPS
        frame_num = cap.get(7)  # 7 = cv2.CAP_PROP_FRAME_COUNT
        duration = frame_num / rate
        return duration
    return -1


duration = get_video_duration(file)
timestamp = output['logits'].tolist()
start, end = round(timestamp[0][0] * duration, 1), round(timestamp[0][1] * duration, 1)
print(f"The time slot of the video corresponding to the text \"{text}\" is from {start}s to {end}s")
Limitations and bias
TODO
Training data
The TVP model was pretrained on public datasets: Charades.
Training procedure
Preprocessing
TODO
Pretraining
TODO
Evaluation results
Please refer to Table 2 of the paper for TVP's performance on the temporal video grounding task.
BibTeX entry and citation info
@inproceedings{zhang2023text,
  title={Text-visual prompting for efficient 2d temporal video grounding},
  author={Zhang, Yimeng and Chen, Xin and Jia, Jinghan and Liu, Sijia and Ding, Ke},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={14794--14804},
  year={2023}
}
https://huggingface.co/Intel/Llama-2-70b-chat-hf-onnx-int4
Llama-2-70b-chat-hf-onnx-int4
Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository of the INT4 weight-only quantization of the 70B fine-tuned model in ONNX format.
Note: Use of this model is governed by the Meta license. Please ensure you have accepted that license and been granted access to the FP32 model before downloading models here.
This INT4 model is generated with Intel® Neural Compressor's weight-only quantization method.
Intended Use
Primary intended uses: You can use the raw model for text generation inference.
Primary intended users: Anyone doing text generation inference.
Out-of-scope uses: This model in most cases will need to be fine-tuned for your particular task. The model should not be used to intentionally create hostile or alienating environments for people.
Export to ONNX Model
The FP32 model is exported with meta-llama/Llama-2-70b-chat-hf:
optimum-cli export onnx --model meta-llama/Llama-2-70b-chat-hf --task text-generation ./llama2_70b_chat
Build ONNX Runtime
Build ONNX Runtime from source to support the MatMulWithQuantWeight op. You can refer to build-onnx-runtime-for-inferencing for more prerequisites.
git clone -b sub_byte_quant_zp https://github.com/microsoft/onnxruntime.git
cd onnxruntime
./build.sh --config RelWithDebInfo --build_shared_lib --parallel --compile_no_warning_as_error --skip_submodule_sync --skip_tests --build_wheel
Run Quantization
The weight-only quantization configuration is as below: dtype INT4, group_size 32, scheme asym, algorithm RTN.
Run INT4 weight-only quantization with Intel® Neural Compressor. We provide the key code below. For the complete quantization script, please refer to the llama weight-only example.
from neural_compressor import quantization, PostTrainingQuantConfig

config = PostTrainingQuantConfig(
    approach="weight_only",
    calibration_sampling_size=[8],
    op_type_dict={".*": {"weight": {"bits": 4, "algorithm": ["RTN"], "scheme": ["asym"], "group_size": 32}}},
)
q_model = quantization.fit(
    "/path/to/llama2_70b_chat/decoder_model.onnx",  # FP32 model path
    config,
    calib_dataloader=dataloader,
)
q_model.save("/path/to/Llama-2-70b-chat-hf-onnx-int4/decoder_model.onnx")  # INT4 model path
Evaluation
Operator Statistics
Below are the operator statistics in the INT4 ONNX model: of the 641 MatMul operators in total, 561 use INT4 weights and 80 remain FP32.
Evaluation of perplexity
Evaluate the model with the evaluation API of Intel® Extension for Transformers on the lambada_openai task.
from intel_extension_for_transformers.evaluation.lm_eval import evaluate

model_path = "/path/to/Llama-2-70b-chat-hf-onnx-int4"
tokenizer = "Intel/Llama-2-70b-chat-hf-onnx-int4"
batch_size = 64
tasks = ["lambada_openai"]
results = evaluate(
    model="hf-causal",
    model_args="pretrained=" + model_path + ",tokenizer=" + tokenizer,
    batch_size=batch_size,
    tasks=tasks,
    model_format="onnx",
)
Results: FP32 model, size 257 GB, lambada_openai accuracy 0.7543, perplexity 2.6181; INT4 model, size 43 GB, lambada_openai accuracy 0.7510, perplexity 2.6561.
https://huggingface.co/datasets/Intel/neural-chat-dataset-v1-1
The dataset repository currently contains no data files. This is a collective list of instruction datasets used for Neural Chat fine-tuning. The total numbers of instruction samples and tokens are about 1.1M and 326M respectively. The collective dataset has been validated on multiple LLMs (such as MPT, LLaMA) by the NeuralChat team (Kaokao Lv, Wenxin Zhang, Xuhui Ren, and Haihao Shen) from Intel/SATG/AIA/AIPT. Thanks to Hello-SimpleAI, databricks, and TigerResearch/TigerBot for releasing their open-source instruction datasets.
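To make the idea of a collective instruction dataset concrete, here is a rough, hypothetical sketch of how two of the acknowledged sources could be pulled into one instruction/response table with the datasets library. The repository ids and column names below are assumptions about those upstream datasets (and this repository itself currently hosts no files), so verify them before use.

# Hypothetical sketch: merge two of the acknowledged instruction sources into a
# single instruction/response dataset. Repository ids and column names are
# assumptions about the upstream datasets, not contents of this repository.
from datasets import load_dataset, concatenate_datasets

# Dolly: columns assumed to be instruction / context / response / category.
dolly = load_dataset("databricks/databricks-dolly-15k", split="train")
dolly = dolly.map(
    lambda x: {"instruction": x["instruction"], "response": x["response"]},
    remove_columns=dolly.column_names,
)

# HC3: columns assumed to be question / human_answers / chatgpt_answers.
hc3 = load_dataset("Hello-SimpleAI/HC3", "all", split="train")
hc3 = hc3.map(
    lambda x: {"instruction": x["question"], "response": x["human_answers"][0]},
    remove_columns=hc3.column_names,
)

combined = concatenate_datasets([dolly, hc3])
print(combined)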
https://huggingface.co/Intel/Llama-2-7b-hf-onnx-int4
Llama-2-7b-hf-onnx-int4
Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository of the INT4 weight-only quantization of the 7B pretrained model in ONNX format.
Note: Use of this model is governed by the Meta license. Please ensure you have accepted that license and been granted access to the FP32 model before downloading models here.
This INT4 model is generated with Intel® Neural Compressor's weight-only quantization method.
Intended Use
Primary intended uses: You can use the raw model for text generation inference.
Primary intended users: Anyone doing text generation inference.
Out-of-scope uses: This model in most cases will need to be fine-tuned for your particular task. The model should not be used to intentionally create hostile or alienating environments for people.
Export to ONNX Model
The FP32 model is exported with meta-llama/Llama-2-7b-hf:
optimum-cli export onnx --model meta-llama/Llama-2-7b-hf --task text-generation ./llama2_7b
Build ONNX Runtime
Build ONNX Runtime from source to support the MatMulWithQuantWeight op. You can refer to build-onnx-runtime-for-inferencing for more prerequisites.
git clone -b sub_byte_quant_zp https://github.com/microsoft/onnxruntime.git
cd onnxruntime
./build.sh --config RelWithDebInfo --build_shared_lib --parallel --compile_no_warning_as_error --skip_submodule_sync --skip_tests --build_wheel
Run Quantization
The weight-only quantization configuration is as below: dtype INT4, group_size 32, scheme asym, algorithm GPTQ.
Run INT4 weight-only quantization with Intel® Neural Compressor. We provide the key code below. For the complete quantization script, please refer to the llama weight-only example.
from neural_compressor import quantization, PostTrainingQuantConfig

config = PostTrainingQuantConfig(
    approach="weight_only",
    calibration_sampling_size=[8],
    op_type_dict={".*": {"weight": {"bits": 4, "algorithm": ["GPTQ"], "scheme": ["asym"], "group_size": 32}}},
)
q_model = quantization.fit(
    "/path/to/llama2_7b/decoder_model.onnx",  # FP32 model path
    config,
    calib_dataloader=dataloader,
)
q_model.save("/path/to/Llama-2-7b-hf-onnx-int4/decoder_model.onnx")  # INT4 model path
Evaluation
Operator Statistics
Below are the operator statistics in the INT4 ONNX model: of the 257 MatMul operators in total, 161 use INT4 weights and 96 remain FP32.
Evaluation of perplexity
Evaluate the model with the evaluation API of Intel® Extension for Transformers on the lambada_openai task.
from intel_extension_for_transformers.evaluation.lm_eval import evaluate

model_path = "/path/to/Llama-2-7b-hf-onnx-int4"
tokenizer = "Intel/Llama-2-7b-hf-onnx-int4"
batch_size = 64
tasks = ["lambada_openai"]
results = evaluate(
    model="hf-causal",
    model_args="pretrained=" + model_path + ",tokenizer=" + tokenizer,
    batch_size=batch_size,
    tasks=tasks,
    model_format="onnx",
)
Results: FP32 model, size 26 GB, lambada_openai accuracy 0.7392, perplexity 3.3950; INT4 model, size 11 GB, lambada_openai accuracy 0.7343, perplexity 3.4832.