modelId (string, 4-81 chars) | tags (sequence) | pipeline_tag (string, 17 classes) | config (dict) | downloads (int64, 0-59.7M) | first_commit (unknown) | card (string, 51-438k chars) |
---|---|---|---|---|---|---|
BigSalmon/GPTNeo350MInformalToFormalLincoln6 | [
"pytorch",
"gpt_neo",
"text-generation",
"transformers",
"has_space"
] | text-generation | {
"architectures": [
"GPTNeoForCausalLM"
],
"model_type": "gpt_neo",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 14 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: Ita2SqlModel
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Ita2SqlModel
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 11
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
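For illustration, the hyperparameters listed above correspond roughly to the following `Seq2SeqTrainingArguments` sketch; the output directory is a placeholder and no dataset or model wiring is shown, since neither is documented in this card.
```python
from transformers import Seq2SeqTrainingArguments

# Sketch only: values below are copied from the list above; output_dir is a placeholder.
training_args = Seq2SeqTrainingArguments(
    output_dir="Ita2SqlModel",
    learning_rate=2e-05,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    num_train_epochs=11,
    lr_scheduler_type="linear",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
)
```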
|
BigSalmon/InformalToFormalLincoln22 | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 6 | "2023-05-21T09:21:11Z" | ---
datasets:
- fka/awesome-chatgpt-prompts
tags:
- art
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
BigSalmon/MrLincoln125MNeo | [
"pytorch",
"tensorboard",
"gpt_neo",
"text-generation",
"transformers"
] | text-generation | {
"architectures": [
"GPTNeoForCausalLM"
],
"model_type": "gpt_neo",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 12 | null | ---
license: mit
language:
- ru
pipeline_tag: text2text-generation
widget:
- text: '<SC1>- Как тебя зовут?\n- Даша\n- А меня Денис\n- <extra_id_0>'
---
# Den4ikAI/FRED-T5-XL-chitchat
A chitchat model based on FRED-T5-XL. The model's context window covers 6-8 dialogue turns.
# Usage example
```python
import torch
import transformers

# Checkpoint of this repository
model_dir = 'Den4ikAI/FRED-T5-XL-chitchat'

use_cuda = torch.cuda.is_available()
device = torch.device("cuda" if use_cuda else "cpu")

t5_tokenizer = transformers.GPT2Tokenizer.from_pretrained(model_dir)
t5_model = transformers.T5ForConditionalGeneration.from_pretrained(model_dir).to(device)

while True:
    print('-' * 80)
    dialog = []
    while True:
        msg = input('H:> ').strip()
        if len(msg) == 0:
            break
        dialog.append('- ' + msg)
        dialog.append('- <extra_id_0>')

        input_ids = t5_tokenizer('<SC1>' + '\n'.join(dialog), return_tensors='pt').input_ids.to(device)
        out_ids = t5_model.generate(input_ids=input_ids,
                                    max_length=200,
                                    eos_token_id=t5_tokenizer.eos_token_id,
                                    early_stopping=True,
                                    do_sample=True,
                                    temperature=1.0,
                                    top_k=0,
                                    top_p=0.85)
        # Drop the '<extra_id_0>' placeholder before appending the bot reply
        dialog.pop(-1)
        t5_output = t5_tokenizer.decode(out_ids[0][1:]).replace('<extra_id_0>', '')
        if '</s>' in t5_output:
            t5_output = t5_output[:t5_output.find('</s>')].strip()
        print('B:> {}'.format(t5_output))
        dialog.append('- ' + t5_output)
```
# Citation
```
@MISC{Den4ikAI/FRED-T5-XL-chitchat,
author = {Denis Petrov},
title = {Russian chitchat model},
url = {https://huggingface.co/Den4ikAI/FRED-T5-XL-chitchat},
year = 2023
}
```
|
BigSalmon/prepositions | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible",
"has_space"
] | fill-mask | {
"architectures": [
"RobertaForMaskedLM"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | "2023-05-21T10:16:46Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vit-base-patch16-224-v28
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-v28
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0515
- Accuracy: 0.9822
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.153 | 1.0 | 190 | 0.1216 | 0.96 |
| 0.086 | 2.0 | 380 | 0.0668 | 0.9770 |
| 0.0298 | 3.0 | 570 | 0.0515 | 0.9822 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Tokenizers 0.13.3
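As a hedged example (the repository id below is a placeholder, since the card does not state where this checkpoint is hosted), inference with a fine-tuned ViT classifier of this kind typically looks like:
```python
from transformers import pipeline

# Placeholder repository id -- substitute the actual location of this checkpoint.
classifier = pipeline("image-classification", model="<your-namespace>/vit-base-patch16-224-v28")

# Accepts a local path, a URL, or a PIL.Image; returns a list of {"label", "score"} dicts.
predictions = classifier("example.jpg")
print(predictions)
```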
|
BinksSachary/DialoGPT-small-shaxx | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 12 | null | This is the GPTQ 4-bit quantization of this model: https://huggingface.co/openaccess-ai-collective/manticore-13b
The quantization was made with this repository: https://github.com/qwopqwop200/GPTQ-for-LLaMa/tree/triton
I used the triton branch with all available GPTQ options enabled (true_sequential + act_order + groupsize 128):
CUDA_VISIBLE_DEVICES=0 python llama.py ./Manticore-13b-GPTQ-Triton c4 --wbits 4 --true-sequential --act-order --groupsize 128 --save_safetensors manticore-13b-4bit-128g.safetensors |
Bloodwarrior/Chikfalay | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | "2023-05-21T10:44:32Z" |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - Smoden/newest_wizard_of_oz_diff_lora
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the None dataset. You can find some example images below.
|
BritishLibraryLabs/bl-books-genre | [
"pytorch",
"distilbert",
"text-classification",
"multilingual",
"dataset:blbooksgenre",
"transformers",
"genre",
"books",
"library",
"historic",
"glam ",
"lam",
"license:mit",
"has_space"
] | text-classification | {
"architectures": [
"DistilBertForSequenceClassification"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 76 | "2023-05-21T10:59:40Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: paraphrase-multilingual-MiniLM-L12-v2-fuman-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# paraphrase-multilingual-MiniLM-L12-v2-fuman-model
This model is a fine-tuned version of [sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 6.4184
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 494 | 7.3217 |
| 8.6564 | 2.0 | 988 | 6.5916 |
| 6.9071 | 3.0 | 1482 | 6.4349 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Broadus20/DialoGPT-small-harrypotter | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 9 | null | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 206.42 +/- 46.80
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
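Until the author fills in the usage code, a minimal sketch could look like the following; the repository id and checkpoint filename are assumptions, as they are not given in the card.
```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Assumed repo id and filename -- replace with the actual values for this model.
checkpoint = load_from_hub(repo_id="<your-namespace>/ppo-LunarLander-v2",
                           filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```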
|
Brona/poc_de | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: Middelz2/roberta-large-aphasia-narration
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Middelz2/roberta-large-aphasia-narration
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3771
- Validation Loss: 0.3237
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.8033 | 0.5454 | 0 |
| 0.5682 | 0.4461 | 1 |
| 0.4581 | 0.3754 | 2 |
| 0.4143 | 0.3457 | 3 |
| 0.3771 | 0.3237 | 4 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Bubb-les/DisloGPT-medium-HarryPotter | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: finetune_req
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetune_req
This model is a fine-tuned version of [deprem-ml/deprem_bert_128k](https://huggingface.co/deprem-ml/deprem_bert_128k) on a private dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1891
- Accuracy: 0.875
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
CAMeL-Lab/bert-base-arabic-camelbert-ca-pos-egy | [
"pytorch",
"tf",
"bert",
"token-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | token-classification | {
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 16,451 | "2023-05-21T11:36:40Z" | ---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- accuracy
model-index:
- name: Type_of_relation
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Type_of_relation
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0885
- Macro F1: 0.7537
- Precision: 0.7463
- Recall: 0.7783
- Kappa: 0.6636
- Accuracy: 0.7783
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 128
- seed: 25
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Macro F1 | Precision | Recall | Kappa | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 101 | 1.2153 | 0.5786 | 0.5030 | 0.6835 | 0.4719 | 0.6835 |
| No log | 2.0 | 203 | 1.0583 | 0.6615 | 0.6707 | 0.7365 | 0.5699 | 0.7365 |
| No log | 3.0 | 304 | 0.9495 | 0.6925 | 0.6934 | 0.7525 | 0.6069 | 0.7525 |
| No log | 4.0 | 406 | 0.8934 | 0.7325 | 0.7283 | 0.7635 | 0.6400 | 0.7635 |
| 0.976 | 5.0 | 507 | 0.9247 | 0.7219 | 0.7166 | 0.7660 | 0.6352 | 0.7660 |
| 0.976 | 6.0 | 609 | 0.8751 | 0.7502 | 0.7422 | 0.7685 | 0.6594 | 0.7685 |
| 0.976 | 7.0 | 710 | 0.9145 | 0.7510 | 0.7395 | 0.7783 | 0.6640 | 0.7783 |
| 0.976 | 8.0 | 812 | 0.9934 | 0.7479 | 0.7423 | 0.7808 | 0.6609 | 0.7808 |
| 0.976 | 9.0 | 913 | 0.9641 | 0.7506 | 0.7425 | 0.7734 | 0.6594 | 0.7734 |
| 0.3286 | 10.0 | 1015 | 0.9702 | 0.7560 | 0.7587 | 0.7746 | 0.6641 | 0.7746 |
| 0.3286 | 11.0 | 1116 | 1.0610 | 0.7430 | 0.7370 | 0.7746 | 0.6530 | 0.7746 |
| 0.3286 | 12.0 | 1218 | 1.0251 | 0.7537 | 0.7442 | 0.7722 | 0.6611 | 0.7722 |
| 0.3286 | 13.0 | 1319 | 1.0703 | 0.7511 | 0.7433 | 0.7771 | 0.6615 | 0.7771 |
| 0.3286 | 14.0 | 1421 | 1.0767 | 0.7534 | 0.7451 | 0.7771 | 0.6631 | 0.7771 |
| 0.1456 | 14.93 | 1515 | 1.0885 | 0.7537 | 0.7463 | 0.7783 | 0.6636 | 0.7783 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Tokenizers 0.13.3
|
CAMeL-Lab/bert-base-arabic-camelbert-ca-pos-msa | [
"pytorch",
"tf",
"bert",
"token-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | token-classification | {
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 71 | "2023-05-21T11:38:29Z" | ---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: Vas123/codeparrot-ds
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Vas123/codeparrot-ds
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 10.3192
- Validation Loss: 9.3837
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 5e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': -945, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 10.3192 | 9.3837 | 0 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
CAMeL-Lab/bert-base-arabic-camelbert-da-poetry | [
"pytorch",
"tf",
"bert",
"text-classification",
"ar",
"arxiv:1905.05700",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 37 | "2023-05-21T11:45:08Z" | ---
language:
- multilingual
- af
- am
- ar
- as
- az
- be
- bg
- bn
- br
- bs
- ca
- cs
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- fy
- ga
- gd
- gl
- gu
- ha
- he
- hi
- hr
- hu
- hy
- id
- is
- it
- ja
- jv
- ka
- kk
- km
- kn
- ko
- ku
- ky
- la
- lo
- lt
- lv
- mg
- mk
- ml
- mn
- mr
- ms
- my
- ne
- nl
- 'no'
- om
- or
- pa
- pl
- ps
- pt
- ro
- ru
- sa
- sd
- si
- sk
- sl
- so
- sq
- sr
- su
- sv
- sw
- ta
- te
- th
- tl
- tr
- ug
- uk
- ur
- uz
- vi
- xh
- yi
- zh
license: mit
pipeline_tag: feature-extraction
---
[xlm-roberta-base](https://huggingface.co/xlm-roberta-base) fine-tuned for sentence embeddings with [SimCSE](http://dx.doi.org/10.18653/v1/2021.emnlp-main.552) (Gao et al., EMNLP 2021).
See a similar English model released by Gao et al.: https://huggingface.co/princeton-nlp/unsup-simcse-roberta-base.
Fine-tuning was done using the [reference implementation of unsupervised SimCSE](https://github.com/princeton-nlp/SimCSE) and the 1M sentences from English Wikipedia released by the authors.
As a sentence representation, we used the average of the last hidden states (`pooler_type=avg`), which is compatible with Sentence-BERT.
Fine-tuning command:
```bash
python train.py \
--model_name_or_path xlm-roberta-base \
--train_file data/wiki1m_for_simcse.txt \
--output_dir unsup-simcse-xlm-roberta-base \
--num_train_epochs 1 \
--per_device_train_batch_size 32 \
--gradient_accumulation_steps 16 \
--learning_rate 1e-5 \
--max_seq_length 128 \
--pooler_type avg \
--overwrite_output_dir \
--temp 0.05 \
--do_train \
--fp16 \
--seed 28852
```
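Since the sentence representation is the average of the last hidden states (see above), embeddings can be computed with plain mean pooling; the snippet below is a sketch under that assumption, with a placeholder checkpoint id.
```python
import torch
from transformers import AutoModel, AutoTokenizer

# Placeholder checkpoint id -- replace with the actual repository of this model.
model_name = "unsup-simcse-xlm-roberta-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

sentences = ["A man is playing a guitar.", "Ein Mann spielt Gitarre."]
batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    last_hidden = model(**batch).last_hidden_state  # (batch, seq_len, hidden)

# Average the last hidden states over non-padding tokens (pooler_type=avg).
mask = batch["attention_mask"].unsqueeze(-1)
embeddings = (last_hidden * mask).sum(dim=1) / mask.sum(dim=1)

print(torch.nn.functional.cosine_similarity(embeddings[0], embeddings[1], dim=0).item())
```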
## Citation
```bibtex
@article{vamvas-sennrich-2023-rsd,
title={Towards Unsupervised Recognition of Semantic Differences in Related Documents},
author={Jannis Vamvas and Rico Sennrich},
year={2023},
eprint={2305.13303},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
CAMeL-Lab/bert-base-arabic-camelbert-da-pos-glf | [
"pytorch",
"tf",
"bert",
"token-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | token-classification | {
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 54 | null | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -1.84 +/- 0.98
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
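Until the author adds the usage code, a minimal sketch could look like the following; the repository id and filename are assumptions, and `panda_gym` is imported only to register the PandaReachDense-v2 environment.
```python
import gym
import panda_gym  # noqa: F401  (registers the PandaReachDense-v2 environment)
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C
from stable_baselines3.common.evaluation import evaluate_policy

# Assumed repo id and filename -- replace with the actual values for this model.
checkpoint = load_from_hub(repo_id="<your-namespace>/a2c-PandaReachDense-v2",
                           filename="a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)

env = gym.make("PandaReachDense-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```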
|
CAMeL-Lab/bert-base-arabic-camelbert-mix-sentiment | [
"pytorch",
"tf",
"bert",
"text-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 855 | "2023-05-21T12:19:10Z" | ---
license: apache-2.0
datasets:
- bigcode/ta-prompt
language:
- id
metrics:
- character
library_name: diffusers
pipeline_tag: text-to-image
tags:
- art
---
|
CAMeL-Lab/bert-base-arabic-camelbert-msa-did-nadi | [
"pytorch",
"tf",
"bert",
"text-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 71 | null | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -144.42 +/- 58.82
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
CAMeL-Lab/bert-base-arabic-camelbert-msa-poetry | [
"pytorch",
"tf",
"bert",
"text-classification",
"ar",
"arxiv:1905.05700",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 25 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- qg_squad
metrics:
- rouge
model-index:
- name: flan-t5-base-SQuAD-QG
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: qg_squad
type: qg_squad
config: qg_squad
split: validation
args: qg_squad
metrics:
- name: Rouge1
type: rouge
value: 52.5686
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-base-SQuAD-QG
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on the qg_squad dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5526
- Rouge1: 52.5686
- Rouge2: 29.8958
- Rougel: 48.6057
- Rougelsum: 48.6025
- Meteor: 47.5477
- Bleu-n: 21.1573
- Bleu-1: 53.1650
- Bleu-2: 27.0832
- Bleu-3: 17.3550
- Bleu-4: 11.6241
- Gen Len: 14.2079
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Meteor | Bleu-n | Bleu-1 | Bleu-2 | Bleu-3 | Bleu-4 | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|
| 0.6194 | 1.0 | 2366 | 0.5616 | 51.4072 | 28.7617 | 47.4858 | 47.496 | 46.1739 | 20.4191 | 52.6343 | 26.3347 | 16.8839 | 11.3072 | 14.0406 |
| 0.5727 | 2.0 | 4733 | 0.5540 | 52.0791 | 29.3729 | 48.1664 | 48.1802 | 46.8836 | 20.7120 | 53.2071 | 26.9025 | 17.1791 | 11.4335 | 14.0766 |
| 0.55 | 3.0 | 7099 | 0.5521 | 52.2372 | 29.5935 | 48.2874 | 48.2968 | 47.1455 | 20.9160 | 52.9894 | 26.8627 | 17.2081 | 11.5033 | 14.1710 |
| 0.5293 | 4.0 | 9466 | 0.5523 | 52.5425 | 29.8483 | 48.558 | 48.5627 | 47.4331 | 21.0784 | 53.3308 | 27.1435 | 17.3826 | 11.6385 | 14.1482 |
| 0.5225 | 5.0 | 11830 | 0.5526 | 52.5686 | 29.8958 | 48.6057 | 48.6025 | 47.5477 | 21.1573 | 53.1650 | 27.0832 | 17.3550 | 11.6241 | 14.2079 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
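As a hedged illustration, question generation with this checkpoint can be driven through the `text2text-generation` pipeline; the repository id is a placeholder and the prompt template is an assumption, since the card does not document how answers and contexts were formatted during fine-tuning on qg_squad.
```python
from transformers import pipeline

# Placeholder repository id; the "answer: ... context: ..." template is an assumption --
# check the qg_squad preprocessing used for fine-tuning for the exact input format.
qg = pipeline("text2text-generation", model="<your-namespace>/flan-t5-base-SQuAD-QG")

context = "The Eiffel Tower was completed in 1889 and is located in Paris."
answer = "1889"
result = qg(f"answer: {answer} context: {context}", max_length=64)
print(result[0]["generated_text"])
```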
|
CAMeL-Lab/bert-base-arabic-camelbert-msa-pos-glf | [
"pytorch",
"tf",
"bert",
"token-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | token-classification | {
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 21 | "2023-05-21T12:28:59Z" | ---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: roberta_finetuned_squad_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta_finetuned_squad_2
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
CAMeL-Lab/bert-base-arabic-camelbert-msa-quarter | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 12 | null | ---
tags:
- spacy
- token-classification
language:
- en
model-index:
- name: en_qspot_exp
results:
- task:
name: NER
type: token-classification
metrics:
- name: NER Precision
type: precision
value: 0.7443820225
- name: NER Recall
type: recall
value: 0.6625
- name: NER F Score
type: f_score
value: 0.7010582011
---
| Feature | Description |
| --- | --- |
| **Name** | `en_qspot_exp` |
| **Version** | `0.0.1` |
| **spaCy** | `>=3.5.2,<3.6.0` |
| **Default Pipeline** | `tok2vec`, `ner` |
| **Components** | `tok2vec`, `ner` |
| **Vectors** | 514157 keys, 514157 unique vectors (300 dimensions) |
| **Sources** | n/a |
| **License** | n/a |
| **Author** | [n/a]() |
### Label Scheme
<details>
<summary>View label scheme (17 labels for 1 components)</summary>
| Component | Labels |
| --- | --- |
| **`ner`** | `commodity`, `company`, `delivery_cap`, `delivery_location`, `delivery_port`, `delivery_state`, `incoterms`, `measures`, `package_type`, `pickup_cap`, `pickup_location`, `pickup_port`, `pickup_state`, `quantity`, `stackable`, `volume`, `weight` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `ENTS_F` | 70.11 |
| `ENTS_P` | 74.44 |
| `ENTS_R` | 66.25 |
| `TOK2VEC_LOSS` | 55701.24 |
| `NER_LOSS` | 84318.22 | |
CLAck/en-vi | [
"pytorch",
"marian",
"text2text-generation",
"en",
"vi",
"dataset:ALT",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | {
"architectures": [
"MarianMTModel"
],
"model_type": "marian",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | "2023-05-21T12:54:59Z" | # Vocabulary Trimmed [xlm-roberta-base](https://huggingface.co/xlm-roberta-base): `vocabtrimmer/xlm-roberta-base-trimmed-en-50000`
This model is a trimmed version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) produced by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming the vocabulary of language models to compress model size.
The following table shows a summary of the trimming process.
| | xlm-roberta-base | vocabtrimmer/xlm-roberta-base-trimmed-en-50000 |
|:---------------------------|:-------------------|:-------------------------------------------------|
| parameter_size_full | 278,295,186 | 124,495,186 |
| parameter_size_embedding | 192,001,536 | 38,401,536 |
| vocab_size | 250,002 | 50,002 |
| compression_rate_full | 100.0 | 44.73 |
| compression_rate_embedding | 100.0 | 20.0 |
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|--------------------:|----------------:|
| en | vocabtrimmer/mc4_validation | text | en | validation | 50000 | 2 | |
CLS/WubiBERT_models | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: flan-t5-base-qg-squad_v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-base-qg-squad_v2
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on the squad_v2 dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.6469
- eval_rouge1: 39.1078
- eval_rouge2: 18.2571
- eval_rougeL: 35.879
- eval_rougeLsum: 35.8905
- eval_meteor: 34.5328
- eval_bleu4: 13.7701
- eval_bleu-p1: 39.9710
- eval_bleu-p2: 15.9261
- eval_bleu-p3: 9.5710
- eval_bleu-p4: 6.2028
- eval_gen_len: 14.6978
- eval_runtime: 1044.5092
- eval_samples_per_second: 11.367
- eval_steps_per_second: 1.422
- epoch: 3.0
- step: 48870
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=2.5005e-05
- lr_scheduler_type: linear
- num_epochs: 5
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
CLTL/icf-levels-ber | [
"pytorch",
"roberta",
"text-classification",
"nl",
"transformers",
"license:mit"
] | text-classification | {
"architectures": [
"RobertaForSequenceClassification"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 33 | null | # `vocabtrimmer/xlm-roberta-base-trimmed-en-50000-tweet-sentiment-en`
This model is a fine-tuned version of [vocabtrimmer/xlm-roberta-base-trimmed-en-50000](https://huggingface.co/vocabtrimmer/xlm-roberta-base-trimmed-en-50000) on the
[cardiffnlp/tweet_sentiment_multilingual](https://huggingface.co/datasets/cardiffnlp/tweet_sentiment_multilingual) dataset (English).
The following metrics are computed on the `test` split of
[cardiffnlp/tweet_sentiment_multilingual](https://huggingface.co/datasets/cardiffnlp/tweet_sentiment_multilingual) (English).
| | eval_f1_micro | eval_recall_micro | eval_precision_micro | eval_f1_macro | eval_recall_macro | eval_precision_macro | eval_accuracy |
|---:|----------------:|--------------------:|-----------------------:|----------------:|--------------------:|-----------------------:|----------------:|
| 0 | 68.51 | 68.51 | 68.51 | 67.26 | 68.51 | 68.63 | 68.51 |
Check the result file [here](https://huggingface.co/vocabtrimmer/xlm-roberta-base-trimmed-en-50000-tweet-sentiment-en/raw/main/eval.json). |
CLTL/icf-levels-etn | [
"pytorch",
"roberta",
"text-classification",
"nl",
"transformers",
"license:mit"
] | text-classification | {
"architectures": [
"RobertaForSequenceClassification"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 31 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-small-urdu_bkk
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-urdu_bkk
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1416
- Wer: 54.4035
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.8024 | 3.35 | 800 | 0.6546 | 39.5662 |
| 0.1409 | 6.69 | 1600 | 0.8260 | 47.5698 |
| 0.0301 | 10.04 | 2400 | 0.9745 | 47.8208 |
| 0.0074 | 13.39 | 3200 | 1.0855 | 54.9558 |
| 0.0021 | 16.74 | 4000 | 1.1416 | 54.4035 |
### Framework versions
- Transformers 4.30.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
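For reference, a hedged transcription sketch (the repository id is a placeholder; any audio file readable by ffmpeg works as input):
```python
from transformers import pipeline

# Placeholder repository id -- replace with the actual location of this checkpoint.
asr = pipeline("automatic-speech-recognition", model="<your-namespace>/whisper-small-urdu_bkk")

# Long recordings can be processed in 30-second chunks.
result = asr("sample_urdu.wav", chunk_length_s=30)
print(result["text"])
```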
|
CM-CA/Cartman | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | "2023-05-21T13:22:41Z" | ---
license: mit
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
model-index:
- name: xlm-roberta-base-finetuned-marc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-marc
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1497
- Mae: 0.6986
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.2594 | 1.0 | 196 | 1.2004 | 0.7123 |
| 1.1455 | 2.0 | 392 | 1.1497 | 0.6986 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Callidior/bert2bert-base-arxiv-titlegen | [
"pytorch",
"safetensors",
"encoder-decoder",
"text2text-generation",
"en",
"dataset:arxiv_dataset",
"transformers",
"summarization",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] | summarization | {
"architectures": [
"EncoderDecoderModel"
],
"model_type": "encoder-decoder",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 145 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
model-index:
- name: my_awesome_model
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.93076
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2417
- Accuracy: 0.9308
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2318 | 1.0 | 1563 | 0.1862 | 0.9282 |
| 0.1494 | 2.0 | 3126 | 0.2417 | 0.9308 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
Cameron/BERT-eec-emotion | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 36 | "2023-05-21T13:40:50Z" | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Cameron/BERT-jigsaw-severetoxic | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 30 | null | ---
license: creativeml-openrail-m
---
https://civitai.com/models/71598/bronya-rand |
Cameron/BERT-mdgender-convai-ternary | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 38 | null | ---
license: creativeml-openrail-m
---
https://civitai.com/models/71486/r-aurora |
Cameron/BERT-rtgender-opgender-annotations | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 33 | null | ---
license: creativeml-openrail-m
---
https://civitai.com/models/71536/albedo-overlord |
Camzure/MaamiBot | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: creativeml-openrail-m
---
https://civitai.com/models/71522/rosetta-granblue-fantasy |
Canadiancaleb/DialoGPT-small-walter | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 13 | null | ---
license: creativeml-openrail-m
---
https://civitai.com/models/71990/ereshkigal-14-outfits-fate-grand-order-fgo-14 |
Canadiancaleb/jessebot | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: creativeml-openrail-m
---
https://civitai.com/models/68364/mylene-rafa-holfort-otome-gee-sekai-wa-mob-ni-kibishii-sekai-desu |
Canyonevo/DialoGPT-medium-KingHenry | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: creativeml-openrail-m
---
https://civitai.com/models/71546/uta-one-piece |
Capreolus/birch-bert-large-car_mb | [
"pytorch",
"tf",
"jax",
"bert",
"next-sentence-prediction",
"transformers"
] | null | {
"architectures": [
"BertForNextSentencePrediction"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 4 | null | ---
license: creativeml-openrail-m
---
https://civitai.com/models/45963?modelVersionId=50582 |
Capreolus/birch-bert-large-mb | [
"pytorch",
"tf",
"jax",
"bert",
"next-sentence-prediction",
"transformers"
] | null | {
"architectures": [
"BertForNextSentencePrediction"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1 | null | ---
license: creativeml-openrail-m
---
https://civitai.com/models/71672/mem-cho-or-oshi-no-ko-or-lora |
Captain-1337/CrudeBERT | [
"pytorch",
"bert",
"text-classification",
"arxiv:1908.10063",
"transformers"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 28 | null | ---
license: creativeml-openrail-m
---
https://civitai.com/models/70517/toru-asakura-idolmaster |
CarlosTron/Yo | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Step 1: Find your model_id: CarlosCreaitart/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Cat/Kitty | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | "2023-05-21T14:14:24Z" | ---
widget:
- text: "NEW YORK (TheStreet) -- Microsoft (MSFT) - Get Free Report had its price target raised to $39 from $38 by analysts at Jefferies who maintained their 'underperform' rating. In Thursday's pre-market trading session shares are advancing 1.24% to $44.79. This action comes as Microsoft said yesterday that it will eliminate up to 7,800 jobs mostly in its phone unit as it looks to restructure its phone hardware business that has been struggling, the New York Times reports."
example_title: "MSFT news (positive)"
- text: "Adobe Brings Major New Innovations to Video Tools SAN JOSE, Calif.--(BUSINESS WIRE)--Today, ahead of the 2023 NAB Show – the preeminent conference and exhibition driving the evolution of broadcast, media and entertainment – Adobe (Nasdaq:ADBE) announced industry-first innovations across its family of video applications, including AI-powered text-based video editing and automated color tone-mapping capabilities in Premiere Pro. SAN JOSE, Calif.--(BUSINESS WIRE)."
example_title: "ADBE news (neutral)"
- text: "Unilever PLC (NYSE: UL)’s stock price has gone decline by -0.61 in comparison to its previous close of 54.27, however, the company has experienced a -1.61% decrease in its stock price over the last five trading days. The Wall Street Journal reported on 10/24/22 that Dry Shampoo Recalled Due to Potential Cancer-Causing Ingredient."
example_title: "UL news (negative)"
---
# Fine-tuned DistilBERT model for stock news classification
This DistilBERT model was fine-tuned on 50,000 stock news articles using the Hugging Face adapter from Kern AI refinery. Each article consisted of the headline plus the abstract.
For the fine-tuning, a single NVIDIA K80 was used for about four hours.
DistilBERT is a smaller, faster and lighter version of BERT. It was trained by distilling BERT base and has 40% fewer parameters than bert-base-uncased.
It runs 60% faster while preserving over 95% of BERT's performance as measured on the GLUE language understanding benchmark.
DistilBERT does not have token-type embeddings or a pooler, and retains only half of the layers of Google's BERT.
## Features
- The model can handle various text classification tasks, especially when it comes to stock and finance news sentiment classification.
- The output of the model is one of the three classes "positive", "neutral" and "negative", together with the model's confidence score for that class.
- The model was fine-tuned on a custom dataset that was curated by Kern AI and labeled in our tool refinery.
- The model is currently supported by the PyTorch framework and can be easily deployed on various platforms using the HuggingFace Pipeline API.
## Usage
To use the model, you need to install the HuggingFace Transformers library:
```bash
pip install transformers
```
Then you can load the model and the tokenizer from the HuggingFace Hub:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("KernAI/stock-news-destilbert")
tokenizer = AutoTokenizer.from_pretrained("KernAI/stock-news-destilbert")
```
To classify a single sentence or a sentence pair, you can use the HuggingFace Pipeline API:
```python
from transformers import pipeline
classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)
result = classifier("This is a positive sentence.")
print(result)
# [{'label': 'POSITIVE', 'score': 0.9998656511306763}]
``` |
Cathy/reranking_model | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"RobertaForSequenceClassification"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 27 | null | ---
license: creativeml-openrail-m
tags:
- text-to-image
widget:
- text: 3d render of sksrpm
---
### sksrpm_test_run Dreambooth model trained by fogside with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the v1-5 base model
You can run your new concept via the `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts!
Sample pictures of:
3d render of sksrpm (use that in your prompt)
![3d render of sksrpm 0](https://huggingface.co/fogside/sksrpm-test-run/resolve/main/concept_images/3d%20render%20of%20sksrpm_%281%29.jpg)![3d render of sksrpm 1](https://huggingface.co/fogside/sksrpm-test-run/resolve/main/concept_images/3d%20render%20of%20sksrpm_%282%29.jpg)![3d render of sksrpm 2](https://huggingface.co/fogside/sksrpm-test-run/resolve/main/concept_images/3d%20render%20of%20sksrpm_%283%29.jpg)![3d render of sksrpm 3](https://huggingface.co/fogside/sksrpm-test-run/resolve/main/concept_images/3d%20render%20of%20sksrpm_%284%29.jpg)![3d render of sksrpm 4](https://huggingface.co/fogside/sksrpm-test-run/resolve/main/concept_images/3d%20render%20of%20sksrpm_%285%29.jpg)![3d render of sksrpm 5](https://huggingface.co/fogside/sksrpm-test-run/resolve/main/concept_images/3d%20render%20of%20sksrpm_%286%29.jpg)![3d render of sksrpm 6](https://huggingface.co/fogside/sksrpm-test-run/resolve/main/concept_images/3d%20render%20of%20sksrpm_%287%29.jpg)![3d render of sksrpm 7](https://huggingface.co/fogside/sksrpm-test-run/resolve/main/concept_images/3d%20render%20of%20sksrpm_%288%29.jpg)![3d render of sksrpm 8](https://huggingface.co/fogside/sksrpm-test-run/resolve/main/concept_images/3d%20render%20of%20sksrpm_%289%29.jpg)![3d render of sksrpm 9](https://huggingface.co/fogside/sksrpm-test-run/resolve/main/concept_images/3d%20render%20of%20sksrpm_%2810%29.jpg)![3d render of sksrpm 10](https://huggingface.co/fogside/sksrpm-test-run/resolve/main/concept_images/3d%20render%20of%20sksrpm_%2811%29.jpg)![3d render of sksrpm 11](https://huggingface.co/fogside/sksrpm-test-run/resolve/main/concept_images/3d%20render%20of%20sksrpm_%2812%29.jpg)![3d render of sksrpm 12](https://huggingface.co/fogside/sksrpm-test-run/resolve/main/concept_images/3d%20render%20of%20sksrpm_%2813%29.jpg)![3d render of sksrpm 13](https://huggingface.co/fogside/sksrpm-test-run/resolve/main/concept_images/3d%20render%20of%20sksrpm_%2814%29.jpg)![3d render of sksrpm 14](https://huggingface.co/fogside/sksrpm-test-run/resolve/main/concept_images/3d%20render%20of%20sksrpm_%2815%29.jpg)![3d render of sksrpm 15](https://huggingface.co/fogside/sksrpm-test-run/resolve/main/concept_images/3d%20render%20of%20sksrpm_%2816%29.jpg)![3d render of sksrpm 16](https://huggingface.co/fogside/sksrpm-test-run/resolve/main/concept_images/3d%20render%20of%20sksrpm_%2817%29.jpg)![3d render of sksrpm 17](https://huggingface.co/fogside/sksrpm-test-run/resolve/main/concept_images/3d%20render%20of%20sksrpm_%2818%29.jpg)![3d render of sksrpm 18](https://huggingface.co/fogside/sksrpm-test-run/resolve/main/concept_images/3d%20render%20of%20sksrpm_%2819%29.jpg)![3d render of sksrpm 19](https://huggingface.co/fogside/sksrpm-test-run/resolve/main/concept_images/3d%20render%20of%20sksrpm_%2820%29.jpg)![3d render of sksrpm 20](https://huggingface.co/fogside/sksrpm-test-run/resolve/main/concept_images/3d%20render%20of%20sksrpm_%2821%29.jpg)![3d render of sksrpm 21](https://huggingface.co/fogside/sksrpm-test-run/resolve/main/concept_images/3d%20render%20of%20sksrpm_%2822%29.jpg)![3d render of sksrpm 22](https://huggingface.co/fogside/sksrpm-test-run/resolve/main/concept_images/3d%20render%20of%20sksrpm_%2823%29.jpg)![3d render of sksrpm 23](https://huggingface.co/fogside/sksrpm-test-run/resolve/main/concept_images/3d%20render%20of%20sksrpm_%2824%29.jpg)![3d render of sksrpm 24](https://huggingface.co/fogside/sksrpm-test-run/resolve/main/concept_images/3d%20render%20of%20sksrpm_%2825%29.jpg)![3d render of sksrpm 
25](https://huggingface.co/fogside/sksrpm-test-run/resolve/main/concept_images/3d%20render%20of%20sksrpm_%2826%29.jpg)![3d render of sksrpm 26](https://huggingface.co/fogside/sksrpm-test-run/resolve/main/concept_images/3d%20render%20of%20sksrpm_%2827%29.jpg)![3d render of sksrpm 27](https://huggingface.co/fogside/sksrpm-test-run/resolve/main/concept_images/3d%20render%20of%20sksrpm_%2828%29.jpg)![3d render of sksrpm 28](https://huggingface.co/fogside/sksrpm-test-run/resolve/main/concept_images/3d%20render%20of%20sksrpm_%2829%29.jpg)![3d render of sksrpm 29](https://huggingface.co/fogside/sksrpm-test-run/resolve/main/concept_images/3d%20render%20of%20sksrpm_%2830%29.jpg)![3d render of sksrpm 30](https://huggingface.co/fogside/sksrpm-test-run/resolve/main/concept_images/3d%20render%20of%20sksrpm_%2831%29.jpg)![3d render of sksrpm 31](https://huggingface.co/fogside/sksrpm-test-run/resolve/main/concept_images/3d%20render%20of%20sksrpm_%2832%29.jpg)![3d render of sksrpm 32](https://huggingface.co/fogside/sksrpm-test-run/resolve/main/concept_images/3d%20render%20of%20sksrpm_%2833%29.jpg)![3d render of sksrpm 33](https://huggingface.co/fogside/sksrpm-test-run/resolve/main/concept_images/3d%20render%20of%20sksrpm_%2834%29.jpg)![3d render of sksrpm 34](https://huggingface.co/fogside/sksrpm-test-run/resolve/main/concept_images/3d%20render%20of%20sksrpm_%2835%29.jpg)![3d render of sksrpm 35](https://huggingface.co/fogside/sksrpm-test-run/resolve/main/concept_images/3d%20render%20of%20sksrpm_%2836%29.jpg)![3d render of sksrpm 36](https://huggingface.co/fogside/sksrpm-test-run/resolve/main/concept_images/3d%20render%20of%20sksrpm_%2837%29.jpg)![3d render of sksrpm 37](https://huggingface.co/fogside/sksrpm-test-run/resolve/main/concept_images/3d%20render%20of%20sksrpm_%2838%29.jpg)![3d render of sksrpm 38](https://huggingface.co/fogside/sksrpm-test-run/resolve/main/concept_images/3d%20render%20of%20sksrpm_%2839%29.jpg)![3d render of sksrpm 39](https://huggingface.co/fogside/sksrpm-test-run/resolve/main/concept_images/3d%20render%20of%20sksrpm_%2840%29.jpg)![3d render of sksrpm 40](https://huggingface.co/fogside/sksrpm-test-run/resolve/main/concept_images/3d%20render%20of%20sksrpm_%2841%29.jpg)![3d render of sksrpm 41](https://huggingface.co/fogside/sksrpm-test-run/resolve/main/concept_images/3d%20render%20of%20sksrpm_%2842%29.jpg)![3d render of sksrpm 42](https://huggingface.co/fogside/sksrpm-test-run/resolve/main/concept_images/3d%20render%20of%20sksrpm_%2843%29.jpg)![3d render of sksrpm 43](https://huggingface.co/fogside/sksrpm-test-run/resolve/main/concept_images/3d%20render%20of%20sksrpm_%2844%29.jpg)![3d render of sksrpm 44](https://huggingface.co/fogside/sksrpm-test-run/resolve/main/concept_images/3d%20render%20of%20sksrpm_%2845%29.jpg)![3d render of sksrpm 45](https://huggingface.co/fogside/sksrpm-test-run/resolve/main/concept_images/3d%20render%20of%20sksrpm_%2846%29.jpg)![3d render of sksrpm 46](https://huggingface.co/fogside/sksrpm-test-run/resolve/main/concept_images/3d%20render%20of%20sksrpm_%2847%29.jpg)![3d render of sksrpm 47](https://huggingface.co/fogside/sksrpm-test-run/resolve/main/concept_images/3d%20render%20of%20sksrpm_%2848%29.jpg)![3d render of sksrpm 48](https://huggingface.co/fogside/sksrpm-test-run/resolve/main/concept_images/3d%20render%20of%20sksrpm_%2849%29.jpg)![3d render of sksrpm 49](https://huggingface.co/fogside/sksrpm-test-run/resolve/main/concept_images/3d%20render%20of%20sksrpm_%2850%29.jpg)![3d render of sksrpm 
50](https://huggingface.co/fogside/sksrpm-test-run/resolve/main/concept_images/3d%20render%20of%20sksrpm_%2851%29.jpg)![3d render of sksrpm 51](https://huggingface.co/fogside/sksrpm-test-run/resolve/main/concept_images/3d%20render%20of%20sksrpm_%2852%29.jpg)![3d render of sksrpm 52](https://huggingface.co/fogside/sksrpm-test-run/resolve/main/concept_images/3d%20render%20of%20sksrpm_%2853%29.jpg)![3d render of sksrpm 53](https://huggingface.co/fogside/sksrpm-test-run/resolve/main/concept_images/3d%20render%20of%20sksrpm_%2854%29.jpg)![3d render of sksrpm 54](https://huggingface.co/fogside/sksrpm-test-run/resolve/main/concept_images/3d%20render%20of%20sksrpm_%2855%29.jpg)![3d render of sksrpm 55](https://huggingface.co/fogside/sksrpm-test-run/resolve/main/concept_images/3d%20render%20of%20sksrpm_%2856%29.jpg)![3d render of sksrpm 56](https://huggingface.co/fogside/sksrpm-test-run/resolve/main/concept_images/3d%20render%20of%20sksrpm_%2857%29.jpg)![3d render of sksrpm 57](https://huggingface.co/fogside/sksrpm-test-run/resolve/main/concept_images/3d%20render%20of%20sksrpm_%2858%29.jpg)![3d render of sksrpm 58](https://huggingface.co/fogside/sksrpm-test-run/resolve/main/concept_images/3d%20render%20of%20sksrpm_%2859%29.jpg)![3d render of sksrpm 59](https://huggingface.co/fogside/sksrpm-test-run/resolve/main/concept_images/3d%20render%20of%20sksrpm_%2860%29.jpg)
|
dccuchile/albert-tiny-spanish-finetuned-mldoc | [
"pytorch",
"albert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"AlbertForSequenceClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 32 | null | ---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Step 1: Find your model_id: jokyere49/ppo-SnowballTarget-UnityMLAgent
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
dccuchile/albert-xlarge-spanish-finetuned-ner | [
"pytorch",
"albert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | {
"architectures": [
"AlbertForTokenClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | null | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 764.00 +/- 346.36
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga MOMMYFKER122 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga MOMMYFKER122 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga MOMMYFKER122
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
dccuchile/distilbert-base-spanish-uncased | [
"pytorch",
"distilbert",
"fill-mask",
"es",
"dataset:large_spanish_corpus",
"transformers",
"spanish",
"OpenCENIA",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"DistilBertForMaskedLM"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 670 | null | ---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
Chakita/Kalbert | [
"pytorch",
"tensorboard",
"albert",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:mit",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"AlbertForMaskedLM"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | null | ---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
Cheatham/xlm-roberta-base-finetuned | [
"pytorch",
"xlm-roberta",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"XLMRobertaForSequenceClassification"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 20 | null | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: 4BAI_simple_classifier
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# 4BAI_simple_classifier
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1908, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.28.0
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Cheatham/xlm-roberta-large-finetuned-d12 | [
"pytorch",
"xlm-roberta",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"XLMRobertaForSequenceClassification"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 20 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: canine_vowelizer_2105_v6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# canine_vowelizer_2105_v6
This model is a fine-tuned version of [google/canine-s](https://huggingface.co/google/canine-s) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1704
- Precision: 0.9998
- Recall: 0.9998
- F1: 0.9998
- Accuracy: 0.9391
## Model description
More information needed
## Intended uses & limitations
More information needed
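Purely as a hedged illustration: the snippet below loads the checkpoint as a standard CANINE token-classification model under a placeholder id and prints one predicted label per character. The example string assumes a Hebrew-style vowelization task, and the actual language and label set of this vowelizer are not documented in this card:
```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

# Placeholder id -- replace with the actual location of canine_vowelizer_2105_v6.
ckpt = "your-username/canine_vowelizer_2105_v6"
tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = AutoModelForTokenClassification.from_pretrained(ckpt)

text = "שלום עולם"  # CANINE tokenizes at the character level, so no word-piece handling is needed
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predictions = logits.argmax(dim=-1)[0]
# Position 0 is the [CLS] character, so the input characters line up with predictions[1:].
for char, label_id in zip(text, predictions[1:]):
    print(char, model.config.id2label[int(label_id)])
```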
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.4826 | 1.0 | 3885 | 0.4310 | 0.9997 | 0.9998 | 0.9997 | 0.8467 |
| 0.4118 | 2.0 | 7770 | 0.3556 | 0.9997 | 0.9998 | 0.9997 | 0.8748 |
| 0.369 | 3.0 | 11655 | 0.3126 | 0.9997 | 0.9998 | 0.9997 | 0.8893 |
| 0.339 | 4.0 | 15540 | 0.2811 | 0.9997 | 0.9998 | 0.9998 | 0.9014 |
| 0.3192 | 5.0 | 19425 | 0.2589 | 0.9997 | 0.9998 | 0.9998 | 0.9095 |
| 0.3052 | 6.0 | 23310 | 0.2399 | 0.9997 | 0.9998 | 0.9998 | 0.9157 |
| 0.281 | 7.0 | 27195 | 0.2252 | 0.9997 | 0.9998 | 0.9998 | 0.9207 |
| 0.2749 | 8.0 | 31080 | 0.2117 | 0.9998 | 0.9998 | 0.9998 | 0.9248 |
| 0.2589 | 9.0 | 34965 | 0.2011 | 0.9998 | 0.9998 | 0.9998 | 0.9285 |
| 0.253 | 10.0 | 38850 | 0.1940 | 0.9998 | 0.9998 | 0.9998 | 0.9314 |
| 0.2428 | 11.0 | 42735 | 0.1842 | 0.9998 | 0.9998 | 0.9998 | 0.9348 |
| 0.2433 | 12.0 | 46620 | 0.1783 | 0.9998 | 0.9998 | 0.9998 | 0.9365 |
| 0.2265 | 13.0 | 50505 | 0.1751 | 0.9998 | 0.9998 | 0.9998 | 0.9375 |
| 0.2244 | 14.0 | 54390 | 0.1721 | 0.9998 | 0.9998 | 0.9998 | 0.9387 |
| 0.2203 | 15.0 | 58275 | 0.1704 | 0.9998 | 0.9998 | 0.9998 | 0.9391 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Cheatham/xlm-roberta-large-finetuned-d1r01 | [
"pytorch",
"xlm-roberta",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"XLMRobertaForSequenceClassification"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 21 | null | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: distilbert_classifier_newsgroups
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# distilbert_classifier_newsgroups
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
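Since the model was trained with Keras, a tentative TensorFlow inference sketch might look like the following — the repository id is a placeholder and the newsgroup label names are whatever mapping was saved at training time, not something stated in this card:
```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

# Placeholder id -- point this at the actual repository or local directory of the classifier.
ckpt = "your-username/distilbert_classifier_newsgroups"
tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = TFAutoModelForSequenceClassification.from_pretrained(ckpt)

inputs = tokenizer(
    "The new GPU drivers finally fixed my OpenGL rendering crashes.",
    return_tensors="tf",
    truncation=True,
)
logits = model(**inputs).logits
pred = int(tf.argmax(logits, axis=-1)[0])
print(model.config.id2label[pred])  # e.g. one of the newsgroup categories saved in the config
```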
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1908, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.28.0
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
CheonggyeMountain-Sherpa/kogpt-trinity-poem | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 15 | null |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: sks dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - alejandrojimenez/path-to-save-model
This is a dreambooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on sks dog using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
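A minimal inference sketch with `diffusers` is given below (hedged: it loads the repository named above, reuses the instance prompt `sks dog`, and assumes a CUDA GPU; prompt, step count and guidance scale are just reasonable defaults):
```python
import torch
from diffusers import StableDiffusionPipeline

# The repository id comes from this card; the prompt reuses the instance prompt "sks dog".
pipe = StableDiffusionPipeline.from_pretrained(
    "alejandrojimenez/path-to-save-model", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

image = pipe(
    "a photo of sks dog in a bucket",
    num_inference_steps=50,
    guidance_scale=7.5,
).images[0]
image.save("sks_dog.png")
```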
|
Chiuchiyin/DialoGPT-small-Donald | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: ner-bio
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ner-bio
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5150
- Precision: 0.3828
- Recall: 0.1593
- F1: 0.2250
- Accuracy: 0.8456
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 240 | 0.5753 | 0.5 | 0.0004 | 0.0008 | 0.8341 |
| No log | 2.0 | 480 | 0.5232 | 0.4101 | 0.0862 | 0.1424 | 0.8424 |
| 0.835 | 3.0 | 720 | 0.5150 | 0.3828 | 0.1593 | 0.2250 | 0.8456 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cpu
- Datasets 2.1.0
- Tokenizers 0.13.3
|
ChristianOrr/madnet_keras | [
"tensorboard",
"dataset:flyingthings-3d",
"dataset:kitti",
"arxiv:1810.05424",
"vision",
"deep-stereo",
"depth-estimation",
"Tensorflow2",
"Keras",
"license:apache-2.0"
] | depth-estimation | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
{}
---
Raven-X-1.1
---
Raven-X model v1.1 by Siliconic Technologies.
Delta model upgrade and 32-bit quantization of raven-x-001.
This is a custom model for Raven AI.
This model is a modified version of the vicuna-13b-delta and LLaMA models, trained on the OASST, ChatGPT and ShareGPT datasets as well as Wikipedia.
Created and fine-tuned by Akshit Kumar.
Raven AI System is a modified version of Visda AI System. |
Chun/w-en2zh-mtm | [
"pytorch",
"mbart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"MBartForConditionalGeneration"
],
"model_type": "mbart",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-4500-lyrics
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-4500-lyrics
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2772
- Accuracy: 0.8963
- F1: 0.8821
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Chungu424/qazwsx | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | Fine-tune a BLOOMZ-based ad generation model using peft, transformers and bitsandbytes
|
ClaudeCOULOMBE/RickBot | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 9 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: lab2_fine_tune_en_ru
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lab2_fine_tune_en_ru
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-ru](https://huggingface.co/Helsinki-NLP/opus-mt-en-ru) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
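A possible way to run the fine-tuned translator is sketched below — the repository id is a placeholder for wherever this checkpoint actually lives:
```python
from transformers import pipeline

# Placeholder id -- replace with the actual repository or a local path to this checkpoint.
translator = pipeline("translation_en_to_ru", model="your-username/lab2_fine_tune_en_ru")

print(translator("The weather in Moscow is surprisingly warm today.")[0]["translation_text"])
```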
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu117
- Datasets 2.12.0
- Tokenizers 0.13.2
|
CleveGreen/FieldClassifier_v2_gpt | [
"pytorch",
"gpt2",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"GPT2ForSequenceClassification"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 26 | null | ---
tags:
- ctranslate2
- int8
- float16
license: bsd-3-clause
---
# Fast inference with CTranslate2
Speed up inference while reducing memory by 2x-4x using int8 inference in C++ on CPU or GPU.
Quantized version of [Salesforce/codegen-350M-mono](https://huggingface.co/Salesforce/codegen-350M-mono)
```bash
pip install hf-hub-ctranslate2>=2.0.8
```
Converted on 2023-05-21 using
```
ct2-transformers-converter --model Salesforce/codegen-350M-mono --output_dir /home/michael/tmp-ct2fast-codegen-350M-mono --force --copy_files merges.txt tokenizer.json README.md tokenizer_config.json vocab.json special_tokens_map.json added_tokens.json .gitattributes --quantization float16
```
Checkpoint compatible to [ctranslate2>=3.13.0](https://github.com/OpenNMT/CTranslate2) and [hf-hub-ctranslate2>=2.0.6](https://github.com/michaelfeil/hf-hub-ctranslate2)
- `compute_type=int8_float16` for `device="cuda"`
- `compute_type=int8` for `device="cpu"`
```python
from hf_hub_ctranslate2 import TranslatorCT2fromHfHub, GeneratorCT2fromHfHub
from transformers import AutoTokenizer
model_name = "michaelfeil/ct2fast-codegen-350M-mono"
# use either TranslatorCT2fromHfHub or GeneratorCT2fromHfHub here, depending on model.
model = GeneratorCT2fromHfHub(
# load in int8 on CUDA
model_name_or_path=model_name,
device="cuda",
compute_type="int8_float16",
# tokenizer=AutoTokenizer.from_pretrained("Salesforce/codegen-350M-mono")
)
outputs = model.generate(
text=["def print_hello_world():", "def hello_name(name:"],
max_length=64
)
print(outputs)
```
# Licence and other remarks:
This is just a quantized version. Licence conditions are intended to be identical to the original Hugging Face repo.
# Original description
tags:
- ctranslate2
- int8
- float16
# CodeGen (CodeGen-Mono 350M)
## Model description
CodeGen is a family of autoregressive language models for **program synthesis** from the paper: [A Conversational Paradigm for Program Synthesis](https://arxiv.org/abs/2203.13474) by Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, Caiming Xiong. The models are originally released in [this repository](https://github.com/salesforce/CodeGen), under 3 pre-training data variants (`NL`, `Multi`, `Mono`) and 4 model size variants (`350M`, `2B`, `6B`, `16B`).
The checkpoint included in this repository is denoted as **CodeGen-Mono 350M** in the paper, where "Mono" means the model is initialized with *CodeGen-Multi 350M* and further pre-trained on a Python programming language dataset, and "350M" refers to the number of trainable parameters.
## Training data
This checkpoint (CodeGen-Mono 350M) was firstly initialized with *CodeGen-Multi 350M*, and then pre-trained on BigPython dataset. The data consists of 71.7B tokens of Python programming language. See Section 2.1 of the [paper](https://arxiv.org/abs/2203.13474) for more details.
## Training procedure
CodeGen was trained using cross-entropy loss to maximize the likelihood of sequential inputs.
The family of models are trained using multiple TPU-v4-512 by Google, leveraging data and model parallelism.
See Section 2.3 of the [paper](https://arxiv.org/abs/2203.13474) for more details.
## Evaluation results
We evaluate our models on two code generation benchmarks: HumanEval and MTPB. Please refer to the [paper](https://arxiv.org/abs/2203.13474) for more details.
## Intended Use and Limitations
As an autoregressive language model, CodeGen is capable of extracting features from given natural language and programming language texts, and calculating the likelihood of them.
However, the model is intended for and best at **program synthesis**, that is, generating executable code given English prompts, where the prompts should be in the form of a comment string. The model can complete partially-generated code as well.
## How to use
This model can be easily loaded using the `AutoModelForCausalLM` functionality:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen-350M-mono")
model = AutoModelForCausalLM.from_pretrained("Salesforce/codegen-350M-mono")
text = "def hello_world():"
input_ids = tokenizer(text, return_tensors="pt").input_ids
generated_ids = model.generate(input_ids, max_length=128)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))
```
## BibTeX entry and citation info
```bibtex
@article{Nijkamp2022ACP,
title={A Conversational Paradigm for Program Synthesis},
author={Nijkamp, Erik and Pang, Bo and Hayashi, Hiroaki and Tu, Lifu and Wang, Huan and Zhou, Yingbo and Savarese, Silvio and Xiong, Caiming},
journal={arXiv preprint},
year={2022}
}
```
|
CleveGreen/JobClassifier_v2 | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 37 | null | ---
tags:
- ctranslate2
- int8
- float16
license: bsd-3-clause
---
# Fast-Inference with Ctranslate2
Speedup inference while reducing memory by 2x-4x using int8 inference in C++ on CPU or GPU.
quantized version of [Salesforce/codegen-2B-mono](https://huggingface.co/Salesforce/codegen-2B-mono)
```bash
pip install hf-hub-ctranslate2>=2.0.8
```
Converted on 2023-05-21 using
```
ct2-transformers-converter --model Salesforce/codegen-2B-mono --output_dir /home/michael/tmp-ct2fast-codegen-2B-mono --force --copy_files merges.txt tokenizer.json README.md tokenizer_config.json vocab.json special_tokens_map.json added_tokens.json .gitattributes --quantization float16
```
Checkpoint compatible to [ctranslate2>=3.13.0](https://github.com/OpenNMT/CTranslate2) and [hf-hub-ctranslate2>=2.0.6](https://github.com/michaelfeil/hf-hub-ctranslate2)
- `compute_type=int8_float16` for `device="cuda"`
- `compute_type=int8` for `device="cpu"`
```python
from hf_hub_ctranslate2 import TranslatorCT2fromHfHub, GeneratorCT2fromHfHub
from transformers import AutoTokenizer
model_name = "michaelfeil/ct2fast-codegen-2B-mono"
# use either TranslatorCT2fromHfHub or GeneratorCT2fromHfHub here, depending on model.
model = GeneratorCT2fromHfHub(
# load in int8 on CUDA
model_name_or_path=model_name,
device="cuda",
compute_type="int8_float16",
# tokenizer=AutoTokenizer.from_pretrained("Salesforce/codegen-2B-mono")
)
outputs = model.generate(
text=["def print_hello_world():", "def hello_name(name:"],
max_length=64
)
print(outputs)
```
# Licence and other remarks:
This is just a quantized version. Licence conditions are intended to be identical to the original huggingface repo.
# Original description
# CodeGen (CodeGen-Mono 2B)
## Model description
CodeGen is a family of autoregressive language models for **program synthesis** from the paper: [A Conversational Paradigm for Program Synthesis](https://arxiv.org/abs/2203.13474) by Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, Caiming Xiong. The models are originally released in [this repository](https://github.com/salesforce/CodeGen), under 3 pre-training data variants (`NL`, `Multi`, `Mono`) and 4 model size variants (`350M`, `2B`, `6B`, `16B`).
The checkpoint included in this repository is denoted as **CodeGen-Mono 2B** in the paper, where "Mono" means the model is initialized with *CodeGen-Multi 2B* and further pre-trained on a Python programming language dataset, and "2B" refers to the number of trainable parameters.
## Training data
This checkpoint (CodeGen-Mono 2B) was first initialized with *CodeGen-Multi 2B*, and then pre-trained on the BigPython dataset. The data consists of 71.7B tokens of Python programming language. See Section 2.1 of the [paper](https://arxiv.org/abs/2203.13474) for more details.
## Training procedure
CodeGen was trained using cross-entropy loss to maximize the likelihood of sequential inputs.
The family of models is trained using multiple TPU-v4-512 by Google, leveraging data and model parallelism.
See Section 2.3 of the [paper](https://arxiv.org/abs/2203.13474) for more details.
## Evaluation results
We evaluate our models on two code generation benchmarks: HumanEval and MTPB. Please refer to the [paper](https://arxiv.org/abs/2203.13474) for more details.
## Intended Use and Limitations
As an autoregressive language model, CodeGen is capable of extracting features from given natural language and programming language texts, and calculating their likelihood.
However, the model is intended for and best at **program synthesis**, that is, generating executable code given English prompts, where the prompts should be in the form of a comment string. The model can complete partially-generated code as well.
## How to use
This model can be easily loaded using the `AutoModelForCausalLM` functionality:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen-2B-mono")
model = AutoModelForCausalLM.from_pretrained("Salesforce/codegen-2B-mono")
text = "def hello_world():"
input_ids = tokenizer(text, return_tensors="pt").input_ids
generated_ids = model.generate(input_ids, max_length=128)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))
```
## BibTeX entry and citation info
```bibtex
@article{Nijkamp2022ACP,
title={A Conversational Paradigm for Program Synthesis},
author={Nijkamp, Erik and Pang, Bo and Hayashi, Hiroaki and Tu, Lifu and Wang, Huan and Zhou, Yingbo and Savarese, Silvio and Xiong, Caiming},
journal={arXiv preprint},
year={2022}
}
```
|
CoShin/XLM-roberta-large_ko_en_nil_sts | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | "2023-05-21T18:56:35Z" | ---
tags:
- ctranslate2
- int8
- float16
license: apache-2.0
---
# Fast-Inference with Ctranslate2
Speedup inference while reducing memory by 2x-4x using int8 inference in C++ on CPU or GPU.
quantized version of [Salesforce/codegen2-1B](https://huggingface.co/Salesforce/codegen2-1B)
```bash
pip install hf-hub-ctranslate2>=2.0.8
```
Converted on 2023-05-21 using
```
ct2-transformers-converter --model Salesforce/codegen2-1B --output_dir /home/michael/tmp-ct2fast-codegen2-1B --force --copy_files merges.txt tokenizer.json README.md tokenizer_config.json vocab.json special_tokens_map.json added_tokens.json configuration_codegen.py .gitattributes --quantization float16
```
Checkpoint compatible to [ctranslate2>=3.13.0](https://github.com/OpenNMT/CTranslate2) and [hf-hub-ctranslate2>=2.0.6](https://github.com/michaelfeil/hf-hub-ctranslate2)
- `compute_type=int8_float16` for `device="cuda"`
- `compute_type=int8` for `device="cpu"`
```python
from hf_hub_ctranslate2 import TranslatorCT2fromHfHub, GeneratorCT2fromHfHub
from transformers import AutoTokenizer
model_name = "michaelfeil/ct2fast-codegen2-1B"
# use either TranslatorCT2fromHfHub or GeneratorCT2fromHfHub here, depending on model.
model = GeneratorCT2fromHfHub(
# load in int8 on CUDA
model_name_or_path=model_name,
device="cuda",
compute_type="int8_float16",
# tokenizer=AutoTokenizer.from_pretrained("Salesforce/codegen2-1B")
)
outputs = model.generate(
text=["def print_hello_world():", "def hello_name(name:"],
max_length=64
)
print(outputs)
```
# Licence and other remarks:
This is just a quantized version. Licence conditions are intended to be identical to the original huggingface repo.
# Original description
# CodeGen2 (CodeGen2-1B)
## Model description
[CodeGen2](https://github.com/salesforce/CodeGen2) is a family of autoregressive language models for **program synthesis**, introduced in the paper:
[CodeGen2: Lessons for Training LLMs on Programming and Natural Languages](https://arxiv.org/abs/2305.02309) by Erik Nijkamp\*, Hiroaki Hayashi\*, Caiming Xiong, Silvio Savarese, Yingbo Zhou.
Unlike the original CodeGen model family (i.e., CodeGen1), CodeGen2 is capable of infilling, and supports more programming languages.
Four model sizes are released: `1B`, `3.7B`, `7B`, `16B`.
## How to use
This model can be easily loaded using the `AutoModelForCausalLM` functionality.
### Causal sampling
For regular causal sampling, simply generate completions given the context:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen2-1B")
model = AutoModelForCausalLM.from_pretrained("Salesforce/codegen2-1B", trust_remote_code=True, revision="main")
text = "def hello_world():"
input_ids = tokenizer(text, return_tensors="pt").input_ids
generated_ids = model.generate(input_ids, max_length=128)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))
```
### Infill sampling
For **infill** sampling, we introduce three new special token types:
* `<mask_N>`: N-th span to be masked. In practice, use `<mask_1>` where you want to sample the infill.
* `<sep>`: Separator token between the suffix and the infilled sample. See below.
* `<eom>`: "End-Of-Mask" token that the model will output at the end of infilling. You may use this token to truncate the output.
For example, if we want to generate an infill at the following cursor position in a function:
```python
def hello_world():
|
return name
```
we construct an input to the model by:
1. Inserting a `<mask_1>` token in place of the cursor position
2. Appending a `<sep>` token to indicate the boundary
3. Inserting another `<mask_1>` token to indicate which mask we want to infill.
The final snippet looks as follows:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen2-1B")
model = AutoModelForCausalLM.from_pretrained("Salesforce/codegen2-1B", trust_remote_code=True, revision="main")
def format(prefix, suffix):
return prefix + "<mask_1>" + suffix + "<|endoftext|>" + "<sep>" + "<mask_1>"
prefix = "def hello_world():\n "
suffix = " return name"
text = format(prefix, suffix)
input_ids = tokenizer(text, return_tensors="pt").input_ids
generated_ids = model.generate(input_ids, max_length=128)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=False)[len(text):])
```
You might want to truncate the model output with `<eom>`.
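Continuing from the snippet above, one simple way to do that truncation (this assumes `<eom>` appears verbatim in the decoded string, which depends on how the tokenizer handles special tokens):
```python
# decode without skipping special tokens so <eom> survives, then cut at it
completion = tokenizer.decode(generated_ids[0], skip_special_tokens=False)[len(text):]
infill = completion.split("<eom>")[0]
print(infill)
```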
## Training data
This checkpoint is trained on the stricter permissive subset of [the deduplicated version of the Stack dataset (v1.1)](https://huggingface.co/datasets/bigcode/the-stack-dedup). Supported languages (and frameworks) are as follows:
`c`, `c++`, `c-sharp`, `dart`, `go`, `java`, `javascript`, `kotlin`, `lua`, `php`, `python`, `ruby`, `rust`, `scala`, `shell`, `sql`, `swift`, `typescript`, `vue`.
## Training procedure
CodeGen2 was trained using cross-entropy loss to maximize the likelihood of sequential inputs.
The input sequences are formatted in two ways: (1) causal language modeling and (2) file-level span corruption.
Please refer to the paper for more details.
## Evaluation results
We evaluate our models on HumanEval and HumanEval-Infill. Please refer to the [paper](https://arxiv.org/abs/2305.02309) for more details.
## Intended use and limitations
As an autoregressive language model, CodeGen2 is capable of extracting features from given natural language and programming language texts, and calculating their likelihood.
However, the model is intended for and best at **program synthesis**, that is, generating executable code given English prompts, where the prompts should be in the form of a comment string. The model can complete partially-generated code as well.
## BibTeX entry and citation info
```bibtex
@article{Nijkamp2023codegen2,
title={CodeGen2: Lessons for Training LLMs on Programming and Natural Languages},
author={Nijkamp, Erik and Hayashi, Hiroaki and Xiong, Caiming and Savarese, Silvio and Zhou, Yingbo},
journal={arXiv preprint},
year={2023}
}
```
|
CoachCarter/distilbert-base-uncased-finetuned-squad | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | "2023-05-21T18:57:16Z" | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 261 with parameters:
```
{'batch_size': 8, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 16,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 335,
"weight_decay": 0.01
}
```
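For reference, the parameters above correspond roughly to a `fit()` call like the following sketch; the training pairs, labels, and base model are placeholders rather than the actual training data:
```python
from sentence_transformers import SentenceTransformer, InputExample, losses
from torch.utils.data import DataLoader

model = SentenceTransformer("bert-base-uncased")  # placeholder base model

# placeholder sentence pairs with similarity labels in [0, 1]
train_examples = [
    InputExample(texts=["A man is eating food.", "A man is eating a meal."], label=0.9),
    InputExample(texts=["A man is eating food.", "A plane is taking off."], label=0.1),
]
train_dataloader = DataLoader(train_examples, batch_size=8)
train_loss = losses.CosineSimilarityLoss(model)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=16,
    warmup_steps=335,
    optimizer_params={"lr": 2e-05},
    weight_decay=0.01,
)
```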
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
CodeDanCode/SP-KyleBot | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 15 | null | ---
tags:
- ctranslate2
- int8
- float16
license: apache-2.0
---
# Fast-Inference with Ctranslate2
Speedup inference while reducing memory by 2x-4x using int8 inference in C++ on CPU or GPU.
quantized version of [Salesforce/codegen2-3_7B](https://huggingface.co/Salesforce/codegen2-3_7B)
```bash
pip install hf-hub-ctranslate2>=2.0.8
```
Converted on 2023-05-21 using
```
ct2-transformers-converter --model Salesforce/codegen2-3_7B --output_dir /home/michael/tmp-ct2fast-codegen2-3_7B --force --copy_files merges.txt tokenizer.json README.md tokenizer_config.json vocab.json special_tokens_map.json added_tokens.json configuration_codegen.py .gitattributes --quantization float16
```
Checkpoint compatible to [ctranslate2>=3.13.0](https://github.com/OpenNMT/CTranslate2) and [hf-hub-ctranslate2>=2.0.6](https://github.com/michaelfeil/hf-hub-ctranslate2)
- `compute_type=int8_float16` for `device="cuda"`
- `compute_type=int8` for `device="cpu"`
```python
from hf_hub_ctranslate2 import TranslatorCT2fromHfHub, GeneratorCT2fromHfHub
from transformers import AutoTokenizer
model_name = "michaelfeil/ct2fast-codegen2-3_7B"
# use either TranslatorCT2fromHfHub or GeneratorCT2fromHfHub here, depending on model.
model = GeneratorCT2fromHfHub(
# load in int8 on CUDA
model_name_or_path=model_name,
device="cuda",
compute_type="int8_float16",
# tokenizer=AutoTokenizer.from_pretrained("Salesforce/codegen2-3_7B")
)
outputs = model.generate(
text=["def print_hello_world():", "def hello_name(name:"],
max_length=64
)
print(outputs)
```
# Licence and other remarks:
This is just a quantized version. Licence conditions are intended to be identical to the original huggingface repo.
# Original description
# CodeGen2 (CodeGen2-3.7B)
## Model description
[CodeGen2](https://github.com/salesforce/CodeGen2) is a family of autoregressive language models for **program synthesis**, introduced in the paper:
[CodeGen2: Lessons for Training LLMs on Programming and Natural Languages](https://arxiv.org/abs/2305.02309) by Erik Nijkamp\*, Hiroaki Hayashi\*, Caiming Xiong, Silvio Savarese, Yingbo Zhou.
Unlike the original CodeGen model family (i.e., CodeGen1), CodeGen2 is capable of infilling, and supports more programming languages.
Four model sizes are released: `1B`, `3.7B`, `7B`, `16B`.
## How to use
This model can be easily loaded using the `AutoModelForCausalLM` functionality.
### Causal sampling
For regular causal sampling, simply generate completions given the context:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen2-3_7B")
model = AutoModelForCausalLM.from_pretrained("Salesforce/codegen2-3_7B", trust_remote_code=True, revision="main")
text = "def hello_world():"
input_ids = tokenizer(text, return_tensors="pt").input_ids
generated_ids = model.generate(input_ids, max_length=128)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))
```
### Infill sampling
For **infill** sampling, we introduce three new special token types:
* `<mask_N>`: N-th span to be masked. In practice, use `<mask_1>` where you want to sample the infill.
* `<sep>`: Separator token between the suffix and the infilled sample. See below.
* `<eom>`: "End-Of-Mask" token that the model will output at the end of infilling. You may use this token to truncate the output.
For example, if we want to generate an infill at the following cursor position in a function:
```python
def hello_world():
|
return name
```
we construct an input to the model by:
1. Inserting a `<mask_1>` token in place of the cursor position
2. Appending a `<sep>` token to indicate the boundary
3. Inserting another `<mask_1>` token to indicate which mask we want to infill.
The final snippet looks as follows:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen2-3_7B")
model = AutoModelForCausalLM.from_pretrained("Salesforce/codegen2-3_7B", trust_remote_code=True, revision="main")
def format(prefix, suffix):
return prefix + "<mask_1>" + suffix + "<|endoftext|>" + "<sep>" + "<mask_1>"
prefix = "def hello_world():\n "
suffix = " return name"
text = format(prefix, suffix)
input_ids = tokenizer(text, return_tensors="pt").input_ids
generated_ids = model.generate(input_ids, max_length=128)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=False)[len(text):])
```
You might want to truncate the model output with `<eom>`.
## Training data
This checkpoint is trained on the stricter permissive subset of [the deduplicated version of the Stack dataset (v1.1)](https://huggingface.co/datasets/bigcode/the-stack-dedup). Supported languages (and frameworks) are as follows:
`c`, `c++`, `c-sharp`, `dart`, `go`, `java`, `javascript`, `kotlin`, `lua`, `php`, `python`, `ruby`, `rust`, `scala`, `shell`, `sql`, `swift`, `typescript`, `vue`.
## Training procedure
CodeGen2 was trained using cross-entropy loss to maximize the likelihood of sequential inputs.
The input sequences are formatted in two ways: (1) causal language modeling and (2) file-level span corruption.
Please refer to the paper for more details.
## Evaluation results
We evaluate our models on HumanEval and HumanEval-Infill. Please refer to the [paper](https://arxiv.org/abs/2305.02309) for more details.
## Intended use and limitations
As an autoregressive language model, CodeGen2 is capable of extracting features from given natural language and programming language texts, and calculating their likelihood.
However, the model is intended for and best at **program synthesis**, that is, generating executable code given English prompts, where the prompts should be in the form of a comment string. The model can complete partially-generated code as well.
## BibTeX entry and citation info
```bibtex
@article{Nijkamp2023codegen2,
title={CodeGen2: Lessons for Training LLMs on Programming and Natural Languages},
author={Nijkamp, Erik and Hayashi, Hiroaki and Xiong, Caiming and Savarese, Silvio and Zhou, Yingbo},
journal={arXiv preprint},
year={2023}
}
```
|
CodeMonkey98/distilroberta-base-finetuned-wikitext2 | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
model-index:
- name: layoutxlm-tokenclass-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutxlm-tokenclass-finetuned
This model is a fine-tuned version of [microsoft/layoutxlm-base](https://huggingface.co/microsoft/layoutxlm-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2039
- Answer Precision: 0.9231
- Answer Recall: 0.9180
- Answer F1: 0.9205
- Answer Number: 366
- Header Precision: 0.8194
- Header Recall: 0.9219
- Header F1: 0.8676
- Header Number: 64
- Question Precision: 0.9115
- Question Recall: 0.9428
- Question F1: 0.9269
- Question Number: 437
- Overall Precision: 0.9088
- Overall Recall: 0.9308
- Overall F1: 0.9197
- Overall Accuracy: 0.9758
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 5000
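For readers reproducing this setup with the `Trainer` API, the list above maps approximately onto `TrainingArguments` as in this sketch (the output path is a placeholder; the model head and dataset preparation are not shown in this card):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="layoutxlm-tokenclass-finetuned",  # placeholder output path
    learning_rate=1e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    max_steps=5000,  # corresponds to "training_steps: 5000" above
)
```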
### Training results
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
CodeNinja1126/bert-q-encoder | [
"pytorch"
] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
tags:
- generated_from_trainer
model-index:
- name: clip-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clip-finetuned
This model is a fine-tuned version of [openai/clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100.0
### Training results
### Framework versions
- Transformers 4.30.0.dev0
- Pytorch 2.0.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
CodeNinja1126/koelectra-model | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- ctranslate2
- int8
- float16
license: bsd-3-clause
---
# Fast-Inference with Ctranslate2
Speedup inference while reducing memory by 2x-4x using int8 inference in C++ on CPU or GPU.
quantized version of [Salesforce/codegen-2B-multi](https://huggingface.co/Salesforce/codegen-2B-multi)
```bash
pip install hf-hub-ctranslate2>=2.0.8
```
Converted on 2023-05-21 using
```
ct2-transformers-converter --model Salesforce/codegen-2B-multi --output_dir /home/michael/tmp-ct2fast-codegen-2B-multi --force --copy_files merges.txt tokenizer.json README.md tokenizer_config.json vocab.json special_tokens_map.json added_tokens.json .gitattributes --quantization float16
```
Checkpoint compatible to [ctranslate2>=3.13.0](https://github.com/OpenNMT/CTranslate2) and [hf-hub-ctranslate2>=2.0.6](https://github.com/michaelfeil/hf-hub-ctranslate2)
- `compute_type=int8_float16` for `device="cuda"`
- `compute_type=int8` for `device="cpu"`
```python
from hf_hub_ctranslate2 import TranslatorCT2fromHfHub, GeneratorCT2fromHfHub
from transformers import AutoTokenizer
model_name = "michaelfeil/ct2fast-codegen-2B-multi"
# use either TranslatorCT2fromHfHub or GeneratorCT2fromHfHub here, depending on model.
model = GeneratorCT2fromHfHub(
# load in int8 on CUDA
model_name_or_path=model_name,
device="cuda",
compute_type="int8_float16",
# tokenizer=AutoTokenizer.from_pretrained("Salesforce/codegen-2B-multi")
)
outputs = model.generate(
text=["def print_hello_world():", "def hello_name(name:"],
max_length=64
)
print(outputs)
```
# Licence and other remarks:
This is just a quantized version. Licence conditions are intended to be identical to the original huggingface repo.
# Original description
# CodeGen (CodeGen-Multi 2B)
## Model description
CodeGen is a family of autoregressive language models for **program synthesis** from the paper: [A Conversational Paradigm for Program Synthesis](https://arxiv.org/abs/2203.13474) by Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, Caiming Xiong. The models are originally released in [this repository](https://github.com/salesforce/CodeGen), under 3 pre-training data variants (`NL`, `Multi`, `Mono`) and 4 model size variants (`350M`, `2B`, `6B`, `16B`).
The checkpoint included in this repository is denoted as **CodeGen-Multi 2B** in the paper, where "Multi" means the model is initialized with *CodeGen-NL 2B* and further pre-trained on a dataset of multiple programming languages, and "2B" refers to the number of trainable parameters.
## Training data
This checkpoint (CodeGen-Multi 2B) was first initialized with *CodeGen-NL 2B*, and then pre-trained on [BigQuery](https://console.cloud.google.com/marketplace/details/github/github-repos), a large-scale dataset of multiple programming languages from GitHub repositories. The data consists of 119.2B tokens and includes C, C++, Go, Java, JavaScript, and Python.
## Training procedure
CodeGen was trained using cross-entropy loss to maximize the likelihood of sequential inputs.
The family of models is trained using multiple TPU-v4-512 by Google, leveraging data and model parallelism.
See Section 2.3 of the [paper](https://arxiv.org/abs/2203.13474) for more details.
## Evaluation results
We evaluate our models on two code generation benchmarks: HumanEval and MTPB. Please refer to the [paper](https://arxiv.org/abs/2203.13474) for more details.
## Intended Use and Limitations
As an autoregressive language model, CodeGen is capable of extracting features from given natural language and programming language texts, and calculating their likelihood.
However, the model is intended for and best at **program synthesis**, that is, generating executable code given English prompts, where the prompts should be in the form of a comment string. The model can complete partially-generated code as well.
## How to use
This model can be easily loaded using the `AutoModelForCausalLM` functionality:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen-2B-multi")
model = AutoModelForCausalLM.from_pretrained("Salesforce/codegen-2B-multi")
text = "def hello_world():"
input_ids = tokenizer(text, return_tensors="pt").input_ids
generated_ids = model.generate(input_ids, max_length=128)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))
```
## BibTeX entry and citation info
```bibtex
@article{Nijkamp2022ACP,
title={A Conversational Paradigm for Program Synthesis},
author={Nijkamp, Erik and Pang, Bo and Hayashi, Hiroaki and Tu, Lifu and Wang, Huan and Zhou, Yingbo and Savarese, Silvio and Xiong, Caiming},
journal={arXiv preprint},
year={2022}
}
```
|
CodeNinja1126/xlm-roberta-large-kor-mrc | [
"pytorch",
"xlm-roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | {
"architectures": [
"XLMRobertaForQuestionAnswering"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | null | ---
widget:
- text: Em uma bela manhã de
- text: Em uma cidade tão grande como
- text: Maria e Joana são
license: mit
datasets:
- mc4
language:
- pt
metrics:
- perplexity
library_name: transformers
pipeline_tag: text-generation
---
# OPT-125M finetuned Portuguese
This model was obtained by fine-tuning [OPT-125M](https://huggingface.co/facebook/opt-125m) on a reduced corpus of mc4-Portuguese containing approximately 300M tokens.
###### Hyper-parameters
- learning_rate = 5e-5
- batch_size = 32
- warmup = 500
- seq_length = 512
- num_train_epochs = 2.0
With an A100 GPU with 40 GB of memory, the training took around 3 hours.
**Perplexity:** 9.4
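The figure above can be approximated with the usual exp-of-cross-entropy evaluation; a minimal sketch, using a placeholder sentence rather than the actual held-out split:
```python
import math
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Mirelle/opt-125M-pt-br-finetuned")
model = AutoModelForCausalLM.from_pretrained("Mirelle/opt-125M-pt-br-finetuned")

text = "Em uma bela manhã de domingo, a cidade acordou em silêncio."  # placeholder evaluation text
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    loss = model(**inputs, labels=inputs["input_ids"]).loss
print(f"perplexity: {math.exp(loss.item()):.1f}")
```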
## Sample Use
```python
from transformers import pipeline
generator = pipeline('text-generation', model='Mirelle/opt-125M-pt-br-finetuned', max_length=100, do_sample=True)
generator("Em uma bela manhã de")
``` |
CoderEFE/DialoGPT-marxbot | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational",
"has_space"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 11 | null | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: feedback-classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# feedback-classification
This model is a fine-tuned version of [aubmindlab/bert-large-arabertv02-twitter](https://huggingface.co/aubmindlab/bert-large-arabertv02-twitter) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5277
- Macro F1: 0.8192
- Accuracy: 0.8101
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Macro F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|
| No log | 1.0 | 347 | 0.5070 | 0.8125 | 0.8043 |
| 0.5829 | 2.0 | 694 | 0.5277 | 0.8192 | 0.8101 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
CoffeeAddict93/gpt1-call-of-the-wild | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | null | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: distilbert_classifier_newsgroups
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# distilbert_classifier_newsgroups
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1908, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
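The serialized optimizer config above corresponds roughly to the following Keras setup (sketch only; the model definition and data pipeline are not part of this card):
```python
import tensorflow as tf

# linear decay from 2e-5 to 0 over 1908 steps, matching the serialized schedule
lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=2e-05,
    decay_steps=1908,
    end_learning_rate=0.0,
    power=1.0,
)
optimizer = tf.keras.optimizers.Adam(
    learning_rate=lr_schedule,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-08,
)
```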
### Training results
### Framework versions
- Transformers 4.28.0
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
CoffeeAddict93/gpt1-modest-proposal | [
"pytorch",
"openai-gpt",
"text-generation",
"transformers",
"has_space"
] | text-generation | {
"architectures": [
"OpenAIGPTLMHeadModel"
],
"model_type": "openai-gpt",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 11 | null | ---
tags:
- ctranslate2
- int8
- float16
license: bsd-3-clause
---
# Fast-Inference with Ctranslate2
Speedup inference while reducing memory by 2x-4x using int8 inference in C++ on CPU or GPU.
quantized version of [Salesforce/codegen-6B-multi](https://huggingface.co/Salesforce/codegen-6B-multi)
```bash
pip install hf-hub-ctranslate2>=2.0.8
```
Converted on 2023-05-21 using
```
ct2-transformers-converter --model Salesforce/codegen-6B-multi --output_dir /home/michael/tmp-ct2fast-codegen-6B-multi --force --copy_files merges.txt tokenizer.json README.md tokenizer_config.json vocab.json special_tokens_map.json added_tokens.json .gitattributes --quantization float16
```
Checkpoint compatible to [ctranslate2>=3.13.0](https://github.com/OpenNMT/CTranslate2) and [hf-hub-ctranslate2>=2.0.6](https://github.com/michaelfeil/hf-hub-ctranslate2)
- `compute_type=int8_float16` for `device="cuda"`
- `compute_type=int8` for `device="cpu"`
```python
from hf_hub_ctranslate2 import TranslatorCT2fromHfHub, GeneratorCT2fromHfHub
from transformers import AutoTokenizer
model_name = "michaelfeil/ct2fast-codegen-6B-multi"
# use either TranslatorCT2fromHfHub or GeneratorCT2fromHfHub here, depending on model.
model = GeneratorCT2fromHfHub(
# load in int8 on CUDA
model_name_or_path=model_name,
device="cuda",
compute_type="int8_float16",
# tokenizer=AutoTokenizer.from_pretrained("Salesforce/codegen-6B-multi")
)
outputs = model.generate(
text=["def print_hello_world():", "def hello_name(name:"],
max_length=64
)
print(outputs)
```
# Licence and other remarks:
This is just a quantized version. Licence conditions are intended to be identical to the original huggingface repo.
# Original description
# CodeGen (CodeGen-Multi 6B)
## Model description
CodeGen is a family of autoregressive language models for **program synthesis** from the paper: [A Conversational Paradigm for Program Synthesis](https://arxiv.org/abs/2203.13474) by Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, Caiming Xiong. The models are originally released in [this repository](https://github.com/salesforce/CodeGen), under 3 pre-training data variants (`NL`, `Multi`, `Mono`) and 4 model size variants (`350M`, `2B`, `6B`, `16B`).
The checkpoint included in this repository is denoted as **CodeGen-Multi 6B** in the paper, where "Multi" means the model is initialized with *CodeGen-NL 6B* and further pre-trained on a dataset of multiple programming languages, and "6B" refers to the number of trainable parameters.
## Training data
This checkpoint (CodeGen-Multi 6B) was first initialized with *CodeGen-NL 6B*, and then pre-trained on [BigQuery](https://console.cloud.google.com/marketplace/details/github/github-repos), a large-scale dataset of multiple programming languages from GitHub repositories. The data consists of 119.2B tokens and includes C, C++, Go, Java, JavaScript, and Python.
## Training procedure
CodeGen was trained using cross-entropy loss to maximize the likelihood of sequential inputs.
The family of models is trained using multiple TPU-v4-512 by Google, leveraging data and model parallelism.
See Section 2.3 of the [paper](https://arxiv.org/abs/2203.13474) for more details.
## Evaluation results
We evaluate our models on two code generation benchmarks: HumanEval and MTPB. Please refer to the [paper](https://arxiv.org/abs/2203.13474) for more details.
## Intended Use and Limitations
As an autoregressive language model, CodeGen is capable of extracting features from given natural language and programming language texts, and calculating their likelihood.
However, the model is intended for and best at **program synthesis**, that is, generating executable code given English prompts, where the prompts should be in the form of a comment string. The model can complete partially-generated code as well.
## How to use
This model can be easily loaded using the `AutoModelForCausalLM` functionality:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen-6B-multi")
model = AutoModelForCausalLM.from_pretrained("Salesforce/codegen-6B-multi")
text = "def hello_world():"
input_ids = tokenizer(text, return_tensors="pt").input_ids
generated_ids = model.generate(input_ids, max_length=128)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))
```
## BibTeX entry and citation info
```bibtex
@article{Nijkamp2022ACP,
title={A Conversational Paradigm for Program Synthesis},
author={Nijkamp, Erik and Pang, Bo and Hayashi, Hiroaki and Tu, Lifu and Wang, Huan and Zhou, Yingbo and Savarese, Silvio and Xiong, Caiming},
journal={arXiv preprint},
year={2022}
}
```
|
CoffeeAddict93/gpt2-medium-modest-proposal | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
datasets:
- food101
language:
- en
pipeline_tag: image-classification
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
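Until that section is filled in, here is a generic sketch of how an image-classification checkpoint is typically queried (the repo id and image path are placeholders, not values from this card):
```python
from transformers import pipeline

# placeholder repo id and image path; neither is stated in this card
classifier = pipeline("image-classification", model="your-username/food101-classifier")
print(classifier("path/to/some_dish.jpg"))
```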
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
CohleM/bert-nepali-tokenizer | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-base-multilingual-cased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-multilingual-cased-finetuned-squad
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2906
## Model description
More information needed
## Intended uses & limitations
More information needed
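Pending a fuller write-up, extractive question answering with a checkpoint like this typically looks like the sketch below (the repo id is a placeholder for wherever these weights are published):
```python
from transformers import pipeline

# placeholder repo id; substitute the actual location of this fine-tuned checkpoint
qa = pipeline("question-answering", model="your-username/bert-base-multilingual-cased-finetuned-squad")
result = qa(
    question="Where is the Eiffel Tower located?",
    context="The Eiffel Tower is a wrought-iron lattice tower located in Paris, France.",
)
print(result["answer"], result["score"])
```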
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.1637 | 1.0 | 8558 | 1.2249 |
| 0.9209 | 2.0 | 17116 | 1.2173 |
| 0.7006 | 3.0 | 25674 | 1.2906 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
CohleM/mbert-nepali-tokenizer | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
model-index:
- name: my_awesome_model
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.93004
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2298
- Accuracy: 0.9300
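As a quick illustration of how such a sentiment checkpoint is usually queried (the repo id is a placeholder; this card does not state where the weights live):
```python
from transformers import pipeline

# placeholder repo id for the fine-tuned checkpoint
classifier = pipeline("text-classification", model="your-username/my_awesome_model")
print(classifier("This movie was an absolute delight from start to finish."))
```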
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2334 | 1.0 | 1563 | 0.1875 | 0.9279 |
| 0.1512 | 2.0 | 3126 | 0.2298 | 0.9300 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Cometasonmi451/Mine | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### dl1 Dreambooth model trained by soumenmaiti with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
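Outside the A1111 UI, a Dreambooth checkpoint like this can usually be loaded with `diffusers`; a hedged sketch in which the repo id and the `dl1` prompt token are assumptions inferred from the concept name:
```python
import torch
from diffusers import StableDiffusionPipeline

# repo id assumed from the card; "dl1" is taken to be the concept token used in prompts
pipe = StableDiffusionPipeline.from_pretrained("soumenmaiti/dl1", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

image = pipe("a photo of dl1", num_inference_steps=30).images[0]
image.save("dl1-sample.png")
```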
Sample pictures of this concept:
|
Connor/DialoGPT-small-rick | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | Access to model NexViolentus/DeepFartLab is restricted and you are not in the authorized list. Visit https://huggingface.co/NexViolentus/DeepFartLab to ask for access. |
ConstellationBoi/Oop | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: wav2vec2-300m-kik-t22-1k-ft-v3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-300m-kik-t22-1k-ft-v3
This model is a fine-tuned version of [mutisya/wav2vec2-300m-kik-t22-1k](https://huggingface.co/mutisya/wav2vec2-300m-kik-t22-1k) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0647
- Wer: 0.0676
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5000
- num_epochs: 12
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 5.2217 | 0.39 | 400 | 2.8741 | 1.0 |
| 2.0303 | 0.79 | 800 | 0.6791 | 0.8622 |
| 0.6391 | 1.18 | 1200 | 0.3112 | 0.4681 |
| 0.4388 | 1.57 | 1600 | 0.2214 | 0.3472 |
| 0.3411 | 1.96 | 2000 | 0.1757 | 0.2714 |
| 0.281 | 2.36 | 2400 | 0.1439 | 0.2373 |
| 0.2484 | 2.75 | 2800 | 0.1335 | 0.2064 |
| 0.2276 | 3.14 | 3200 | 0.1197 | 0.1795 |
| 0.1998 | 3.54 | 3600 | 0.1155 | 0.1701 |
| 0.1984 | 3.93 | 4000 | 0.1026 | 0.1500 |
| 0.173 | 4.32 | 4400 | 0.0992 | 0.1410 |
| 0.1738 | 4.72 | 4800 | 0.0990 | 0.1407 |
| 0.174 | 5.11 | 5200 | 0.0962 | 0.1410 |
| 0.154 | 5.5 | 5600 | 0.0923 | 0.1260 |
| 0.1528 | 5.89 | 6000 | 0.0932 | 0.1251 |
| 0.1371 | 6.29 | 6400 | 0.0839 | 0.1115 |
| 0.1263 | 6.68 | 6800 | 0.0849 | 0.1111 |
| 0.1268 | 7.07 | 7200 | 0.0757 | 0.0991 |
| 0.1101 | 7.47 | 7600 | 0.0778 | 0.0966 |
| 0.1062 | 7.86 | 8000 | 0.0718 | 0.0937 |
| 0.0992 | 8.25 | 8400 | 0.0785 | 0.0959 |
| 0.0925 | 8.64 | 8800 | 0.0679 | 0.0884 |
| 0.093 | 9.04 | 9200 | 0.0672 | 0.0818 |
| 0.0867 | 9.43 | 9600 | 0.0663 | 0.0763 |
| 0.0783 | 9.82 | 10000 | 0.0693 | 0.0766 |
| 0.0758 | 10.22 | 10400 | 0.0668 | 0.0758 |
| 0.073 | 10.61 | 10800 | 0.0674 | 0.0738 |
| 0.0694 | 11.0 | 11200 | 0.0637 | 0.0725 |
| 0.0628 | 11.39 | 11600 | 0.0671 | 0.0701 |
| 0.0647 | 11.79 | 12000 | 0.0648 | 0.0672 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
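## Example usage
A minimal transcription sketch, assuming the checkpoint works with the standard `transformers` automatic-speech-recognition pipeline; the model path and audio file below are placeholders:
```python
from transformers import pipeline

# Placeholder path -- substitute the repo id or local directory of this checkpoint.
asr = pipeline("automatic-speech-recognition", model="path/to/wav2vec2-300m-kik-t22-1k-ft-v3")

# The pipeline decodes and resamples the input audio before running CTC decoding.
print(asr("sample.wav")["text"])
```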
|
Contrastive-Tension/BERT-Base-NLI-CT | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 9 | null | ---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Step 1: Find your model_id: Lendalf/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Contrastive-Tension/BERT-Distil-NLI-CT | [
"pytorch",
"tf",
"distilbert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"DistilBertForMaskedLM"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 6 | null | ---
tags:
- ctranslate2
- int8
- float16
license: bsd-3-clause
---
# Fast Inference with CTranslate2
Speed up inference while reducing memory use by 2x-4x, using int8 inference in C++ on CPU or GPU.
Quantized version of [Salesforce/codegen-6B-mono](https://huggingface.co/Salesforce/codegen-6B-mono).
```bash
pip install hf-hub-ctranslate2>=2.0.8
```
Converted on 2023-05-21 using
```
ct2-transformers-converter --model Salesforce/codegen-6B-mono --output_dir /home/michael/tmp-ct2fast-codegen-6B-mono --force --copy_files merges.txt tokenizer.json README.md tokenizer_config.json vocab.json special_tokens_map.json added_tokens.json .gitattributes --quantization float16
```
Checkpoint compatible to [ctranslate2>=3.13.0](https://github.com/OpenNMT/CTranslate2) and [hf-hub-ctranslate2>=2.0.6](https://github.com/michaelfeil/hf-hub-ctranslate2)
- `compute_type=int8_float16` for `device="cuda"`
- `compute_type=int8` for `device="cpu"`
```python
from hf_hub_ctranslate2 import TranslatorCT2fromHfHub, GeneratorCT2fromHfHub
from transformers import AutoTokenizer
model_name = "michaelfeil/ct2fast-codegen-6B-mono"
# use either TranslatorCT2fromHfHub or GeneratorCT2fromHfHub here, depending on model.
model = GeneratorCT2fromHfHub(
# load in int8 on CUDA
model_name_or_path=model_name,
device="cuda",
compute_type="int8_float16",
# tokenizer=AutoTokenizer.from_pretrained("Salesforce/codegen-6B-mono")
)
outputs = model.generate(
text=["def print_hello_world():", "def hello_name(name:"],
max_length=64
)
print(outputs)
```
# Licence and other remarks:
This is just a quantized version. License conditions are intended to be identical to those of the original Hugging Face repo.
# Original description
# CodeGen (CodeGen-Mono 6B)
## Model description
CodeGen is a family of autoregressive language models for **program synthesis** from the paper: [A Conversational Paradigm for Program Synthesis](https://arxiv.org/abs/2203.13474) by Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, Caiming Xiong. The models are originally released in [this repository](https://github.com/salesforce/CodeGen), under 3 pre-training data variants (`NL`, `Multi`, `Mono`) and 4 model size variants (`350M`, `2B`, `6B`, `16B`).
The checkpoint included in this repository is denoted as **CodeGen-Mono 6B** in the paper, where "Mono" means the model is initialized with *CodeGen-Multi 6B* and further pre-trained on a Python programming language dataset, and "6B" refers to the number of trainable parameters.
## Training data
This checkpoint (CodeGen-Mono 6B) was first initialized with *CodeGen-Multi 6B*, and then pre-trained on the BigPython dataset. The data consists of 71.7B tokens of Python programming language. See Section 2.1 of the [paper](https://arxiv.org/abs/2203.13474) for more details.
## Training procedure
CodeGen was trained using cross-entropy loss to maximize the likelihood of sequential inputs.
The family of models are trained using multiple TPU-v4-512 by Google, leveraging data and model parallelism.
See Section 2.3 of the [paper](https://arxiv.org/abs/2203.13474) for more details.
## Evaluation results
We evaluate our models on two code generation benchmarks: HumanEval and MTPB. Please refer to the [paper](https://arxiv.org/abs/2203.13474) for more details.
## Intended Use and Limitations
As an autoregressive language model, CodeGen is capable of extracting features from given natural language and programming language texts, and calculating their likelihood.
However, the model is intended for and best at **program synthesis**, that is, generating executable code given English prompts, where the prompts should be in the form of a comment string. The model can complete partially-generated code as well.
## How to use
This model can be easily loaded using the `AutoModelForCausalLM` functionality:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen-6B-mono")
model = AutoModelForCausalLM.from_pretrained("Salesforce/codegen-6B-mono")
text = "def hello_world():"
input_ids = tokenizer(text, return_tensors="pt").input_ids
generated_ids = model.generate(input_ids, max_length=128)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))
```
## BibTeX entry and citation info
```bibtex
@article{Nijkamp2022ACP,
title={A Conversational Paradigm for Program Synthesis},
author={Nijkamp, Erik and Pang, Bo and Hayashi, Hiroaki and Tu, Lifu and Wang, Huan and Zhou, Yingbo and Savarese, Silvio and Xiong, Caiming},
journal={arXiv preprint},
year={2022}
}
```
|
Contrastive-Tension/BERT-Large-CT | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | null | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: gym-Taxi-v3-Q-Learning
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="sadra-barikbin/gym-Taxi-v3-Q-Learning", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
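Once the pickle is loaded, a greedy rollout can be run roughly as follows. This is a sketch, assuming the pickled dict stores the table under a `"qtable"` key (as in the course template this repo follows) and that `env` follows the Gymnasium step API:
```python
import numpy as np

state, _ = env.reset()
done, total_reward = False, 0
while not done:
    # Greedy action from the learned Q-table (assumed key: "qtable")
    action = int(np.argmax(model["qtable"][state]))
    state, reward, terminated, truncated, _ = env.step(action)
    total_reward += reward
    done = terminated or truncated
print("Episode return:", total_reward)
```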
|
Contrastive-Tension/BERT-Large-NLI-CT | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 15 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: my_awesome_qa_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_qa_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7112
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 250 | 2.3394 |
| 2.6968 | 2.0 | 500 | 1.7850 |
| 2.6968 | 3.0 | 750 | 1.7112 |
### Framework versions
- Transformers 4.30.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
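## Example usage
A minimal sketch of extractive question answering with this checkpoint, assuming the standard `transformers` pipeline; the model path is a placeholder:
```python
from transformers import pipeline

# Placeholder path -- substitute the repo id or local directory of this checkpoint.
qa = pipeline("question-answering", model="path/to/my_awesome_qa_model")

result = qa(
    question="Which dataset was the model fine-tuned on?",
    context="This model is a fine-tuned version of distilbert-base-uncased on the SQuAD dataset.",
)
print(result["answer"], result["score"])
```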
|
Culmenus/checkpoint-168500-finetuned-de-to-is_nr2 | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: cc-by-sa-4.0
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: legal_bert_small_summarized
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# legal_bert_small_summarized
This model is a fine-tuned version of [nlpaueb/legal-bert-base-uncased](https://huggingface.co/nlpaueb/legal-bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6902
- Accuracy: 0.85
- Precision: 0.5
- Recall: 0.2
- F1: 0.2857
- D-index: 1.5496
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1600
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | D-index |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|:-------:|
| No log | 1.0 | 200 | 0.4286 | 0.85 | 0.0 | 0.0 | 0.0 | 1.4725 |
| No log | 2.0 | 400 | 0.4340 | 0.85 | 0.0 | 0.0 | 0.0 | 1.4725 |
| 0.5579 | 3.0 | 600 | 0.5781 | 0.85 | 0.0 | 0.0 | 0.0 | 1.4725 |
| 0.5579 | 4.0 | 800 | 0.5235 | 0.82 | 0.3929 | 0.3667 | 0.3793 | 1.5718 |
| 0.4459 | 5.0 | 1000 | 0.8260 | 0.87 | 1.0 | 0.1333 | 0.2353 | 1.5507 |
| 0.4459 | 6.0 | 1200 | 1.1135 | 0.835 | 0.3636 | 0.1333 | 0.1951 | 1.5044 |
| 0.4459 | 7.0 | 1400 | 1.1934 | 0.835 | 0.4118 | 0.2333 | 0.2979 | 1.5423 |
| 0.1376 | 8.0 | 1600 | 1.4773 | 0.815 | 0.3704 | 0.3333 | 0.3509 | 1.5530 |
| 0.1376 | 9.0 | 1800 | 1.3215 | 0.825 | 0.3529 | 0.2 | 0.2553 | 1.5165 |
| 0.0603 | 10.0 | 2000 | 1.5101 | 0.85 | 0.5 | 0.1667 | 0.25 | 1.5370 |
| 0.0603 | 11.0 | 2200 | 1.5265 | 0.845 | 0.4545 | 0.1667 | 0.2439 | 1.5304 |
| 0.0603 | 12.0 | 2400 | 1.5031 | 0.86 | 0.625 | 0.1667 | 0.2632 | 1.5502 |
| 0.0269 | 13.0 | 2600 | 1.5167 | 0.86 | 0.5714 | 0.2667 | 0.3636 | 1.5874 |
| 0.0269 | 14.0 | 2800 | 1.5916 | 0.85 | 0.5 | 0.2 | 0.2857 | 1.5496 |
| 0.0 | 15.0 | 3000 | 1.6241 | 0.85 | 0.5 | 0.2 | 0.2857 | 1.5496 |
| 0.0 | 16.0 | 3200 | 1.6417 | 0.85 | 0.5 | 0.2 | 0.2857 | 1.5496 |
| 0.0 | 17.0 | 3400 | 1.6398 | 0.855 | 0.5556 | 0.1667 | 0.2564 | 1.5436 |
| 0.0043 | 18.0 | 3600 | 1.6703 | 0.85 | 0.5 | 0.2333 | 0.3182 | 1.5621 |
| 0.0043 | 19.0 | 3800 | 1.6900 | 0.85 | 0.5 | 0.2 | 0.2857 | 1.5496 |
| 0.0 | 20.0 | 4000 | 1.6902 | 0.85 | 0.5 | 0.2 | 0.2857 | 1.5496 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
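## Example usage
A minimal classification sketch, assuming the checkpoint loads with the standard `transformers` pipeline. The model path is a placeholder, and the meaning of `LABEL_0`/`LABEL_1` is not documented in this card:
```python
from transformers import pipeline

# Placeholder path -- substitute the repo id or local directory of this checkpoint.
clf = pipeline("text-classification", model="path/to/legal_bert_small_summarized")

print(clf("The parties agree to resolve any dispute through binding arbitration."))
```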
|
DJStomp/TestingSalvoNET | [
"transformers"
] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-sexism
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-sexism
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4452
- Accuracy: 0.8523
- F1: 0.8507
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.4307 | 1.0 | 1876 | 0.3620 | 0.8518 | 0.8495 |
| 0.308 | 2.0 | 3752 | 0.4452 | 0.8523 | 0.8507 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
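## Example usage
A minimal inference sketch using the lower-level `transformers` API; the model path is a placeholder and the label-to-class mapping is not documented in this card:
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Placeholder path -- substitute the repo id or local directory of this checkpoint.
name = "path/to/distilbert-base-uncased-finetuned-sexism"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

inputs = tokenizer("Example sentence to screen.", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)
```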
|
DannyMichael/ECU911 | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-sexism-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-sexism-2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3597
- Accuracy: 0.8555
- F1: 0.8540
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.43 | 1.0 | 1876 | 0.3597 | 0.8555 | 0.8540 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Declan/ChicagoTribune_model_v5 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
license: apache-2.0
datasets:
- glue
---
# Model Details
`bert-base-uncased` finetuned on `CoLA`.
## Parameter settings
batch size is 32, learning rate is 2e-5.
## Metrics
matthews_corr: 0.6295
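## Example fine-tuning sketch
A sketch of the stated setup (batch size 32, learning rate 2e-5) using the `transformers` Trainer; the number of epochs, output directory, and other arguments are not given above and are assumptions here:
```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

raw = load_dataset("glue", "cola")
tok = AutoTokenizer.from_pretrained("bert-base-uncased")
enc = raw.map(lambda batch: tok(batch["sentence"], truncation=True), batched=True)

model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
args = TrainingArguments(
    output_dir="bert-cola",          # assumed
    per_device_train_batch_size=32,  # stated above
    learning_rate=2e-5,              # stated above
    num_train_epochs=3,              # assumed
)
Trainer(model=model, args=args, train_dataset=enc["train"],
        eval_dataset=enc["validation"], tokenizer=tok).train()
```
|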
DeepPavlov/marianmt-tatoeba-enru | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"MarianMTModel"
],
"model_type": "marian",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1 | null | ---
license: other
---
### Introduction
Basically an update to the old attempt, [vicuna-chinese-replication-beta](https://huggingface.co/keyfan/vicuna-chinese-replication-beta).
* We adopted a curriculum-learning-like approach, starting from simple QAs and moving to reasoning-intensive coding & mathematical problems. Coincidentally, [Ziya](https://huggingface.co/IDEA-CCNL/Ziya-LLaMA-13B-v1) adopted the same idea during its SFT period.
* The base model was changed from [chinese-llama](https://huggingface.co/ziqingyang/chinese-llama-lora-13b) to [chinese-llama-plus](https://huggingface.co/ziqingyang/chinese-llama-plus-lora-13b). However, as observed by [BiLLa](https://github.com/Neutralzz/BiLLa), continued training on a Chinese-only corpus significantly increases perplexity on English corpora, which in turn undermined its abilities in fields like mathematical calculation in our preliminary experiments. The subject of continued training is under-studied; using a bilingual corpus may be a better alternative, as results so far suggest.
* We changed to the Vicuna v1.1 conversation template and included more CoT training data.
Again, this is for research purposes only. There is no guarantee of its performance. All credit to the original authors of LLaMA and Chinese-LLaMA.
Compared with the previous release, the new model improves on coding and reasoning problems. However, it still suffers from hallucinations and performs poorly on Chinese domain-specific problems, e.g., Chinese literature and idioms.
### Usage
We use exactly the Vicuna template for training and inference. Sample code is below.
```
from transformers import AutoModelForCausalLM, AutoTokenizer
checkpoint = "keyfan/vicuna-chinese-replication-v1.1"
tokenizer = AutoTokenizer.from_pretrained(checkpoint, use_fast=False)
model = AutoModelForCausalLM.from_pretrained(checkpoint).cuda()
template = ("A chat between a curious human and an artificial intelligence assistant. "
"The assistant gives helpful, detailed, and polite answers to the human's questions. "
"USER: {}\nASSISTANT:")
question = template.format("Who was the president of the United States in 1955?")
inputs = tokenizer.encode(question, return_tensors="pt").cuda()
outputs = model.generate(inputs, do_sample=True, temperature=0.2, max_new_tokens=512)
print(tokenizer.decode(outputs[0]))
```
### Evaluation
* Result on the [Chinese-LLaMA-Alpaca devset](https://github.com/ymcui/Chinese-LLaMA-Alpaca/tree/main/examples) compared with the result of Alpaca-Plus-13B. For simplicity, we only sample one answer for each question without any cherry-picking. We used the template as provided in their repo. GPT-4 has a strong bias toward more detailed answers, so the score may not be consistent with human evaluation.
| Model | Macro-Average | QA | OQA | REASONING | LITERATURE | ENTERTAINMENT | GENERATION | TRANSLATION | CODE | ETHICS |
| - | - | - | - | - | - | - | - | - | - | - |
| Alpaca-Plus-13B | 77.3 | 70 | 74 | 70 | **80** | **77** | 82 | **89** | 64 | **90** |
| ours | **82.4** | **81** | **87** | **88** | 73 | **78** | **85** | 83 | **83** | 84 |
* Result on the newly released [C-Eval test set](https://cevalbenchmark.com/index.html#home) with 5-shot. We slightly modified [MOSS's code](https://github.com/SJTU-LIT/ceval/blob/main/code/evaluator_series/evaluators/moss.py) from the C-Eval codebase by moving the '答案:' (Answer:) suffix from the end of the question to the beginning of the chatbot response.
| Average | Avg(Hard) | STEM | Social Science | Humanities | Others |
| - | - | - | - | - | - |
| 37.0 | 29.5 | 34.6 | 44.5 | 35.7 | 35.9 |
|
DeividasM/wav2vec2-large-xlsr-53-lithuanian | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"lt",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | {
"architectures": [
"Wav2Vec2ForCTC"
],
"model_type": "wav2vec2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
language:
- en
---
[![Build Status](https://www.oregonlive.com/resizer/tPuJNKHRU8uXzzhnFfT1eRT_bMY=/1280x0/smart/cloudfront-us-east-1.images.arcpublishing.com/advancelocal/CZE2LRKOKBDPZIF5OZRSGUB5NE.png)]()
read the full article here : https://pastelink.net/x3grym4r
Source : https://yamcode.com/
Flash News : https://jsitor.com/AHmCHs9Mdj
Biden last Talk : https://jsfiddle.net/1vbwzgLy/
Russian Ukrain Breaking News : https://paste.ee/p/9zwAP
Jay Skurski
It's finally here. The seemingly endless run-up to the NFL draft has reached its conclusion, with the first round kicking off Thursday in Kansas City. Buffalo Bills fans will have to be patient until their scheduled turn at No. 27 comes up sometime after 11 p.m. That gives you plenty of time to digest this special, draft edition of the Bills Mailbag. Let's dive right in ...
Sam Ruggiero asks: Ok, Jay ... showtime! The draft has finally and thankfully arrived! Simply put, what, in your humble opinion, do the Bills do? Do they: Move up? Move down? Stay put? What position(s) are they most in need of? Will Brandon Beane and Sean McDermott draft the best player available, or one of need? Finally, who will they select in the first round? Go for it!
Jay: Way to get your money's worth with the mailbag submission, Sam. This year is particularly hard to handicap, because Beane has so many different directions he could go. My list of needs looks like this, from most pressing to not so urgent: Wide receiver, linebacker, offensive tackle, defensive tackle, tight end, safety, running back. Beane said during his pre-draft press conference that, if he was in Las Vegas, he'd wager more on moving down than up, and I agree with that. The Bills enter with just six draft picks, and, as shown above, have a list of needs that extends beyond that. Adding extra picks in a move down would be ideal. Forecasting who they will pick in the first round is extraordinarily difficult, since it is hard to predict the 26 players (barring a trade up) who will be selected before. In my final mock draft, I gave the Bills wide receiver Jordan Addison from the University of Southern California. He's got the versatility to play in the slot or outside, and is thought to be an excellent route runner. I love his fit as a top-three receiver for the Bills.
Bart D'Angelo asks: Do you think teams draft now more for need? I am a little confused about hearing how many times a team will say they take the best player available. Is this old school, or now true for only a sport like hockey? In hockey, players being so young, they do take best available with years of development. The reason I'm saying this is using Bijan Robinson as an example. If he really is so good and being compared to Saquon Barkley, how can teams pass on him? I believe, like many others, drafting a running back in the first round is generally not a good idea, though if he is special and defenses must respect him, it seems it even opens up the offense for a passing team like the Bills.
Jay: I love this question, because it is something I've thought about a lot. The idea that teams draft the "best player available" has always felt forced to me. Look at the Bills' first-round draft picks under Brandon Beane. Josh Allen, Tremaine Edmunds, Ed Oliver, Greg Rousseau and Kaiir Elam all just so happened to fill major needs. Were they all the best player available when it was the Bills' turn to draft? Perhaps. Beane has said himself that when it's close, a team will defer to the need position on the roster, which is a fair way of approaching the draft. You're right, Bart, that hockey is a different beast, because those players have such a long developmental period. As for Robinson and the debate over taking a running back in the first round, there is a significant sample size that suggests it is not a wise pick. The production you can get from a mid-round pick or even an undrafted free agent far too often matches or at least comes close to what a team gets from a first-round pick. Now, the difference here is that the likelihood of the Bills drafting a player with a true first-round grade this year if they stay at No. 27 overall feels quite small. Let's say the team only has 16 first-round grades, and two of them are quarterbacks. That means they would need one of 14 players to fall to No. 27. Not impossible, but also not likely. I have a feeling Robinson, who some consider to be the best prospect in this class, will be long gone before the Bills are on the clock. That is as much a sign of the weakness of this draft class as it is a belief in his ability.
Bruce asks: Zay Flowers? Really Jay? Why would they select a player with very short arms and a small catch window? How many of these players have been a success in the NFL? This has bust written all over it. I hate this idea. If a tackle falls, get him. Get Spencer Brown on the bench. If an interior defensive lineman falls, get him. If a tight end falls, get him. I want nothing to do with this receiving pool. I get the feeling that if this happens, you will rue your article that holds you to your word. You now own this. No walking away from this for you.
Jay: Bruce has got me feeling nervous all of a sudden. To be clear, in my final mock draft, I went with Addison ... but Flowers would fit the need for a slot receiver. I'm willing to listen to the conversation about an offensive tackle, too. Darnell Wright from Tennessee would push Spencer Brown for the starting job at right tackle, I'm just not sure he'll last until the Bills pick. Defensive tackle is definitely a sleeper position to keep an eye on, because the team's projected top four at that spot are all free agents after the 2023 season. I don't love the idea of a tight end in the first round. The team just paid Dawson Knox to be their starter there. I'm all for taking advantage of a deep class at tight end, but feel like doing so should happen in the middle rounds.
DeAndre Hopkins to Buffalo Bills trade speculation bubbles as NFL Draft approaches
With the NFL draft set to begin Thursday, speculation continues to intensify that the Arizona Cardinals will trade wide receiver DeAndre Hopkins before the weekend is over.
Jeff Miller asks: I wrote some weeks ago saying that before the Bills add weapons for Josh Allen, they need to add protection in front of him. With all the free agent signings of offensive linemen, I'm sleeping a lot better. My question: It seems as though the Bills' coaches prefer the offensive linemen who can play multiple positions. Is that true for most teams? I assume that is looked for by the scouts before the draft.
Jay: Correct. Positional flexibility is something every team craves, although to what degree varies throughout the league. You're right, Jeff, that the Bills have continually put a high value on it - for good reason. Take Ryan Bates, for example. Not only is he the team's starting right guard, he is also the primary backup center and can even play tackle in a pinch. Having a player who can play all five positions on the offensive line is valuable, because it allows the team to perhaps keep a player at another position. You're also right that scouts look for players who may project to a different position at the NFL level. Frequently, you hear about college tackles who move to guard in the NFL. In the Bills' case, Matt Milano played some safety at Boston College, but has turned into an All-Pro linebacker in Buffalo.
Ryan O'Halloran: A 'don't do' list of draft suggestions for Bills GM Brandon Beane
You've heard of to-do lists for NFL general managers? Entering this week's draft, here's a don't do list for the Bills.
Bob Rajczak asks: Your pick of Drew Sanders to the Bills in the second round seems like wishful thinking. I'd love it, but doubt he'd still be there. Do you think the Bills might trade back from No. 27 to pick early in Round 2 to get extra picks and select a linebacker then? Any chance the Bills trade a first- or second-round pick from the 2024 draft for a premium pick this year in, say, the second round, keeping No. 27, No. 59 and adding that pick? With the hoped-for third-round compensatory pick in 2024, we could offer that pick and, say, a 2024 second-rounder for that No. 2 pick this year, netting us three picks in the first two rounds in 2023 - a wide receiver, linebacker and offensive lineman, defensive lineman or tight end.
Jay: Depending on the way the board falls - and the willingness of another team to move up - moving down and out of the first round entirely feels like a real possibility. If that happens, the value of a linebacker somewhere in the 30s would be about in line with where they are expected to be picked. One thing to keep in mind about trading future picks: This year's draft class isn't thought to be all that strong. If anything, I could see teams wanting to add more picks in 2024. As for that projected, compensatory third-round selection, the Bills can't trade that because they don't officially have it. They could trade their own third-rounder in 2024. The best way to add picks this year is through a trade down, not sacrificing next year's draft capital.
Thank you for all the questions. As a reminder, they can be submitted via Twitter to @JaySkurski or via email to jskurski@buffnews.com. Enjoy the draft!
|
DeltaHub/adapter_t5-3b_cola | [
"pytorch",
"transformers"
] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
license: apache-2.0
datasets:
- glue
---
# Model Details
`bert-base-uncased` finetuned on `QNLI`.
## Parameter settings
batch size is 32, learning rate is 2e-5.
## Metrics
acc: 0.9187 |
DeltaHub/lora_t5-base_mrpc | [
"pytorch",
"transformers"
] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
language:
- en
---
[![Build Status](https://www.staradvertiser.com/wp-content/uploads/2023/04/web1_13732609-99835048839d463badd04232de5ae5b0.jpg)]()
read the full article here : https://controlc.com/79430e8d
Source : https://paste.feed-the-beast.com/view/a0abe724
Flash News : https://pasteio.com/xyjynQmvRD9y
Biden last Talk : https://tech.io/snippet/9KoNLV1
Russian Ukrain Breaking News : https://etextpad.com/uznsl1tp3e
Sen. Kyrsten Sinema (I-AZ) hosted a panel of local officials from Arizona in Washington this week as part of an effort to bring their stories on the impact of the border crisis into public view and pressure the Biden administration to do more.
The Democratic-turned-independent senator led a Senate Homeland Security and Governmental Affairs subcommittee hearing on Capitol Hill on Wednesday afternoon to let leaders from across the state share the realities of life in border communities.
TEXAS STATE TROOPERS DEPLOYED BY ABBOTT STOPPED 8,721 HUMAN SMUGGLERS
Democrat Clea McCaa II, mayor of Sierra Vista, a city of 45,000 residents that is 45 driving miles from the closest border crossing, described an idyllic town shaken by the sudden influx of police speeding through the town in pursuit of human smugglers over the past two years.
"It is a crisis," said McCaa. "We have ... four to five pursuits a day going through our town."
McCaa's citizens had witnessed the reckless behavior of smugglers, oftentimes minors from out of state recruited on apps like TikTok to make fast cash by picking up and driving illegal immigrants who crossed the border without getting caught.
"One incident that's etched in my mind is a load car driver hitting a 65-year-old woman ... going to a birthday party," said McCaa. "Her son was trying to get to the birthday party as well and passed the accident. He did not know that that was his mother. We had to tell him that that was ... his mother. Her name was Miss Wanda."
Yuma Mayor Doug Nicholls testified that his border city's nearly 100,000 residents were concerned about what would happen in several weeks when public health policy Title 42 is rescinded and border authorities lose the ability to expel immigrants back to Mexico immediately.
Nicholls, a Republican, has declared citywide emergencies twice in recent years as more than 500 people were put onto the downtown streets each day in December 2021 and the city was unprepared to respond.
Nicholls demanded on Wednesday that the Department of Homeland Security's Federal Emergency Management Agency be brought in to handle all immigration matters because it was a federal issue.
"Federal government needs to own the entirety of the situation and not push it off to local governments and [nongovernmental organizations]," Nicholls said.
Immigrants who crossed the border in Yuma have ruined portions of farmers' fields, costing landowners $10,000 in losses per acre and workers their wages. Yuma is a top national supplier of produce during the winter months but must destroy crops that have been trampled on.
Dr. Francisco Garcia, chief medical officer for Pima, said the county has processed 150,000 illegal immigrants released onto its streets since 2019.
"This is a crisis. It is using up resources that we need to use for other purposes," Garcia said. "Federal agencies have signaled that we expect twice that number after [May 11]. If that happens, we will be overwhelmed, and there will be many hundreds of people per day left to fend for themselves on the streets of Pima County."
The top-ranking Republican member invited a police chief from Oklahoma to share with the panel how his city had been affected by the events at the border despite being so far away.
McAlester Police Chief Kevin Hearod said one of his officers was killed in the line of duty when an illegal immigrant crashed into the officer's vehicle. The deceased officer, Joseph Barlow, was killed on March 20 while he was escorting the former police chief in a funeral procession.
Hearod additionally said drugs were the greatest issue in his community and that the large majority originated in Mexico and were then trafficked into the United States, including throughout Oklahoma.
Tags: Kyrsten Sinema, News, Senate, Arizona, Border Crisis
Original Author: Anna Giaritelli
Original Location: Sinema brings Arizona leaders to Washington to pressure Senate action on border crisis
|
DemangeJeremy/4-sentiments-with-flaubert | [
"pytorch",
"flaubert",
"text-classification",
"fr",
"transformers",
"sentiments",
"french",
"flaubert-large"
] | text-classification | {
"architectures": [
"FlaubertForSequenceClassification"
],
"model_type": "flaubert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 226 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- billsum
metrics:
- rouge
model-index:
- name: billsum_model
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: billsum
type: billsum
config: default
split: ca_test
args: default
metrics:
- name: Rouge1
type: rouge
value: 0.1438
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# billsum_model
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the billsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5047
- Rouge1: 0.1438
- Rouge2: 0.0514
- Rougel: 0.1198
- Rougelsum: 0.1196
- Gen Len: 19.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 62 | 2.7922 | 0.1323 | 0.0416 | 0.1123 | 0.1121 | 19.0 |
| No log | 2.0 | 124 | 2.5856 | 0.1358 | 0.0455 | 0.114 | 0.114 | 19.0 |
| No log | 3.0 | 186 | 2.5226 | 0.1403 | 0.0485 | 0.1165 | 0.1166 | 19.0 |
| No log | 4.0 | 248 | 2.5047 | 0.1438 | 0.0514 | 0.1198 | 0.1196 | 19.0 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.1+cu117
- Datasets 2.8.0
- Tokenizers 0.13.2
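## Example usage
A minimal summarization sketch, assuming the checkpoint works with the standard `transformers` pipeline and that the usual `"summarize: "` T5 prefix was used during preprocessing (the card does not show the preprocessing code); the model path is a placeholder:
```python
from transformers import pipeline

# Placeholder path -- substitute the repo id or local directory of this checkpoint.
summarizer = pipeline("summarization", model="path/to/billsum_model")

text = "summarize: The people of the State of California do enact as follows: ..."
print(summarizer(text, max_length=60, min_length=10)[0]["summary_text"])
```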
|
Deniskin/emailer_medium_300 | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 14 | null | ---
language:
- en
---
[![Build Status](https://cdn.punchng.com/wp-content/uploads/2023/04/26222058/CRIMINAL-SUSPECTS.jpg)]()
read the full article here : https://yamcode.com/
Source : https://jsbin.com/wudoguweme/edit?html,output
Flash News : https://pastebin.com/
Biden last Talk : https://jsitor.com/c_yX85BSco
Russian Ukrain Breaking News : https://pastelink.net/y6xktsu3
Apr. 26 -- The USC Aiken softball team is the No. 7 seed in the upcoming Peach Belt Conference Tournament, which will start Friday in two separate four-team double-elimination brackets played at two different sites.
The Pacers (18-31, 10-17 PBC) will face No. 2 seed and host North Georgia at 2 p.m. Friday in the first game of the day. The other two teams in that bracket are No. 3 Georgia Southwestern and No. 6 Georgia College.
USCA lost all three regular-season games against UNG but the Pacers had the tying run at the plate in two of the games.
In the other bracket are top seed and host Columbus State, No. 8 Young Harris, No. 4 Lander and No. 5 Flagler.
Each site will play two games on Friday, three on Saturday and one or two on Sunday. The remaining teams from each site will then meet for a best-of-three championship series May 5-6. This format mirrors the NCAA regional format.... |
Deniskin/gpt3_medium | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"has_space"
] | text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 52 | null | ---
license: apache-2.0
datasets:
- glue
---
# Model Details
`bert-base-uncased` finetuned on `QQP`.
## Parameter settings
batch size is 32, learning rate is 2e-5.
## Metrics
acc: 0.9140, f1: 0.8840 |
Denny29/DialoGPT-medium-asunayuuki | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 9 | null | ---
language:
- en
---
[![Build Status](https://profootballtalk.nbcsports.com/wp-content/uploads/sites/25/2021/06/GettyImages-1281160661-e1623705198337.jpg)]()
read the full article here : https://paste.feed-the-beast.com/view/609b85d7
Source : https://jsfiddle.net/afqhytjm/
Flash News : https://paste.ee/p/8JMj5
Biden last Talk : https://pasteio.com/xiACbGiW1en7
Russian Ukrain Breaking News : https://controlc.com/f777e549
|
DeskDown/MarianMixFT_en-fil | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"MarianMTModel"
],
"model_type": "marian",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | "2023-05-22T06:30:13Z" | ---
language:
- en
---
[![Build Status](https://packerswire.usatoday.com/wp-content/uploads/sites/57/2023/02/USATSI_19029095.jpg?w=640)]()
read the full article here : https://jsbin.com/rebutoyeva/edit?html,output
Source : https://tech.io/snippet/rrsH5jR
Flash News : https://etextpad.com/0mzbmczxv8
Biden last Talk : https://pastebin.com/
Russian Ukrain Breaking News : https://yamcode.com/
Responding to criticism from four DFL Muslim legislators about a recent Star Tribune editorial cartoon they described as racist and Islamophobic, the newspaper's publisher Wednesday apologized.
"I'm sorry that the Star Tribune published it," CEO and publisher Steve Grove said in a statement to the Star Tribune. "We will work harder to do better as an organization to ensure we're holding ourselves and our community accountable in constructive ways that speak to our values of respect and integrity."
The four legislators, along with many allies, including Attorney General Keith Ellison and St. Paul Mayor Melvin Carter, held a state Capitol news conference to condemn recent vandalism at mosques. After the news conference, the legislators issued the written statement that focused criticism on the Star Tribune.
"Globally, many Muslims report not being respected by those in the West, and this cartoon adds to that sentiment right here in Minnesota," the statement said. A news release attributed the comments to Reps. Hodan Hassan, Samakab Hussein and Mohamud Noor and Sen. Omar Fateh. Hussein represents St. Paul. The other three represent Minneapolis.
Grove, who is in his second week as publisher, said that he's spent a lot of time in recent days speaking with readers and community leaders, including Muslims.
"It's clear to me that the intent of the cartoon did not match its impact," Grove said. "While the cartoonist was trying to support the decision of the Minneapolis City Council, the piece drew many - particularly those in the Muslim community -- to feel disrespected by its depictions."
The cartoon at issue ran Sunday in the Star Tribune's opinion section, which is run separately from the news department. It was the first editorial cartoon published from recently hired cartoonist Mike Thompson. It featured an image of a man grousing to a woman that the recently amended noise ordinance allowing the Muslim call to prayer at any time would make Minneapolis "too noisy." In the second image, the same couple stood inside a home while four masked assailants fired guns.
As the editorial cartoonist, Thompson works independently of reporters and editors in the newsroom. While his intent with the cartoon is open to interpretation, the legislators said they were disappointed, called the image "blatantly racist" and noted that the two mosque fires came after the cartoon was published.
The lawmakers' statement said the cartoon insinuated that Islam and the call to prayer are weapons that incite violence and cause "death among innocent victims, similar to what guns do."
Ellison, who is Muslim, spoke briefly at the news conference, condemning the mosque attacks, but he did not mention the cartoon.
After the news conference, Carter said in a written statement, "An attack on a mosque is an attack on every place of worship and fundamentally undermines our right to freely practice our beliefs. As leaders and neighbors, we must ensure that our rhetoric never inspires the kinds of sentiments that could end up pouring out in hateful ways."... |
DeskDown/MarianMixFT_en-id | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"MarianMTModel"
],
"model_type": "marian",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
language:
- en
---
[![Build Status](https://www.twincities.com/wp-content/uploads/2023/04/202304261954TMS_____MNGTRPUB_SPORTS-NBA-STAR-ANTHONY-EDWARDS-CITED-FOR-1-NY5.jpg?w=640)]()
read the full article here : https://jsfiddle.net/jw0kuo7a/
Source : https://jsitor.com/NirmKXIYnV
Flash News : https://pastelink.net/odskhuer
Biden last Talk : https://paste.ee/p/mJuYH
Russian Ukraine Breaking News : https://paste.feed-the-beast.com/view/24221b16
Things will get worse at Chelsea before they get better as players are 'playing with anxiety', believes former Blues star Glenn Hoddle.
Frank Lampard oversaw a fifth straight defeat since returning as interim boss, with Brentford running out 2-0 winners at Stamford Bridge.
Fans were furious beforehand at the team selection - which had Conor Gallagher up front - and Chelsea were as blunt as they appeared.
For Hoddle, the lack of a clear focal point is killing them, and his BT Sport colleague Owen Hargreaves fears there is no light at the end of this season's tunnel.
'They've got no focal point and they don't have any shape. Whatever that shape is, they don't play to it,' Hoddle said.
'Their confidence is so low. They are doubting themselves and playing with anxiety and I think it could get worse before it gets better.'
Hargreaves added: 'I think it is going to get worse.
'They've got some really tough fixtures and I don't think there is any clear identity there. You have to settle on something. Have to find a way to score goals.
'What are Chelsea? Even if you are Chelsea fan, you watch them and think what are they?'
The ex-Manchester United player continued: 'And even now - I know it is only five games for Frank - but you don't know what they are.'
MORE TO FOLLOW.... |
DeskDown/MarianMixFT_en-ja | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"MarianMTModel"
],
"model_type": "marian",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 9 | null | ---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# chanchongwei/fsl-mpnet-base-v2
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
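As a rough sketch of those two steps, training with the `SetFitTrainer` API typically looks like the following (the base checkpoint, dataset, and hyperparameters below are illustrative assumptions, not the exact settings used for this model):

```python
from datasets import load_dataset
from sentence_transformers.losses import CosineSimilarityLoss
from setfit import SetFitModel, SetFitTrainer

# Small labeled dataset with "text"/"label" columns (illustrative choice)
dataset = load_dataset("SetFit/SentEval-CR")
train_ds = dataset["train"].shuffle(seed=42).select(range(64))  # few-shot subset
eval_ds = dataset["test"]

# Step 1 starts from a Sentence Transformer body (assumed base checkpoint)
model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")

trainer = SetFitTrainer(
    model=model,
    train_dataset=train_ds,
    eval_dataset=eval_ds,
    loss_class=CosineSimilarityLoss,  # contrastive loss for fine-tuning the body
    batch_size=16,
    num_iterations=20,  # number of contrastive text pairs generated per example
    num_epochs=1,       # epochs of contrastive fine-tuning
)
trainer.train()            # step 1: fine-tune the body; step 2: fit the classification head
print(trainer.evaluate())  # accuracy on the evaluation split
```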
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("chanchongwei/fsl-mpnet-base-v2")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
DeskDown/MarianMixFT_en-my | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"MarianMTModel"
],
"model_type": "marian",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
language:
- en
---
[![Build Status](https://www.caribbeannewsglobal.com/wp-content/uploads/2021/01/WHO-HQ.jpg)]()
read the full article here : https://tech.io/snippet/JkZyTPi
Source : https://pasteio.com/x7Axt2yJCuPs
Flash News : https://controlc.com/dc8bd4b0
Biden last Talk : https://etextpad.com/kgcw8xt1w7
Russian Ukraine Breaking News : https://jsbin.com/gidoqivaki/edit?html,output
Daniel B. Sisson and Valerie D. Turberville, both of Spokane.
Christopher R. Keck and Shelby A. Howard, both of Spokane Valley.
Brian W. Heimbecker and Darlene R. Gerber, both of Spokane.
Timothy A. McMinton and Jean L. Moore, both of Medical Lake.
Anthony McNamara v. Derrick Doe, restitution of premises.
Riley Renner v. Tracy Jackman, restitution of premises.
TBF Financial, LLC v. Luann Padgham and Luann Dewood, complaint for breach of personal guaranty.
Jennifer Howerton v. Black Realty Management, Inc., complaint.
John M. Hays v. Deputy Joseph Adams, Deputy Elijah Jones, Deputy Stephen Blaakman, et al., complaint for damages.
Kirishian's Imported Rug Co. v. Barbara Oens, complaint for monies stolen fraudulently taken.
Shanadoah and Angelina Charbonneau, et al. v. Pinehurst Development, LLC, Parkside at Mirabeau Apartments, et al., complaint.
Michael T. Bellinger, 30; $500 in restitution, 100 months in prison, 12 months of probation, after pleading guilty to possession of a stolen motor vehicle, residential burglary, third-degree theft, three counts of first-degree unlawful possession of a firearm, three counts of possession of a controlled substance with intent to deliver and attempt to elude a police vehicle.
Grant M. Simonson, 29; 14 days in jail with credit given for 14 days served, after being found guilty of two counts of second-degree vehicle prowling.
Judge Maryann C. Moreno
Scott B. Jones, 63; 36 months of probation, after pleading guilty to escaping as a sexually violent predator.
Spencer A. Media, also known as Spencer A. Patterson, 31; 25 months in a prison-based alternative program, 25 months of probation, after pleading guilty to two counts of first-degree theft.
Chesed B. Johnson, 45; 161 days in jail with credit given for 161 days served, after pleading guilty to two counts of cyberstalking with prior harassment conviction.
Sara R. Owens, 31; $41,148.51 in restitution, 12 months of probation, after being found guilty of third-degree theft.
Tyler J. Mackin, 22; $3,583.62 in restitution, 97 days in jail, after pleading guilty to second-degree burglary.
Kace R. Blackburn, 24; two months in jail, after pleading guilty to attempting to elude a police vehicle.
Jordan R. Reopelle, 29; two months in jail, after pleading guilty to second-degree theft and second-degree vehicle prowling.
Isabelle Heart, also known as Carey W. Johnson, 35; 30 days in jail, 12 months of probation, after pleading guilty to third-degree assault.
James E. Mulvaney, 59; 36 months in prison, 18 months of probation, after pleading guilty to second-degree assault and unlawful imprisonment.
Judge Raymond F. Clary
Theodore M. Christenson, also known as Thomas M. Christenson, 39; six months in jail, 36 months of probation, after pleading guilty to second-degree possession of depictions of a minor engaged in sexually explicit conduct.
Avery D. Perez, 26; 86 months in prison, after pleading guilty to theft of a firearm and second-degree rape.
Carlos J. Rodriguez, 39; 24 months in a treatment-based alternative program, after pleading guilty to third-degree domestic assault and violation of order.
Judge Annette S. Plese
Jeremiah Daehn, 30; $3,372 in restitution, two days in jail with credit given for two days served, 24 months of probation, after being found guilty of third-degree theft and second-degree vehicle prowling.
Gabrial Helliwell, 26; 41 days in jail, 12 months of probation, after pleading guilty to criminal mischief and money laundering.
Judge Timothy B. Fennessy
Traybion T. Dickerson, 31; eight days in jail with credit given for eight days served, 12 months of probation, after being found guilty of harassment.
David J. Melvin, 37; 67 days in jail with credit given for 67 days served, after pleading guilty to criminal mischief.
Judge Jacquelyn High-Edward
Yuriy V. Siriy, also known as Yurly V. Siriy, 33; one day in jail with credit given for one day served, after pleading guilty to third-degree retail theft with special/extenuating circumstances and fourth-degree assault.
Judge Tony Hazel
Jonathon D. Ryser, 49; 90 months in prison, 18 months of probation, after being found guilty of vehicular homicide.
Karmon M. Earnshaw, 21; six months of probation, after pleading guilty to possession of a controlled substance with intent to deliver and possession of a stolen firearm.
Judge Charnelle M. Bjelkengren
Johnathon A. Delay, 33; 364 days in jail, after being found guilty of two counts of violation of order.
Only fines of $500 or more included unless a jail sentence is given.
Judge Mary C. Logan
David C. Garcia, 36; five days in jail, two counts of no-contact order violation.
Judge Gloria Ochoa-Bruck
Andrew C. Marshall, 25; 84 days in jail, five counts of protection order violation.... |
DeskDown/MarianMix_en-ja-10 | [
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"MarianMTModel"
],
"model_type": "marian",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1 | null | ---
language:
- en
---
[![Build Status](https://bloximages.newyork1.vip.townnews.com/timesdaily.com/content/tncms/custom/image/d8cc6cd8-2760-11e6-89ad-c3c5c744d99d.jpg?resize=600%2C315)]()
read the full article here : https://jsitor.com/SA1mheiV4w
Source : https://pastebin.com/
Flash News : https://yamcode.com/
Biden last Talk : https://pastelink.net/x84m8sio
Russian Ukraine Breaking News : https://jsfiddle.net/au05mxgs/
[Photo: CPR training in Kansas. Kynala Phillips/The Kansas City Star/Tribune News Service via Getty Images, provided by Insider] Natalie Davies was getting ready to enjoy her spring break in New Orleans. The nursing student had just submitted a paper on the importance of CPR training. Then a man collapsed as she was waiting for her flight, and she sprang into action with CPR.
Just 12 hours before boarding her flight at John F. Kennedy's International Airport, a Connecticut nursing student submitted a paper on the importance of CPR training for the general public.
"Twelve hours later, I was administering CPR in a public setting. In this case, there were a couple of us right there who knew what to do. It's important for everyone to learn how to administer CPR when needed," Natalie Davies said in a Sacred Heart University press release.
Davies, a senior nursing student at the school, was waiting for her flight to New Orleans when she said she heard a scream. By the time she turned around, a man had collapsed, prompting her to spring into action and administer CPR, the release said.
The man had no pulse, and Davies began CPR compressions. Another passenger, who happened to be a cardiologist, also came over to help revive the man. The cardiologist used an automated external defibrillator (AED) to help bring the man's pulse back, according to the school.
The school didn't offer more information on who the man was or what may have led to his pulse stopping. Davies didn't respond to Insider's request for comment.
Davies said the incident was "the first time" she felt like a "real nurse."
"I didn't think: I just knew what to do and concentrated on the patient. I wasn't even aware people were watching until after EMS arrived," she said in the release.
It took three rounds of this before the man's pulse returned, but he was awake and talking by the time EMS arrived, the press release said.
After graduating, the senior nursing student plans to join Yale New Haven's Emergency Department as a critical care registered nurse.... |
DeskDown/MarianMix_en-zh_to_vi-ms-hi-ja | [
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"MarianMTModel"
],
"model_type": "marian",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | null | ---
language:
- en
---
[![Build Status](https://s.yimg.com/cv/apiv2/social/images/yahoo_default_logo-1200x1200.png)]()
read the full article here : https://jsbin.com/joyoxarome/edit?html,output
Source : https://paste.ee/p/pJS3D
Flash News : https://paste.feed-the-beast.com/view/5ccf55ab
Biden last Talk : https://pastebin.com/
Russian Ukraine Breaking News : https://yamcode.com/
NYT: "Mr. Biden has said that he is willing to use his executive authority to act on global warming..."
By: Admin - Climate Depot
https://www.nytimes.com/2023/04/22/climate/epa-power-plants-pollution.html
By Coral Davenport and Lisa Friedman
President Biden's administration is poised to announce limits on greenhouse gas emissions from power plants that could compel them to capture the pollution from their smokestacks, technology now used by fewer than 20 of the nation's 3,400 coal and gas-fired plants, according to three people who were briefed on the rule.
If implemented, the proposed regulation would be the first time the federal government has restricted carbon dioxide emissions from existing power plants, which generate about 25 percent of the planet-warming pollution produced by the United States. It would also apply to future plants.
Almost all coal and gas-fired power plants would have to cut or capture nearly all of their carbon dioxide emissions by 2040, according to the people familiar with the regulation, who asked not to be identified because the rule has not been made public.
The proposed rule is sure to face opposition from the fossil fuel industry, power plant operators and their allies in Congress. It is likely to draw an immediate legal challenge from a group of Republican attorneys general that has already sued the Biden administration to stop other climate policies. A future administration could also weaken the regulation.
The regulation, proposed by the Environmental Protection Agency, is being reviewed by the White House's Office of Management and Budget, and could still be adjusted.
Maria Michalos, an E.P.A. spokeswoman, said the agency is "moving urgently to advance standards that protect people and the planet, building on the momentum from President Biden's Investing in America economic agenda, including proposals to address carbon emissions from new and existing power plants."
It would not mandate the use of carbon capture equipment, a nascent and expensive technology; rather, it would set caps on pollution rates that plant operators would have to meet. They could do that by using a different technology or, in the case of gas plants, switching to a fuel source like green hydrogen, which does not emit carbon, according to the people familiar with the matter. But the regulation could lead to the broader adoption of carbon capture technology, the people said.
Most of the electricity generated in the United States last year -- about 60 percent -- came from burning fossil fuels like coal, natural gas and petroleum, according to the U.S. Energy Information Administration.
The proposal comes on the heels of two other Biden administration plans to cut tailpipe emissions dramatically by speeding up the country's transition to electric vehicles, and curb methane leaks from oil and gas wells.
If those three regulations are implemented as proposed, they would significantly reduce the planet-warming pollution created by the world's largest economy. Together with the 2022 Inflation Reduction Act, a law that is pouring $370 billion into clean energy programs, they would put the country on track to meet Mr. Biden's pledge to cut the country's emissions roughly in half by 2030, and to stop adding carbon dioxide to the atmosphere by 2050.
...
Mr. Biden has said that he is willing to use his executive authority to act on global warming...
...
In releasing a climate rule for power plants, Mr. Biden hopes to succeed where his former boss, President Barack Obama, failed. Nearly a decade ago, Mr. Obama tried to enact broad limits on power plant pollution that were first blocked by the Supreme Court and then rolled back by President Donald J. Trump. Last summer, the Supreme Court confirmed that the E.P.A. had the authority to regulate carbon emissions from power plants but in a limited way.... |
Despin89/test | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: mit
datasets:
- mozilla-foundation/common_voice_13_0
language:
- zh
metrics:
- accuracy
pipeline_tag: automatic-speech-recognition
--- |
Dev-DGT/food-dbert-multiling | [
"pytorch",
"distilbert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | {
"architectures": [
"DistilBertForTokenClassification"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 17 | "2023-05-22T06:39:38Z" | ---
language:
- mn
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: mongolian-xlm-roberta-base-demo
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mongolian-xlm-roberta-base-demo
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1177
- Precision: 0.9262
- Recall: 0.9332
- F1: 0.9297
- Accuracy: 0.9785
## Model description
More information needed
## Intended uses & limitations
More information needed
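Pending more details, a minimal token-classification inference sketch with the `transformers` pipeline is shown below (the repo id is assumed from the model name above, and the label set depends on the undocumented training data):

```python
from transformers import pipeline

# Assumed repo id, taken from the model name in this card
token_classifier = pipeline(
    "token-classification",
    model="mongolian-xlm-roberta-base-demo",
    aggregation_strategy="simple",  # merge subword pieces into whole entity spans
)

# Example Mongolian input ("An event took place in Ulaanbaatar, the capital of Mongolia.")
print(token_classifier("Монгол Улсын нийслэл Улаанбаатар хотод арга хэмжээ боллоо."))
```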
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1979 | 1.0 | 477 | 0.1015 | 0.8713 | 0.8958 | 0.8834 | 0.9692 |
| 0.0839 | 2.0 | 954 | 0.0965 | 0.9050 | 0.9125 | 0.9088 | 0.9743 |
| 0.0604 | 3.0 | 1431 | 0.0844 | 0.9217 | 0.9258 | 0.9237 | 0.9771 |
| 0.0455 | 4.0 | 1908 | 0.0955 | 0.9154 | 0.9283 | 0.9218 | 0.9774 |
| 0.0337 | 5.0 | 2385 | 0.0923 | 0.9228 | 0.9318 | 0.9273 | 0.9787 |
| 0.0254 | 6.0 | 2862 | 0.1055 | 0.9213 | 0.9303 | 0.9258 | 0.9776 |
| 0.02 | 7.0 | 3339 | 0.1075 | 0.9244 | 0.9329 | 0.9286 | 0.9785 |
| 0.0149 | 8.0 | 3816 | 0.1142 | 0.9262 | 0.9329 | 0.9295 | 0.9788 |
| 0.0126 | 9.0 | 4293 | 0.1149 | 0.9219 | 0.9306 | 0.9262 | 0.9780 |
| 0.01 | 10.0 | 4770 | 0.1177 | 0.9262 | 0.9332 | 0.9297 | 0.9785 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Devmapall/paraphrase-quora | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"T5ForConditionalGeneration"
],
"model_type": "t5",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": true,
"length_penalty": 2,
"max_length": 200,
"min_length": 30,
"no_repeat_ngram_size": 3,
"num_beams": 4,
"prefix": "summarize: "
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to German: "
},
"translation_en_to_fr": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to French: "
},
"translation_en_to_ro": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to Romanian: "
}
}
} | 3 | null | ---
license: apache-2.0
datasets:
- glue
---
# Model Details
`bert-base-uncased` finetuned on `RTE`.
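RTE is a GLUE sentence-pair entailment task, so inference pairs a premise with a hypothesis. A minimal sketch follows (the checkpoint path is a placeholder, since the card does not say where the fine-tuned weights are published):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Placeholder path: point this at the fine-tuned RTE checkpoint
model_id = "path/to/bert-base-uncased-finetuned-rte"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."

inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

# GLUE RTE convention: label 0 = entailment, 1 = not_entailment
pred = logits.argmax(dim=-1).item()
print("entailment" if pred == 0 else "not_entailment")
```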
## Parameter settings
Batch size is 16, learning rate is 3e-5.
## Metrics
acc: 0.7148 |
Devrim/prism-default | [
"license:mit"
] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
language:
- en
---
[![Build Status](https://cdn.punchng.com/wp-content/uploads/2023/04/27005202/Court-gavel.jpg)]()
read the full article here : https://jsfiddle.net/729dhrpy/
Source : https://jsitor.com/FaWbx3z9d3
Flash News : https://pastelink.net/corizdtp
Biden last Talk : https://paste.ee/p/yhlbK
Russian Ukraine Breaking News : https://paste.feed-the-beast.com/view/f4a4477a
Disney filed a lawsuit Wednesday, taking its long-lasting feud with Florida Gov. Ron DeSantis to court.
In the 77-page lawsuit filed in the U.S. District Court for the Northern District of Florida, Disney accused the DeSantis administration of launching a "targeted campaign of government retaliation," which, they said, not only goes against protected speech but threatens the company's business operations and its economic future.
"Having exhausted efforts to seek a resolution, the Company is left with no choice but to file this lawsuit to protect its cast members, guests, and local development partners from a relentless campaign to weaponize government power against Disney in retaliation for expressing a political viewpoint unpopular with certain State officials," the legal complaint says, according to the Orlando Sentinel.
"There is no room for disagreement about what happened here: Disney expressed its opinion on state legislation and was then punished by the state for doing so," the lawsuit states.
Earlier in the day, a DeSantis-appointed board, which oversees municipal services where Disney is located, voted to reverse an agreement that would have allowed Disney to keep control over the 24,000 acre property in central Florida for 30 years and prevented the board from making big decisions without Disney's approval.
In February, DeSantis had stripped the Reedy Creek Improvement District, a special tax district that Disney had operated as a self-governing system for over 50 years, of power by replacing the board members with his own picks and renaming it the Central Florida Tourism Oversight District.
But days before this change, Disney-appointed board members negotiated a contract with the company to render the new board powerless, as Deseret News reported.
DeSantis ramped up his attacks on Disney in a press conference last week, accusing the media giant of making "special deals" with a board they controlled.
"It's basically a legal fiction that they negotiated it with themselves to give themselves the ability to maintain their self-governing status," he said.
He also proposed a number of ways to take action against Disney, one of them being developments near the theme parks.
"Maybe create a state park. Maybe try to do more amusement parks. Someone even said, like, maybe you need another state prison. Who knows? I mean, I just think that the possibilities are endless," he said.
The suit alleges that the board's latest actions were "patently retaliatory, patently anti-business, and patently unconstitutional."
Meanwhile, DeSantis spokesperson Taryn Fenske said that the administration is not aware "of any legal right that a company has to operate its own government or maintain special privileges not held by other businesses in the state," per CNN.
"This lawsuit is yet another unfortunate example of their hope to undermine the will of the Florida voters and operate outside the bounds of the law," Fenske added.
This latest development comes a year after Disney, which employs roughly 70,000 Floridians and attracts tens of millions of visitors, criticized the state's "Don't Say Gay" bill that prohibits the teaching of gender and sexual identity to children in kindergarten through third grade. DeSantis, who is speculated to run for president in 2024, has touted the legislation as an accomplishment.
Former Florida Supreme Court Justice Alan Lawson, who was hired by the new board, said that former board members did not have the legal authority to approve the agreement in question.
"Everyone must play by the same rules," Lawson told Politico. "Disney was openly and legally granted unique and special privilege, that privilege of running its own local government for a time. That era has ended."
Lawsuit aside, Disney and six Florida theme parks continue functioning as usual, with the average guest mostly unaffected by this feud, as Matt Roseboom, the editor and publisher of a Disney-centered magazine, told The Washington Post.
Disney's top chief, Robert Iger, has previously labeled DeSantis as "anti-business" and "anti-Florida," and says he has put the possibility of future investments in the sunshine state at risk. Former South Carolina Gov. Nikki Haley, who is campaigning for president in 2024, took the opportunity to invite Disney, and their thousands of jobs, to her state.
"We've got great weather, great people, and it's always a great day in South Carolina!" she said on Twitter, adding, "SC's not woke, but we're not sanctimonious about it either."... |
DevsIA/imagenes | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
datasets:
- poloclub/diffusiondb
language:
- en
- fr
metrics:
- bertscore
library_name: diffusers
--- |
DheerajPranav/Dialo-GPT-Rick-bot | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
language:
- en
---
[![Build Status](https://www.newshub.co.nz/home/politics/2023/04/livestream-prime-minister-chris-hipkins-delivers-pre-budget-speech/_jcr_content/par/video/image.dynimg.1280.q75.jpg/v1682552597200/GettyImages-1459236394-chris-hipkins-1120.jpg)]()
read the full article here : https://tech.io/snippet/LFZ39XY
Source : https://pasteio.com/xf0pICd0PiZC
Flash News : https://controlc.com/9d86d1bd
Biden last Talk : https://jsbin.com/guyuziqofa/edit?html,output
Russian Ukraine Breaking News : https://pastebin.com/
An acquaintance of accused murderer Timothy Thomas said when he asked about Thomas' girlfriend, Thomas said she was dead.
Jurors in the Thomas trial got to see dash-camera footage from Mansfield police Officer Justin Cikity on Wednesday, the sixth day of the trial, in Richland County Common Pleas Court.
Cikity responded to the 100 block of Bowman Street, which is where prosecutors say Thomas went after stabbing Sherrie Negroni to death on Sept. 26, 2021, in their apartment in the 900 block of Ashland Road.
Thomas, 42, is charged with aggravated murder, murder, two counts of felonious assault, tampering with evidence and domestic violence.
Cikity responded to the Bowman Street address because of a possible overdose. Several city firemen already were on the scene and attempting to give medical aid to Thomas.
"As I approached, I could hear a loud altercation. It was boisterous," Cikity said.
The police officer described Thomas as "very agitated."
"He was not wanting to accept the help they were trying to provide," Cikity said of the firemen/medical personnel.
Cikity said he told Thomas to calm down.
"I noticed he had a bandage on one of his hands as I was trying to 'cuff him. He became angry when I went hands-on," he said.
Cikity said he struck Thomas in the head twice to get him to comply.
"He looked back at me and gave almost a growl," he said.
Cikity said he delivered three more blows to Thomas' head before he was able to handcuff him. He said Thomas had blood dripping from a finger.
"I remember thinking that was a large amount of blood for such a small Band-Aid," Cikity said.
Prosecutors implied Thomas suffered the cut while stabbing Negroni. Thomas' blood was found on the handle of a knife recovered from the Ashland Road apartment.
Cikity helped the firemen carry Thomas to a stretcher. He was then taken to OhioHealth Mansfield Hospital.
Cikity next turned his attention to Charles Roberts, a resident of the address, in front of the house.
"He was more scared, it seemed," the police officer said. "Mr. Roberts wanted me to check on a female he advised was hurt."
Roberts' exchange with Cikity was captured on the officer's dash-cam video.
"He was tearing my house up," Roberts said of Thomas. "I said, 'Where's Sherrie? He said, 'She's dead.'''
Cikity next went to the hospital to get a name for Thomas.
On cross-examination, defense attorney Terry Hitchman asked about Thomas' mental status, asking if he was confused.
"No, he appeared aggressive," Cikity said.
Hitchman also suggested Thomas may have been affected by being struck in the head.
"I assume those strikes to the head weren't love taps," the defense attorney said.
Cikity said they were enough "to effect an arrest."
On redirect from Assistant Prosecutor Teri Burnside, Cikity modified his answer, saying the blows were enough "to effect and gain compliance."
Another witness who testified Wednesday was Ontario police Officer Kyle Elliott. He initiated a traffic stop a few hours later on Roberts, who was driving the Chevrolet HHR used by Thomas and Negroni.
The car was missing a headlight. On Elliott's body cam, Roberts told the Ontario officer the car belonged to his girlfriend Sherrie.
"Nothing unusual struck you?" the defense attorney asked of Elliott.
"No," the Ontario officer replied.
In the afternoon session before Judge Phil Naumoff, sheriff's Detective Burt Skeen testified about doing a welfare check on Negroni. He was a road deputy at the time.
When the landlord arrived with the keys, Skeen addressed the people who had gathered outside the apartment, including one of Negroni's sisters, Ariana Layer.
"We hope that this is a big waste of everybody's time," Skeen said.
As Skeen entered the apartment, jurors watched and listened to his body-cam footage.
"There's a deceased female inside. I don't know if it's your sister," he told Layer.
Another of Negroni's sisters, Lisa Wright, also testified Wednesday afternoon. She was among several people who went through Negroni's apartment a few days later.
They were the ones who discovered a bloody knife in a bag behind the bedroom door. Investigators had overlooked that piece of evidence.
Wright started to cry as she recalled seeing a water-filled bowl and sponge, as if someone had tried to clean up the scene.
Burnside asked Wright if Negroni had any enemies.
"Sherrie was a good, hard-working person," Wright replied. "Anybody that knew her loved her."... |
Dhritam/Zova-bot | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: mit
tags:
- image-to-image
duplicated_from: timbrooks/instruct-pix2pix
---
# InstructPix2Pix: Learning to Follow Image Editing Instructions
GitHub: https://github.com/timothybrooks/instruct-pix2pix
<img src='https://instruct-pix2pix.timothybrooks.com/teaser.jpg'/>
## Example
To use `InstructPix2Pix`, install `diffusers` using `main` for now. The pipeline will be available in the next release.
```bash
pip install diffusers accelerate safetensors transformers
```
```python
import PIL
import requests
import torch
from diffusers import StableDiffusionInstructPix2PixPipeline, EulerAncestralDiscreteScheduler
model_id = "timbrooks/instruct-pix2pix"
pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(model_id, torch_dtype=torch.float16, safety_checker=None)
pipe.to("cuda")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
url = "https://raw.githubusercontent.com/timothybrooks/instruct-pix2pix/main/imgs/example.jpg"
def download_image(url):
image = PIL.Image.open(requests.get(url, stream=True).raw)
image = PIL.ImageOps.exif_transpose(image)
image = image.convert("RGB")
return image
image = download_image(url)
prompt = "turn him into cyborg"
images = pipe(prompt, image=image, num_inference_steps=10, image_guidance_scale=1).images
images[0]
``` |
Dhruva/Interstellar | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
language:
- en
---
[![Build Status](https://www.thetimes.co.uk/imageserver/image/%2Fmethode%2Ftimes%2Fprod%2Fweb%2Fbin%2Fc8a665c4-e460-11ed-9d9d-927ca944996b.jpg?crop=3500%2C1969%2C0%2C182&resize=1200)]()
read the full article here : https://pastelink.net/n9hwaqor
Source : https://yamcode.com/
Flash News : https://jsitor.com/-8AkYLON2D
Biden last Talk : https://jsfiddle.net/cjdfs659/
Russian Ukraine Breaking News : https://paste.ee/p/X2mqm
In the hours after President Joe Biden's reelection bid announcement, the Republican Party released an ad completely made by AI.
On Tuesday President Joe Biden announced his bid for reelection in 2024, and it probably comes as no surprise that the Republicans were quick to criticize the president.
What might be surprising is how Republicans responded.
In its official response, the Republican Party released a video advertisement labeled "built entirely with AI imagery."
The video is tagged with an artificial image of Biden and Vice President Kamala Harris with the caption, "What if the weakest president we've ever had were re-elected?" The ad paints various dire circumstances the Republicans predict would happen if Biden were to win reelection.
SEE MORE: Why did President Biden announce his reelection on social media?
On April 4, President Biden spoke about AI following a meeting with the president's council of advisors on science and technology.
The president said,"AI can help deal with some very difficult challenges like disease and climate change, but we also have to address the potential risks to our society, to our economy, to our national security." He added, "Tech companies have a responsibility, in my view, to make sure their products are safe before making them public."
Regulators from the Federal Trade Commission are scrutinizing AI tools that businesses have used in making hiring decisions, or in deciding who to loan money to. They're watching tools that can generate text, images, voice and even video, trying to make sure consumers don't fall prey to mass deceptions or closely-targeted misinformation.
Trending stories at Scrippsnews.com
'Active shooter' casualties hit 5-year high in 2022, FBI says; House passes broad debt ceiling increase; Disney sues Florida Gov. DeSantis after repeal of special district...