Dataset columns:
- modelId: string (length 4-81)
- tags: sequence
- pipeline_tag: string (17 classes)
- config: dict
- downloads: int64 (0-59.7M)
- first_commit: unknown dtype (values are ISO-8601 timestamps)
- card: string (length 51-438k)
BigSalmon/GPTIntro
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
"2023-05-23T07:27:32Z"
--- language: - mn license: mit tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: mongolian-gpt2-ner results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mongolian-gpt2-ner This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2599 - Precision: 0.1483 - Recall: 0.2561 - F1: 0.1878 - Accuracy: 0.9149 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.4822 | 1.0 | 477 | 0.3452 | 0.1156 | 0.2072 | 0.1484 | 0.8876 | | 0.3376 | 2.0 | 954 | 0.3196 | 0.1369 | 0.2304 | 0.1717 | 0.8975 | | 0.3084 | 3.0 | 1431 | 0.2915 | 0.1242 | 0.2257 | 0.1603 | 0.9015 | | 0.2889 | 4.0 | 1908 | 0.2800 | 0.1328 | 0.2375 | 0.1704 | 0.9063 | | 0.275 | 5.0 | 2385 | 0.2734 | 0.1439 | 0.2452 | 0.1814 | 0.9099 | | 0.264 | 6.0 | 2862 | 0.2691 | 0.1426 | 0.2420 | 0.1795 | 0.9115 | | 0.256 | 7.0 | 3339 | 0.2639 | 0.1411 | 0.2442 | 0.1789 | 0.9129 | | 0.2498 | 8.0 | 3816 | 0.2628 | 0.1482 | 0.2511 | 0.1864 | 0.9135 | | 0.2438 | 9.0 | 4293 | 0.2603 | 0.1483 | 0.2548 | 0.1875 | 0.9143 | | 0.2388 | 10.0 | 4770 | 0.2599 | 0.1483 | 0.2561 | 0.1878 | 0.9149 | ### Framework versions - Transformers 4.29.2 - Pytorch 2.0.1+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
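The card above reports training metrics but no inference example; a minimal sketch using the `transformers` token-classification pipeline is shown below. The repo id is a placeholder (the card only gives the run name `mongolian-gpt2-ner`), and the Mongolian input sentence is purely illustrative.

```python
# Hypothetical inference sketch for the NER checkpoint described above.
# "<username>/mongolian-gpt2-ner" is a placeholder Hub path, not a confirmed repo id.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="<username>/mongolian-gpt2-ner",  # placeholder: substitute the real Hub path
    aggregation_strategy="simple",          # merge sub-word pieces into entity spans
)

# Illustrative Mongolian sentence: "There is a statue of D. Sukhbaatar in Ulaanbaatar."
print(ner("Улаанбаатар хотод Д. Сүхбаатарын хөшөө байдаг."))
```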
BigSalmon/InformalToFormalLincoln17
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
12
null
--- tags: - generated_from_trainer metrics: - wer model-index: - name: wav2vec2-300m-swa-r22-2k-ft-pseudo-v1 results: [] duplicated_from: mutisya/wav2vec2-300m-swa-r22-2k-ft-pseudo-v1 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-300m-swa-r22-2k-ft-pseudo-v1 This model is a fine-tuned version of [mutisya/wav2vec2-300m-swa-r22-2k](https://huggingface.co/mutisya/wav2vec2-300m-swa-r22-2k) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3668 - Wer: 0.3979 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 5000 - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 6.3491 | 0.06 | 400 | 2.8865 | 1.0 | | 2.4271 | 0.12 | 800 | 1.0885 | 0.8482 | | 0.8845 | 0.19 | 1200 | 0.5152 | 0.5594 | | 0.6576 | 0.25 | 1600 | 0.4299 | 0.4929 | | 0.5928 | 0.31 | 2000 | 0.3760 | 0.4718 | | 0.5442 | 0.37 | 2400 | 0.3654 | 0.4649 | | 0.5374 | 0.43 | 2800 | 0.3575 | 0.4584 | | 0.5111 | 0.5 | 3200 | 0.3522 | 0.4532 | | 0.5081 | 0.56 | 3600 | 0.3550 | 0.4593 | | 0.5016 | 0.62 | 4000 | 0.3342 | 0.4503 | | 0.5002 | 0.68 | 4400 | 0.3605 | 0.4520 | | 0.5045 | 0.74 | 4800 | 0.3407 | 0.4694 | | 0.5026 | 0.81 | 5200 | 0.3729 | 0.4675 | | 0.4972 | 0.87 | 5600 | 0.3458 | 0.4567 | | 0.4887 | 0.93 | 6000 | 0.3435 | 0.4505 | | 0.494 | 0.99 | 6400 | 0.3466 | 0.4449 | | 0.4434 | 1.05 | 6800 | 0.3382 | 0.4421 | | 0.4499 | 1.12 | 7200 | 0.3495 | 0.4612 | | 0.4471 | 1.18 | 7600 | 0.3365 | 0.4469 | | 0.4497 | 1.24 | 8000 | 0.3641 | 0.4518 | | 0.4517 | 1.3 | 8400 | 0.3572 | 0.4521 | | 0.4349 | 1.36 | 8800 | 0.3383 | 0.4520 | | 0.4396 | 1.43 | 9200 | 0.3591 | 0.4666 | | 0.4462 | 1.49 | 9600 | 0.3335 | 0.4509 | | 0.4393 | 1.55 | 10000 | 0.3406 | 0.4506 | | 0.4416 | 1.61 | 10400 | 0.3335 | 0.4456 | | 0.4511 | 1.67 | 10800 | 0.3339 | 0.4465 | | 0.4322 | 1.74 | 11200 | 0.3253 | 0.4428 | | 0.4356 | 1.8 | 11600 | 0.3242 | 0.4489 | | 0.4353 | 1.86 | 12000 | 0.3329 | 0.4424 | | 0.4408 | 1.92 | 12400 | 0.3159 | 0.4401 | | 0.4176 | 1.98 | 12800 | 0.3108 | 0.4338 | | 0.3977 | 2.05 | 13200 | 0.3263 | 0.4330 | | 0.38 | 2.11 | 13600 | 0.3305 | 0.4394 | | 0.3835 | 2.17 | 14000 | 0.3192 | 0.4320 | | 0.3881 | 2.23 | 14400 | 0.3141 | 0.4339 | | 0.3767 | 2.29 | 14800 | 0.3227 | 0.4339 | | 0.3818 | 2.36 | 15200 | 0.3127 | 0.4290 | | 0.3783 | 2.42 | 15600 | 0.3115 | 0.4333 | | 0.381 | 2.48 | 16000 | 0.3092 | 0.4273 | | 0.3857 | 2.54 | 16400 | 0.3187 | 0.4377 | | 0.3761 | 2.6 | 16800 | 0.3124 | 0.4260 | | 0.3718 | 2.67 | 17200 | 0.3156 | 0.4264 | | 0.3843 | 2.73 | 17600 | 0.3136 | 0.4340 | | 0.39 | 2.79 | 18000 | 0.3006 | 0.4242 | | 0.387 | 2.85 | 18400 | 0.3069 | 0.4269 | | 0.3835 | 2.91 | 18800 | 0.3205 | 0.4351 | | 0.3766 | 2.98 | 19200 | 0.2988 | 0.4224 | | 0.3485 | 3.04 | 19600 | 0.3084 | 0.4202 | | 0.3377 | 3.1 | 20000 | 0.3112 | 0.4233 | | 0.3428 | 3.16 | 20400 | 0.3169 | 0.4269 | | 0.3418 | 3.22 | 20800 | 0.3190 | 0.4226 
| | 0.3407 | 3.29 | 21200 | 0.3243 | 0.4266 | | 0.3302 | 3.35 | 21600 | 0.3050 | 0.4244 | | 0.3484 | 3.41 | 22000 | 0.3137 | 0.4294 | | 0.3342 | 3.47 | 22400 | 0.3096 | 0.4217 | | 0.3465 | 3.53 | 22800 | 0.3065 | 0.4225 | | 0.3401 | 3.6 | 23200 | 0.3093 | 0.4252 | | 0.3372 | 3.66 | 23600 | 0.3028 | 0.4225 | | 0.3409 | 3.72 | 24000 | 0.3143 | 0.4285 | | 0.3425 | 3.78 | 24400 | 0.3027 | 0.4266 | | 0.342 | 3.84 | 24800 | 0.2993 | 0.4221 | | 0.3351 | 3.91 | 25200 | 0.3037 | 0.4240 | | 0.3446 | 3.97 | 25600 | 0.3110 | 0.4308 | | 0.3174 | 4.03 | 26000 | 0.3075 | 0.4231 | | 0.2988 | 4.09 | 26400 | 0.3205 | 0.4263 | | 0.3071 | 4.15 | 26800 | 0.3058 | 0.4195 | | 0.3044 | 4.22 | 27200 | 0.3026 | 0.4176 | | 0.296 | 4.28 | 27600 | 0.3224 | 0.4198 | | 0.3118 | 4.34 | 28000 | 0.3074 | 0.4175 | | 0.3038 | 4.4 | 28400 | 0.3219 | 0.4189 | | 0.3053 | 4.46 | 28800 | 0.3074 | 0.4170 | | 0.3086 | 4.53 | 29200 | 0.3079 | 0.4149 | | 0.3016 | 4.59 | 29600 | 0.3055 | 0.4202 | | 0.3041 | 4.65 | 30000 | 0.3121 | 0.4194 | | 0.3032 | 4.71 | 30400 | 0.3030 | 0.4163 | | 0.3029 | 4.77 | 30800 | 0.2937 | 0.4183 | | 0.3025 | 4.84 | 31200 | 0.3095 | 0.4186 | | 0.3035 | 4.9 | 31600 | 0.3043 | 0.4162 | | 0.3024 | 4.96 | 32000 | 0.3030 | 0.4172 | | 0.2928 | 5.02 | 32400 | 0.3089 | 0.4106 | | 0.2678 | 5.08 | 32800 | 0.3207 | 0.4126 | | 0.2657 | 5.15 | 33200 | 0.3148 | 0.4114 | | 0.2675 | 5.21 | 33600 | 0.3099 | 0.4121 | | 0.2678 | 5.27 | 34000 | 0.3125 | 0.4106 | | 0.2691 | 5.33 | 34400 | 0.3142 | 0.4137 | | 0.2647 | 5.39 | 34800 | 0.3158 | 0.4136 | | 0.2775 | 5.46 | 35200 | 0.3151 | 0.4152 | | 0.2747 | 5.52 | 35600 | 0.3089 | 0.4099 | | 0.2707 | 5.58 | 36000 | 0.3120 | 0.4101 | | 0.2697 | 5.64 | 36400 | 0.3148 | 0.4114 | | 0.2799 | 5.7 | 36800 | 0.3129 | 0.4134 | | 0.2697 | 5.77 | 37200 | 0.3112 | 0.4090 | | 0.2766 | 5.83 | 37600 | 0.3159 | 0.4177 | | 0.2715 | 5.89 | 38000 | 0.3078 | 0.4106 | | 0.2749 | 5.95 | 38400 | 0.3054 | 0.4094 | | 0.2638 | 6.01 | 38800 | 0.3229 | 0.4111 | | 0.2415 | 6.08 | 39200 | 0.3269 | 0.4153 | | 0.2422 | 6.14 | 39600 | 0.3248 | 0.4124 | | 0.2461 | 6.2 | 40000 | 0.3214 | 0.4093 | | 0.246 | 6.26 | 40400 | 0.3233 | 0.4083 | | 0.2429 | 6.32 | 40800 | 0.3263 | 0.4130 | | 0.2423 | 6.39 | 41200 | 0.3224 | 0.4078 | | 0.2489 | 6.45 | 41600 | 0.3162 | 0.4082 | | 0.2426 | 6.51 | 42000 | 0.3251 | 0.4107 | | 0.2446 | 6.57 | 42400 | 0.3198 | 0.4095 | | 0.2448 | 6.63 | 42800 | 0.3198 | 0.4056 | | 0.2414 | 6.7 | 43200 | 0.3273 | 0.4114 | | 0.2421 | 6.76 | 43600 | 0.3137 | 0.4089 | | 0.2421 | 6.82 | 44000 | 0.3185 | 0.4089 | | 0.2332 | 6.88 | 44400 | 0.3178 | 0.4062 | | 0.2454 | 6.94 | 44800 | 0.3246 | 0.4081 | | 0.245 | 7.01 | 45200 | 0.3276 | 0.4072 | | 0.2152 | 7.07 | 45600 | 0.3476 | 0.4149 | | 0.2221 | 7.13 | 46000 | 0.3461 | 0.4099 | | 0.2195 | 7.19 | 46400 | 0.3413 | 0.4070 | | 0.2195 | 7.25 | 46800 | 0.3326 | 0.4040 | | 0.2146 | 7.32 | 47200 | 0.3434 | 0.4050 | | 0.2194 | 7.38 | 47600 | 0.3355 | 0.4054 | | 0.2173 | 7.44 | 48000 | 0.3287 | 0.4057 | | 0.2122 | 7.5 | 48400 | 0.3345 | 0.4066 | | 0.2184 | 7.56 | 48800 | 0.3324 | 0.4058 | | 0.2205 | 7.63 | 49200 | 0.3279 | 0.4050 | | 0.2164 | 7.69 | 49600 | 0.3292 | 0.4064 | | 0.2193 | 7.75 | 50000 | 0.3359 | 0.4052 | | 0.2236 | 7.81 | 50400 | 0.3350 | 0.4075 | | 0.218 | 7.87 | 50800 | 0.3332 | 0.4044 | | 0.2207 | 7.94 | 51200 | 0.3324 | 0.4024 | | 0.2143 | 8.0 | 51600 | 0.3317 | 0.4070 | | 0.2043 | 8.06 | 52000 | 0.3400 | 0.4009 | | 0.1983 | 8.12 | 52400 | 0.3444 | 0.4017 | | 0.2012 | 8.18 | 52800 | 0.3567 | 0.4044 | | 0.1908 | 8.25 | 53200 | 0.3544 | 0.4031 | | 
0.1965 | 8.31 | 53600 | 0.3476 | 0.4042 | | 0.1958 | 8.37 | 54000 | 0.3459 | 0.4032 | | 0.2036 | 8.43 | 54400 | 0.3526 | 0.4039 | | 0.2001 | 8.49 | 54800 | 0.3434 | 0.4028 | | 0.1952 | 8.56 | 55200 | 0.3535 | 0.4031 | | 0.1964 | 8.62 | 55600 | 0.3607 | 0.4037 | | 0.1983 | 8.68 | 56000 | 0.3545 | 0.4016 | | 0.1999 | 8.74 | 56400 | 0.3497 | 0.4017 | | 0.193 | 8.8 | 56800 | 0.3429 | 0.4008 | | 0.1954 | 8.87 | 57200 | 0.3503 | 0.4011 | | 0.1993 | 8.93 | 57600 | 0.3506 | 0.3989 | | 0.1942 | 8.99 | 58000 | 0.3481 | 0.3997 | | 0.1888 | 9.05 | 58400 | 0.3599 | 0.4001 | | 0.1857 | 9.11 | 58800 | 0.3613 | 0.4007 | | 0.1811 | 9.18 | 59200 | 0.3608 | 0.4004 | | 0.1827 | 9.24 | 59600 | 0.3676 | 0.4015 | | 0.1849 | 9.3 | 60000 | 0.3625 | 0.3982 | | 0.181 | 9.36 | 60400 | 0.3681 | 0.3980 | | 0.1843 | 9.42 | 60800 | 0.3661 | 0.3998 | | 0.1841 | 9.49 | 61200 | 0.3643 | 0.4006 | | 0.1805 | 9.55 | 61600 | 0.3648 | 0.3993 | | 0.1839 | 9.61 | 62000 | 0.3660 | 0.3998 | | 0.1825 | 9.67 | 62400 | 0.3671 | 0.3985 | | 0.1844 | 9.73 | 62800 | 0.3651 | 0.3983 | | 0.1774 | 9.8 | 63200 | 0.3660 | 0.3992 | | 0.1812 | 9.86 | 63600 | 0.3689 | 0.3983 | | 0.1807 | 9.92 | 64000 | 0.3677 | 0.3983 | | 0.1787 | 9.98 | 64400 | 0.3668 | 0.3978 | ### Framework versions - Transformers 4.29.2 - Pytorch 2.0.1+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
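The preceding card lists the training schedule in detail but no usage snippet; a minimal transcription sketch with the `transformers` automatic-speech-recognition pipeline follows. The repo id is taken from the card's `duplicated_from` field, and the audio file name is a placeholder.

```python
# Transcription sketch for the fine-tuned wav2vec2 checkpoint described above.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="mutisya/wav2vec2-300m-swa-r22-2k-ft-pseudo-v1",  # from the card's duplicated_from field
)

# Accepts a path to a local audio file (or a numpy array of 16 kHz samples).
result = asr("sample_swahili_clip.wav")  # placeholder file name
print(result["text"])
```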
BigSalmon/InformalToFormalLincoln24
[ "pytorch", "gpt2", "text-generation", "transformers", "has_space" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
null
--- metrics: - accuracy pipeline_tag: text-classification tags: - finance ---
BigSalmon/InformalToFormalLincoln25
[ "pytorch", "gpt2", "text-generation", "transformers", "has_space" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
10
null
--- library_name: ml-agents tags: - Huggy - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial that teaches you to train your first agent with ML-Agents and publish it to the Hub. ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy 2. Find your model_id: MrNoOne/ppo-Huggy 3. Select your *.nn or *.onnx file 4. Click on Watch the agent play 👀
BobBraico/bert-finetuned-ner
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - samsum metrics: - rouge model-index: - name: flan-t5-base-samsum results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: samsum type: samsum config: samsum split: test args: samsum metrics: - name: Rouge1 type: rouge value: 47.1222 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # flan-t5-base-samsum This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on the samsum dataset. It achieves the following results on the evaluation set: - Loss: 1.3701 - Rouge1: 47.1222 - Rouge2: 23.3908 - Rougel: 39.7231 - Rougelsum: 43.3842 - Gen Len: 17.1465 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | 1.4683 | 1.0 | 921 | 1.3897 | 46.737 | 23.2046 | 39.4441 | 43.2001 | 17.1526 | | 1.3586 | 2.0 | 1842 | 1.3726 | 47.2757 | 23.701 | 39.7059 | 43.502 | 17.2222 | | 1.3138 | 3.0 | 2763 | 1.3701 | 47.1222 | 23.3908 | 39.7231 | 43.3842 | 17.1465 | | 1.2828 | 4.0 | 3684 | 1.3737 | 47.3039 | 23.5383 | 39.8402 | 43.5561 | 17.3309 | | 1.2492 | 5.0 | 4605 | 1.3738 | 47.557 | 23.7814 | 40.1904 | 43.89 | 17.2332 | ### Framework versions - Transformers 4.29.2 - Pytorch 2.0.0 - Datasets 2.12.0 - Tokenizers 0.13.3
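The flan-t5-base-samsum card above documents training on the samsum dialogue-summarization dataset; a minimal usage sketch with the summarization pipeline could look like the following. The Hub path is a placeholder and the dialogue is an invented example.

```python
# Dialogue-summarization sketch for the flan-t5-base-samsum checkpoint described above.
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="<username>/flan-t5-base-samsum",  # placeholder Hub path
)

dialogue = (
    "Anna: Are we still meeting at 6?\n"
    "Tom: Yes, see you at the cafe.\n"
    "Anna: Great, I'll bring the slides."
)
print(summarizer(dialogue, max_length=60, min_length=5)[0]["summary_text"])
```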
BumBelDumBel/ZORK-AI-TEST
[ "pytorch", "tensorboard", "gpt2", "text-generation", "transformers", "generated_from_trainer", "license:mit" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
9
null
--- language: - th library_name: transformers pipeline_tag: automatic-speech-recognition ---
CAMeL-Lab/bert-base-arabic-camelbert-msa-sentiment
[ "pytorch", "tf", "bert", "text-classification", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
574
"2023-05-23T10:28:40Z"
--- language: - en library_name: transformers tags: - gpt - llm - large language model - h2o-llmstudio inference: true thumbnail: https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico --- # Model Card ## Summary This model was trained using [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio). - Base model: [EleutherAI/gpt-neo-1.3B](https://huggingface.co/EleutherAI/gpt-neo-1.3B) ## Usage To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers` and `torch` libraries installed. ```bash pip install transformers==4.28.1 pip install torch==2.0.0 ``` ```python import torch from transformers import pipeline generate_text = pipeline( model="fouadbakour/gpt-neo-1_3B-eurisko-website", torch_dtype=torch.float16, trust_remote_code=True, device_map={"": "cuda:0"}, ) res = generate_text( "Why is drinking water so healthy?", min_new_tokens=2, max_new_tokens=256, do_sample=False, num_beams=2, temperature=float(0.3), repetition_penalty=float(1.2), ) print(res[0]["generated_text"]) ``` You can print a sample prompt after the preprocessing step to see how it is feed to the tokenizer: ```python print(generate_text.preprocess("Why is drinking water so healthy?")["prompt_text"]) ``` ```bash <|prompt|>Why is drinking water so healthy?<|endoftext|><|answer|> ``` Alternatively, if you prefer to not use `trust_remote_code=True` you can download [h2oai_pipeline.py](h2oai_pipeline.py), store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer: ```python import torch from h2oai_pipeline import H2OTextGenerationPipeline from transformers import AutoModelForCausalLM, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained( "fouadbakour/gpt-neo-1_3B-eurisko-website", padding_side="left" ) model = AutoModelForCausalLM.from_pretrained( "fouadbakour/gpt-neo-1_3B-eurisko-website", torch_dtype=torch.float16, device_map={"": "cuda:0"} ) generate_text = H2OTextGenerationPipeline(model=model, tokenizer=tokenizer) res = generate_text( "Why is drinking water so healthy?", min_new_tokens=2, max_new_tokens=256, do_sample=False, num_beams=2, temperature=float(0.3), repetition_penalty=float(1.2), ) print(res[0]["generated_text"]) ``` You may also construct the pipeline from the loaded model and tokenizer yourself and consider the preprocessing steps: ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "fouadbakour/gpt-neo-1_3B-eurisko-website" # either local folder or huggingface model name # Important: The prompt needs to be in the same format the model was trained with. # You can find an example prompt in the experiment logs. 
prompt = "<|prompt|>How are you?<|endoftext|><|answer|>" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained(model_name) model.cuda().eval() inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to("cuda") # generate configuration can be modified to your needs tokens = model.generate( **inputs, min_new_tokens=2, max_new_tokens=256, do_sample=False, num_beams=2, temperature=float(0.3), repetition_penalty=float(1.2), )[0] tokens = tokens[inputs["input_ids"].shape[1]:] answer = tokenizer.decode(tokens, skip_special_tokens=True) print(answer) ``` ## Model Architecture ``` GPTNeoForCausalLM( (transformer): GPTNeoModel( (wte): Embedding(50257, 2048) (wpe): Embedding(2048, 2048) (drop): Dropout(p=0.0, inplace=False) (h): ModuleList( (0-23): 24 x GPTNeoBlock( (ln_1): LayerNorm((2048,), eps=1e-05, elementwise_affine=True) (attn): GPTNeoAttention( (attention): GPTNeoSelfAttention( (attn_dropout): Dropout(p=0.0, inplace=False) (resid_dropout): Dropout(p=0.0, inplace=False) (k_proj): Linear(in_features=2048, out_features=2048, bias=False) (v_proj): Linear(in_features=2048, out_features=2048, bias=False) (q_proj): Linear(in_features=2048, out_features=2048, bias=False) (out_proj): Linear(in_features=2048, out_features=2048, bias=True) ) ) (ln_2): LayerNorm((2048,), eps=1e-05, elementwise_affine=True) (mlp): GPTNeoMLP( (c_fc): Linear(in_features=2048, out_features=8192, bias=True) (c_proj): Linear(in_features=8192, out_features=2048, bias=True) (act): NewGELUActivation() (dropout): Dropout(p=0.0, inplace=False) ) ) ) (ln_f): LayerNorm((2048,), eps=1e-05, elementwise_affine=True) ) (lm_head): CastOutputToFloat( (0): Linear(in_features=2048, out_features=50257, bias=False) ) ) ``` ## Model Configuration This model was trained using H2O LLM Studio and with the configuration in [cfg.yaml](cfg.yaml). Visit [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio) to learn how to train your own large language models. ## Model Validation Model validation results using [EleutherAI lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness). ```bash CUDA_VISIBLE_DEVICES=0 python main.py --model hf-causal-experimental --model_args pretrained=fouadbakour/gpt-neo-1_3B-eurisko-website --tasks openbookqa,arc_easy,winogrande,hellaswag,arc_challenge,piqa,boolq --device cuda &> eval.log ``` ## Disclaimer Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions. - Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints. - Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion. - Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. 
The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model. - Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities. - Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues. - Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes. By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it.
Capreolus/birch-bert-large-mb
[ "pytorch", "tf", "jax", "bert", "next-sentence-prediction", "transformers" ]
null
{ "architectures": [ "BertForNextSentencePrediction" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
1
"2023-05-23T11:14:55Z"
--- language: - en --- [![Build Status](https://www.gannett-cdn.com/presto/2023/04/19/PMAN/fd0d3e30-a3c8-410f-a6ce-a40c810b963c-Thomas_jury_selection.jpg?auto=webp&crop=5301,2982,x0,y317&format=pjpg&width=1200)]() read the full article here : https://yamcode.com/breaking-news-update-1-05232023-120545 Source : https://rentry.co/zod2h Flash News : https://rentry.co/ostm6 Biden last Talk : https://rextester.com/MCT17784 Russian Ukrain Breaking News : https://rentry.co/gg2da The Ondo State Security Network Agency also known as Amotekun Corps has arrested no fewer than 31 suspected criminals at different locations across the state. The suspects were paraded at the headquarters of the corps in Akure, the state capital, on Wednesday. The Corps Commander, Adetunji Adeleye, told journalists that the suspects were arrested for different criminal activities, including rape, violation of anti-grazing law, kidnapping, armed robbery and murder among others. Adeleye urged residents of the state to be security conscious and be on the lookout for herdsmen who he said were attempting to flood forests in the state. He disclosed that about 9,000 herdsmen attempted to enter the state through the border communities in Edo, Kogi and Ekiti States with a large number of cattle to perpetrate crimes recently, but were sent back. He said, "The reality is that in the last month, we have a very heavy annoying unusual influx of herdsmen into the state. We have to drive them back to where they came from. "In the last month, we are confronted with over 9,000 herders against the laws of the land." Adeleye said only herdsmen who registered with the state government would be allowed in the state. "The trend has been on for a long time, especially in the last month, but we are now working on different modalities of ensuring that herders that are legitimate to be in the state are here. "The ones who are criminals and are not supposed to be in the state are driven out of our forests. Also, we are making efforts towards the border towns because we noticed that those that found their ways now to Ondo are through Ekiti State," he added. Adeleye who said the corps was determined to make the region uncomfortable for criminals, added that all the suspects paraded would be charged....
Captain272/lstm
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
"2023-05-23T11:18:50Z"
--- base_model: stabilityai/stable-diffusion-2-1-unclip-small training_prompt: A Suzuki Jimny driving down a mountain road protagonist: Suzuki Jimny tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - text-to-video - tune-a-video inference: false --- # Make-A-Protagonist - car-turn
Cdial/hausa-asr
[ "wav2vec2", "automatic-speech-recognition", "ha", "dataset:mozilla-foundation/common_voice_8_0", "transformers", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
{ "architectures": [ "Wav2Vec2ForCTC" ], "model_type": "wav2vec2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: distilgpt2-finetuned-wikitext2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilgpt2-finetuned-wikitext2 This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.6421 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.7602 | 1.0 | 2334 | 3.6669 | | 3.653 | 2.0 | 4668 | 3.6472 | | 3.6006 | 3.0 | 7002 | 3.6421 | ### Framework versions - Transformers 4.29.2 - Pytorch 2.0.1+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
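The distilgpt2-finetuned-wikitext2 card above reports validation losses only; a minimal text-generation sketch is shown below, with a placeholder Hub path and an arbitrary prompt.

```python
# Text-generation sketch for the distilgpt2-finetuned-wikitext2 checkpoint described above.
from transformers import pipeline, set_seed

generator = pipeline(
    "text-generation",
    model="<username>/distilgpt2-finetuned-wikitext2",  # placeholder Hub path
)
set_seed(42)  # make the sampled continuations reproducible

outputs = generator(
    "The history of natural language processing",  # arbitrary prompt
    max_length=50,
    do_sample=True,
    num_return_sequences=2,
)
for out in outputs:
    print(out["generated_text"])
```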
dccuchile/albert-base-spanish-finetuned-mldoc
[ "pytorch", "albert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "AlbertForSequenceClassification" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
34
null
--- language: - en --- [![Build Status](https://bloximages.chicago2.vip.townnews.com/buffalonews.com/content/tncms/assets/v3/editorial/8/11/811dee94-e487-11ed-8440-dbf0540ff85b/6413841a8e211.image.jpg?crop=1600%2C840%2C0%2C112&resize=438%2C230&order=crop%2Cresize)]() read the full article here : https://paste.feed-the-beast.com/view/dd1bbc45 Source : https://jsfiddle.net/47fjma5n/ Flash News : https://paste.ee/p/DZAEO Biden last Talk : https://pasteio.com/xI15cPoilqix Russian Ukrain Breaking News : https://controlc.com/11695113 The Texans own two of the first 12 picks in Thursday night's first round, but they may not wind up making picks at No. 2 and No 12. Adam Schefter of ESPN reports that the team has not had many trade discussions about the first of their two picks, but that they are exploring moves involving the 12th selection. They have reportedly spoken to teams about moving in either direction once the first round is underway. The direction they go could be linked to what they do with the No. 2 pick. Oddsmakers have edge rushers Tyree Wilson and Will Anderson as two of the three favorites to be selected with that pick and passing on a quarterback there could lead the Texans to look into a move back up the draft board in order to select one. With the Panthers set to take a quarterback first overall, it looks like the Texans' decision at No. 2 will have a lot to do with setting up how the rest of the first round plays out this year....
dccuchile/albert-xxlarge-spanish-finetuned-pawsx
[ "pytorch", "albert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "AlbertForSequenceClassification" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
26
"2023-05-23T13:01:40Z"
--- tags: - generated_from_trainer metrics: - f1 - accuracy model-index: - name: testing_class results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # testing_class This model is a fine-tuned version of [yashveer11/final_model_category](https://huggingface.co/yashveer11/final_model_category) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0034 - F1: 1.0 - Roc Auc: 1.0 - Accuracy: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---:|:-------:|:--------:| | No log | 1.0 | 10 | 0.0055 | 1.0 | 1.0 | 1.0 | | No log | 2.0 | 20 | 0.0034 | 1.0 | 1.0 | 1.0 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
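The testing_class card above reports F1, ROC AUC, and accuracy, which suggests a multi-label classification head; a speculative usage sketch follows. The Hub path, the input text, and the 0.5 decision threshold are all assumptions.

```python
# Multi-label classification sketch for the testing_class checkpoint described above.
from transformers import pipeline

clf = pipeline("text-classification", model="<username>/testing_class")  # placeholder Hub path

# top_k=None returns a score for every label; 0.5 is an assumed decision threshold.
scores = clf("Example text to categorise.", top_k=None)
predicted = [s["label"] for s in scores if s["score"] >= 0.5]
print(predicted)
```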
dccuchile/distilbert-base-spanish-uncased-finetuned-xnli
[ "pytorch", "distilbert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "DistilBertForSequenceClassification" ], "model_type": "distilbert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
31
"2023-05-23T12:02:41Z"
--- language: - en --- [![Build Status](https://bloximages.newyork1.vip.townnews.com/timesdaily.com/content/tncms/custom/image/d8cc6cd8-2760-11e6-89ad-c3c5c744d99d.jpg?resize=600%2C315)]() read the full article here : https://searchtech.fogbugz.com/default.asp?Suggestions.1.132502.0 Source : https://notes.io/qCgX5 Flash News : https://justpaste.me/1Phq1 Biden last Talk : https://jsbin.com/xeneyapire/edit?html,output Russian Ukrain Breaking News : https://pastebin.com/uaE7DHsm Watch it here. Credits: Video - Newshub; Image - Getty Images. Prime Minister Chris Hipkins is delivering a pre-Budget speech in Auckland. It's expected that he will discuss the Government's intent to restrain spending and how it plans to pay for the damage caused by Cyclone Gabrielle. Watch it above. It should begin at 12:40pm. App users click here....
CennetOguz/distilbert-base-uncased-finetuned-recipe-accelerate-1
[ "pytorch", "distilbert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "DistilBertForMaskedLM" ], "model_type": "distilbert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
1
null
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: ReinforceCardPoleV1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 428.38 +/- 111.31 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
CennetOguz/distilbert-base-uncased-finetuned-recipe
[ "pytorch", "tensorboard", "distilbert", "fill-mask", "transformers", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "DistilBertForMaskedLM" ], "model_type": "distilbert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
2
null
--- language: - en --- [![Build Status](https://s.yimg.com/cv/apiv2/social/images/yahoo_default_logo-1200x1200.png)]() read the full article here : https://jsitor.com/VYzbYVeafD Source : https://yamcode.com/breaking-news-update-1-05232023-125855 Flash News : https://youtube.com/live/S2e7TQ0LtSU?feature=share Biden last Talk : https://pastelink.net/1g4y6v9s Russian Ukrain Breaking News : https://jsfiddle.net/se4fh187/ Fox News Flash top sports headlines are here. Check out what's clicking on Foxnews.com. Minnesota Timberwolves All-Star Anthony Edwards has been charged with assault after his actions following a Game 5 loss to the Denver Nuggets at Ball Arena Tuesday night. Edwards had a chance to send the game to overtime with the last shot of the game. He missed on a clean look. The Nuggets won the series four games to one, and Edwards was clearly frustrated with himself after missing the shot. He was waving around a folding chair as he went to the tunnel toward the locker room. And two women working at Ball Arena told the Denver Police Department he struck them while swinging the folding chair. CLICK HERE FOR MORE SPORTS COVERAGE ON FOXNEWS.COM "Anthony Edwards was cited for third-degree assault after the game Tuesday night," Denver Police told Fox News Digital. "It was reported to Denver Police that while Edwards was walking from the court to the locker room at the conclusion of the game, he swung a folding chair and struck two women who were working at the time. Both women were injured, and, at the time, it was reported that the injuries were not serious." The Timberwolves released a statement about the incident. "We are aware of the alleged incident regarding Anthony Edwards following Game 5 in Denver and are in the process of gathering more information. We have no further comment at this moment," the Timberwolves' statement said. NUGGETS WIN SERIES OVER TIMBERWOLVES AS NIKOLA JOKIC'S TRIPLE-DOUBLE LEADS WAY IN GAME 5 It was a frustrating end to the season for Minnesota. The Timberwolves went back and forth with Denver all night before that final shot Edwards missed. Edwards finished the night with 29 points on 13 of 27 shooting, though he was 0-for-6 from 3-point territory, including the shot at the final buzzer. He also had eight rebounds and seven assists with two blocks. The 21-year-old earned his first All-Star bid this season, averaging a career-high 24.6 points on 45.9% shooting with 5.8 rebounds and 4.4 assists in the regular season. He was even better in the series against the Nuggets, dropping an average of 31.6 points per game with 5.2 assists, five rebounds, 1.8 steals and two blocks. CLICK HERE TO GET THE FOX NEWS APP Like last season, the Timberwolves were unable to advance past the first round....
Certified-Zoomer/DialoGPT-small-rick
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: roberta-finetuned-qa-reqzar results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-finetuned-qa-reqzar This model is a fine-tuned version of [PlanTL-GOB-ES/roberta-base-bne-sqac](https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne-sqac) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0 - Datasets 2.1.0 - Tokenizers 0.13.3
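The roberta-finetuned-qa-reqzar card above names a Spanish extractive-QA base model (PlanTL-GOB-ES/roberta-base-bne-sqac) but includes no example; a minimal question-answering sketch follows, with a placeholder Hub path and an invented Spanish question/context pair.

```python
# Extractive question-answering sketch for the fine-tuned checkpoint described above.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="<username>/roberta-finetuned-qa-reqzar",  # placeholder Hub path
)

answer = qa(
    question="¿Dónde se celebró la reunión?",
    context="La reunión del comité se celebró en Madrid el pasado martes.",
)
print(answer["answer"], answer["score"])
```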
Chae/botman
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
null
--- language: - en --- [![Build Status](https://cdn.punchng.com/wp-content/uploads/2023/04/27005202/Court-gavel.jpg)]() read the full article here : https://youtube.com/live/jgUjGhMLl18?feature=share Source : https://paste.ee/p/ucISj Flash News : https://paste.feed-the-beast.com/view/9ed65259 Biden last Talk : https://pasteio.com/x7U6CaGLLSxc Russian Ukrain Breaking News : https://controlc.com/2ddc1fc9 Riverside County Sheriff's Department deputies lost 60 pounds of methamphetamine April 19, after an undercover operation to catch narcotics traffickers went wrong. Investigators from the department attempted what's known as a "reverse buy" -- selling street drugs to catch a narcotics dealer -- and met with a suspect who was interested in buying the drug, according to an April 20 press release. According to authorities, the suspect left in his car with the drugs after the transaction and led other deputies on a car chase. However, deputies lost sight of his vehicle and failed to arrest him, "due to high speeds and the suspect's disregard for public safety," the press release said. It's unclear how the buyer was able to drive away after the transaction. A request for additional information from the department was not returned. The failed operation comes a month after the department announced it had seized around 12,500 pounds of methamphetamine in 2022, among several other drugs including fentanyl, heroin, cocaine, and marijuana....
Chaewon/mnmt_decoder_en
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- language: en tags: - multivae license: apache-2.0 --- ### Downloading this model from the Hub This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub` ```python >>> from multivae.models import AutoModel >>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name") ```
Chaewon/mnmt_decoder_en_gpt2
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
"2023-05-23T12:10:42Z"
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: CartPole-v1-Reinforce results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 476.50 +/- 67.88 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
Chaima/TunBerto
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
"2023-05-23T12:10:44Z"
--- license: mit tags: - generated_from_trainer metrics: - rouge model-index: - name: my_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_model This model is a fine-tuned version of [Turkish-NLP/t5-efficient-small-MLSUM-TR-fine-tuned](https://huggingface.co/Turkish-NLP/t5-efficient-small-MLSUM-TR-fine-tuned) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.3948 - Rouge1: 0.27 - Rouge2: 0.2497 - Rougel: 0.2646 - Rougelsum: 0.264 - Gen Len: 18.85 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:| | No log | 1.0 | 260 | 1.5276 | 0.2442 | 0.216 | 0.2347 | 0.2349 | 18.7615 | | 1.8444 | 2.0 | 520 | 1.4447 | 0.2686 | 0.2475 | 0.2626 | 0.2625 | 18.7692 | | 1.8444 | 3.0 | 780 | 1.3962 | 0.2682 | 0.2482 | 0.2626 | 0.262 | 18.6808 | | 1.5665 | 4.0 | 1040 | 1.3948 | 0.27 | 0.2497 | 0.2646 | 0.264 | 18.85 | ### Framework versions - Transformers 4.28.0 - Pytorch 2.0.1+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
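The my_model card above fine-tunes a Turkish T5 summarizer and reports generated lengths of roughly 19 tokens; a minimal generation sketch using the tokenizer/model API directly is shown below. The Hub path and the Turkish input sentence are placeholders, and the generation settings merely mirror the card's reported lengths.

```python
# Summarization sketch for the Turkish T5 checkpoint described above ("my_model").
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "<username>/my_model"  # placeholder Hub path
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

text = "Ekonomi yönetimi, enflasyonla mücadele için yeni tedbirler açıkladı."  # illustrative input
inputs = tokenizer(text, return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, max_new_tokens=20, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```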
chainyo/speaker-recognition-meetup
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
1
null
--- language: en tags: - multivae license: apache-2.0 --- ### Downloading this model from the Hub This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub` ```python >>> from multivae.models import AutoModel >>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name") ```
ChaitanyaU/FineTuneLM
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- language: - en --- [![Build Status](https://www.newshub.co.nz/home/politics/2023/04/livestream-prime-minister-chris-hipkins-delivers-pre-budget-speech/_jcr_content/par/video/image.dynimg.1280.q75.jpg/v1682552597200/GettyImages-1459236394-chris-hipkins-1120.jpg)]() read the full article here : https://paste.toolforge.org/view/0cce1970 Source : https://tech.io/snippet/7O2EJc3 Flash News : https://etextpad.com/aginwflqje Biden last Talk : https://notes.io/qCgJn Russian Ukrain Breaking News : https://justpaste.me/1Pts1 Pep Guardiola was adamant that the next three games will dictate the title race after Manchester City's commanding 4-1 home victory against Arsenal saw them take charge of the Premier League. City remain in second place and two points behind Arsenal but, with two games in hand over their London rivals, have wrestled the momentum into their own hands with just seven games to go. A brace from Kevin De Bruyne alongside goals from John Stones and Erling Haaland cemented a dominant home display from Guardiola's side, but the City manager was quick to turn the attention to the crucial fixtures ahead as City look to take advantage of their games in hand over their London rivals. "I know the next three games are really important," he said speaking to BT Sport. "Fulham on Sunday, what Marco Silva has done this season is incredible, and then after the two games at home against West Ham and Leeds, these games will dictate the season. "The reality today is we are behind Arsenal, they are two points in front of us." Guardiola went on to laud the dominant manner of his side's victory who have now won 12 consecutive league games against Arsenal. "From the first minute we were incredibly focused," Guardiola said. "The guys responded unbelievably in an important game - not decisive, but really important. "We are back-to-back Premier League winners so in September, October, when you lose a game you say you have time, but Arsenal were not like that. When we arrive in the last two months, the players know it's close and if we lose, we have no chance. "As a player, playing with that mentality that there is no other option but to win, that is the best way to approach the games. In the last two months, the players showed that every game we try to win and move forward." Stones', whose header provided City's crucial second goal on the stroke of half-time, suggested that the experience of City's squad was a key factor in the crucial victory. He said: "We've been through tough moments and situations at this stage of the season in the past, and it's done us so well - we know how to cope and what to do in different scenarios. "Everyone's been there and everyone has that hunger. Day's like today there is a lot of pressure from the outside, but we know our jobs and what is asked of us on the pitch and that experience helps in those big pressure moments." Guardiola will now look to guide his side to their fifth league title in six years and heralded the support from the club's board as well as the quality of his players for his remarkable success at the Manchester club. "This club gave me everything, from the hierarchy," he said. "I remember the first season when we didn't win, they supported me unconditionally. We were lucky as a team with how the way Liverpool pushed us in the previous seasons and this season, Arsenal - they got 50 points in the first half of the season. We want to win, why should we stop, it's not necessary. "I've been at Barcelona, Bayern Munich and Manchester City. 
The incredible qualities of the players I have had at my disposal - it's work ethic, the rest is quality."...
Chakita/KannadaBERT
[ "pytorch", "roberta", "fill-mask", "transformers", "masked-lm", "fill-in-the-blanks", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "RobertaForMaskedLM" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
null
--- language: - en --- [![Build Status](https://www.thetimes.co.uk/imageserver/image/%2Fmethode%2Ftimes%2Fprod%2Fweb%2Fbin%2Fc8a665c4-e460-11ed-9d9d-927ca944996b.jpg?crop=3500%2C1969%2C0%2C182&resize=1200)]() read the full article here : https://pastebin.com/e4w4SxqZ Source : https://searchtech.fogbugz.com/default.asp?Suggestions.1.132527.0 Flash News : https://jsbin.com/jomobawoxo/edit?html,output Biden last Talk : https://youtube.com/live/gMEM9VsTSEU?feature=share Russian Ukrain Breaking News : https://yamcode.com/breaking-news-update-1-05232023-131211 WASHINGTON -- Again and again, Democratic voters who backed Joe Biden in the party's 2020 primary and now want him to step aside cite the same overriding reason: age. "Biden is too old," a woman in her 60s told NBC News pollsters in a survey released earlier this month, ahead of his announcement Tuesday making his 2024 campaign official. "Too old," said a man in his 70s. "He'll be 86 by the time it's over," said a woman in her 50s, noting the age Biden would be if he wins re-election and serves out a second term. "We need new people in there." As the oldest president in history runs for re-election -- Biden is 80 -- age looms as one of his most glaring vulnerabilities. Republicans are weighing how far to push the issue in the 2024 campaign without alienating older Americans who tend to be dependable voters. In this early stage of the race, Republicans seem split on whether they can effectively sow doubts about Biden's longevity. Sen. Lindsey Graham, R-S.C., an ally of former President Donald Trump, said of Biden: "I'm not going to say he's too old. I'm going to say that his policies are not working. If he were 35, they wouldn't be working." Inside the White House, aides recognize that an octogenarian boss poses a unique dilemma. They can't make him younger absent a "time machine," as one White House adviser ruefully noted. What they've done instead is try to demonstrate the president is active and fit. Biden dons a pair of aviator sunglasses at every turn. A video released announcing his re-election campaign carries a snippet of him jogging in his suit jacket. (Missing from the video montage, of course, are images of Biden tripping on the stairs to Air Force One or falling off his bicycle in Delaware.) The White House has produced a spreadsheet showing that Biden's travel schedule through the first four months of the year -- including an unannounced visit to the Ukrainian war zone in February- eclipsed that of a far younger president, Barack Obama, at a comparable period in the 2012 campaign. "It doesn't -- it doesn't register with me. I took a hard look at it before I decided to run. And I feel good," Biden said Wednesday at a joint news conference with his South Korean counterpart, Yoon Suk Yeol. Donald Trump, now 76, and allied political action committees spent millions in ads homing in on Biden's age in 2020, with little to show for it: Trump lost. But Trump is persisting in painting Biden as diminished. Another GOP presidential candidate, 51-year-old Nikki Haley, floated a proposal to give politicians over the age of 75 (both Biden and Trump fit that bill) a mental competency test. Tempting as it may be to highlight Biden's age, Republicans can't expect to sweep back into the White House on that argument alone, some GOP strategists caution. The issue of abortion rights turbocharged Democratic turnout in the midterm elections last year and also figures to be a galvanizing issue in 2024. 
If the choice comes down to Biden and Trump, Democrats may swallow any misgivings about Biden in hopes of quashing any comeback from the twice-impeached ex-president. "I would caution Republicans about thinking that this [Biden's age] is a silver bullet or even likely to be decisive in 2024," said Ralph Reed, founder of the Faith and Freedom coalition and a longtime Republican strategist. A core argument from the White House is that Biden's age is not a liability but a strength. With age comes judgment that a younger president can't match, as Biden advisers see it. "You had to have this kind of experience and wisdom when you see what happened in Ukraine and you assemble a coalition to repel [Russian President Vladimir] Putin's aggression," said Ron Klain, Biden's former White House chief of staff, referring to Biden's response to Russia's invasion of its democratic neighbor over a year ago. "I think with his age comes a lot of experience, a lot of wisdom. I think that wisdom has served the American people very well." One drawback of a political career spanning a half-century, however, is that voters can see for themselves the imprint of advancing age. One Democratic lawmaker, speaking on condition of anonymity to discuss Biden's age, said: "You see him sometimes and there's no denying he's 80 years old. It makes you concerned. You want to make sure he doesn't fall. You want to protect him. I feel like I'm around a grandparent where you're just cherishing the last little bit of time you have with them." In a word cloud generated from poll responses given to NBC News by the 44% of Democratic primary voters who said Biden shouldn't run again, the phrases that jumped out were "too old," "age," "younger," "mental health" and "ineffective." One reason for the lackluster appraisal may be that Biden's larger message isn't getting out, his advisers say. Most people don't seem well-versed in what the White House views as historic accomplishments having only slim majorities in Congress for the first two years of his term - and now a divided Congress. He has shepherded into law transformational changes that will help wean the U.S. from fossil fuels, modernize roads and bridges, and reduce prescription drug prices, Democratic strategists contend. "That's what's so frustrating," said Celinda Lake, a Democratic pollster who worked with Biden's 2020 campaign and who is not involved in the NBC poll. "We're three years into this presidency and people haven't seen the results. Half the people can't name anything" Biden has accomplished. If the Biden campaign can better convey his record, Lake said, "It's a twofer: Results are what we have to get out to get re-elected, and results are the biggest antidote to [questions about Biden's] age." Infrastructure projects often take years to complete. High inflation and gas prices, though, remain front and center. The NBC News poll suggests that only 38% of adults approved of Biden's handling of the economy. If that number were to improve, voters would be more inclined to overlook Biden's age, Democratic strategists said. "People don't care about how old someone is if they feel their president is getting things done for them and making a difference in their lives," said Democratic pollster Jeff Horwitt, who conducted the NBC News survey along with Republican pollster Bill McInturff. "People don't wake up in the morning saying Joe Biden is old. 
They wake up in the morning saying, 'I need to provide for my family and feel good about my future.'" Complicating matters is a saga involving longtime Sen. Dianne Feinstein, D.-Calif. At 89 and having missed dozens of votes, Feinstein is facing calls to resign. Her prolonged absence from the Senate is an unhelpful reminder of what happens when elderly politicians refuse to bow out, one Biden ally said. "Feinstein has elevated the age concerns and I do think it's hurting us," the ally said, speaking on condition of anonymity to talk more freely. "It's part of the backdrop that has made people think: 'See, there are people who are too old.'" A reassuring reality for Democrats is that age hasn't proved to be a winning message. Ronald Reagan, at 73, memorably deflected questions about his age when he said during a presidential debate with Democrat Walter Mondale in 1984 that he wouldn't exploit, "for political purposes, my opponent's youth and inexperience." Reagan was happy to be photographed chopping wood at his Santa Barbara ranch and riding horses to demonstrate he was still robust, even after an assassination attempt in his first year in office. He won re-election in a landslide. Trump himself had trouble weaponizing the issue in the 2020 race. Trump's campaign spent more than $3 million airing a 30-second ad that directly focused on Biden's cognitive state and health. "Joe Biden does not have the strength, stamina and mental fortitude required to lead this country," a narrator says against a backdrop of unflattering pictures of Biden. Voters seemed unmoved; they had their own reservations about Trump's fitness. An NBC News-Wall Street Journal poll released the month before the 2020 election asked voters whether Biden or Trump would be better when it comes to the mental and physical health needed to be president. They rated about the same: 41% preferred Biden; 40%, Trump. "Who is the most reliable voting bloc in America? It's older Americans," said a national Republican strategist involved in the 2024 presidential race, speaking anonymously to talk more freely. "The idea that we're going to attack people for being older is just not very good politics."...
Chakita/gpt2_mwp
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
6
null
--- language: en tags: - multivae license: apache-2.0 --- ### Downloading this model from the Hub This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub` ```python >>> from multivae.models import AutoModel >>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name") ```
Chandanbhat/distilbert-base-uncased-finetuned-cola
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- language: - ur pipeline_tag: text-generation tags: - art - bloom ---
Charlotte/text2dm_models
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- language: - en --- [![Build Status](https://profootballtalk.nbcsports.com/wp-content/uploads/sites/25/2023/04/GettyImages-1245157318-e1682543585770.jpg)]() read the full article here : https://youtube.com/live/rALuO6DcsKc?feature=share Source : https://jsitor.com/CWSZYCHSk6 Flash News : https://pastelink.net/tkjxx2bf Biden last Talk : https://jsfiddle.net/Lv86y7mn/ Russian Ukrain Breaking News : https://paste.ee/p/fuUZO WILMINGTON -- The following report is compiled using a disposition report provided by Clinton County Municipal Court. The information includes the defendant's name, age, residence if listed, charge(s) disposed, fine, court costs assessed, jail sentence and any special conditions. Judge David Henry currently oversees the court, which administers the law in criminal misdemeanor cases. The following list includes those who pleaded guilty or were found guilty between April 17 and April 21: * Dalton Spencer, 24, of Lebanon, theft, sentenced to 180 days in jail (150 days suspended), fined $250, assessed $170 court costs. Spencer must take part in supervised probation on release from jail, must have no contact with the incident location, and pay $1,039 in restitution. * Shannon Norris, 35, of Port William, falsification, sentenced to 180 days in jail (suspended), fined $150, assessed $170 court costs. Norris must commit no further offenses for one year and be monitored by non-reporting probation. An assault charge was dismissed. * Dawson Nace, 24, criminal damages, obstructing official business, sentenced to 100 days in jail (79 days suspended), assessed $170 court costs. Nace must take part in one year probation and must commit in no similar offenses * Tyler Hehman, 37, of Loveland, reckless operation, failure to report an accident, sentenced to 60 days in jail (suspended), fined $200, assessed $340 court costs. The offense was amended from an O.V.I. charge. Hehman must complete a three-day non-residential driver intervention program. The 'failure to report' charge was amended from a hit-skip offense. ALS vacated. Additional charges of hit-skip, passing violation, and failure to control were dismissed. * Amy Simonton, 45, of Wilmington, resisting arrest, sentenced to 60 days in jail (suspended), fined $100, assessed $170 court costs. Simonton must commit no further offenses for two years and be monitored by non-reporting probation. Additional charges of obstructing official business and drug paraphernalia were dismissed. * Michael Carson, 36, of Wilmington, trespassing, sentenced to 30 days in jail (20 days suspended), assessed $100 court costs. Carson must have no contact with the incident location and the victim. A drug instrument possession charge was dismissed. * Corey Crothers, 31, trespassing, sentenced to 30 days in jail (23 days suspended), assessed $170 court costs. Crothers must commit no further offenses for one year and be monitored by non-reporting probation. A scavenging charge was dismissed. * Christopher White, 31, of Wilmington, drug paraphernalia, driving under suspension-failing to reinstate, sentenced to 30 days in jail (27 days suspended), fined $500, assessed $340 court costs. The offense was amended from a drug instrument possession charge. White must take part in supervised probation. * James Harner, 37, of Cincinnati, trespassing, driving under suspension, sentenced to eight days in jail, fined $100, assessed $340 court costs. Additional charges of expired tags, driving under 12 point suspension, and a tag/sticker violation were dismissed. 
* Latasha Adams-Huston, 26, of Hamilton, sentenced to three days in jail (suspended), fined $150, assessed $170 court costs. The offense was amended from an O.V.I. charge. Adams-Huston must complete a three-day non-residential driver intervention program. ALS vacated. A failure to control charge was dismissed. * Patricia Wood, 40, of Jefferson City, driving under suspension-financial, fined $250, assessed $170 court costs. * Zachary Westfall, 20, marijuana possession, fined $30, assessed $170 court costs. * Michael Carson, 36, disorderly conduct, assessed $170 court costs. The offense was amended from a disorderly conduct charge. * Haley Elliot, 24, of New Vienna, marijuana possession, fined $30, assessed $170 court costs. Additional charges of no license, failure to control, and a seat belt violation were dismissed * Thomas Lemmings, 48, of Wilmington, driving under suspension, two counts of driving under suspension-failing to appear/pay fine, driving under suspension-financial, seat belt violation, fined $1,250, assessed $680 court costs. * Cheyenne Morgan, 20, of Wilmington, resisting arrest, assessed $170 court costs. Additional charges of underage consumption and drug paraphernalia were dismissed. * Aaron Strange, 41, of Cleveland, going 105 in a 70 mph speed zone, fined $130, assessed $195 court costs. The case was waived by Strange. * Rajashekar Upendram, 48, of Mason, going 90 in a 70 mph speed zone, fined $130, assessed $170 court costs. The case was waived by Upendram. * Payton Rollins, 18, of Wilmington, going 94 in a 65 mph speed zone, fined $130, assessed $170 court costs. The case was waived by Rollins. * Karl Schultheiss, 76, of Middletown, going 91 in a 70 mph speed zone, fined $130, assessed $170 court costs. The case was waived by Schultheiss. * Jerome Woody, 49, of Wilmington, assault. Sentencing stayed. * Aaron Benney, 38, of Clarksville, trespassing. Sentencing stayed. Benney must have no contact with the victim....
Charlotte77/model_test
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: bert-finetuned-mrpc results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-mrpc This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.28.0 - Pytorch 2.0.1+cu118 - Tokenizers 0.13.3
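The card above names the base checkpoint and training setup but gives no usage snippet. Below is a minimal, hedged sketch of scoring a sentence pair with the fine-tuned MRPC classifier through the standard transformers API; the hub repo id is taken from this record's modelId (Charlotte77/model_test), and whether the weights actually live there, as well as the label mapping, are assumptions.

```python
# Hedged sketch: load the fine-tuned MRPC paraphrase classifier and score one pair.
# The repo id is taken from this record's modelId; the label mapping (1 = "equivalent")
# follows the usual GLUE/MRPC convention and is an assumption here.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo_id = "Charlotte77/model_test"  # assumed hub location of the checkpoint
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)

# MRPC is a sentence-pair task, so the tokenizer receives both sentences at once.
inputs = tokenizer(
    "The company reported record profits this quarter.",
    "Record quarterly profits were announced by the company.",
    return_tensors="pt",
    truncation=True,
)
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1).item())  # 1 would mean the pair is judged a paraphrase
```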
Cheapestmedsshop/Buymodafinilus
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- language: - en --- [![Build Status](https://s.hdnux.com/photos/01/32/37/72/23723674/3/rawImage.jpg)]() read the full article here : https://pasteio.com/xO8Y1Y9bZ9n3 Source : https://paste.feed-the-beast.com/view/739a9457 Flash News : https://youtube.com/live/IEje-P7xZsQ?feature=share Biden last Talk : https://controlc.com/c5535d70 Russian Ukrain Breaking News : https://tech.io/snippet/5HuDVzJ Rep. Laurel Lee, R-Fla., on her support for Gov. Ron DeSantis, who she believes will run for president in 2024 Republican Florida Gov. Ron DeSantis is expected to take further steps towards launching a 2024 campaign for the White House next month, according to a Wednesday report by NBC News. The report cites four unnamed Republican operatives it said are familiar with the conversations about plans for DeSantis to launch a presidential exploratory committee, with an official launch of his campaign to come at a later time. DESANTIS' TEAM MOCKS BIDEN'S 'FINISH THE JOB' 2024 ANNOUNCEMENT: 'WE'D RATHER YOU NOT' Each of the operatives told NBC that a mid-May launch of the committee was the target, but that a number of his supporters wanted him to declare his candidacy by May 11 in order to counter former President Donald Trump's front-runner status for the Republican nomination. Others close to the governor, the report said, have argued such an early date would be too soon. CLICK HERE TO GET THE FOX NEWS APP Fox News Digital reached out to the DeSantis campaign to confirm the report, but did not immediately receive a response....
Cheatham/xlm-roberta-base-finetuned
[ "pytorch", "xlm-roberta", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "XLMRobertaForSequenceClassification" ], "model_type": "xlm-roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
20
null
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: distilbert-base-uncased-finetuned-Pre_requisite_finder results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-Pre_requisite_finder This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.6785 - Accuracy: 0.5633 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.6957 | 1.0 | 648 | 0.6935 | 0.4865 | | 0.6881 | 2.0 | 1296 | 0.6774 | 0.5586 | | 0.6725 | 3.0 | 1944 | 0.6785 | 0.5633 | | 0.6565 | 4.0 | 2592 | 0.7072 | 0.5513 | | 0.6346 | 5.0 | 3240 | 0.6950 | 0.5521 | ### Framework versions - Transformers 4.28.0 - Pytorch 2.0.1+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
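The card above lists the optimizer, scheduler, batch sizes and epoch count but no code. The sketch below shows, under stated assumptions, how those hyperparameters map onto transformers `TrainingArguments`; the dataset, preprocessing and number of labels are not described in the card, so the dataset placeholders and `num_labels=2` are assumptions.

```python
# Hedged sketch of the training setup implied by the card's hyperparameter list.
# train_ds / eval_ds are placeholders: the card does not describe the dataset.
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

base = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForSequenceClassification.from_pretrained(base, num_labels=2)  # label count assumed

args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-Pre_requisite_finder",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=5,
    lr_scheduler_type="linear",
    seed=42,
    evaluation_strategy="epoch",  # assumption: the card reports one eval row per epoch
)

# trainer = Trainer(model=model, args=args, tokenizer=tokenizer,
#                   train_dataset=train_ds, eval_dataset=eval_ds)  # placeholders
# trainer.train()
```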
Cheatham/xlm-roberta-large-finetuned-d12
[ "pytorch", "xlm-roberta", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "XLMRobertaForSequenceClassification" ], "model_type": "xlm-roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
20
"2023-05-23T12:24:38Z"
--- language: - en --- [![Build Status](https://assets.wallstreet-online.de/_media/6/source1280/size_645/businesswire-1280.png)]() read the full article here : https://notes.io/qCgAr Source : https://etextpad.com/zmyoo0olvx Flash News : https://paste.toolforge.org/ Biden last Talk : https://youtube.com/live/Cxh4UHhRqCQ?feature=share Russian Ukrain Breaking News : https://rentry.co/45bq9 J. Scott Applewhite / AP House Budget Committee Chairman Jodey Arrington, R-Texas, speaks as House Majority Leader Steve Scalise, R-La., listens at right, as House Republicans push their sweeping debt ceiling package to win over holdouts in their party, at the Capitol in Washington, Wednesday, April 26, 2023. Speaker of the House Kevin McCarthy, R-Calif., is struggling to round up the votes for the bill, which would couple an increase of the country's debt ceiling with restrictions on federal spending. A final vote on the package is likely Thursday. By Kevin Freking, Associated Press Wednesday, April 26, 2023 | 4:45 p.m. WASHINGTON -- House Republicans are trying to exact a price from Democrats for agreeing to increase the nation's borrowing authority and prevent the government from defaulting on the obligations it has accrued over decades. They're arguing for their priorities and going after President Joe Biden's in a separate bill that passed the chamber on Wednesday. The legislation in question has virtually no chance of becoming law. But Republicans hope the bill's passage will force Biden to the negotiating table, where they could seek concessions in return for lifting the nation's borrowing authority and ensuring that the U.S. Treasury can fully pay its bills. "He either has to negotiate now or we're the only ones that have raised the debt limit," McCarthy said after the vote. A look at key aspects of the legislation that the House approved by a vote of 217-215. LIMIT FEDERAL SPENDING The bill would set federal discretionary spending at $1.47 trillion during the next fiscal year and allow it to increase only 1% annually from there, far below the rate of inflation in most years. The cap on spending is the big-ticket item in the bill, accounting for about two-thirds of the $4.8 trillion in deficit reduction that the Congressional Budget Office says would occur over 10 years if the bill is enacted. Discretionary spending includes things like weapons programs, servicemember pay, grants for schools that serve large shares of low-income students, rental assistance to house millions of poor and disabled, and money to fund research on cancer and other life-threatening diseases. It's the spending that Congress approves through appropriations bills. The House GOP bill doesn't affect spending on Social Security and Medicare. Such spending, referred to as mandatory, accounts for about two-thirds of all federal spending. CLAW BACK COVID MONEY The bill would rescind all unobligated COVID relief money from six bills enacted from 2020-2022. The changes would reduce spending by about $30 billion over the next decade, according to the CBO. That's less than 1% of the total cost of the six bills. TARGET THE IRS House Republicans began their tenure in the majority by passing a bill that would rescind nearly $71 billion that Congress is providing the IRS to upgrade its technology and boost hiring. They have included the same proposal in their debt limit bill. Democrats approved the higher IRS funding on top of what Congress normally provides the agency annually through the appropriations process. 
The boost immediately became a magnet for GOP campaign ads in the fall claiming it would lead to an army of IRS agents harassing Americans. The CBO has said that rescinding the IRS money actually would increase deficits by more about $120 billion over the coming decade due to the impact on the agency's work. But McCarthy said the step is needed to "protect families and businesses from a weaponized IRS." BLOCK STUDENT LOAN RELIEF The Republican bill would repeal actions taken by President Biden to waive $10,000 to $20,000 in debt for nearly all borrowers who took out student loans. The bill would also prohibit the administration's efforts to cut monthly payments in half for undergraduate loans. The CBO projects that the student loan changes House Republicans seek would save about $460 billion over 10 years. Republicans argue that Biden is unfairly transferring the obligations of people who incurred student loan debts onto millions of American taxpayers who did not go to college or who already paid off their student loans. And the say the policy will do nothing to curb the soaring tuition rates at colleges and universities. Biden has said the student loan forgiveness would give millions of younger Americans a little breathing room financially. It would improve their ability to plow their resources into a house, car or just basic essentials, which helps power the economy. Nearly 90% of the debt cancellation would go to borrowers who earn less than $75,000. GOING AFTER RENEWABLES Republicans are seeking to repeal most of the tax breaks that Democrats passed in party-line votes last year as they sought to boost the production and consumption of clean energy. McCarthy argues that the tax breaks "distort the market and waste taxpayer money." The White House says the tax credits are leading to hundreds of billions of dollars in private-sector investments, creating thousands of manufacturing jobs in the U.S. Republicans dropped their efforts to strip out some biofuel tax breaks, however, after the proposed changes threatened to tank the bill. The restoration of those credits was a top priority of Republicans from Iowa and other Midwestern states where the production of alternative fuels such as ethanol play a major role in the rural economy. Citing estimates from the Joint Committee on Taxation, the CBO projected that repealing the clean energy tax breaks would save about $570 billion over 10 years, though that amount will shrink with the decision to keep some of the biofuel breaks. WORK REQUIREMENTS One of the key elements of the GOP bill is expanded work requirements for recipients of federal cash and food assistance. Under current law, able-bodied adults under 50 and without dependents risk losing their food stamp, or SNAP benefits, if they don't spend 20 hours a week in work-related activities. The bill would apply the requirement to those ages 50-55. In addition, the bill would apply work requirements to able-bodied adults without dependents in Medicaid, the federal-state program that provides health insurance coverage for low-income Americans. Job training and performing community service count toward fulfilling the work requirement. McCarthy said changes would help those affected learn new job skills and earn a paycheck while helping to fill some of the millions of job openings throughout the country. The White House said millions of people, many already working, would lose their health insurance coverage. 
A Congressional Budget Office review last year of work requirements for Medicaid recipients said Arkansas was the only state where a work requirement was imposed for more than a few months. It found many of the targeted adults lost their health insurance and employment did not appear to increase. It said that while evidence was scant, research indicated that many were unaware of the work requirement or found it too onerous to demonstrate compliance. The CBO estimates that about 15 million people could be subject to the new Medicaid work requirements each year, although many would qualify for an exemption. About 1.5 million, on average, would lose federal funding for their Medicaid coverage, and of that group, about 600,000 would become uninsured. FOSSIL FUEL BOOST The debt limit package includes legislation the House passed earlier this year that aims to increase domestic production of oil, natural gas and coal, and to ease permitting restrictions that delay pipelines, refineries and other projects. Known as HR 1 to signify its importance to House Republicans, the energy bill also seeks to boost production of critical minerals such as lithium, nickel and cobalt that are used in electric vehicles, computers, cellphones and other products. Biden has described the House GOP's legislation as a "thinly veiled license to pollute." INCREASE THE DEBT LIMIT The Republican would suspend the debt limit through March 31, or by $1.5 trillion, whichever comes first. That would tee up another debt ceiling fight for early next year, just months before the November election when control of the White House and Congress will be decided....
Cheatham/xlm-roberta-large-finetuned-d1r01
[ "pytorch", "xlm-roberta", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "XLMRobertaForSequenceClassification" ], "model_type": "xlm-roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
21
"2023-05-23T12:25:31Z"
--- library_name: stable-baselines3 tags: - PandaReachDense-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: PandaReachDense-v2 type: PandaReachDense-v2 metrics: - type: mean_reward value: -2.13 +/- 0.30 name: mean_reward verified: false --- # **A2C** Agent playing **PandaReachDense-v2** This is a trained model of a **A2C** agent playing **PandaReachDense-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
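The usage block in the card above is still the template's TODO stub. A hedged completion is sketched below: the repo id and archive filename are assumptions (the card does not say where the agent was uploaded), and the classic gym API plus an installed panda-gym are assumed for the environment.

```python
# Hedged sketch completing the card's TODO stub. repo_id and filename are assumptions;
# the card does not state where the trained agent was pushed or what the zip is called.
import gym
import panda_gym  # registers PandaReachDense-v2 with the classic gym API
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

checkpoint = load_from_hub(
    repo_id="your_hf_username/a2c-PandaReachDense-v2",  # assumed repo location
    filename="a2c-PandaReachDense-v2.zip",              # assumed archive name
)
model = A2C.load(checkpoint)

# Note: if VecNormalize statistics were used during training, they would also need to
# be downloaded and applied for the reported mean reward to be reproducible.
env = gym.make("PandaReachDense-v2")
obs = env.reset()
for _ in range(100):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
    if done:
        obs = env.reset()
```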
Cheatham/xlm-roberta-large-finetuned4
[ "pytorch", "xlm-roberta", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "XLMRobertaForSequenceClassification" ], "model_type": "xlm-roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
20
"2023-05-23T12:28:40Z"
--- language: - en --- [![Build Status](https://www.newshub.co.nz/home/lifestyle/2023/04/adding-exercise-into-addiction-treatment-may-reduce-substance-use-study-shows/_jcr_content/par/image.dynimg.1280.q75.jpg/v1682552331234/woman-meditating-exercise-getty-1120.jpg)]() read the full article here : https://jsbin.com/ticuhizowo/edit?html,output Source : https://justpaste.me/1QAE2 Flash News : https://searchtech.fogbugz.com/default.asp?Suggestions.1.132548.0 Biden last Talk : https://pastebin.com/F3znW0Qd Russian Ukrain Breaking News : https://yamcode.com/breaking-news-update-1-05232023-132831 Metoprolol tartrate USP is a white, practically odorless, crystalline powder with a molecular weight of 684.82. It is very soluble in water; freely soluble in methylene chloride, in chloroform, and in alcohol; slightly soluble in acetone; and insoluble in ether. Each tablet for oral administration contains 25 mg, 50 mg or 100 mg of metoprolol tartrate. The tablets contain the following inactive ingredients: microcrystalline cellulose, corn starch, sodium starch glycollate, colloidal silicon dioxide, sodium lauryl sulfate, talc, magnesium stearate, hypromellose, titanium dioxide, polyethylene glycol and polysorbate 80. In addition, 50 mg tablet contains D&C Red #30 Aluminium Lake and 100 mg tablet contains FD&C Blue #2 Aluminium Lake as coloring agents. Mechanism of Action Metoprolol tartrate is a beta 1-selective (cardioselective) adrenergic receptor blocker. This preferential effect is not absolute, however, and at higher plasma concentrations, metoprolol also inhibits beta 2-adrenoreceptors, chiefly located in the bronchial and vascular musculature. Clinical pharmacology studies have demonstrated the beta-blocking activity of metoprolol, as shown by (1) reduction in heart rate and cardiac output at rest and upon exercise, (2) reduction of systolic blood pressure upon exercise, (3) inhibition of isoproterenol-induced tachycardia, and (4) reduction of reflex orthostatic tachycardia. Hypertension The mechanism of the antihypertensive effects of beta-blocking agents has not been fully elucidated. However, several possible mechanisms have been proposed: (1) competitive antagonism of catecholamines at peripheral (especially cardiac) adrenergic neuron sites, leading to decreased cardiac output; (2) a central effect leading to reduced sympathetic outflow to the periphery; and (3) suppression of renin activity. Angina Pectoris By blocking catecholamine-induced increases in heart rate, in velocity and extent of myocardial contraction, and in blood pressure, metoprolol reduces the oxygen requirements of the heart at any given level of effort, thus making it useful in the long-term management of angina pectoris. Myocardial Infarction The precise mechanism of action of metoprolol in patients with suspected or definite myocardial infarction is not known. Pharmacodynamics Relative beta 1 selectivity is demonstrated by the following: (1) In healthy subjects, metoprolol is unable to reverse the beta 2-mediated vasodilating effects of epinephrine. This contrasts with the effect of nonselective (beta 1 plus beta 2) beta-blockers, which completely reverse the vasodilating effects of epinephrine. (2) In asthmatic patients, metoprolol reduces FEV 1 and FVC significantly less than a nonselective beta-blocker, propranolol, at equivalent beta 1-receptor blocking doses. 
Metoprolol has no intrinsic sympathomimetic activity, and membrane-stabilizing activity is detectable only at doses much greater than required for beta-blockade. Animal and human experiments indicate that metoprolol slows the sinus rate and decreases AV nodal conduction. Significant beta-blocking effect (as measured by reduction of exercise heart rate) occurs within 1 hour after oral administration, and its duration is dose-related. For example, a 50% reduction of the maximum effect after single oral doses of 20, 50, and 100 mg occurred at 3.3, 5, and 6.4 hours, respectively, in normal subjects. After repeated oral dosages of 100 mg twice daily, a significant reduction in exercise systolic blood pressure was evident at 12 hours. When the drug was infused over a 10-minute period, in normal volunteers, maximum beta-blockade was achieved at approximately 20 minutes. Equivalent maximal beta-blocking effect is achieved with oral and intravenous doses in the ratio of approximately 2.5:1. There is a linear relationship between the log of plasma levels and reduction of exercise heart rate. However, antihypertensive activity does not appear to be related to plasma levels. Because of variable plasma levels attained with a given dose and lack of a consistent relationship of antihypertensive activity to dose, selection of proper dosage requires individual titration. In several studies of patients with acute myocardial infarction, intravenous followed by oral administration of metoprolol caused a reduction in heart rate, systolic blood pressure and cardiac output. Stroke volume, diastolic blood pressure and pulmonary artery end diastolic pressure remained unchanged. In patients with angina pectoris, plasma concentration measured at 1 hour is linearly related to the oral dose within the range of 50 to 400 mg. Exercise heart rate and systolic blood pressure are reduced in relation to the logarithm of the oral dose of metoprolol. The increase in exercise capacity and the reduction in left ventricular ischemia are also significantly related to the logarithm of the oral dose....
Check/vaw2tmp
[ "tensorboard" ]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
"2023-05-23T12:28:44Z"
--- license: unlicense datasets: - rmcpantoja/Ald_Mexican_Spanish_speech_dataset language: - es library_name: speechbrain pipeline_tag: text-to-speech tags: - climate ---
Chertilasus/main
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
"2023-05-23T12:29:07Z"
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-PixelCopter-v0 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 4.40 +/- 7.70 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
Chester/traffic-rec
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
"2023-05-23T12:29:31Z"
--- library_name: ml-agents tags: - Pyramids - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Pyramids --- # **ppo** Agent playing **Pyramids** This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser:**. 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids 2. Step 1: Find your model_id: aliakyurek/ppo-PyramidsTraining 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
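Step 2 of the card above assumes the trained .onnx/.nn file is available locally. A hedged way to fetch it is sketched below with huggingface_hub; the repo id is the one the card mentions, while the exact file layout inside the repository is an assumption.

```python
# Hedged sketch: download the Pyramids agent repository locally so the .onnx (or .nn)
# network file can be selected in the viewer space linked in the card above.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="aliakyurek/ppo-PyramidsTraining")
print("Repository downloaded to:", local_dir)
# Browse this folder for the Pyramids network file, then select it at
# https://huggingface.co/spaces/unity/ML-Agents-Pyramids
```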
Ching/negation_detector
[ "pytorch", "roberta", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
{ "architectures": [ "RobertaForQuestionAnswering" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
9
null
--- language: - en --- [![Build Status](https://www.news10.com/wp-content/uploads/sites/64/2023/04/testerjon_021523gn3_w.jpg?strip=1&w=640)]() read the full article here : https://pastelink.net/wk04albq Source : https://jsitor.com/p_xZ83OnfZ Flash News : https://youtube.com/live/uglb8J-dWSI?feature=share Biden last Talk : https://jsfiddle.net/2ftcdmnv/ Russian Ukrain Breaking News : https://paste.ee/p/J1zUO Tens of thousands of Massachusetts residents convicted of driving under the influence will get a second chance following a ruling from the state's highest court citing "egregious government misconduct" and poorly calibrated breathalyzer equipment. The roughly 27,000 defendants who pleaded guilty or were convicted in operating under the influence cases that included breath test results from the Alcotest 9510 device from June 1, 2011, through April 18, 2019, "are entitled to a conclusive presumption of egregious government misconduct," Justice Frank M. Gaziano wrote in the Supreme Judicial Court opinion in Commonwealth v. Hallinan issued Wednesday. The history behind this ruling can be traced at least as far back as 2015, with consolidated litigation challenging the reliability of breath test machines. That litigation led to an investigation by the state Executive Office of Public Safety and Security, which released a report in October 2017 that found the state Office of Alcohol Testing had failed to turn over 400 documents to defense attorneys showing tests in which the breathalyzer machine employed by the OAT, the Draeger Alcotest 9510, had failed to properly calibrate. The office's director at the time of the misconduct, Melissa O'Meara, was fired and district attorneys across the state stopped using the Alcotest results in their prosecution. Wednesday's SJC ruling was well received by defense attorneys, with the Committee for Public Counsel Services calling it "a victory for the thousands of people who have been living with tainted convictions and for those who believe the government should be accountable for its actions." "Once again, the Supreme Judicial Court has been forced to tackle a scandal raising serious doubts about the reliability of forensic evidence and the government's failure to disclose invaluable, exculpatory evidence to defendants," CPCS Chief Counsel Anthony Benedetti wrote in a statement. Those convicted in cases involving the test can file to withdraw guilty pleas and motion for new trials in which the old breathalyzer results are barred from use. And the opinion looks forward, as any pending or future prosecutions cannot use test results from that period. "Where a defendant successfully moves for a new trial due to OAT's misconduct, and thereafter is convicted, so long as the defendant's original sentence was legal, the new sentence will be capped at no more than the original sentence," the ruling states. The Massachusetts State Police issued a statement saying that it is "reviewing today's decision and its impact." The OAT "in recent years has implemented significant operational improvements to ensure that breathalyzer certification, case management, discovery processes and employee training are in accordance with all applicable laws and established forensic best practices," MSP spokesman Dave Procopio wrote in the statement. "It is important to note that the OAT operating procedures described in today's decision predate those numerous and substantial reforms." 
Springfield-based OUI defense attorney Joseph Bernard, who was lead counsel in the litigation that began in 2015 and served as co-lead counsel in the Hallinan case, said in a statement, "Thousands of people have been negatively impacted by the scientific unreliability of the Office of Alcohol Testing's breath test machines, and now the Supreme Judicial Court has confirmed the misjustice."...
Chinmay/mlindia
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- language: - en --- [![Build Status](https://resources.arcamax.com/newspics/245/24507/2450786.gif)]() read the full article here : https://controlc.com/539b122f Source : https://paste.feed-the-beast.com/view/1faa2884 Flash News : https://pasteio.com/xw4Ytm8TsaET Biden last Talk : https://tech.io/snippet/IYdm6BC Russian Ukrain Breaking News : https://etextpad.com/skvz9gj9su The Blades beat West Brom on Wednesday to secure their spot in second place Sheffield United is headed back to England's top-flight next season. The Blades secured promotion for the 2023-24 Premier League season with a 2-0 win over West Brom on Wednesday, which guaranteed them second place in the EFL Championship. Sheffield joins Burnley as two of the three teams to promote thanks to accumulating 85 points through 43 games played, one less than most of their counterparts in the league. The Blades still have three games to play, but third-place Luton Town cannot mathematically surpass them having collected 78 points through 44 matches. The Blades last promoted to the Premier League ahead of the 2019-20 season when they finished second in the Championship. They then made a stunning ascent in the top-flight's table and finished ninth, just two points behind eighth-place Arsenal. But they failed to build off that marked the following campaign and finished dead last in 2020-21, ending with a win-draw-loss record of 7-2-29. The squad, managed by Paul Heckingbottom, currently is headlined by rising stars Iliman Ndiaye (Senegal) and Anel Ahmedhodzic (Bosnia and Herzegovina) while former EPL players Sander Berge (Norway) and Jayden Bogle (England), among others, have also played their parts. As things stand, the race to compete in the four-team playoffs to be the third team to promote is extremely close. Fifth-place Coventry City and 12th-place Swansea are separated by just four points. Teams placed third-to-sixth qualify for the mini tournament to earn promotion....
Chiuchiyin/Donald
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- language: - en --- [![Build Status](https://storage.googleapis.com/afs-prod/media/f67da5de57f24d0aa8df9ab286db3234/3000.webp)]() read the full article here : https://youtube.com/live/Y7rz3P0N4nA?feature=share Source : https://paste.toolforge.org/view/1ed4dea1 Flash News : https://notes.io/qCgYK Biden last Talk : https://justpaste.me/1QLq3 Russian Ukrain Breaking News : https://searchtech.fogbugz.com/default.asp?Suggestions.1.132560.0 On Wednesday, the Clark County Coroner's Office confirmed the remains belong to Claude Russell Pensinger, who was 52 years old when he was last seen in July 1998. In late July 2022, swimmers found Pensigner's remains floating near Boulder Beach. Portions of remains found on August 6 and 16 were also determined to belong to Pensigner. Pensinger's cause of death wasn't immediately available. Three other sets of human remains have been found at Lake Mead since May 2022 as the reservoir reached record-low levels. On May 1, 2022, boaters at Lake Mead discovered a man's body concealed in a barrel near Hemenway Harbor. County officials said the man died from a gunshot wound, with the manner of his death being homicide. Homicide detectives have requested DNA from multiple families to help solve the case of the body in the barrel, Metro police previously told Nexstar's KLAS. The families were chosen from several unsolved missing person cases from that time. There is no timeline as to when the remains found inside the barrel could be identified. Who the homicide victim in the barrel is may be even more difficult to solve. As KLAS reported last year, Metro police suspect the man was killed in the 1980s based on personal items in the barrel. Homicide detectives would not elaborate on the person's cause of death, or the items found, citing the ongoing investigation. Police believe the barrel was fully intact when it was dropped into the lake. Kayakers discovered a second set of skeletal remains in May in Callville Bay. The coroner's office identified the remains as Thomas Erndt, of Las Vegas. Erndt was 42 years old when he drowned on August 2, 2002. His sister Julie told KLAS Erndt, his two children, and two other adults went boating and swimming at Lake Mead. Erdnt never resurfaced after jumping into the water. Authorities determined another set of skeletal remains found near Callville Bay in October belong to Donald Smith, of North Las Vegas. Officials said Smith drowned in April 1974. He was 39 at the time....
ChrisVCB/DialoGPT-medium-ej
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
13
null
--- language: - en --- [![Build Status](https://profootballtalk.nbcsports.com/wp-content/uploads/sites/25/2023/04/GettyImages-1456385016-e1682532544504.jpg)]() read the full article here : https://yamcode.com/breaking-news-update-1-05232023-134011 Source : https://jsbin.com/kejegokayo/edit?html,output Flash News : https://pastebin.com/bgeCqbMi Biden last Talk : https://jsitor.com/9ZpK--eAZR Russian Ukrain Breaking News : https://pastelink.net/rq3rd1y6 Read this Medication Guide before you start taking ciprofloxacin tablets and each time you get a refill. There may be new information. This information does not take the place of talking to your healthcare provider about your medical condition or your treatment. What is the most important information I should know about ciprofloxacin tablets? Ciprofloxacin tablets, a fluoroquinolone antibacterial medicine, can cause serious side effects. Some of these serious side effects can happen at the same time and could result in death. If you get any of the following serious side effects while you take ciprofloxacin tablets, you should stop taking ciprofloxacin tablets immediately and get medical help right away. 1. Tendon rupture or swelling of the tendon (tendinitis). * Tendon problems can happen in people of all ages who take ciprofloxacin tablets. Tendons are tough cords of tissue that connect muscles to bones. Symptoms of tendon problems may include: ○ tears and swelling of the tendons including the back of the ankle (Achilles), shoulder, hand, thumb, or other tendon sites. * The risk of getting tendon problems while you take ciprofloxacin tablets is higher if you: * Tendon problems can happen in people who do not have the above risk factors when they take ciprofloxacin tablets. * Other reasons that can increase your risk of tendon problems can include: ○ tendon problems in the past, such as in people with rheumatoid arthritis (RA) * Stop taking ciprofloxacin tablets immediately and get medical help right away at the first sign of tendon pain, swelling or inflammation. The most common area of pain and swelling is the Achilles tendon at the back of your ankle. This can also happen with other tendons. * Tendon rupture can happen while you are taking or after you have finished taking ciprofloxacin tablets. Tendon ruptures can happen within hours or days of taking ciprofloxacin tablets and have happened up to several months after people have finished taking their fluoroquinolone. * Stop taking ciprofloxacin tablets immediately and get medical help right away if you get any of the following signs or symptoms of a tendon rupture: 2. Changes in sensation and possible nerve damage (Peripheral Neuropathy). Damage to the nerves in arms, hands, legs, or feet can happen in people who take fluoroquinolones, including ciprofloxacin tablets. Stop taking ciprofloxacin tablets immediately and talk to your healthcare provider right away if you get any of the following symptoms of peripheral neuropathy in your arms, hands, legs, or feet: Ciprofloxacin tablets may need to be stopped to prevent permanent nerve damage. 3. Central Nervous System (CNS) effects. Mental health problems and seizures have been reported in people who take fluoroquinolone antibacterial medicines, including ciprofloxacin tablets. Tell your healthcare provider if you have a history of seizures before you start taking ciprofloxacin tablets. CNS side effects may happen as soon as after taking the first dose of ciprofloxacin tablets. 
Stop taking ciprofloxacin tablets immediately and talk to your healthcare provider right away if you get any of these side effects, or other changes in mood or behavior: These changes may be permanent. 4. Worsening of myasthenia gravis (a problem that causes muscle weakness). Fluoroquinolones like ciprofloxacin tablets may cause worsening of myasthenia gravis symptoms, including muscle weakness and breathing problems. Tell your healthcare provider if you have a history of myasthenia gravis before you start taking ciprofloxacin tablets. Call your healthcare provider right away if you have any worsening muscle weakness or breathing problems. What are ciprofloxacin tablets? Ciprofloxacin tablets are a fluoroquinolone antibacterial medicine used in adults age 18 years and older to treat certain infections caused by certain germs called bacteria. These bacterial infections include: ○ cervical and urethral gonorrhea, uncomplicated ○ people with a low white blood cell count and a fever * Studies of ciprofloxacin tablets for use in the treatment of plague and anthrax were done in animals only, because plague and anthrax could not be studied in people. * Ciprofloxacin tablets should not be used in people with acute exacerbation of chronic bronchitis, acute uncomplicated cystitis, and sinus infections, if there are other treatment options available. * Ciprofloxacin tablets should not be used as the first choice of antibacterial medicine to treat lower respiratory tract infections caused by a certain type of bacterial called Streptococcus pneumoniae. * Ciprofloxacin tablets are also used in children younger than 18 years of age to treat complicated urinary tract and kidney infections or who may have breathed in anthrax germs, have plague or have been exposed to plague germs. * Children younger than 18 years of age have a higher chance of getting bone, joint, or tendon (musculoskeletal) problems such as pain or swelling while taking ciprofloxacin tablets. Ciprofloxacin tablets should not be used as the first choice of antibacterial medicine in children under 18 years of age. What should I tell my healthcare provider before taking ciprofloxacin tablets? Before you take ciprofloxacin tablets, tell your healthcare provider about all your medical conditions, including if you: * have tendon problems; ciprofloxacin tablets should not be used in people who have a history of tendon problems * have a disease that causes muscle weakness (myasthenia gravis); ciprofloxacin tablets should not be used in people who have a known history of myasthenia gravis * have liver problems * have central nervous system problems (such as epilepsy) * have nerve problems. Ciprofloxacin tablets should not be used in people who have a history of a nerve problem called peripheral neuropathy have or anyone in your family has an irregular heartbeat, or heart attack, especially a condition called "QT prolongation". * have low blood potassium (hypokalemia) or low magnesium (hypomagnesemia). * have or have had seizures. * have kidney problems. You may need a lower dose of ciprofloxacin tablets if your kidneys do not work well. * have diabetes or problems with low blood sugar (hypoglycemia). * have joint problems including rheumatoid arthritis (RA) * have trouble swallowing pills. * are pregnant or plan to become pregnant. It is not known if ciprofloxacin tablets will harm your unborn baby. * are breastfeeding or plan to breastfeed. Ciprofloxacin passes into your breast milk. 
o You should not breastfeed during treatment with ciprofloxacin tablets and for 2 days after taking your last dose of ciprofloxacin tablets. You may pump your breast milk and throw it away during treatment with ciprofloxacin tablets and for 2 days after taking your last dose of ciprofloxacin tablets. o If you are taking ciprofloxacin tablets for inhalation anthrax, you and your healthcare provider should decide whether you can continue breastfeeding while taking ciprofloxacin tablets. Tell your healthcare provider about all the medicines you take, including prescription and over-the-counter medicines, vitamins, and herbal supplements. o a steroid medicine o an anti-psychotic medicine o a tricyclic antidepressant o a water pill (diuretic) o theophylline (such as Theo-24®, Elixophyllin®, Theochron®, Uniphyl®, Theolair®) o a medicine to control your heart rate or rhythm (antiarrhythmics) o an oral anti-diabetes medicine o phenytoin (Fosphenytoin Sodium®, Cerebyx®, Dilantin-125®, Dilantin® , Extended Phenytoin Sodium®, Prompt Phenytoin Sodium®, Phenytek®) o cyclosporine (Gengraf®, Neoral®, Sandimmune®, Sangcya®). o a blood thinner (such as warfarin, Coumadin®, Jantoven®) o methotrexate (Trexall®) o ropinirole (Requip®) o clozapine (Clozaril®, Fazaclo® ODT®) o a Non-Steroidal Anti-Inflammatory Drug (NSAID). Many common medicines for pain relief are NSAIDs. Taking an NSAID while you take ciprofloxacin tablets or other fluoroquinolones may increase your risk of central nervous system effects and seizures. o sildenafil (Viagra®, Revatio®) o duloxetine o products that contain caffeine o probenecid (Probalan®, Col-probenecid ®) o an antacid, multivitamin, or other medicine or supplements that has magnesium, calcium, aluminum, iron, or zinc o sucralfate (Carafate®) o didanosine (Videx®, Videx EC®) Ask your healthcare provider for a list of these medicines if you are not sure. Know the medicines you take. Keep a list of them to show your healthcare provider and pharmacist when you get a new medicine. How should I take ciprofloxacin tablets? Taking all of your ciprofloxacin tablets doses will help make sure that all of the bacteria are killed. Taking all of your ciprofloxacin tablets doses will help lower the chance that the bacteria will become resistant to ciprofloxacin tablets. If you become resistant to ciprofloxacin tablets, ciprofloxacin tablets and other antibacterial medicines may not work for you in the future. ○ 6 hours or more until your next scheduled dose, take your missed dose right away. Then take the next dose at your regular time. ○ less than 6 hours until your next scheduled dose, do not take the missed dose. Take the next dose at your regular time. What should I avoid while taking ciprofloxacin tablets? What are the possible side effects of ciprofloxacin tablets? Ciprofloxacin tablets may cause serious side effects, including: Skin rash may happen in people taking ciprofloxacin tablets even after only 1 dose. Stop taking ciprofloxacin tablets at the first sign of a skin rash and call your healthcare provider. Skin rash may be a sign of a more serious reaction to ciprofloxacin tablets. Stop taking ciprofloxacin tablets and tell your healthcare provider right away if you have yellowing of your skin or white part of your eyes, or if you have dark urine. These can be signs of a serious reaction to ciprofloxacin tablets (a liver problem). ○ who are elderly. ○ with a family history of prolonged QT interval. ○ with low blood potassium (hypokalemia) or low magnesium (hypomagnesemia). 
○ who take certain medicines to control heart rhythm (antiarrhythmics). 5. Changes in blood sugar People who take ciprofloxacin tablets and other fluoroquinolone medicines with oral anti-diabetes medicines or with insulin can get low blood sugar (hypoglycemia) and high blood sugar (hyperglycemia). Follow your healthcare provider's instructions for how often to check your blood sugar. If you have diabetes and you get low blood sugar while taking ciprofloxacin tablets, stop taking ciprofloxacin tablets and call your healthcare provider right away. Your antibiotic medicine may need to be changed. The most common side effects of ciprofloxacin tablets include: Tell your healthcare provider about any side effect that bothers you or that does not go away. These are not all the possible side effects of ciprofloxacin tablets. For more information, ask your healthcare provider or pharmacist. Call your doctor for medical advice about side effects. You may report side effects to FDA at 1-800-FDA-1088. How should I store ciprofloxacin tablets? * Store at 20° to 25°C (68° to 77°F) [See USP Controlled Room Temperature]. Keep ciprofloxacin tablets and all medicines out of the reach of children. General information about the safe and effective use of ciprofloxacin tablets. Medicines are sometimes prescribed for purposes other than those listed in a Medication Guide. Do not use ciprofloxacin tablets for a condition for which it is not prescribed. Do not give ciprofloxacin tablets to other people, even if they have the same symptoms that you have. It may harm them. This Medication Guide summarizes the most important information about ciprofloxacin tablets. If you would like more information about ciprofloxacin tablets, talk with your healthcare provider. You can ask your healthcare provider or pharmacist for information about ciprofloxacin tablets that is written for health professionals. For more Information contact Hikma Pharmaceuticals USA Inc. at 1-877-233-2001. What are the ingredients in ciprofloxacin tablets? This Medication Guide has been approved by the U.S. Food and Drug Administration....
Chun/DialoGPT-large-dailydialog
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
6
null
---
tags:
- generated_from_trainer
model-index:
- name: klue_roberta_base_retrained
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# klue_roberta_base_retrained

This model is a fine-tuned version of [klue/roberta-base](https://huggingface.co/klue/roberta-base) on an unknown dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 1
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

### Framework versions

- Transformers 4.30.0.dev0
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.2
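
## Example training setup (sketch)

The card records only the hyperparameters above, so the snippet below is a minimal sketch rather than the original training script. The task (masked-language modeling, suggested by the "retrained" name), the placeholder corpus file, and the `text` column are assumptions; the optimizer settings listed in the card are the `Trainer` defaults.

```python
# Minimal sketch, not the author's script: task, dataset, and column names are assumptions.
from datasets import load_dataset
from transformers import (
    AutoModelForMaskedLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_name = "klue/roberta-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)

# Placeholder corpus; the actual training data is listed only as "an unknown dataset".
raw = load_dataset("text", data_files={"train": "corpus.txt"})
tokenized = raw.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True,
    remove_columns=["text"],
)

args = TrainingArguments(
    output_dir="klue_roberta_base_retrained",
    learning_rate=5e-05,              # values below mirror the card
    per_device_train_batch_size=12,
    per_device_eval_batch_size=8,
    seed=1,
    lr_scheduler_type="linear",
    num_train_epochs=5,
    # Optimizer: Trainer's default AdamW uses betas=(0.9, 0.999) and eps=1e-08,
    # matching the card.
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    data_collator=DataCollatorForLanguageModeling(tokenizer=tokenizer),
)
trainer.train()
```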
Chun/DialoGPT-small-dailydialog
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
10
null
--- language: - en --- [![Build Status](https://www.caribbeannewsglobal.com/wp-content/uploads/2023/04/758_security.jpg)]() read the full article here : https://paste.ee/p/hCmX0 Source : https://youtube.com/live/PWxiaPX18pw?feature=share Flash News : https://jsfiddle.net/mjb684vx/ Biden last Talk : https://paste.feed-the-beast.com/view/616996e2 Russian Ukrain Breaking News : https://pasteio.com/xpl2biXta9jI THIS WEEK IN HUSKY MEN'S TENNIS PAC-12 CHAMPIONSHIPS Ojai, Calif. | Libbey Park Quarterfinals, Thursday, Apr. 27 (7) Washington vs. (2) Arizona, 3 p.m. Live Video | Live Scores SEATTLE - The Huskies find themselves in a familiar spot heading into the Pac-12 Men's Tennis Championships. Washington will take on regular season co-champion Arizona in the quarterfinals this Thursday, Apr. 27, at Libbey Park in Ojai, Calif., looking for an upset to boost its NCAA tourney chances. First serve between the Huskies and Wildcats is scheduled for 3 p.m. Last year the Huskies were the first 8-seed to make the Pac-12 Championship final, upsetting No. 1-seeded Arizona in the quarterfinal round. This year as the No. 7-seed, the Huskies will try for a repeat performance. The Huskies are coming off two big upsets over Stanford and California to end the regular season on a high note and finish with a 3-5 Pac-12 record. Washington's ITA national ranking jumped up 13 spots with those wins from 56 to No. 43 this week, much more in range of a potential NCAA tournament bid. Sophomore earned Pac-12 Player of the Week honors this week, his second of the season, and he saw his singles ranking jump over fifty spots to a career-high No. 71. Bouchelaghem and check in at No. 56 in the doubles rankings this week. Arizona took the regular season dual between the teams by a 4-0 count in Tucson. The Wildcats finished in a three-way tie for the regular season conference title at 6-2 with USC and Utah. Arizona is currently ranked 14th nationally with a 20-5 overall record. Jonas is ranked 41st nationally in singles, while Colton Smith is ranked 67th and Gustaf Strom is No. 113. In doubles, Jay Friend and Ziverts are No. 42, and Ziverts and Herman Hoeyeraal are No. 74. The winner of Thursday's match will move into the semifinals to take on either No. 3 seed Utah or No. 6 seed UCLA on Friday at 4 p.m. The finals are set for Saturday, Apr. 29, at 6 p.m. live on Pac-12 Networks....
Chun/w-en2zh-otm
[ "pytorch", "mbart", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "MBartForConditionalGeneration" ], "model_type": "mbart", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
null
---
license: apache-2.0
duplicated_from: dg845/unidiffuser-diffusers-test
---

This checkpoint is a small testing version of the UniDiffuser-v0 model for 32 x 32 images, consisting of small random models for each of the components. Please reference the [model card]() for the full UniDiffuser-v0 checkpoint for information about the UniDiffuser model.
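
As a rough smoke test, the checkpoint can presumably be loaded through the `diffusers` `UniDiffuserPipeline`; the snippet below is a sketch under that assumption. The repo id is taken from the `duplicated_from` field, and because the component weights are random, the 32 x 32 outputs are noise-like and useful only for checking that the pipeline runs.

```python
# Sketch only: assumes this repo loads as a standard diffusers UniDiffuserPipeline
# (repo id taken from the card's duplicated_from field).
import torch
from diffusers import UniDiffuserPipeline

pipe = UniDiffuserPipeline.from_pretrained("dg845/unidiffuser-diffusers-test")
pipe = pipe.to("cuda" if torch.cuda.is_available() else "cpu")

# With only a prompt supplied, the pipeline runs in text-to-image mode; the
# randomly initialized test weights produce noise-like 32 x 32 images.
result = pipe(prompt="an astronaut riding a horse", num_inference_steps=20)
result.images[0].save("unidiffuser_test_sample.png")
```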
Chun/w-zh2en-hsk
[ "pytorch", "marian", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "MarianMTModel" ], "model_type": "marian", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
null
--- language: - en --- [![Build Status](https://multifiles.pressherald.com/uploads/sites/4/2023/04/Road_Rage_Murder_Trial_89185-1682553462.jpg)]() read the full article here : https://youtube.com/live/hfgXMYJ9rMk?feature=share Source : https://controlc.com/90daaa7a Flash News : https://tech.io/snippet/tyc7oQO Biden last Talk : https://etextpad.com/yumv2ws5p9 Russian Ukrain Breaking News : https://paste.toolforge.org/view/dcdd941d Iowa's taxpayer watchdog office would be limited in its ability to request certain documents under a bill headed to Gov. Kim Reynolds' desk to be signed into law. The Senate gave final passage to Senate File 478 on Wednesday, a bill that defines what a state auditor can request as part of an audit, when an audit begins, and requires disputes between the auditor and state agencies to be settled by an arbitration board. The bill passed along party lines, 33-16. State Auditor Rob Sand, Iowa's sole elected statewide Democrat, said it is the "worst pro-corruption bill in history" and an erosion of the balance of power. "The bill is the greatest pro-corruption bill and the worst perversion of checks and balances in Iowa's history because it lets state government hide documents from auditors, and ends judicial review of such decisions," Sand said in a statement after the bill's passage. "To eliminate checks and balances on power is an attack on democracy and an invitation to corruption." The bill has also drawn significant condemnation from nonpartisan and bipartisan auditors organizations and CPA organizations, who say it would hamper the state auditor's ability to prevent fraud and misuse of taxpayer funds. But Republican Sen. Mike Bousselot of Ankeny said the bill would set common-sense limits on the auditor's authority. He said Sand has repeatedly overstepped his bounds and requested unnecessary information. "This bill ... gives clearly much-needed guidance to the auditor of state who has been overreaching, and gives common-sense policy," Bousselot said.
The bill would add to the list of documents generally not available to the auditor, blocking the auditor from accessing criminal identification files, student records, medical records, police investigation records and any other information that someone would reasonably expect to be kept private, among others. Under an amendment added by the House, the auditor would be able to access those documents if it is necessary for generally accepted standards, or to comply with any other state or federal regulation or if the auditor is investigating alleged or suspected embezzlement or theft. The bill would also require disputes between any constitutional offices -- including the auditor -- departments, commissions or boards of the state government be decided by an arbitration board rather than in the courts. The three-member board would consist of members appointed by each of the disputing parties and a member appointed by the governor's office. In an open letter, National State Auditors Association President John Geragosian said the arbitration board setup "clearly favors the audited agency rather than having an objective third party decide the matter." "Access to records necessary to conduct an independent audit is essential for the proper oversight of public funds," he wrote. "Therefore, we respectfully request that you eliminate the limitations in Iowa Senate File 478 and any subsequent amendment." Former U.S. Comptroller General David Walker and the Institute of Internal Auditors also took issue with the amended bill, saying it still limits the independence of the state auditor. Democrats said the bill was a partisan power grab and would make government corruption harder to identify. "This bill is not about privacy," said Sen. Janet Petersen, D-Des Moines. "This is pro-embezzler, this is pro-corruption, this is pro-crony." Bousselot this week floated the idea of entirely removing the auditor's subpoena power, but that was not added to the bill. Last month the Iowa Supreme Court ruled the state auditor does not have authority to investigate a state insurance risk pool that provides insurance coverage to taxpayer-funded government entities. Sand hand filed a subpoena asking for records regarding out-of-state meetings of the pool's directors, which Bousselot pointed to as an illegal abuse of Sand's subpoena power. "The watchdog still has teeth," Bousselot said. "It's just making sure those teeth aren't getting sank into something they shouldn't be, which is Iowans' most personal private information."...
Chungu424/qazwsx
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- language: - en --- [![Build Status](https://www.caribbeannewsglobal.com/wp-content/uploads/2023/04/csaba_-korosi.jpg)]() read the full article here : https://pastebin.com/PbXgqtsz Source : https://notes.io/qCgED Flash News : https://jsbin.com/qubuhukize/edit?html,output Biden last Talk : https://yamcode.com/breaking-news-update-1-05232023-135011 Russian Ukrain Breaking News : https://jsitor.com/HOG6gTrrlL A Sunday incident marked the third known attack on a mosque in Minnesota this year. The Minneapolis Police Department is partnering with the FBI to investigate a possible bias motive behind an arson attempt at a Minneapolis mosque on Sunday evening. Authorities also are investigating a fire at another mosque, less than a mile away, on Monday night. Sunday's incident marked the third known attack on a mosque in Minnesota this year, according to CAIR-Minnesota Executive Director Jaylani Hussein. "We don't want our community to get to the point where they are not coming to the mosques or feeling to the point where they will behave completely differently," Hussein said. "What we want them to do is to be more vigilant, we want them to increase security. And we want them to realize that we are under threat." Worshippers noticed a fire in a bathroom at the Masjid Omar Islamic Center, located on the second floor of the 24 Mall in south Minneapolis, not long after a masked man entered with a large paper bag, according to video footage. Community leaders allege the paper bag contained a large container filled with flammable liquid. The man was dressed in a black hoodie and light blue jeans. Security cameras show the male suspect entered the mosque at 6:55 p.m. and took his shoes off before heading straight into a multi-stall bathroom. He is seen running out at 7:04 p.m. without the bag, sprinting out of the building barefoot. Nearby community members extinguished the fire before it spread. No injuries or damage were reported. Ward 6 Council Member Jamal Osman, Minneapolis Police Chief Brian O'Hara and other MPD staff toured the mosque and heard concerns from mosque leaders on Monday afternoon. O'Hara told them he was taking the mosque incidents seriously. Effective Monday, police supervisors will be responding to service calls originating from places of worship. "Anytime there's an incident near a place of worship, I want to know about it," O'Hara said. Shortly after making this declaration and about 24 hours after the first fire, another fire was reported at a different mosque only a few minutes away. Just before 7 p.m. Monday, Minneapolis fire crews responded to reports of a fire at Masjid Al Rahma mosque and community center at 27th Street and Bloomington Avenue. The three-story building had smoke coming from the roof but no visible fire, according to city officials. The fire was on the third floor. No injuries were reported. "This is obviously a fire that just happened, the investigation is in its initial stages, but it's just way too much of a coincidence. We are investigating this as if this is an arson, as if this is connected, until proven otherwise," O'Hara said at the scene Monday night. "Freedom of religion provides us freedom to practice. We shouldn't be fearful," Osman said Monday night. "A lot of people are fearful right now and the community is very concerned, and I really hope that we can do more and we can be better -- and I hope they catch this person who is creating all this fear." At the meeting Monday afternoon, MPD told leaders they are working with the FBI. 
There have been no arrests, according to MPD officials, and it's currently an active investigation. It's unclear whether the two fires are connected, but Sunday's fire is related to a fire at the same mosque last year. Minneapolis police confirmed the same man was behind an arson attempt at the Masjid Omar Islamic Center mosque last year. Video footage also shows him spray painting "500" on nearby buildings home to Somali businesses, police reported....