Dataset schema (one row per model):

| Column | Dtype | Range / cardinality |
| --- | --- | --- |
| modelId | stringlengths | 5 to 122 |
| author | stringlengths | 2 to 42 |
| last_modified | unknown | n/a |
| downloads | int64 | 0 to 435M |
| likes | int64 | 0 to 6.52k |
| library_name | stringclasses | 345 values |
| tags | sequencelengths | 1 to 4.05k |
| pipeline_tag | stringclasses | 51 values |
| createdAt | unknown | n/a |
| card | stringlengths | 1 to 913k |
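Rows in this dump can be filtered directly against the schema above. A minimal pandas sketch, assuming the rows have been exported to a tabular file (the file name `model_cards.parquet` is a placeholder):

```python
import pandas as pd

# Placeholder path: this dump would first need to be exported to a tabular file.
df = pd.read_parquet("model_cards.parquet")

# Keep rows that carry a real model card and a known pipeline tag.
has_card = df["card"].notna() & (df["card"] != "Entry not found")
tagged = df[has_card & df["pipeline_tag"].notna()]

# Top three most-downloaded entries per pipeline tag.
top = (
    tagged.sort_values("downloads", ascending=False)
    .groupby("pipeline_tag")
    .head(3)[["modelId", "pipeline_tag", "downloads", "likes"]]
)
print(top.to_string(index=False))
```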
kowalsky/llama-2-test
kowalsky
"2024-04-05T12:34:22Z"
0
0
null
[ "region:us" ]
null
"2024-04-05T12:34:22Z"
Entry not found
katkout2313/lora_model
katkout2313
"2024-04-05T12:40:07Z"
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "mistral", "trl", "en", "base_model:unsloth/mistral-7b-instruct-v0.2-bnb-4bit", "base_model:finetune:unsloth/mistral-7b-instruct-v0.2-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-04-05T12:39:50Z"
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - mistral - trl base_model: unsloth/mistral-7b-instruct-v0.2-bnb-4bit --- # Uploaded model - **Developed by:** katkout2313 - **License:** apache-2.0 - **Finetuned from model:** unsloth/mistral-7b-instruct-v0.2-bnb-4bit This Mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
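A card like the one above usually loads with the plain `transformers` API. A minimal inference sketch, assuming the repository holds merged weights rather than a bare LoRA adapter (the prompt and sampling length are illustrative):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "katkout2313/lora_model"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo, torch_dtype=torch.float16, device_map="auto"
)

# Build a Mistral-instruct prompt from the tokenizer's chat template.
messages = [{"role": "user", "content": "Explain LoRA fine-tuning in one paragraph."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

out = model.generate(input_ids, max_new_tokens=128)  # illustrative length
print(tokenizer.decode(out[0], skip_special_tokens=True))
```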
samratghosh291/depression_detection
samratghosh291
"2024-04-05T12:41:50Z"
0
0
null
[ "license:apache-2.0", "region:us" ]
null
"2024-04-05T12:40:07Z"
--- license: apache-2.0 ---
ayousanz/style-bert-vits2-pretrained-model-ver2
ayousanz
"2024-09-09T14:47:20Z"
0
10
null
[ "Style-Bert-VITS2", "ja", "license:other", "region:us" ]
null
"2024-04-05T12:40:14Z"
--- license: other language: - ja tags: - Style-Bert-VITS2 --- # Pretrained model for Style-Bert-VITS2 This is a clean(*1) pretrained model for [Style-Bert-VITS2](https://github.com/litagin02/Style-Bert-VITS2), trained on the datasets listed below. (*1) "Clean" here means that the data used for pretraining is explicitly documented. ## Training datasets * [Tsukuyomi-chan Corpus │ Voice Actress Statistics Corpus (JVS-corpus compliant)](https://tyc.rei-yumesaki.net/material/corpus/) * [Community-built JSUT Corpus basic5000, BASIC5000_0001~BASIC5000_0600](https://tyc.rei-yumesaki.net/material/minnade-jsut/) (the portion read by Rei Yumesaki, used with permission) * Voice data kindly provided by [Kikyo Hiloto](https://twitter.com/KikyoHiloto) * [Amitaro's Voice Material Studio](https://amitaro.net/) ## Training parameters * Training steps: 600k * bfloat16: false **config.json** ```json { "model_name": "pretraing", "train": { "log_interval": 200, "eval_interval": 2000, "seed": 42, "epochs": 2100, "learning_rate": 0.0001, "betas": [ 0.8, 0.99 ], "eps": 1e-09, "batch_size": 8, "bf16_run": false, "fp16_run": false, "lr_decay": 0.99996, "segment_size": 16384, "init_lr_ratio": 1, "warmup_epochs": 0, "c_mel": 45, "c_kl": 1.0, "c_commit": 100, "skip_optimizer": false, "freeze_ZH_bert": false, "freeze_JP_bert": false, "freeze_EN_bert": false, "freeze_emo": false, "freeze_style": false, "freeze_decoder": false }, "data": { "use_jp_extra": true, "training_files": "Data/pretraing/train.list", "validation_files": "Data/pretraing/val.list", "max_wav_value": 32768.0, "sampling_rate": 44100, "filter_length": 2048, "hop_length": 512, "win_length": 2048, "n_mel_channels": 128, "mel_fmin": 0.0, "mel_fmax": null, "add_blank": true, "n_speakers": 1, "cleaned_text": true, "spk2id": { "pretraing": 0 } }, "model": { "use_spk_conditioned_encoder": true, "use_noise_scaled_mas": true, "use_mel_posterior_encoder": false, "use_duration_discriminator": false, "use_wavlm_discriminator": true, "inter_channels": 192, "hidden_channels": 192, "filter_channels": 768, "n_heads": 2, "n_layers": 6, "kernel_size": 3, "p_dropout": 0.1, "resblock": "1", "resblock_kernel_sizes": [ 3, 7, 11 ], "resblock_dilation_sizes": [ [ 1, 3, 5 ], [ 1, 3, 5 ], [ 1, 3, 5 ] ], "upsample_rates": [ 8, 8, 2, 2, 2 ], "upsample_initial_channel": 512, "upsample_kernel_sizes": [ 16, 16, 8, 2, 2 ], "n_layers_q": 3, "use_spectral_norm": false, "gin_channels": 512, "slm": { "model": "./slm/wavlm-base-plus", "sr": 16000, "hidden": 768, "nlayers": 13, "initial_channel": 64 } }, "version": "2.4.1-JP-Extra" } ``` ## Naturalness evaluation with SpeechMOS mos_pretraing.csv is included as well ![](mos_pretraing.png) # License The license follows the terms of: * [Tsukuyomi-chan Corpus │ Voice Actress Statistics Corpus (JVS-corpus compliant)](https://tyc.rei-yumesaki.net/material/corpus/) * [Amitaro's Voice Material Studio (https://amitaro.net/) free voice material terms of use](https://amitaro.net/voice/faq/#index_id6)
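The checkpoints and the config.json above are plain repository files, so they can be fetched with `huggingface_hub`. A minimal download sketch (the local directory name is an assumption):

```python
from huggingface_hub import snapshot_download

# Fetch the pretrained Style-Bert-VITS2 checkpoints and config.json.
local_dir = snapshot_download(
    repo_id="ayousanz/style-bert-vits2-pretrained-model-ver2",
    local_dir="pretrained-sbv2",  # assumed target directory
)
print("Downloaded to", local_dir)
```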
ChocolateBlack/outputs_mistral_7b_v0.2_roleplay_finetune
ChocolateBlack
"2024-04-06T12:51:00Z"
0
0
null
[ "safetensors", "region:us" ]
null
"2024-04-05T12:44:09Z"
Entry not found
vananhle/git-large-pokemon
vananhle
"2024-04-05T12:45:20Z"
0
0
null
[ "region:us" ]
null
"2024-04-05T12:45:20Z"
Entry not found
EssamDad/llama-2-7b-miniguanaco_last2
EssamDad
"2024-04-05T12:46:59Z"
0
0
null
[ "region:us" ]
null
"2024-04-05T12:46:59Z"
Entry not found
oneandahalfcats/onlyhalfacat1
oneandahalfcats
"2024-04-05T13:13:34Z"
0
0
null
[ "region:us" ]
null
"2024-04-05T12:49:29Z"
Entry not found
ailabturkiye/ozgurozel
ailabturkiye
"2024-04-05T12:51:55Z"
0
1
null
[ "license:openrail", "region:us" ]
null
"2024-04-05T12:49:40Z"
--- license: openrail ---
akashmaggon/deberta-finetuned-ner-learning-lab-adapter-additionaldataset
akashmaggon
"2024-04-05T13:45:15Z"
0
0
null
[ "safetensors", "region:us" ]
null
"2024-04-05T12:50:49Z"
Entry not found
ysr/deepseek-1.3b-rust-lora-64
ysr
"2024-04-05T13:06:54Z"
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2024-04-05T12:54:59Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
alchemab/fabcon-medium
alchemab
"2024-05-27T13:23:02Z"
0
0
transformers
[ "transformers", "safetensors", "falcon", "text-generation", "biology", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-04-05T12:57:22Z"
--- extra_gated_heading: You need to share contact information with Alchemab to access this model extra_gated_prompt: >- ### FAbCon Terms of Use FAbCon models follow a [modified Apache 2.0 license](https://huggingface.co/alchemab/fabcon-large/blob/main/LICENSE.md) extra_gated_fields: First Name: text Last Name: text Email: text Organization: text By clicking 'Submit' below, I accept the terms of the license, agree to share contact information with Alchemab: checkbox I agree to being contacted about future products, services, and/or partnership opportunities: checkbox extra_gated_description: >- The information you provide will be collected, stored, processed, and shared in accordance with the [Alchemab Privacy Notice](https://www.alchemab.com/privacy-policy/). extra_gated_button_content: Submit license: other widget: - text: "ḢQVQLE" tags: - biology --- ## FAbCon-medium 🦅🧬 FAbCon is a generative, antibody-specific language model based on the [Falcon model](https://huggingface.co/tiiuae/falcon-7b). It is pre-trained using causal language modelling and is suitable for a range of tasks. FAbCon-small, FAbCon-medium, and FAbCon-large are available for non-commercial use via a modified Apache 2.0 license. For any users seeking commercial use of our models (and a license for generated antibodies from all FAbCon models), please contact us. | Model variant | Parameters | Config | License | | ------------- | ---------- | ------ | ------- | | [FAbCon-small](https://huggingface.co/alchemab/fabcon-small) | 144M | 24L, 12H, 768d | Modified Apache 2.0 | | [FAbCon-medium](https://huggingface.co/alchemab/fabcon-medium) | 297M | 28L, 16H, 1024d | Modified Apache 2.0 | | [FAbCon-large](https://huggingface.co/alchemab/fabcon-large) | 2.4B | 56L, 32H, 2048d | Modified Apache 2.0 | ## Usage example - generation Generating sequences can be done using Hugging Face's built-in `model.generate` method: ``` from transformers import ( PreTrainedTokenizerFast, FalconForCausalLM ) >>> tokenizer = PreTrainedTokenizerFast.from_pretrained("alchemab/fabcon-medium") >>> model = FalconForCausalLM.from_pretrained("alchemab/fabcon-medium") >>> o = model.generate( tokenizer("Ḣ", return_tensors='pt')['input_ids'][:, :-1], max_new_tokens=..., top_k = ..., temperature = ... ) >>> decoded_seq = tokenizer.batch_decode(o) ``` ## Usage example - sequence property prediction Use the `transformers` built-in SequenceClassification classes: ``` from transformers import ( PreTrainedTokenizerFast, FalconForSequenceClassification ) >>> tokenizer = PreTrainedTokenizerFast.from_pretrained("alchemab/fabcon-medium") >>> model = FalconForSequenceClassification.from_pretrained("alchemab/fabcon-medium") >>> o = model(input_ids=tokenizer("Ḣ", return_tensors='pt')['input_ids'], attention_mask=tokenizer("Ḣ", return_tensors='pt')['attention_mask']) ```
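The generation snippet in the card above leaves the sampling arguments as placeholders. A filled-in sketch with purely illustrative values (the token budget and sampling settings are assumptions, not recommendations from the card):

```python
from transformers import PreTrainedTokenizerFast, FalconForCausalLM

tokenizer = PreTrainedTokenizerFast.from_pretrained("alchemab/fabcon-medium")
model = FalconForCausalLM.from_pretrained("alchemab/fabcon-medium")

# "Ḣ" prompt as in the card; the slice mirrors the card's [:, :-1] trimming.
prompt_ids = tokenizer("Ḣ", return_tensors="pt")["input_ids"][:, :-1]

out = model.generate(
    prompt_ids,
    do_sample=True,
    max_new_tokens=120,  # illustrative
    top_k=50,            # illustrative
    temperature=1.0,     # illustrative
)
print(tokenizer.batch_decode(out))
```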
shrimalrishika/pubmed-sum
shrimalrishika
"2024-04-05T13:05:59Z"
0
0
peft
[ "peft", "safetensors", "region:us" ]
null
"2024-04-05T12:57:42Z"
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.4.0
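The bullet list above corresponds one-to-one to `transformers`' `BitsAndBytesConfig`. A minimal sketch mirroring those values (the base model id is a placeholder, since the card does not name one):

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # load_in_8bit stays at its False default
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
    llm_int8_threshold=6.0,
)

# Placeholder id: the card does not state which base model the adapter targets.
model = AutoModelForCausalLM.from_pretrained(
    "base-model-id", quantization_config=bnb_config, device_map="auto"
)
```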
samratghosh291/salary_prediction
samratghosh291
"2024-04-05T13:01:46Z"
0
0
null
[ "license:apache-2.0", "region:us" ]
null
"2024-04-05T13:01:10Z"
--- license: apache-2.0 ---
davidshtian/Mixtral-8x7B-Instruct-v0.1-neuron-1x2048-16-cores-2.18
davidshtian
"2024-04-05T13:01:36Z"
0
0
null
[ "license:apache-2.0", "region:us" ]
null
"2024-04-05T13:01:35Z"
--- license: apache-2.0 ---
THESUNOK/Tinkoff
THESUNOK
"2024-04-05T13:04:17Z"
0
0
null
[ "region:us" ]
null
"2024-04-05T13:03:27Z"
Entry not found
alchemab/fabcon-large
alchemab
"2024-05-27T13:23:19Z"
0
3
transformers
[ "transformers", "safetensors", "falcon", "text-generation", "biology", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-04-05T13:03:27Z"
--- extra_gated_heading: You need to share contact information with Alchemab to access this model extra_gated_prompt: >- ### FAbCon Terms of Use FAbCon models follow a [modified Apache 2.0 license](https://huggingface.co/alchemab/fabcon-large/blob/main/LICENSE.md) extra_gated_fields: First Name: text Last Name: text Email: text Organization: text By clicking 'Submit' below, I accept the terms of the license, agree to share contact information with Alchemab: checkbox I agree to being contacted about future products, services, and/or partnership opportunities: checkbox extra_gated_description: >- The information you provide will be collected, stored, processed, and shared in accordance with the [Alchemab Privacy Notice](https://www.alchemab.com/privacy-policy/). extra_gated_button_content: Submit license: other widget: - text: "ḢQVQLE" tags: - biology --- ## FAbCon-large 🦅🧬 FAbCon is a generative, antibody-specific language model based on the Falcon model. It is pre-trained using causal language modelling and is suitable for a range of tasks. FAbCon-small, FAbCon-medium, and FAbCon-large are available for non-commercial use via a modified Apache 2.0 license. For any users seeking commercial use of our models (and a license for generated antibodies from all FAbCon models), please contact us. | Model variant | Parameters | Config | License | | ------------- | ---------- | ------ | ------- | | [FAbCon-small](https://huggingface.co/alchemab/fabcon-small) | 144M | 24L, 12H, 768d | Modified Apache 2.0 | | [FAbCon-medium](https://huggingface.co/alchemab/fabcon-medium) | 297M | 28L, 16H, 1024d | Modified Apache 2.0 | | [FAbCon-large](https://huggingface.co/alchemab/fabcon-large) | 2.4B | 56L, 32H, 2048d | Modified Apache 2.0 | ## Usage example - generation Generating sequences can be done using Hugging Face's built-in `model.generate` method: ``` from transformers import ( PreTrainedTokenizerFast, FalconForCausalLM ) >>> tokenizer = PreTrainedTokenizerFast.from_pretrained("alchemab/fabcon-large") >>> model = FalconForCausalLM.from_pretrained("alchemab/fabcon-large") >>> o = model.generate( tokenizer("Ḣ", return_tensors='pt')['input_ids'][:, :-1], max_new_tokens=..., top_k = ..., temperature = ... ) >>> decoded_seq = tokenizer.batch_decode(o) ``` ## Usage example - sequence property prediction Use the `transformers` built-in SequenceClassification classes: ``` from transformers import ( PreTrainedTokenizerFast, FalconForSequenceClassification ) >>> tokenizer = PreTrainedTokenizerFast.from_pretrained("alchemab/fabcon-large") >>> model = FalconForSequenceClassification.from_pretrained("alchemab/fabcon-large") >>> o = model(input_ids=tokenizer("Ḣ", return_tensors='pt')['input_ids'], attention_mask=tokenizer("Ḣ", return_tensors='pt')['attention_mask']) ```
JackWong0911/timesformer-base-finetuned-k400-finetuned-kinetic400-subset-epoch6-num_frame_10_spatial_only
JackWong0911
"2024-04-05T13:41:20Z"
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "timesformer", "video-classification", "generated_from_trainer", "base_model:facebook/timesformer-base-finetuned-k400", "base_model:finetune:facebook/timesformer-base-finetuned-k400", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
video-classification
"2024-04-05T13:04:28Z"
--- license: cc-by-nc-4.0 base_model: facebook/timesformer-base-finetuned-k400 tags: - generated_from_trainer metrics: - accuracy model-index: - name: timesformer-base-finetuned-k400-finetuned-kinetic400-subset-epoch6-num_frame_10_spatial_only results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # timesformer-base-finetuned-k400-finetuned-kinetic400-subset-epoch6-num_frame_10_spatial_only This model is a fine-tuned version of [facebook/timesformer-base-finetuned-k400](https://huggingface.co/facebook/timesformer-base-finetuned-k400) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.7364 - Accuracy: 0.8382 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - training_steps: 300 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.0337 | 0.17 | 50 | 0.6661 | 0.8333 | | 0.0027 | 1.17 | 100 | 0.9499 | 0.7778 | | 0.0224 | 2.17 | 150 | 0.3809 | 0.9028 | | 0.0006 | 3.17 | 200 | 0.4298 | 0.875 | | 0.0003 | 4.17 | 250 | 0.4078 | 0.875 | | 0.0009 | 5.17 | 300 | 0.4182 | 0.875 | ### Framework versions - Transformers 4.38.2 - Pytorch 2.1.0+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
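The hyperparameter list in the card above maps directly onto `transformers.TrainingArguments`. A minimal sketch (the output directory is an assumption; the Adam betas and epsilon shown in the card are the defaults):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="timesformer-k400-subset",  # assumed name
    learning_rate=5e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,   # lr_scheduler_warmup_ratio in the card
    max_steps=300,      # training_steps in the card
)
```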
MrezaPRZ/CodeLLama-NL2SQL-34B
MrezaPRZ
"2024-04-05T13:09:35Z"
0
0
transformers
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2024-04-05T13:04:40Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
remzloev/brianmaps
remzloev
"2024-04-05T14:43:36Z"
0
0
null
[ "license:openrail", "region:us" ]
null
"2024-04-05T13:08:06Z"
--- license: openrail --- # Brian Maps voice coming soon
LeoPhan000/brain
LeoPhan000
"2024-04-06T02:09:27Z"
0
0
null
[ "region:us" ]
null
"2024-04-05T13:14:14Z"
Entry not found
Bjornrun/mistral-bjorn
Bjornrun
"2024-04-05T13:15:36Z"
0
0
null
[ "license:mit", "region:us" ]
null
"2024-04-05T13:15:36Z"
--- license: mit ---
blevlabs/ml_ferret_13b
blevlabs
"2024-04-05T13:36:51Z"
0
0
transformers
[ "transformers", "pytorch", "endpoints_compatible", "region:us" ]
null
"2024-04-05T13:21:31Z"
Entry not found
diliash/detr-2024-04-05-13-03-08
diliash
"2024-04-05T13:22:15Z"
0
0
transformers
[ "transformers", "safetensors", "detr", "object-detection", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
object-detection
"2024-04-05T13:21:52Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mohit19906/falcon-7b-instruct-SBCQNAUserAssist
mohit19906
"2024-04-05T15:35:55Z"
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2024-04-05T13:22:31Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
dnau666153/chpt1000
dnau666153
"2024-04-05T13:27:38Z"
0
0
null
[ "license:mit", "region:us" ]
null
"2024-04-05T13:27:38Z"
--- license: mit ---
karrot45/GrayColorHentaiMix
karrot45
"2024-04-05T13:40:48Z"
0
0
null
[ "region:us" ]
null
"2024-04-05T13:29:29Z"
Entry not found
viniciushiga/modelo_portugues
viniciushiga
"2024-04-05T13:29:49Z"
0
0
null
[ "license:mit", "region:us" ]
null
"2024-04-05T13:29:45Z"
--- license: mit ---
Kryptone/ilariaG_D_Fix
Kryptone
"2024-04-05T13:40:00Z"
0
0
null
[ "region:us" ]
null
"2024-04-05T13:30:53Z"
Entry not found
chaerene/kim
chaerene
"2024-04-05T13:37:04Z"
0
0
null
[ "license:openrail", "region:us" ]
null
"2024-04-05T13:32:02Z"
--- license: openrail ---
t0m1ab/alphazero-tictactoe
t0m1ab
"2024-04-05T13:32:43Z"
0
0
transformers
[ "transformers", "endpoints_compatible", "region:us" ]
null
"2024-04-05T13:32:36Z"
Entry not found
crazyup37/Modi_rvc_voice_1000e
crazyup37
"2024-04-05T13:35:03Z"
0
0
null
[ "region:us" ]
null
"2024-04-05T13:34:36Z"
Entry not found
Aymeric29bzh/TTS-01-55000
Aymeric29bzh
"2024-04-05T13:34:51Z"
0
0
null
[ "region:us" ]
null
"2024-04-05T13:34:42Z"
Entry not found
Ishva24/mistral-finetuned-alpaca
Ishva24
"2024-04-06T05:08:54Z"
0
0
null
[ "tensorboard", "safetensors", "region:us" ]
null
"2024-04-05T13:34:43Z"
Entry not found
Abhaykoul/idefics-9b-doodles
Abhaykoul
"2024-04-05T13:40:18Z"
0
0
transformers
[ "transformers", "safetensors", "idefics", "pretraining", "arxiv:1910.09700", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
null
"2024-04-05T13:34:52Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
katkout2313/lora_model1
katkout2313
"2024-04-05T13:35:19Z"
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "mistral", "trl", "en", "base_model:unsloth/mistral-7b-instruct-v0.2-bnb-4bit", "base_model:finetune:unsloth/mistral-7b-instruct-v0.2-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-04-05T13:34:53Z"
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - mistral - trl base_model: unsloth/mistral-7b-instruct-v0.2-bnb-4bit --- # Uploaded model - **Developed by:** katkout2313 - **License:** apache-2.0 - **Finetuned from model:** unsloth/mistral-7b-instruct-v0.2-bnb-4bit This Mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
yuezih/llava-v1.5-7b-basemodel
yuezih
"2024-04-05T13:41:44Z"
0
0
null
[ "region:us" ]
null
"2024-04-05T13:41:44Z"
Entry not found
ShaharAdar/best-model-try
ShaharAdar
"2024-04-05T14:41:57Z"
0
0
keras
[ "keras", "tf-keras", "region:us" ]
null
"2024-04-05T13:41:58Z"
--- library_name: keras --- ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: | Hyperparameters | Value | | :-- | :-- | | name | Adam | | learning_rate | 9.999999747378752e-06 | | decay | 0.0 | | beta_1 | 0.8999999761581421 | | beta_2 | 0.9990000128746033 | | epsilon | 1e-07 | | amsgrad | False | | training_precision | float32 | ## Model Plot <details> <summary>View Model Plot</summary> ![Model Image](./model.png) </details>
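The serialized optimizer values in the card above map back onto a standard Keras Adam instance. A minimal sketch (9.999999747378752e-06 is the float32 rounding of 1e-5):

```python
import tensorflow as tf

# Mirrors the card's hyperparameter table; float32 training precision as listed.
optimizer = tf.keras.optimizers.Adam(
    learning_rate=1e-5,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-7,
    amsgrad=False,
)
```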
HiteshMehta/Price_Prediction
HiteshMehta
"2024-04-05T13:43:20Z"
0
0
null
[ "license:cc", "region:us" ]
null
"2024-04-05T13:43:20Z"
--- license: cc ---
ETH-HELIOS-AI/helios-314b-alpha
ETH-HELIOS-AI
"2024-06-25T05:59:40Z"
0
1
null
[ "finetuned", "grok-1", "license:apache-2.0", "region:us" ]
null
"2024-04-05T13:43:41Z"
--- license: apache-2.0 tags: - finetuned - grok-1 --- # helios-314b-alpha This repository contains JAX example code for loading and running the Helios-314B-Alpha open-weights model. Helios-314B-Alpha is a fine-tuned version of the Grok-V1 open-source model released by X.AI Corp.<br /> We have fine-tuned the model to perform well on crypto-related queries.<br /> It achieves the following results on the evaluation set:<br /><br /> Loss: 0.0052<br /> F1: 0.9969 Make sure to download the checkpoint and place the `ckpt-0` directory in `checkpoints`. Then, run ```shell pip install -r requirements.txt python run.py ``` to test the code. The script loads the checkpoint and samples from the model on a test input. Due to the large size of the model (314B parameters), a machine with enough GPU memory is required to run the example code. The implementation of the MoE layer in this repository is not efficient; it was chosen to avoid the need for custom kernels while validating the correctness of the model. # Model Specifications Helios is currently designed with the following specifications: - **Parameters:** 314B - **Architecture:** Mixture of 8 Experts (MoE) - **Experts Utilization:** 2 experts used per token - **Layers:** 64 - **Attention Heads:** 48 for queries, 8 for keys/values - **Embedding Size:** 6,144 - **Tokenization:** SentencePiece tokenizer with 131,072 tokens - **Additional Features:** - Rotary embeddings (RoPE) - Supports activation sharding and 8-bit quantization - **Maximum Sequence Length (context):** 8,192 tokens # License The code and weights for the Helios-314B-Alpha model are licensed under the Apache 2.0 open-source license
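The specification above (a mixture of 8 experts with 2 experts used per token) is standard top-2 MoE routing. A toy PyTorch sketch of the idea, not the repository's JAX implementation:

```python
import torch
import torch.nn.functional as F

def top2_moe(x, gate, experts):
    """Toy dense top-2 routing: each token is sent to its 2 best experts.

    x: (tokens, d_model); gate: Linear(d_model -> n_experts); experts: list of FFNs.
    """
    scores = F.softmax(gate(x), dim=-1)          # (tokens, n_experts) routing probs
    weights, idx = scores.topk(2, dim=-1)        # two experts per token
    weights = weights / weights.sum(dim=-1, keepdim=True)  # renormalize the pair
    out = torch.zeros_like(x)
    for slot in range(2):
        for e, expert in enumerate(experts):
            mask = idx[:, slot] == e             # tokens whose slot-th pick is e
            if mask.any():
                out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
    return out

experts = [torch.nn.Linear(16, 16) for _ in range(8)]  # stand-ins for expert FFNs
gate = torch.nn.Linear(16, 8)
y = top2_moe(torch.randn(5, 16), gate, experts)        # -> shape (5, 16)
```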
kazma1/unsupervise_roberta_small
kazma1
"2024-04-05T13:48:11Z"
0
0
transformers
[ "transformers", "pytorch", "roberta", "license:mit", "endpoints_compatible", "region:us" ]
null
"2024-04-05T13:46:20Z"
--- license: mit ---
HiteshMehta/Ewaste_price_prediction
HiteshMehta
"2024-04-05T16:55:49Z"
0
0
null
[ "license:cc", "region:us" ]
null
"2024-04-05T13:47:26Z"
--- license: cc ---
bjoernp/leo_bude
bjoernp
"2024-04-05T13:57:14Z"
0
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-04-05T13:50:03Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
brugmark/all-MiniLM-L6-v2-personal-project-default-2024-04-05
brugmark
"2024-04-05T13:51:23Z"
0
0
null
[ "region:us" ]
null
"2024-04-05T13:51:22Z"
Entry not found
Weni/WeniGPT-Agents-Zephyr-1.0.0-KTO
Weni
"2024-04-05T14:28:04Z"
0
0
trl
[ "trl", "safetensors", "KTO", "WeniGPT", "pt", "base_model:HuggingFaceH4/zephyr-7b-beta", "base_model:finetune:HuggingFaceH4/zephyr-7b-beta", "license:mit", "region:us" ]
null
"2024-04-05T13:51:24Z"
--- license: mit library_name: "trl" tags: - KTO - WeniGPT base_model: HuggingFaceH4/zephyr-7b-beta model-index: - name: Weni/WeniGPT-Agents-Zephyr-1.0.0-KTO results: [] language: ['pt'] --- # Weni/WeniGPT-Agents-Zephyr-1.0.0-KTO This model is a fine-tuned version of [HuggingFaceH4/zephyr-7b-beta] on the dataset Weni/wenigpt-agent-1.2.0 with the KTO trainer. It is part of the WeniGPT project for [Weni](https://weni.ai/). Description: testing the KTO dataset during training It achieves the following results on the evaluation set: {'eval_loss': 0.35778722167015076, 'eval_runtime': 141.7574, 'eval_samples_per_second': 2.116, 'eval_steps_per_second': 0.529, 'eval_rewards/chosen': 3.2568461894989014, 'eval_logps/chosen': -249.1678009033203, 'eval_rewards/rejected': -1.2835873365402222, 'eval_logps/rejected': -277.8927001953125, 'eval_kl': 14.828611373901367, 'eval_rewards/margins': 4.38193941116333, 'epoch': 0.99} ## Intended uses & limitations This model has not been trained to avoid specific instructions. ## Training procedure Finetuning was done on the model HuggingFaceH4/zephyr-7b-beta with the following prompt: ``` --------------------- System_prompt: Agora você se chama {name}, você é {occupation} e seu objetivo é {chatbot_goal}. O adjetivo que mais define a sua personalidade é {adjective} e você se comporta da seguinte forma: {instructions_formatted} Na sua memória você tem esse contexto: {context} Lista de requisitos: - Responda de forma natural, mas nunca fale sobre um assunto fora do contexto. - Nunca traga informações do seu próprio conhecimento. - Repito é crucial que você responda usando apenas informações do contexto. - Nunca mencione o contexto fornecido. - Nunca mencione a pergunta fornecida. - Gere a resposta mais útil possível para a pergunta usando informações do conexto acima. - Nunca elabore sobre o porque e como você fez a tarefa, apenas responda. --------------------- Question: {question} --------------------- Response: {answer} --------------------- ``` ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - per_device_train_batch_size: 4 - per_device_eval_batch_size: 4 - gradient_accumulation_steps: 4 - num_gpus: 1 - total_train_batch_size: 16 - optimizer: AdamW - lr_scheduler_type: cosine - num_steps: 145 - quantization_type: bitsandbytes - LoRA: bits: 4, use_exllama: True, device_map: auto, use_cache: False, lora_r: 16, lora_alpha: 32, lora_dropout: 0.05, bias: none, target_modules: ['q_proj', 'k_proj', 'v_proj', 'o_proj'], task_type: CAUSAL_LM ### Training results ### Framework versions - transformers==4.39.1 - datasets==2.18.0 - peft==0.10.0 - safetensors==0.4.2 - evaluate==0.4.1 - bitsandbytes==0.43 - huggingface_hub==0.20.3 - seqeval==1.2.2 - optimum==1.17.1 - auto-gptq==0.7.1 - gpustat==1.1.1 - deepspeed==0.14.0 - wandb==0.16.3 - trl==0.8.1 (pinned fork: git+https://github.com/kawine/trl.git#egg=trl) - accelerate==0.28.0 - coloredlogs==15.0.1 - traitlets==5.14.1 - autoawq@https://github.com/casper-hansen/AutoAWQ/releases/download/v0.2.0/autoawq-0.2.0+cu118-cp310-cp310-linux_x86_64.whl ### Hardware - Cloud provider: runpod.io
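For reference, the training setup described above corresponds to TRL's `KTOTrainer` (the card pins a fork of `trl`). A minimal sketch against the upstream API of that era; the dataset column layout and output directory are assumptions:

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import KTOConfig, KTOTrainer

base = "HuggingFaceH4/zephyr-7b-beta"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# Assumed to expose the prompt/completion/label columns KTO expects.
train = load_dataset("Weni/wenigpt-agent-1.2.0", split="train")

config = KTOConfig(
    output_dir="wenigpt-kto",        # assumed name
    learning_rate=2e-4,
    per_device_train_batch_size=4,
    gradient_accumulation_steps=4,
    lr_scheduler_type="cosine",
    max_steps=145,
)

# Newer trl releases rename `tokenizer=` to `processing_class=`.
trainer = KTOTrainer(model=model, args=config, train_dataset=train, tokenizer=tokenizer)
trainer.train()
```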
kazma1/unsupervise_roberta_base
kazma1
"2024-04-05T13:53:32Z"
0
0
transformers
[ "transformers", "pytorch", "roberta", "license:mit", "endpoints_compatible", "region:us" ]
null
"2024-04-05T13:51:31Z"
---
license: mit
---
funnyNeurons/gemma_text
funnyNeurons
"2024-04-05T13:55:29Z"
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "gemma", "trl", "en", "base_model:unsloth/gemma-2b-bnb-4bit", "base_model:finetune:unsloth/gemma-2b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-04-05T13:55:18Z"
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- gemma
- trl
base_model: unsloth/gemma-2b-bnb-4bit
---

# Uploaded model

- **Developed by:** funnyNeurons
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-2b-bnb-4bit

This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
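The card ships no usage code; a minimal loading sketch with Unsloth follows, under the assumption that the repo stores LoRA weights compatible with the 4-bit Gemma base named above (the repo contents were not verified).

```python
# Hedged sketch: assumes the repo holds weights loadable by FastLanguageModel,
# as the unsloth/trl tags suggest.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="funnyNeurons/gemma_text",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch to inference-optimized mode
```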
mitchyAI/yejimchy
mitchyAI
"2024-04-05T14:02:54Z"
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
"2024-04-05T14:02:04Z"
---
license: creativeml-openrail-m
---
JackWong0911/timesformer-base-finetuned-k400-finetuned-kinetic400-subset-epoch6-num_frame_10_full_time_space
JackWong0911
"2024-04-05T14:23:43Z"
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "timesformer", "video-classification", "generated_from_trainer", "base_model:facebook/timesformer-base-finetuned-k400", "base_model:finetune:facebook/timesformer-base-finetuned-k400", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
video-classification
"2024-04-05T14:02:21Z"
---
license: cc-by-nc-4.0
base_model: facebook/timesformer-base-finetuned-k400
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: timesformer-base-finetuned-k400-finetuned-kinetic400-subset-epoch6-num_frame_10_full_time_space
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# timesformer-base-finetuned-k400-finetuned-kinetic400-subset-epoch6-num_frame_10_full_time_space

This model is a fine-tuned version of [facebook/timesformer-base-finetuned-k400](https://huggingface.co/facebook/timesformer-base-finetuned-k400) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4630
- Accuracy: 0.8971

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 300

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5915        | 0.17  | 50   | 0.6170          | 0.8611   |
| 0.2742        | 1.17  | 100  | 0.4572          | 0.8333   |
| 0.0174        | 2.17  | 150  | 0.3006          | 0.8611   |
| 0.0309        | 3.17  | 200  | 0.3319          | 0.8889   |
| 0.0006        | 4.17  | 250  | 0.3438          | 0.8889   |
| 0.0008        | 5.17  | 300  | 0.3266          | 0.8889   |

### Framework versions

- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Tokenizers 0.15.2
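A hedged inference sketch using the standard transformers Timesformer classes; the 10-frame clip length is inferred from the model name, and the random frames are placeholders for a real video clip.

```python
# Hedged sketch: assumes the repo ships a preprocessor config (otherwise use
# the base checkpoint's processor) and that the model expects 10 frames.
import numpy as np
import torch
from transformers import AutoImageProcessor, TimesformerForVideoClassification

repo = "JackWong0911/timesformer-base-finetuned-k400-finetuned-kinetic400-subset-epoch6-num_frame_10_full_time_space"
processor = AutoImageProcessor.from_pretrained(repo)
model = TimesformerForVideoClassification.from_pretrained(repo)

# Stand-in clip: 10 frames of 224x224 RGB noise.
video = [np.random.randint(0, 256, (224, 224, 3), dtype=np.uint8) for _ in range(10)]
inputs = processor(video, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```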
ilancml/Gameboxcsv_v0
ilancml
"2024-04-05T15:35:29Z"
0
0
null
[ "license:apache-2.0", "region:us" ]
null
"2024-04-05T14:02:32Z"
---
license: apache-2.0
---
nlux/Medilora-Mistral-7B_disease
nlux
"2024-04-07T10:32:38Z"
0
0
peft
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "dataset:generator", "base_model:mistralai/Mistral-7B-v0.1", "base_model:adapter:mistralai/Mistral-7B-v0.1", "license:apache-2.0", "region:us" ]
null
"2024-04-05T14:03:26Z"
---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: mistralai/Mistral-7B-v0.1
datasets:
- generator
model-index:
- name: Medilora-Mistral-7B_disease
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# Medilora-Mistral-7B_disease

This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the generator dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 3
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 6
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3

### Training results

### Framework versions

- PEFT 0.7.2.dev0
- Transformers 4.36.2
- Pytorch 2.4.0.dev20240326+cu121
- Datasets 2.16.1
- Tokenizers 0.15.2
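The card identifies this repo as a PEFT adapter on Mistral-7B-v0.1. A minimal, hedged loading sketch (adapter/base compatibility is assumed from the card, not verified):

```python
# Hedged sketch: attach the LoRA adapter to the base model for inference.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1", torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "nlux/Medilora-Mistral-7B_disease")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
```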
mitchyAI/kannahashimotomchy
mitchyAI
"2024-04-05T14:07:49Z"
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
"2024-04-05T14:06:37Z"
---
license: creativeml-openrail-m
---
walterg777/marian-finetuned-kde4-en-to-fr
walterg777
"2024-04-05T14:09:22Z"
0
0
null
[ "region:us" ]
null
"2024-04-05T14:09:19Z"
Entry not found
deadjack133/testone
deadjack133
"2024-04-05T15:33:16Z"
0
0
null
[ "license:openrail", "region:us" ]
null
"2024-04-05T14:09:21Z"
---
license: openrail
---
hossamamer12/MB2_1090C_AugV2
hossamamer12
"2024-04-05T14:11:23Z"
0
0
transformers
[ "transformers", "pytorch", "mobilebert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2024-04-05T14:10:50Z"
Entry not found
za17/biomedical_text_model
za17
"2024-04-05T14:12:56Z"
0
0
null
[ "region:us" ]
null
"2024-04-05T14:12:55Z"
Entry not found
Nillman/JisooXL
Nillman
"2024-04-05T14:18:21Z"
0
0
null
[ "region:us" ]
null
"2024-04-05T14:17:16Z"
Entry not found
silencer107/sn3
silencer107
"2024-04-05T14:19:00Z"
0
0
null
[ "region:us" ]
null
"2024-04-05T14:18:50Z"
Entry not found
anishhs001/biology-falcon-7b
anishhs001
"2024-04-07T08:55:40Z"
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2024-04-05T14:20:49Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
pei-li/sentiment_classifier
pei-li
"2024-04-05T14:21:19Z"
0
0
null
[ "region:us" ]
null
"2024-04-05T14:21:18Z"
Entry not found
silencer107/bobik0
silencer107
"2024-04-05T14:24:55Z"
0
0
null
[ "region:us" ]
null
"2024-04-05T14:24:44Z"
Entry not found
anishhs001/generalist-falcon-7b
anishhs001
"2024-04-10T21:46:59Z"
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2024-04-05T14:27:52Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
wendlerc/gsm8k-mistral-7b-v2-qlora
wendlerc
"2024-04-05T14:34:39Z"
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "mistral", "trl", "en", "base_model:unsloth/mistral-7b-v0.2-bnb-4bit", "base_model:finetune:unsloth/mistral-7b-v0.2-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-04-05T14:34:24Z"
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
base_model: unsloth/mistral-7b-v0.2-bnb-4bit
---

# Uploaded model

- **Developed by:** wendlerc
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-v0.2-bnb-4bit

This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
naver-ai/rdnet_upernet_ade20k_160k
naver-ai
"2024-04-05T14:37:20Z"
0
1
null
[ "region:us" ]
null
"2024-04-05T14:35:20Z"
Entry not found
thuralinhtut/minecraft
thuralinhtut
"2024-04-05T14:36:11Z"
0
0
null
[ "license:mit", "region:us" ]
null
"2024-04-05T14:36:11Z"
---
license: mit
---
3534t853y7/Wiutrapezioia
3534t853y7
"2024-04-05T14:39:27Z"
0
0
null
[ "license:openrail", "region:us" ]
null
"2024-04-05T14:37:12Z"
---
license: openrail
---
David19930/whisper-small-hi
David19930
"2024-04-06T18:17:15Z"
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "hi", "en", "license:mit", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
"2024-04-05T14:38:12Z"
---
license: mit
language:
- hi
- en
---
erfan1380/taxi-v3
erfan1380
"2024-04-05T14:42:08Z"
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
"2024-04-05T14:41:59Z"
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: taxi-v3
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Taxi-v3
      type: Taxi-v3
    metrics:
    - type: mean_reward
      value: 7.52 +/- 2.73
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **Taxi-v3**

This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.

## Usage

```python
import gym

model = load_from_hub(repo_id="erfan1380/taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
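The snippet above presumes a `load_from_hub` helper (as used in the Hugging Face Deep RL course). A minimal hedged equivalent, assuming the pickle stores a dict with an "env_id" key as the snippet implies:

```python
# Hedged sketch of the course-style helper; the pickle layout is an assumption.
import pickle
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str) -> dict:
    # Download the pickled Q-table payload from the Hub and deserialize it.
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)
```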
ygaci/whisper-base-fr_common_voice_16_1
ygaci
"2024-04-09T14:09:21Z"
0
0
peft
[ "peft", "tensorboard", "safetensors", "whisper", "arxiv:1910.09700", "base_model:openai/whisper-base", "base_model:adapter:openai/whisper-base", "region:us" ]
null
"2024-04-05T14:43:48Z"
--- library_name: peft base_model: openai/whisper-base --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.10.0
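Given the repo's tags (a PEFT adapter on openai/whisper-base, trained on French Common Voice per the repo name), a hedged loading sketch follows; the card itself documents no usage, so this is an assumption-driven example.

```python
# Hedged sketch: attach the adapter to the base Whisper model for transcription.
from peft import PeftModel
from transformers import WhisperForConditionalGeneration, WhisperProcessor

base = WhisperForConditionalGeneration.from_pretrained("openai/whisper-base")
model = PeftModel.from_pretrained(base, "ygaci/whisper-base-fr_common_voice_16_1")
processor = WhisperProcessor.from_pretrained("openai/whisper-base")
```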
DustinEwan/decepticon
DustinEwan
"2024-04-07T03:48:37Z"
0
0
null
[ "pytorch", "tensorboard", "region:us" ]
null
"2024-04-05T14:44:08Z"
Entry not found
lognat0704/corgy_dog_LoRA
lognat0704
"2024-04-05T14:46:27Z"
0
0
null
[ "region:us" ]
null
"2024-04-05T14:46:27Z"
Entry not found
mizoru/whisper-small-ru-ORD_0.4_0.2
mizoru
"2024-04-05T14:49:52Z"
0
0
null
[ "region:us" ]
null
"2024-04-05T14:49:52Z"
Entry not found
oneandahalfcats/roflcopter
oneandahalfcats
"2024-04-05T14:52:34Z"
0
0
null
[ "region:us" ]
null
"2024-04-05T14:52:28Z"
Entry not found
oneandahalfcats/roflcopterlol
oneandahalfcats
"2024-04-05T14:54:32Z"
0
0
null
[ "region:us" ]
null
"2024-04-05T14:54:21Z"
Entry not found
mehran98/imdb-truncated-extra
mehran98
"2024-04-06T07:02:58Z"
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2024-04-05T14:55:20Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
oneandahalfcats/sharkdog
oneandahalfcats
"2024-04-05T14:56:42Z"
0
0
null
[ "region:us" ]
null
"2024-04-05T14:56:33Z"
Entry not found
akunnya/test-drock
akunnya
"2024-04-05T15:01:54Z"
0
0
null
[ "region:us" ]
null
"2024-04-05T14:57:41Z"
Entry not found
superwelp/enzo
superwelp
"2024-04-05T14:59:20Z"
0
0
null
[ "region:us" ]
null
"2024-04-05T14:58:36Z"
Entry not found
dmcooller/neural-matia-phi-ft
dmcooller
"2024-04-05T15:02:13Z"
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2024-04-05T15:02:06Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ctam8736/bertopic-20-newsgroups
ctam8736
"2024-04-05T15:03:17Z"
0
0
bertopic
[ "bertopic", "text-classification", "region:us" ]
text-classification
"2024-04-05T15:02:17Z"
---
tags:
- bertopic
library_name: bertopic
pipeline_tag: text-classification
---

# bertopic-20-newsgroups

This is a [BERTopic](https://github.com/MaartenGr/BERTopic) model. BERTopic is a flexible and modular topic modeling framework that allows for the generation of easily interpretable topics from large datasets.

## Usage

To use this model, please install BERTopic:

```
pip install -U bertopic
```

You can use the model as follows:

```python
from bertopic import BERTopic
topic_model = BERTopic.load("ctam8736/bertopic-20-newsgroups")

topic_model.get_topic_info()
```

## Topic overview

* Number of topics: 135
* Number of training documents: 11314

<details>
<summary>Click here for an overview of all topics.</summary>

| Topic ID | Topic Keywords | Topic Frequency | Label |
|----------|----------------|-----------------|-------|
| -1 | article - information - subject - re - what | 10 | -1_article_information_subject_re |
| 0 | scsi - scsi2 - scsi1 - drives - bios | 3737 | 0_scsi_scsi2_scsi1_drives |
| 1 | nhl - puck - leafs - flyers - pitching | 976 | 1_nhl_puck_leafs_flyers |
| 2 | firearm - firearms - handgun - guns - gun | 918 | 2_firearm_firearms_handgun_guns |
| 3 | ford - honda - nissan - bmw - dealer | 409 | 3_ford_honda_nissan_bmw |
| 4 | encryption - encrypted - crypto - nsa - chip | 387 | 4_encryption_encrypted_crypto_nsa |
| 5 | atheism - atheist - atheists - christianity - belief | 377 | 5_atheism_atheist_atheists_christianity |
| 6 | hezbollah - gaza - lebanon - palestinians - lebanese | 342 | 6_hezbollah_gaza_lebanon_palestinians |
| 7 | window - x11r5 - openwindows - x11 - x11r4 | 249 | 7_window_x11r5_openwindows_x11 |
| 8 | modems - modem - mouse - ports - port | 243 | 8_modems_modem_mouse_ports |
| 9 | anonymity - anonymous - mailing - usenet - newsgroups | 151 | 9_anonymity_anonymous_mailing_usenet |
| 10 | armenians - armenia - armenian - azerbaijani - azerbaijan | 147 | 10_armenians_armenia_armenian_azerbaijani |
| 11 | clinton - stephanopoulos - secretary - president - congress | 135 | 11_clinton_stephanopoulos_secretary_president |
| 12 | os - windows - win32 - microsoft - win31 | 133 | 12_os_windows_win32_microsoft |
| 13 | diseases - disease - candida - infection - infections | 113 | 13_diseases_disease_candida_infection |
| 14 | superstition - msg - sensitivity - glutamate - causes | 100 | 14_superstition_msg_sensitivity_glutamate |
| 15 | laserjet - inkjet - printers - bubblejet - bubblejets | 86 | 15_laserjet_inkjet_printers_bubblejet |
| 16 | billboard - billboards - nasa - space - advertising | 75 | 16_billboard_billboards_nasa_space |
| 17 | radar - detectors - detector - detecting - radarjust | 68 | 17_radar_detectors_detector_detecting |
| 18 | speeding - speeds - mph - speed - driving | 64 | 18_speeding_speeds_mph_speed |
| 19 | ssto - moonbase - moon - lunar - billion | 63 | 19_ssto_moonbase_moon_lunar |
| 20 | station - nasa - redesign - space - shuttle | 61 | 20_station_nasa_redesign_space |
| 21 | eternity - afterlife - heaven - hell - judgement | 49 | 21_eternity_afterlife_heaven_hell |
| 22 | testament - manuscripts - scripture - bible - hebrew | 47 | 22_testament_manuscripts_scripture_bible |
| 23 | homosexuality - heterosexual - homosexual - homosexuals - gays | 45 | 23_homosexuality_heterosexual_homosexual_homosexuals |
| 24 | libertarians - libertarian - libertarianism - regulation - governments | 44 | 24_libertarians_libertarian_libertarianism_regulation |
| 25 | islamic - muslim - islam - muslims - koran | 44 | 25_islamic_muslim_islam_muslims |
| 26 | tax - taxes - vat - deficits - income | 44 | 26_tax_taxes_vat_deficits |
| 27 | oil - drain - engine - fuel - dumping | 44 | 27_oil_drain_engine_fuel |
| 28 | helmet - helmets - head - protection - gloves | 43 | 28_helmet_helmets_head_protection |
| 29 | fonts - font - ttfonts - truetype - printing | 42 | 29_fonts_font_ttfonts_truetype |
| 30 | morality - moral - morals - instinctive - immoral | 39 | 30_morality_moral_morals_instinctive |
| 31 | colormaps - colourmap - colormap - xalloccolor - cwcolormap | 39 | 31_colormaps_colourmap_colormap_xalloccolor |
| 32 | homosexuals - molesters - homosexual - homosexuality - pedophilia | 38 | 32_homosexuals_molesters_homosexual_homosexuality |
| 33 | migraine - migraines - headache - headaches - analgesics | 37 | 33_migraine_migraines_headache_headaches |
| 34 | resurrection - gospels - tomb - testament - jesuss | 37 | 34_resurrection_gospels_tomb_testament |
| 35 | graphics - copyright - images - siggraph - image | 37 | 35_graphics_copyright_images_siggraph |
| 36 | mormon - mormons - lds - brigham - utah | 35 | 36_mormon_mormons_lds_brigham |
| 37 | scientific - scipsychology - scientist - science - methodology | 34 | 37_scientific_scipsychology_scientist_science |
| 38 | tapes - tape - backup - copy - floppy | 34 | 38_tapes_tape_backup_copy |
| 39 | drugs - marijuana - drug - legalizing - legalization | 34 | 39_drugs_marijuana_drug_legalizing |
| 40 | punishment - punish - murder - penalty - murderer | 34 | 40_punishment_punish_murder_penalty |
| 41 | sphere - globe - radius - pointstruct - circle | 34 | 41_sphere_globe_radius_pointstruct |
| 42 | surgery - patients - hernia - massager - pain | 33 | 42_surgery_patients_hernia_massager |
| 43 | genocide - bosnia - atheism - serbs - christians | 32 | 43_genocide_bosnia_atheism_serbs |
| 44 | insurance - liability - insureyear - deductible - accident | 32 | 44_insurance_liability_insureyear_deductible |
| 45 | polygon - polygons - triangulation - hexagons - polyn | 30 | 45_polygon_polygons_triangulation_hexagons |
| 46 | spacecraft - galileo - galileos - mission - magellan | 29 | 46_spacecraft_galileo_galileos_mission |
| 47 | countersteering - countersteeringfaq - countersteer - riding - bikes | 29 | 47_countersteering_countersteeringfaq_countersteer_riding |
| 48 | antenna - antennas - transmitters - transmitting - radios | 28 | 48_antenna_antennas_transmitters_transmitting |
| 49 | canine - dogs - dog - spaniel - springer | 28 | 49_canine_dogs_dog_spaniel |
| 50 | batteries - battery - electrolyte - galvanized - zinc | 28 | 50_batteries_battery_electrolyte_galvanized |
| 51 | oscilloscope - scopes - scope - oscilliscopes - digital | 27 | 51_oscilloscope_scopes_scope_oscilliscopes |
| 52 | xgrabkey - definekeys - accelerators - accelerator - shiftkeyq | 27 | 52_xgrabkey_definekeys_accelerators_accelerator |
| 53 | protoncentaur - centaur - proton - accelerator - nuclear | 27 | 53_protoncentaur_centaur_proton_accelerator |
| 54 | telephone - dial - phone - call - lines | 26 | 54_telephone_dial_phone_call |
| 55 | marriages - wedding - vows - weddings - marriage | 25 | 55_marriages_wedding_vows_weddings |
| 56 | ibm - levels - level - nasa - software | 25 | 56_ibm_levels_level_nasa |
| 57 | nasa - aerospace - astronomy - spacecraft - astronomical | 24 | 57_nasa_aerospace_astronomy_spacecraft |
| 58 | motif - neosoft - unix - platforms - software | 24 | 58_motif_neosoft_unix_platforms |
| 59 | nuclear - cooling - reactor - tower - towers | 23 | 59_nuclear_cooling_reactor_tower |
| 60 | injuries - struck - snot - rocks - warningplease | 23 | 60_injuries_struck_snot_rocks |
| 61 | transmissions - shifter - automatics - autos - auto | 22 | 61_transmissions_shifter_automatics_autos |
| 62 | lzr1260 - printing - mwt9caxaxaxaxaxaxaxaxaxaxaxaxax - m9l0qaxaxaxaxaxaxaxaxaxaxaxaxaxax - mi68qaxaxaxaxaxaxaxaxaxaxaxaxaxax | 22 | 62_lzr1260_printing_mwt9caxaxaxaxaxaxaxaxaxaxaxaxax_m9l0qaxaxaxaxaxaxaxaxaxaxaxaxaxax |
| 63 | cview - files - directory - file - tmp | 21 | 63_cview_files_directory_file |
| 64 | immaculate - mary - marys - conception - catholics | 21 | 64_immaculate_mary_marys_conception |
| 65 | cryptology - cryptanalyst - crypt - cryptanalysis - ciphers | 20 | 65_cryptology_cryptanalyst_crypt_cryptanalysis |
| 66 | hotelco - hotels - resorts - hotel - tickets | 20 | 66_hotelco_hotels_resorts_hotel |
| 67 | 3dos - 3do - 3ds - 3d - 3dstudio | 20 | 67_3dos_3do_3ds_3d |
| 68 | comet - comets - jupiter - asteroids - jovian | 20 | 68_comet_comets_jupiter_asteroids |
| 69 | polishing - scratches - paint - rubbing - glaze | 20 | 69_polishing_scratches_paint_rubbing |
| 70 | newsgroup - groups - groupsplit - group - split | 20 | 70_newsgroup_groups_groupsplit_group |
| 71 | koresh - koreshs - david - sermon - biblical | 20 | 71_koresh_koreshs_david_sermon |
| 72 | parking - parked - liability - unsafe - stickers | 20 | 72_parking_parked_liability_unsafe |
| 73 | trumpet - tcp - windows - winqvtnet - winsock | 19 | 73_trumpet_tcp_windows_winqvtnet |
| 74 | freon - heater - coolant - r12 - vents | 19 | 74_freon_heater_coolant_r12 |
| 75 | sabbath - commandments - sunday - worship - church | 19 | 75_sabbath_commandments_sunday_worship |
| 76 | geekdom - computer - fourdcom - csws18icsunysbedu - psychnet | 19 | 76_geekdom_computer_fourdcom_csws18icsunysbedu |
| 77 | bosnia - serbs - sanctions - somalia - war | 18 | 77_bosnia_serbs_sanctions_somalia |
| 78 | soundblaster - midi - midimapper - soundexe - wavfiles | 18 | 78_soundblaster_midi_midimapper_soundexe |
| 79 | condo - remodeled - townhome - bedroom - rent | 18 | 79_condo_remodeled_townhome_bedroom |
| 80 | odometers - odometer - sensor - mileage - sensors | 18 | 80_odometers_odometer_sensor_mileage |
| 81 | joystick - joysticks - joyport - joyread - hardware | 17 | 81_joystick_joysticks_joyport_joyread |
| 82 | abortion - abortions - roe - proabortion - fetus | 17 | 82_abortion_abortions_roe_proabortion |
| 83 | seizures - seizure - allergies - corn - cereal | 17 | 83_seizures_seizure_allergies_corn |
| 84 | sobriety - sober - drinking - drink - drinks | 17 | 84_sobriety_sober_drinking_drink |
| 85 | nubus - lciiipowerpc - pds - powerpcs - powerpc | 17 | 85_nubus_lciiipowerpc_pds_powerpcs |
| 86 | mining - miners - minerals - miner - mineral | 17 | 86_mining_miners_minerals_miner |
| 87 | outlets - outlet - electrical - wiring - grounded | 16 | 87_outlets_outlet_electrical_wiring |
| 88 | rosicrucianum - rosicrucian - orders - order - organization | 16 | 88_rosicrucianum_rosicrucian_orders_order |
| 89 | tempest - shielding - surveillance - encryption - electromagnetic | 16 | 89_tempest_shielding_surveillance_encryption |
| 90 | monitor - monitors - screen - scrolling - display | 16 | 90_monitor_monitors_screen_scrolling |
| 91 | krillean - photographs - photography - kirlian - pictures | 16 | 91_krillean_photographs_photography_kirlian |
| 92 | scanner - scanners - scanning - scans - scanman | 16 | 92_scanner_scanners_scanning_scans |
| 93 | sexism - sexist - extramarital - islamic - marriage | 16 | 93_sexism_sexist_extramarital_islamic |
| 94 | noisy - noise - noises - rattled - quiets | 16 | 94_noisy_noise_noises_rattled |
| 95 | orion - astronomy - museum - prototype - space | 15 | 95_orion_astronomy_museum_prototype |
| 96 | easter - pagan - celebrating - feast - celebration | 15 | 96_easter_pagan_celebrating_feast |
| 97 | batf - assault - waco - blasting - blast | 15 | 97_batf_assault_waco_blasting |
| 98 | batchfile - ini - updating - file - winfileini | 15 | 98_batchfile_ini_updating_file |
| 99 | copyprotect - copying - protected - copy - protection | 15 | 99_copyprotect_copying_protected_copy |
| 100 | 42 - tiff - tiff6 - significance - universe | 14 | 100_42_tiff_tiff6_significance |
| 101 | stove - stoves - splitfires - splitfire - burns | 14 | 101_stove_stoves_splitfires_splitfire |
| 102 | automotive - backing - lights - corvette - reverse | 14 | 102_automotive_backing_lights_corvette |
| 103 | dock - docks - minidocks - portable - minidock | 14 | 103_dock_docks_minidocks_portable |
| 104 | cdaudio - stereo - audio - soundbase - speakers | 14 | 104_cdaudio_stereo_audio_soundbase |
| 105 | uv - flashlight - houselights - fluorescent - lamps | 14 | 105_uv_flashlight_houselights_fluorescent |
| 106 | papal - papacy - pope - popes - schism | 14 | 106_papal_papacy_pope_popes |
| 107 | scsi - quadra - quadras - quadraspecific - firmware | 14 | 107_scsi_quadra_quadras_quadraspecific |
| 108 | crohns - colitis - dietary - gastroenterology - diet | 13 | 108_crohns_colitis_dietary_gastroenterology |
| 109 | crashes - powerbook - plugged - corrupted - duos | 13 | 109_crashes_powerbook_plugged_corrupted |
| 110 | eyedness - handedness - righteye - righthandedness - eyes | 13 | 110_eyedness_handedness_righteye_righthandedness |
| 111 | wrench - pliers - tool - tools - srb | 13 | 111_wrench_pliers_tool_tools |
| 112 | scripture - scriptures - prophecy - revelation - revelations | 13 | 112_scripture_scriptures_prophecy_revelation |
| 113 | nikon - lens - lenses - olympus - 35mm | 13 | 113_nikon_lens_lenses_olympus |
| 114 | prosecution - suspects - encrypted - defendant - incriminate | 13 | 114_prosecution_suspects_encrypted_defendant |
| 115 | wheel - shaftdrives - wheelies - wheelie - shaftdrive | 12 | 115_wheel_shaftdrives_wheelies_wheelie |
| 116 | obesity - rebound - dieting - diet - metabolism | 12 | 116_obesity_rebound_dieting_diet |
| 117 | adl - adls - spying - fbi - investigation | 12 | 117_adl_adls_spying_fbi |
| 118 | lunar - moon - exploration - attend - conference | 12 | 118_lunar_moon_exploration_attend |
| 119 | draftees - draft - selective - military - abolished | 12 | 119_draftees_draft_selective_military |
| 120 | sunrise - sunset - daylight - algorithm - astronomical | 12 | 120_sunrise_sunset_daylight_algorithm |
| 121 | octopus - octopuses - octopi - squid - octapus | 12 | 121_octopus_octopuses_octopi_squid |
| 122 | gassing - explosion - gas - explode - explosive | 11 | 122_gassing_explosion_gas_explode |
| 123 | tutorial - handbook - chemistry - paperback - books | 11 | 123_tutorial_handbook_chemistry_paperback |
| 124 | amp - decibels - current - ampere - db | 11 | 124_amp_decibels_current_ampere |
| 125 | uniforms - jerseys - uniform - mets - reds | 11 | 125_uniforms_jerseys_uniform_mets |
| 126 | eugenics - eugenic - geneticallyengineered - genetic - genetically | 11 | 126_eugenics_eugenic_geneticallyengineered_genetic |
| 127 | fractals - fractal - fractally - compression - pascalfractals | 11 | 127_fractals_fractal_fractally_compression |
| 128 | sunview - xputimage - pixmap - pixmaps - ximage | 11 | 128_sunview_xputimage_pixmap_pixmaps |
| 129 | waving - wave - waves - bikers - bikes | 11 | 129_waving_wave_waves_bikers |
| 130 | vocoder - compressionalgorithms - compression - modems - cryptophones | 11 | 130_vocoder_compressionalgorithms_compression_modems |
| 131 | mouse - jumpiness - mousecom - mouseits - jumps | 11 | 131_mouse_jumpiness_mousecom_mouseits |
| 132 | netware - lan - workgroup - workgroups - w4wg | 10 | 132_netware_lan_workgroup_workgroups |
| 133 | timers - timer - ultralong - clock - oscillator | 10 | 133_timers_timer_ultralong_clock |

</details>

## Training hyperparameters

* calculate_probabilities: False
* language: english
* low_memory: False
* min_topic_size: 10
* n_gram_range: (1, 1)
* nr_topics: auto
* seed_topic_list: None
* top_n_words: 10
* verbose: True
* zeroshot_min_similarity: 0.7
* zeroshot_topic_list: None

## Framework versions

* Numpy: 1.23.5
* HDBSCAN: 0.8.33
* UMAP: 0.5.5
* Pandas: 2.2.1
* Scikit-Learn: 1.3.1
* Sentence-transformers: 2.5.1
* Transformers: 4.37.0.dev0
* Numba: 0.59.1
* Plotly: 5.20.0
* Python: 3.10.4
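As a small follow-up to the card's usage snippet, one might inspect a topic or assign topics to new documents. These are standard BERTopic calls; the example document is illustrative, and `.transform` assumes the pushed repo bundles its embedding model (otherwise pass `embedding_model=` to `BERTopic.load`).

```python
# Assumes `topic_model` was loaded as in the card's snippet above.
print(topic_model.get_topic(0))  # top keywords for topic 0 (scsi/drives)

# Assign topics to unseen text; requires the embedding model to be available.
topics, probs = topic_model.transform(["my scsi drive controller keeps failing"])
print(topics)
```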
lognat0704/mystic_beer_mug_LoRA
lognat0704
"2024-04-05T15:08:30Z"
0
0
null
[ "region:us" ]
null
"2024-04-05T15:08:30Z"
Entry not found
Erik/adapter-super_greta_plan_saludable-8bit-mistral-7b-conversacion-es
Erik
"2024-04-05T15:11:42Z"
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:cognitivecomputations/samantha-1.2-mistral-7b", "base_model:adapter:cognitivecomputations/samantha-1.2-mistral-7b", "region:us" ]
null
"2024-04-05T15:08:32Z"
--- library_name: peft base_model: cognitivecomputations/samantha-1.2-mistral-7b --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.2.dev0
Loren85/C64-Voice-sam
Loren85
"2024-04-05T15:11:08Z"
0
0
null
[ "license:openrail", "region:us" ]
null
"2024-04-05T15:10:17Z"
--- license: openrail ---
ashishp-wiai/vit-base-patch16-224-in21k-finetune-os-lr_new
ashishp-wiai
"2024-04-05T15:52:11Z"
0
0
transformers
[ "transformers", "safetensors", "vit", "image-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
"2024-04-05T15:10:47Z"
Entry not found
ilancml/Gamebox_v0
ilancml
"2024-04-05T15:29:15Z"
0
0
null
[ "license:apache-2.0", "region:us" ]
null
"2024-04-05T15:11:02Z"
--- license: apache-2.0 ---
crrodrvi/blindness-image-classification
crrodrvi
"2024-04-05T15:28:48Z"
0
0
fastai
[ "fastai", "region:us" ]
null
"2024-04-05T15:15:46Z"
---
tags:
- fastai
---

# Amazing!

🥳 Congratulations on hosting your fastai model on the Hugging Face Hub!

# Some next steps

1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))!

2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)).

3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)!

Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card.

---

# Model card

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
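Since the card documents no usage, here is a minimal sketch of how a fastai model hosted on the Hub is typically loaded, assuming the repository is public and was pushed with `push_to_hub_fastai`:

```python
# Minimal sketch, assuming the repo is public and contains a fastai Learner
# exported via push_to_hub_fastai; the card itself gives no usage details.
from huggingface_hub import from_pretrained_fastai

learner = from_pretrained_fastai("crrodrvi/blindness-image-classification")
# preds = learner.predict(image)  # fastai's usual single-item inference call
```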
casque/Queen_Marika-v1.2.1-lc
casque
"2024-04-05T15:29:30Z"
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
"2024-04-05T15:15:46Z"
--- license: creativeml-openrail-m ---
jainswati02/SJTestModel
jainswati02
"2024-04-05T15:16:32Z"
0
0
null
[ "license:other", "region:us" ]
null
"2024-04-05T15:16:31Z"
---
license: other
license_name: sj-test-licence
license_link: LICENSE
---
IvanD2002/falcon-test
IvanD2002
"2024-04-05T15:18:43Z"
0
0
null
[ "license:apache-2.0", "region:us" ]
null
"2024-04-05T15:18:41Z"
--- license: apache-2.0 ---
yehyouzeng/finetuning-sentiment-model-3000-samples
yehyouzeng
"2024-04-05T20:58:56Z"
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2024-04-05T15:19:52Z"
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# finetuning-sentiment-model-3000-samples

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3110
- Accuracy: 0.89
- F1: 0.8911

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2

### Training results

### Framework versions

- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
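A minimal usage sketch for this checkpoint, assuming the repository is public; the label names it returns depend on the (undocumented) training setup:

```python
# Minimal sketch: load the fine-tuned DistilBERT sentiment classifier with the
# transformers pipeline API. Label names are not documented in the card.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="yehyouzeng/finetuning-sentiment-model-3000-samples",
)
print(classifier("This movie was surprisingly good."))
```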
axel-rda/ARIA-70B-V3-bnb-4bit-nf4-bfloat16-qlora-sft-qlora-sft-ft_num-2-adapters
axel-rda
"2024-04-05T15:21:43Z"
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2024-04-05T15:20:29Z"
---
library_name: transformers
tags: []
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
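The card itself is blank, but the repository name suggests QLoRA adapters meant for a base model loaded in 4-bit NF4 with a bfloat16 compute dtype. The sketch below illustrates only that loading setup; the base model id is a hypothetical placeholder, since the card does not name it:

```python
# Minimal sketch of an NF4/bfloat16 4-bit QLoRA loading setup, inferred from
# the repository name. "<base-model-id>" is a hypothetical placeholder.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # load base weights in 4-bit
    bnb_4bit_quant_type="nf4",              # NF4 quantization, per the repo name
    bnb_4bit_compute_dtype=torch.bfloat16,  # bfloat16 compute, per the repo name
)

base_model = AutoModelForCausalLM.from_pretrained(
    "<base-model-id>",  # hypothetical placeholder
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(
    base_model,
    "axel-rda/ARIA-70B-V3-bnb-4bit-nf4-bfloat16-qlora-sft-qlora-sft-ft_num-2-adapters",
)
```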
Ethan615/gemma2bft
Ethan615
"2024-04-06T02:55:35Z"
0
0
peft
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:google/gemma-2b-it", "base_model:adapter:google/gemma-2b-it", "license:gemma", "region:us" ]
null
"2024-04-05T15:31:11Z"
---
license: gemma
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: google/gemma-2b-it
model-index:
- name: gemma2bft
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# gemma2bft

This model is a fine-tuned version of [google/gemma-2b-it](https://huggingface.co/google/gemma-2b-it) on an unknown dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 100
- mixed_precision_training: Native AMP

### Training results

### Framework versions

- PEFT 0.10.1.dev0
- Transformers 4.39.3
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
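A minimal usage sketch, assuming this repository holds the PEFT adapter described above; `google/gemma-2b-it` is gated, so an authenticated Hugging Face login is required:

```python
# Minimal sketch: attach the gemma2bft PEFT adapter to the base model named in
# the card. Assumes access to the gated google/gemma-2b-it checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "google/gemma-2b-it"  # from the card's base_model field

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id)

# Load the trained adapter weights from this repository onto the base model.
model = PeftModel.from_pretrained(base_model, "Ethan615/gemma2bft")
```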
klea7/HumanOrNot
klea7
"2024-04-05T15:32:38Z"
0
0
null
[ "region:us" ]
null
"2024-04-05T15:32:34Z"
Entry not found
fantor/test
fantor
"2024-04-05T15:34:40Z"
0
0
null
[ "license:afl-3.0", "region:us" ]
null
"2024-04-05T15:34:37Z"
--- license: afl-3.0 ---
GeronimoYeah/MonicaFranco
GeronimoYeah
"2024-04-05T15:39:35Z"
0
0
null
[ "region:us" ]
null
"2024-04-05T15:35:28Z"
Entry not found
muaota/OysteinAarseth
muaota
"2024-04-05T15:41:47Z"
0
0
null
[ "region:us" ]
null
"2024-04-05T15:41:41Z"
Entry not found
madroid/Qwen1.5-0.5B-Chat-4bit
madroid
"2024-04-05T15:42:33Z"
0
0
null
[ "region:us" ]
null
"2024-04-05T15:42:30Z"
Entry not found
Rizyukaito21111/Suisei
Rizyukaito21111
"2024-04-05T21:03:12Z"
0
0
null
[ "region:us" ]
null
"2024-04-05T15:42:50Z"
Entry not found
SyedShadab/VTO_demo_dresses
SyedShadab
"2024-04-05T15:46:32Z"
0
0
null
[ "region:us" ]
null
"2024-04-05T15:44:13Z"
Entry not found
oneandahalfcats/morecats
oneandahalfcats
"2024-04-05T15:45:12Z"
0
0
null
[ "region:us" ]
null
"2024-04-05T15:45:05Z"
Entry not found