Dataset schema (one record per model repository, each followed by its raw `card` text):

| Field | Type | Observed range / values |
|:--------------|:---------------|:------------------------|
| modelId | string | length 5-122 |
| author | string | length 2-42 |
| last_modified | unknown | - |
| downloads | int64 | 0-75.3M |
| likes | int64 | 0-10.6k |
| library_name | string (class) | 189 values |
| tags | sequence | length 1-1.84k |
| pipeline_tag | string (class) | 48 values |
| createdAt | unknown | - |
| card | string | length 1-901k |
modelId: wing946/tasca7 | author: wing946 | last_modified: "2024-04-14T18:43:53Z" | downloads: 0 | likes: 0 | library_name: null | tags: [ "license:mit", "region:us" ] | pipeline_tag: null | createdAt: "2024-04-14T17:48:09Z"
--- license: mit ---
modelId: zzttbrdd/sn6_09m | author: zzttbrdd | last_modified: "2024-04-14T17:53:17Z" | downloads: 0 | likes: 0 | library_name: transformers | tags: [ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ] | pipeline_tag: text-generation | createdAt: "2024-04-14T17:48:46Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
modelId: abbenedek/whisper-base-finetuned2 | author: abbenedek | last_modified: "2024-04-14T19:28:40Z" | downloads: 0 | likes: 0 | library_name: transformers | tags: [ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "base_model:openai/whisper-base", "license:apache-2.0", "endpoints_compatible", "region:us" ] | pipeline_tag: automatic-speech-recognition | createdAt: "2024-04-14T17:48:51Z"
--- license: apache-2.0 tags: - generated_from_trainer base_model: openai/whisper-base metrics: - wer model-index: - name: whisper-base-finetuned results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper-base-finetuned This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0089 - Wer: 1.125 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-06 - train_batch_size: 16 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 5 - training_steps: 100 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.4641 | 0.2 | 10 | 0.2328 | 7.1250 | | 0.1312 | 0.4 | 20 | 0.0801 | 4.0 | | 0.0477 | 0.6 | 30 | 0.0390 | 2.25 | | 0.0213 | 0.8 | 40 | 0.0232 | 1.875 | | 0.0101 | 1.0 | 50 | 0.0157 | 1.875 | | 0.0073 | 1.2 | 60 | 0.0126 | 1.25 | | 0.0056 | 1.4 | 70 | 0.0109 | 1.25 | | 0.005 | 1.6 | 80 | 0.0096 | 1.125 | | 0.0048 | 1.8 | 90 | 0.0091 | 1.125 | | 0.0049 | 2.0 | 100 | 0.0089 | 1.125 | ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.2+cu121 - Datasets 2.14.5 - Tokenizers 0.15.2
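A minimal inference sketch for the checkpoint above, assuming the repository ships the full Whisper processor and weights; the audio filename is a placeholder:

```python
from transformers import pipeline

# Load the fine-tuned checkpoint named in the card above.
asr = pipeline("automatic-speech-recognition", model="abbenedek/whisper-base-finetuned2")

# "sample.wav" is a placeholder; any mono 16 kHz audio file works.
print(asr("sample.wav")["text"])
```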
modelId: farhananis005/mistral7b__roneneldan-TinyStories7 | author: farhananis005 | last_modified: "2024-04-14T17:52:49Z" | downloads: 0 | likes: 0 | library_name: transformers | tags: [ "transformers", "gguf", "mistral", "text-generation-inference", "unsloth", "en", "base_model:unsloth/mistral-7b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ] | pipeline_tag: null | createdAt: "2024-04-14T17:50:40Z"
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - mistral - gguf base_model: unsloth/mistral-7b-bnb-4bit --- # Uploaded model - **Developed by:** farhananis005 - **License:** apache-2.0 - **Finetuned from model :** unsloth/mistral-7b-bnb-4bit This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
modelId: 2tucupkrockk00/vg | author: 2tucupkrockk00 | last_modified: "2024-04-14T17:51:19Z" | downloads: 0 | likes: 0 | library_name: null | tags: [ "license:apache-2.0", "region:us" ] | pipeline_tag: null | createdAt: "2024-04-14T17:51:18Z"
--- license: apache-2.0 ---
modelId: Min-Jaewon/pokemon-lora | author: Min-Jaewon | last_modified: "2024-04-14T17:51:44Z" | downloads: 0 | likes: 0 | library_name: null | tags: [ "region:us" ] | pipeline_tag: null | createdAt: "2024-04-14T17:51:43Z"
Entry not found
modelId: 2tucupkrockk00/bbb | author: 2tucupkrockk00 | last_modified: "2024-04-14T17:52:10Z" | downloads: 0 | likes: 0 | library_name: null | tags: [ "license:openrail", "region:us" ] | pipeline_tag: null | createdAt: "2024-04-14T17:52:09Z"
--- license: openrail ---
modelId: JeremiahZ/reddit_lda | author: JeremiahZ | last_modified: "2024-04-14T17:56:49Z" | downloads: 0 | likes: 0 | library_name: null | tags: [ "region:us" ] | pipeline_tag: null | createdAt: "2024-04-14T17:52:40Z"
Entry not found
modelId: mradermacher/Phind-CodeLlama-34B-v2-GGUF | author: mradermacher | last_modified: "2024-04-15T00:08:38Z" | downloads: 0 | likes: 0 | library_name: transformers | tags: [ "transformers", "gguf", "code llama", "en", "license:llama2", "endpoints_compatible", "region:us" ] | pipeline_tag: null | createdAt: "2024-04-14T17:52:42Z"
--- exported_from: Phind/Phind-CodeLlama-34B-v2 language: - en library_name: transformers license: llama2 quantized_by: mradermacher tags: - code llama --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/Phind/Phind-CodeLlama-34B-v2 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Phind-CodeLlama-34B-v2-GGUF/resolve/main/Phind-CodeLlama-34B-v2.Q2_K.gguf) | Q2_K | 12.6 | | | [GGUF](https://huggingface.co/mradermacher/Phind-CodeLlama-34B-v2-GGUF/resolve/main/Phind-CodeLlama-34B-v2.IQ3_XS.gguf) | IQ3_XS | 14.0 | | | [GGUF](https://huggingface.co/mradermacher/Phind-CodeLlama-34B-v2-GGUF/resolve/main/Phind-CodeLlama-34B-v2.Q3_K_S.gguf) | Q3_K_S | 14.7 | | | [GGUF](https://huggingface.co/mradermacher/Phind-CodeLlama-34B-v2-GGUF/resolve/main/Phind-CodeLlama-34B-v2.IQ3_S.gguf) | IQ3_S | 14.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Phind-CodeLlama-34B-v2-GGUF/resolve/main/Phind-CodeLlama-34B-v2.IQ3_M.gguf) | IQ3_M | 15.3 | | | [GGUF](https://huggingface.co/mradermacher/Phind-CodeLlama-34B-v2-GGUF/resolve/main/Phind-CodeLlama-34B-v2.Q3_K_M.gguf) | Q3_K_M | 16.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Phind-CodeLlama-34B-v2-GGUF/resolve/main/Phind-CodeLlama-34B-v2.Q3_K_L.gguf) | Q3_K_L | 17.9 | | | [GGUF](https://huggingface.co/mradermacher/Phind-CodeLlama-34B-v2-GGUF/resolve/main/Phind-CodeLlama-34B-v2.IQ4_XS.gguf) | IQ4_XS | 18.3 | | | [GGUF](https://huggingface.co/mradermacher/Phind-CodeLlama-34B-v2-GGUF/resolve/main/Phind-CodeLlama-34B-v2.Q4_K_S.gguf) | Q4_K_S | 19.3 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Phind-CodeLlama-34B-v2-GGUF/resolve/main/Phind-CodeLlama-34B-v2.Q4_K_M.gguf) | Q4_K_M | 20.3 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Phind-CodeLlama-34B-v2-GGUF/resolve/main/Phind-CodeLlama-34B-v2.Q5_K_S.gguf) | Q5_K_S | 23.3 | | | [GGUF](https://huggingface.co/mradermacher/Phind-CodeLlama-34B-v2-GGUF/resolve/main/Phind-CodeLlama-34B-v2.Q5_K_M.gguf) | Q5_K_M | 23.9 | | | [GGUF](https://huggingface.co/mradermacher/Phind-CodeLlama-34B-v2-GGUF/resolve/main/Phind-CodeLlama-34B-v2.Q6_K.gguf) | Q6_K | 27.8 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Phind-CodeLlama-34B-v2-GGUF/resolve/main/Phind-CodeLlama-34B-v2.Q8_0.gguf) | Q8_0 | 36.0 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free 
time. <!-- end -->
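A hedged sketch of running one of the quants listed above from Python via llama-cpp-python, as an alternative to the llama.cpp CLI the card points to; the quant choice and prompt are illustrative:

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

# Fetch the Q4_K_M file from the table above (~19.3 GB on disk).
path = hf_hub_download(
    repo_id="mradermacher/Phind-CodeLlama-34B-v2-GGUF",
    filename="Phind-CodeLlama-34B-v2.Q4_K_M.gguf",
)

llm = Llama(model_path=path, n_ctx=2048)
out = llm("Write a Python function that reverses a string.", max_tokens=128)
print(out["choices"][0]["text"])
```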
modelId: Weni/WeniGPT-Agents-Mistral-1.0.7-SFT-AWQ | author: Weni | last_modified: "2024-04-14T18:04:42Z" | downloads: 0 | likes: 0 | library_name: transformers | tags: [ "transformers", "safetensors", "mistral", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "region:us" ] | pipeline_tag: text-generation | createdAt: "2024-04-14T17:53:13Z"
Entry not found
modelId: ShenaoZ/0.001_idpo_same_scratch_iter_3 | author: ShenaoZ | last_modified: "2024-04-14T21:24:31Z" | downloads: 0 | likes: 0 | library_name: transformers | tags: [ "transformers", "safetensors", "mistral", "text-generation", "alignment-handbook", "generated_from_trainer", "trl", "dpo", "conversational", "dataset:updated", "dataset:original", "base_model:HuggingFaceH4/mistral-7b-sft-beta", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ] | pipeline_tag: text-generation | createdAt: "2024-04-14T17:53:31Z"
--- license: mit base_model: HuggingFaceH4/mistral-7b-sft-beta tags: - alignment-handbook - generated_from_trainer - trl - dpo - generated_from_trainer datasets: - updated - original model-index: - name: 0.001_idpo_same_scratch_iter_3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 0.001_idpo_same_scratch_iter_3 This model is a fine-tuned version of [HuggingFaceH4/mistral-7b-sft-beta](https://huggingface.co/HuggingFaceH4/mistral-7b-sft-beta) on the updated and the original datasets. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-07 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 2 - total_train_batch_size: 128 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.15.2
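A minimal generation sketch for the DPO checkpoint above; it assumes the tokenizer inherits a chat template from the mistral-7b-sft-beta base, and the dtype/device settings are illustrative:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "ShenaoZ/0.001_idpo_same_scratch_iter_3"
tok = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16, device_map="auto")

# Chat template assumed to be inherited from the SFT base model.
messages = [{"role": "user", "content": "In one paragraph, what does DPO training change?"}]
inputs = tok.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
print(tok.decode(model.generate(inputs, max_new_tokens=128)[0], skip_special_tokens=True))
```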
modelId: yuufong/vi_en_envit5-base_docs_news_train | author: yuufong | last_modified: "2024-04-14T18:47:59Z" | downloads: 0 | likes: 0 | library_name: transformers | tags: [ "transformers", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:VietAI/envit5-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ] | pipeline_tag: text2text-generation | createdAt: "2024-04-14T17:54:05Z"
--- license: mit base_model: VietAI/envit5-base tags: - generated_from_trainer model-index: - name: vi_en_envit5-base_docs_news_train results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vi_en_envit5-base_docs_news_train This model is a fine-tuned version of [VietAI/envit5-base](https://huggingface.co/VietAI/envit5-base) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Framework versions - Transformers 4.37.2 - Pytorch 1.12.1+cu116 - Datasets 2.18.0 - Tokenizers 0.15.1
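A hedged translation sketch for the checkpoint above; the "vi: "/"en: " input prefix is a convention of the VietAI/envit5 base and may not match this fine-tune's exact training format:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

repo = "yuufong/vi_en_envit5-base_docs_news_train"
tok = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSeq2SeqLM.from_pretrained(repo)

# Language prefix assumed from the envit5 base model's convention.
batch = tok("vi: Hôm nay trời đẹp.", return_tensors="pt")
out = model.generate(**batch, max_new_tokens=64)
print(tok.decode(out[0], skip_special_tokens=True))
```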
modelId: mkeohane01/mistral-instruct-590 | author: mkeohane01 | last_modified: "2024-04-15T02:06:01Z" | downloads: 0 | likes: 0 | library_name: transformers | tags: [ "transformers", "safetensors", "mistral", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "8-bit", "region:us" ] | pipeline_tag: text-generation | createdAt: "2024-04-14T17:54:11Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
modelId: aliciiavs/sentiment-analysis-whatsapp2 | author: aliciiavs | last_modified: "2024-04-14T17:59:22Z" | downloads: 0 | likes: 0 | library_name: transformers | tags: [ "transformers", "tensorboard", "safetensors", "deberta-v2", "text-classification", "generated_from_trainer", "base_model:microsoft/deberta-v3-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us", "has_space" ] | pipeline_tag: text-classification | createdAt: "2024-04-14T17:54:15Z"
--- license: mit base_model: microsoft/deberta-v3-base tags: - generated_from_trainer model-index: - name: sentiment-analysis-whatsapp2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # sentiment-analysis-whatsapp2 This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on an unknown dataset. It achieves the following results on the evaluation set: - eval_loss: 0.1871 - eval_accuracy: {'accuracy': 0.9445} - eval_f1_macro: 0.9442 - eval_runtime: 4.3692 - eval_samples_per_second: 457.751 - eval_steps_per_second: 7.324 - epoch: 2.0 - step: 500 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Framework versions - Transformers 4.39.3 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
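A minimal classification sketch for the checkpoint above; the label names it returns depend on the id2label mapping saved with the model:

```python
from transformers import pipeline

clf = pipeline("text-classification", model="aliciiavs/sentiment-analysis-whatsapp2")

# Example WhatsApp-style message; output labels come from the saved config.
print(clf("ok see you at 8, can't wait!"))
```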
modelId: glenn2/whisper-small-b2 | author: glenn2 | last_modified: "2024-04-15T00:28:57Z" | downloads: 0 | likes: 0 | library_name: transformers | tags: [ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "hi", "dataset:mozilla-foundation/common_voice_11_0", "base_model:openai/whisper-small", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ] | pipeline_tag: automatic-speech-recognition | createdAt: "2024-04-14T17:56:21Z"
--- language: - hi license: apache-2.0 base_model: openai/whisper-small tags: - generated_from_trainer datasets: - mozilla-foundation/common_voice_11_0 metrics: - wer model-index: - name: Whisper Small En 3 results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice 3.0 type: mozilla-foundation/common_voice_11_0 args: 'config: hi, split: test' metrics: - name: Wer type: wer value: 103.04743574321344 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Small En 3 This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 3.0 dataset. It achieves the following results on the evaluation set: - Loss: 0.5263 - Wer: 103.0474 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 4000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.7717 | 1.34 | 1000 | 0.3452 | 49.7815 | | 0.4884 | 2.67 | 2000 | 0.3808 | 61.0810 | | 0.4106 | 4.01 | 3000 | 0.4805 | 93.7648 | | 0.2197 | 5.34 | 4000 | 0.5263 | 103.0474 | ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
modelId: MikkelWK/whisper-medium-Danish15000_eval50 | author: MikkelWK | last_modified: "2024-04-14T17:56:33Z" | downloads: 0 | likes: 0 | library_name: null | tags: [ "region:us" ] | pipeline_tag: null | createdAt: "2024-04-14T17:56:33Z"
Invalid username or password.
modelId: IamYash/VA-LLM-h9kdku8n | author: IamYash | last_modified: "2024-04-14T17:56:50Z" | downloads: 0 | likes: 0 | library_name: null | tags: [ "region:us" ] | pipeline_tag: null | createdAt: "2024-04-14T17:56:49Z"
Entry not found
modelId: gotzmann/v0.8w-adapter | author: gotzmann | last_modified: "2024-04-14T18:02:56Z" | downloads: 0 | likes: 0 | library_name: null | tags: [ "safetensors", "region:us" ] | pipeline_tag: null | createdAt: "2024-04-14T17:57:18Z"
Entry not found
modelId: horyekhunley/squad_qa_model | author: horyekhunley | last_modified: "2024-04-14T18:25:30Z" | downloads: 0 | likes: 0 | library_name: transformers | tags: [ "transformers", "tensorboard", "safetensors", "distilbert", "question-answering", "generated_from_trainer", "endpoints_compatible", "region:us" ] | pipeline_tag: question-answering | createdAt: "2024-04-14T17:57:50Z"
--- tags: - generated_from_trainer model-index: - name: squad_qa_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # squad_qa_model This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.5874 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 250 | 1.5047 | | 1.1276 | 2.0 | 500 | 1.5379 | | 1.1276 | 3.0 | 750 | 1.5874 | ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
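A minimal extractive-QA sketch for the checkpoint above; the question and context are illustrative:

```python
from transformers import pipeline

qa = pipeline("question-answering", model="horyekhunley/squad_qa_model")

result = qa(
    question="What does the model return?",
    context="A SQuAD-style question-answering model returns the answer span it finds in a context passage.",
)
print(result["answer"], result["score"])
```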
modelId: Vishnou/lcm-lora-sdv1-5 | author: Vishnou | last_modified: "2024-04-14T17:59:01Z" | downloads: 0 | likes: 0 | library_name: diffusers | tags: [ "diffusers", "safetensors", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ] | pipeline_tag: null | createdAt: "2024-04-14T17:58:07Z"
Entry not found
modelId: glenn2/whisper-small-t | author: glenn2 | last_modified: "2024-04-14T17:58:23Z" | downloads: 0 | likes: 0 | library_name: transformers | tags: [ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ] | pipeline_tag: null | createdAt: "2024-04-14T17:58:21Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
modelId: Mirrory/asd | author: Mirrory | last_modified: "2024-04-14T17:58:36Z" | downloads: 0 | likes: 0 | library_name: null | tags: [ "region:us" ] | pipeline_tag: null | createdAt: "2024-04-14T17:58:35Z"
Entry not found
modelId: nlp-group/sindi-bert | author: nlp-group | last_modified: "2024-04-14T17:59:00Z" | downloads: 0 | likes: 0 | library_name: null | tags: [ "region:us" ] | pipeline_tag: null | createdAt: "2024-04-14T17:59:00Z"
Entry not found
modelId: cleopatro/Entity_Rec | author: cleopatro | last_modified: "2024-04-14T22:47:48Z" | downloads: 0 | likes: 0 | library_name: transformers | tags: [ "transformers", "safetensors", "distilbert", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ] | pipeline_tag: token-classification | createdAt: "2024-04-14T17:59:13Z"
Entry not found
modelId: rmrafailov/TLDR-Pythia6.9B-SFT | author: rmrafailov | last_modified: "2024-04-14T18:07:05Z" | downloads: 0 | likes: 0 | library_name: transformers | tags: [ "transformers", "safetensors", "gpt_neox", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ] | pipeline_tag: text-generation | createdAt: "2024-04-14T18:00:53Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
modelId: ChafikAiEng/multilingual-e5-base-finetuned-studytours | author: ChafikAiEng | last_modified: "2024-04-14T18:03:48Z" | downloads: 0 | likes: 0 | library_name: sentence-transformers | tags: [ "sentence-transformers", "safetensors", "xlm-roberta", "feature-extraction", "sentence-similarity", "endpoints_compatible", "region:us" ] | pipeline_tag: sentence-similarity | createdAt: "2024-04-14T18:01:36Z"
--- library_name: sentence-transformers pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity --- # ChafikAiEng/multilingual-e5-base-finetuned-studytours This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('ChafikAiEng/multilingual-e5-base-finetuned-studytours') embeddings = model.encode(sentences) print(embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=ChafikAiEng/multilingual-e5-base-finetuned-studytours) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 45 with parameters: ``` {'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.TripletLoss.TripletLoss` with parameters: ``` {'distance_metric': 'TripletDistanceMetric.EUCLIDEAN', 'triplet_margin': 5} ``` Parameters of the fit()-Method: ``` { "epochs": 10, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 4, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
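As a follow-up to the usage snippet in the card above, a short sketch of scoring sentence similarity with the same model; `util.cos_sim` is the standard sentence-transformers helper:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("ChafikAiEng/multilingual-e5-base-finetuned-studytours")
emb = model.encode(["a study tour in Rome", "an academic trip to Italy"], convert_to_tensor=True)

# The model ends in a Normalize() layer, so cosine similarity equals a dot product here.
print(util.cos_sim(emb[0], emb[1]))
```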
modelId: Nikhil058/ppo-pyramidtraining | author: Nikhil058 | last_modified: "2024-04-14T18:02:30Z" | downloads: 0 | likes: 0 | library_name: ml-agents | tags: [ "ml-agents", "tensorboard", "onnx", "Pyramids", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Pyramids", "region:us" ] | pipeline_tag: reinforcement-learning | createdAt: "2024-04-14T18:02:02Z"
--- library_name: ml-agents tags: - Pyramids - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Pyramids --- # **ppo** Agent playing **Pyramids** This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of ML-Agents' official environments, go to https://huggingface.co/unity 2. Find your model_id: Nikhil058/ppo-pyramidtraining 3. Select your *.nn or *.onnx file 4. Click on Watch the agent play 👀
modelId: Koko32/majima | author: Koko32 | last_modified: "2024-04-14T18:03:07Z" | downloads: 0 | likes: 0 | library_name: null | tags: [ "region:us" ] | pipeline_tag: null | createdAt: "2024-04-14T18:03:07Z"
Entry not found
modelId: Aneesha18/FineTuned | author: Aneesha18 | last_modified: "2024-04-14T18:05:00Z" | downloads: 0 | likes: 0 | library_name: null | tags: [ "license:mit", "region:us" ] | pipeline_tag: null | createdAt: "2024-04-14T18:04:59Z"
--- license: mit ---
modelId: TheNetherWatcher/results | author: TheNetherWatcher | last_modified: "2024-04-14T18:05:15Z" | downloads: 0 | likes: 0 | library_name: null | tags: [ "region:us" ] | pipeline_tag: null | createdAt: "2024-04-14T18:05:13Z"
Entry not found
modelId: amine-01/LunarLander-v2 | author: amine-01 | last_modified: "2024-04-14T18:05:52Z" | downloads: 0 | likes: 0 | library_name: stable-baselines3 | tags: [ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ] | pipeline_tag: reinforcement-learning | createdAt: "2024-04-14T18:05:20Z"
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 277.02 +/- 16.18 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) ```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# NOTE: the checkpoint filename inside the repo is an assumption; adjust it to the actual file.
checkpoint = load_from_hub("amine-01/LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
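A hedged follow-up that evaluates the downloaded policy locally, repeating the load from the snippet above so it is self-contained; the checkpoint filename is still an assumption, and Gymnasium with Box2D must be installed:

```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

checkpoint = load_from_hub("amine-01/LunarLander-v2", "ppo-LunarLander-v2.zip")  # filename assumed
model = PPO.load(checkpoint)

# Roll out 10 episodes and report the mean return, as in the card's model-index metric.
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```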
modelId: oneandahalfcats/30428 | author: oneandahalfcats | last_modified: "2024-04-14T18:06:23Z" | downloads: 0 | likes: 0 | library_name: null | tags: [ "region:us" ] | pipeline_tag: null | createdAt: "2024-04-14T18:06:18Z"
Entry not found
modelId: codegood/gemma-2b-it-Q4_K_M-GGUF | author: codegood | last_modified: "2024-04-14T18:06:31Z" | downloads: 0 | likes: 0 | library_name: transformers | tags: [ "transformers", "gguf", "llama-cpp", "gguf-my-repo", "license:gemma", "endpoints_compatible", "region:us" ] | pipeline_tag: null | createdAt: "2024-04-14T18:06:26Z"
--- license: gemma library_name: transformers tags: - llama-cpp - gguf-my-repo widget: - messages: - role: user content: How does the brain work? inference: parameters: max_new_tokens: 200 extra_gated_heading: Access Gemma on Hugging Face extra_gated_prompt: To access Gemma on Hugging Face, you’re required to review and agree to Google’s usage license. To do this, please ensure you’re logged-in to Hugging Face and click below. Requests are processed immediately. extra_gated_button_content: Acknowledge license --- # codegood/gemma-2b-it-Q4_K_M-GGUF This model was converted to GGUF format from [`google/gemma-2b-it`](https://huggingface.co/google/gemma-2b-it) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/google/gemma-2b-it) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew: ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo codegood/gemma-2b-it-Q4_K_M-GGUF --model gemma-2b-it.Q4_K_M.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo codegood/gemma-2b-it-Q4_K_M-GGUF --model gemma-2b-it.Q4_K_M.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m gemma-2b-it.Q4_K_M.gguf -n 128 ```
modelId: kokoso/BielikBeagle-slerp-7B-Inst-v0.1-Q8_0-GGUF | author: kokoso | last_modified: "2024-04-14T18:08:24Z" | downloads: 0 | likes: 0 | library_name: transformers | tags: [ "transformers", "gguf", "mergekit", "merge", "llama-cpp", "gguf-my-repo", "base_model:mlabonne/NeuralBeagle14-7B", "base_model:speakleash/Bielik-7B-Instruct-v0.1", "endpoints_compatible", "region:us" ] | pipeline_tag: null | createdAt: "2024-04-14T18:06:59Z"
--- library_name: transformers tags: - mergekit - merge - llama-cpp - gguf-my-repo base_model: - mlabonne/NeuralBeagle14-7B - speakleash/Bielik-7B-Instruct-v0.1 --- # kokoso/BielikBeagle-slerp-7B-Inst-v0.1-Q8_0-GGUF This model was converted to GGUF format from [`kokoso/BielikBeagle-slerp-7B-Inst-v0.1`](https://huggingface.co/kokoso/BielikBeagle-slerp-7B-Inst-v0.1) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/kokoso/BielikBeagle-slerp-7B-Inst-v0.1) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew: ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo kokoso/BielikBeagle-slerp-7B-Inst-v0.1-Q8_0-GGUF --model bielikbeagle-slerp-7b-inst-v0.1.Q8_0.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo kokoso/BielikBeagle-slerp-7B-Inst-v0.1-Q8_0-GGUF --model bielikbeagle-slerp-7b-inst-v0.1.Q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m bielikbeagle-slerp-7b-inst-v0.1.Q8_0.gguf -n 128 ```
modelId: IamYash/VA-LLM-72911tog | author: IamYash | last_modified: "2024-04-14T19:30:55Z" | downloads: 0 | likes: 0 | library_name: null | tags: [ "region:us" ] | pipeline_tag: null | createdAt: "2024-04-14T18:07:05Z"
Entry not found
modelId: otaviomayrink/DooyeweerdandI | author: otaviomayrink | last_modified: "2024-04-14T18:08:33Z" | downloads: 0 | likes: 0 | library_name: null | tags: [ "license:ecl-2.0", "region:us" ] | pipeline_tag: null | createdAt: "2024-04-14T18:07:31Z"
--- license: ecl-2.0 ---
modelId: maryamufti18/results | author: maryamufti18 | last_modified: "2024-04-14T18:07:33Z" | downloads: 0 | likes: 0 | library_name: null | tags: [ "region:us" ] | pipeline_tag: null | createdAt: "2024-04-14T18:07:32Z"
Entry not found
modelId: sindibejko/results | author: sindibejko | last_modified: "2024-04-14T18:31:57Z" | downloads: 0 | likes: 0 | library_name: transformers | tags: [ "transformers", "safetensors", "bert", "question-answering", "generated_from_trainer", "base_model:bert-base-uncased", "license:apache-2.0", "endpoints_compatible", "region:us" ] | pipeline_tag: question-answering | createdAt: "2024-04-14T18:09:01Z"
--- license: apache-2.0 base_model: bert-base-uncased tags: - generated_from_trainer model-index: - name: results results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # results This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0000 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 0.0 | 1.0 | 15330 | 0.0000 | | 0.0 | 2.0 | 30660 | 0.0000 | ### Framework versions - Transformers 4.39.3 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
modelId: jul-fls/autotrain-fbyte-l9vfz | author: jul-fls | last_modified: "2024-04-14T18:09:50Z" | downloads: 0 | likes: 0 | library_name: null | tags: [ "joblib", "autotrain", "tabular", "regression", "tabular-regression", "dataset:autotrain-fbyte-l9vfz/autotrain-data", "region:us" ] | pipeline_tag: tabular-regression | createdAt: "2024-04-14T18:09:46Z"
--- tags: - autotrain - tabular - regression - tabular-regression datasets: - autotrain-fbyte-l9vfz/autotrain-data --- # Model Trained Using AutoTrain - Problem type: Tabular regression ## Validation Metrics - r2: 0.33913886288678097 - mse: 0.13878083879377598 - mae: 0.2991213083267212 - rmse: 0.37253300363025016 - rmsle: 0.15062628429771513 - loss: 0.37253300363025016 ## Best Params - learning_rate: 0.017092100292696658 - reg_lambda: 2.08790995148619e-05 - reg_alpha: 5.763917500537152e-06 - subsample: 0.38707603768089427 - colsample_bytree: 0.547982260603956 - max_depth: 1 - early_stopping_rounds: 102 - n_estimators: 20000 - eval_metric: rmse ## Usage ```python
import json

import joblib
import pandas as pd

# Load the exported model and its feature configuration.
model = joblib.load('model.joblib')
config = json.load(open('config.json'))
features = config['features']

# Load your own data; 'data.csv' must contain the training feature columns.
data = pd.read_csv("data.csv")
data = data[features]

predictions = model.predict(data)  # or model.predict_proba(data)
# Predictions can be converted to original labels using label_encoders.pkl.
```
modelId: kaidens/WHaK_AIv1 | author: kaidens | last_modified: "2024-04-15T01:02:25Z" | downloads: 0 | likes: 0 | library_name: peft | tags: [ "peft", "pytorch", "safetensors", "mistral", "arxiv:1910.09700", "base_model:mistralai/Mistral-7B-Instruct-v0.2", "4-bit", "region:us" ] | pipeline_tag: null | createdAt: "2024-04-14T18:11:45Z"
--- library_name: peft base_model: mistralai/Mistral-7B-Instruct-v0.2 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.10.0
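The metadata for this record (library peft, base_model mistralai/Mistral-7B-Instruct-v0.2, 4-bit) suggests a LoRA-style adapter; a hedged loading sketch, assuming access to the gated base model and a bitsandbytes-capable GPU:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

base_id = "mistralai/Mistral-7B-Instruct-v0.2"  # from the card's base_model field
bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)

tok = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, quantization_config=bnb, device_map="auto")

# Attach the adapter weights from this repository on top of the 4-bit base.
model = PeftModel.from_pretrained(base, "kaidens/WHaK_AIv1")
```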
handler-bird/movie_genre_multi_classification
handler-bird
"2024-04-14T18:52:07Z"
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2024-04-14T18:12:53Z"
Entry not found
itay-nakash/model_764499efa2
itay-nakash
"2024-04-14T19:03:11Z"
0
0
transformers
[ "transformers", "mistral", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-04-14T18:15:53Z"
Entry not found
RayRuiboChen/LLaVA-Self-Filter-Stage2-CLIP
RayRuiboChen
"2024-04-14T19:08:54Z"
0
0
transformers
[ "transformers", "pytorch", "llava", "text-generation", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2024-04-14T18:16:15Z"
--- license: apache-2.0 ---
RayRuiboChen/LLaVA-Self-Filter-Stage2-Scores
RayRuiboChen
"2024-04-14T19:22:44Z"
0
0
transformers
[ "transformers", "pytorch", "llava", "text-generation", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2024-04-14T18:16:27Z"
--- license: apache-2.0 ---
Noureddinesa/Output_LayoutLMv3_v8
Noureddinesa
"2024-04-14T18:45:30Z"
0
0
transformers
[ "transformers", "pytorch", "tensorboard", "layoutlmv3", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
"2024-04-14T18:16:59Z"
Entry not found
NassimB/mistral-7b-hf-platypus_vxxiii-chat-corrected
NassimB
"2024-04-14T18:26:37Z"
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2024-04-14T18:18:45Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
RayRuiboChen/LLaVA-Self-Filter-Stage1-CLIP
RayRuiboChen
"2024-04-14T21:46:51Z"
0
0
transformers
[ "transformers", "pytorch", "llava", "text-generation", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2024-04-14T18:19:08Z"
--- license: apache-2.0 ---
reboot94/mistral_7b_payslip
reboot94
"2024-04-14T18:20:13Z"
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2024-04-14T18:19:47Z"
Invalid username or password.
lancer59/gemma7bit_adapter_490
lancer59
"2024-04-14T18:20:19Z"
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "gemma", "trl", "en", "base_model:unsloth/gemma-7b-it-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-04-14T18:19:55Z"
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - gemma - trl base_model: unsloth/gemma-7b-it-bnb-4bit --- # Uploaded model - **Developed by:** lancer59 - **License:** apache-2.0 - **Finetuned from model:** unsloth/gemma-7b-it-bnb-4bit This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
lancer59/gemma7bit_adapter_490_tokenizer
lancer59
"2024-04-14T18:20:29Z"
0
0
transformers
[ "transformers", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2024-04-14T18:20:26Z"
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
EthanRhys/Sonic-Frontiers
EthanRhys
"2024-04-14T18:22:07Z"
0
0
null
[ "license:openrail++", "region:us" ]
null
"2024-04-14T18:20:43Z"
--- license: openrail++ ---
kaidens/modeltokenizer
kaidens
"2024-04-14T18:21:00Z"
0
0
null
[ "region:us" ]
null
"2024-04-14T18:20:55Z"
Entry not found
ahmedheakl/arsql-codegemma-7b
ahmedheakl
"2024-04-14T23:43:08Z"
0
0
null
[ "tensorboard", "safetensors", "region:us" ]
null
"2024-04-14T18:21:05Z"
Entry not found
rmrafailov/TLDR-Pythia1B-SFT
rmrafailov
"2024-04-14T18:24:51Z"
0
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-04-14T18:22:46Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
bn22/llava-1.5-7b-hf-ft-mix-vsft
bn22
"2024-04-14T18:22:56Z"
0
0
null
[ "region:us" ]
null
"2024-04-14T18:22:55Z"
Entry not found
RayRuiboChen/LLaVA-Self-Filter-Stage1-Scores
RayRuiboChen
"2024-04-14T19:13:18Z"
0
0
transformers
[ "transformers", "pytorch", "llava", "text-generation", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2024-04-14T18:29:19Z"
--- license: apache-2.0 ---
arcee-ai/Patent-Base-Barcenas-Orca-2-7B-Slerp
arcee-ai
"2024-04-14T18:33:22Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "base_model:arcee-ai/Patent-Base-7b", "base_model:Danielbrdz/Barcenas-Orca-2-7b", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-04-14T18:30:14Z"
--- base_model: - arcee-ai/Patent-Base-7b - Danielbrdz/Barcenas-Orca-2-7b library_name: transformers tags: - mergekit - merge --- # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the SLERP merge method. ### Models Merged The following models were included in the merge: * [arcee-ai/Patent-Base-7b](https://huggingface.co/arcee-ai/Patent-Base-7b) * [Danielbrdz/Barcenas-Orca-2-7b](https://huggingface.co/Danielbrdz/Barcenas-Orca-2-7b) ### Configuration The following YAML configuration was used to produce this model: ```yaml slices: - sources: - model: arcee-ai/Patent-Base-7b layer_range: [0, 32] - model: Danielbrdz/Barcenas-Orca-2-7b layer_range: [0, 32] merge_method: slerp base_model: arcee-ai/Patent-Base-7b parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ```
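The merged checkpoint loads like any other Llama-family causal LM. A minimal loading sketch (not part of the original card; it assumes standard safetensors weights in the repo and `transformers` plus `accelerate` installed):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "arcee-ai/Patent-Base-Barcenas-Orca-2-7B-Slerp"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# bfloat16 matches the dtype declared in the merge configuration above.
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

inputs = tokenizer("Claim 1: A method for", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```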
Defetya/qwen-1.8B-saiga
Defetya
"2024-04-14T18:31:45Z"
0
0
null
[ "license:apache-2.0", "region:us" ]
null
"2024-04-14T18:31:44Z"
--- license: apache-2.0 ---
ashishp-wiai/ClipArt_LoRA_20-2024-04-14
ashishp-wiai
"2024-04-14T19:47:07Z"
0
0
null
[ "safetensors", "region:us" ]
null
"2024-04-14T18:32:13Z"
Entry not found
EdBerg/opt-6.1b-lora
EdBerg
"2024-04-14T18:34:53Z"
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2024-04-14T18:34:50Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
MrIvanTheGreat/Makikofox_original
MrIvanTheGreat
"2024-04-14T18:36:03Z"
0
0
null
[ "license:openrail", "region:us" ]
null
"2024-04-14T18:35:27Z"
--- license: openrail ---
sergeantson/MobileBertSentimentClassifier
sergeantson
"2024-04-14T23:26:34Z"
0
0
transformers
[ "transformers", "pytorch", "safetensors", "Sentiment-Analysis", "en", "endpoints_compatible", "region:us" ]
null
"2024-04-14T18:36:27Z"
--- language: - en tags: - Sentiment-Analysis metrics: - accuracy --- This model performs sentiment analysis with MobileBERT on the [Glassdoor job reviews dataset](https://www.kaggle.com/datasets/davidgauthier/glassdoor-job-reviews/code).
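A quick inference sketch (hypothetical usage: it assumes the checkpoint exposes a standard sequence-classification head, and the label names may differ):

```python
from transformers import pipeline

# Score a single review; returns a list like [{"label": ..., "score": ...}].
classifier = pipeline(
    "text-classification",
    model="sergeantson/MobileBertSentimentClassifier",
)
print(classifier("Great benefits and a supportive team, but long hours."))
```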
RuoxiL/style-dailymed-from-gorilla
RuoxiL
"2024-04-14T18:37:04Z"
0
0
null
[ "region:us" ]
null
"2024-04-14T18:37:04Z"
Entry not found
IamYash/VA-LLM-gmw8hqsw
IamYash
"2024-04-14T19:50:21Z"
0
0
null
[ "region:us" ]
null
"2024-04-14T18:37:17Z"
Entry not found
synthetica/lumaV1
synthetica
"2024-04-14T18:37:29Z"
0
0
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "lora", "template:sd-lora", "base_model:Kvikontent/midjourney-v6", "region:us" ]
text-to-image
"2024-04-14T18:37:23Z"
--- tags: - text-to-image - stable-diffusion - lora - diffusers - template:sd-lora widget: - text: >- A flock of paper airplanes flutters through a dense jungle, weaving around trees as if they were migrating birds. output: url: >- images/A flock of paper airplanes flutters through a dense jungle, weaving around trees as if they were migrating birds..png base_model: Kvikontent/midjourney-v6 instance_prompt: null --- # LuMa <Gallery /> ## Model description Inside. ## Download model Weights for this model are available in Safetensors format. [Download](/synthetica/lumaV1/tree/main) them in the Files & versions tab.
atasoglu/roberta-small-turkish-clean-uncased-nli-stsb-tr
atasoglu
"2024-04-14T18:52:22Z"
0
0
sentence-transformers
[ "sentence-transformers", "safetensors", "roberta", "feature-extraction", "sentence-similarity", "transformers", "tr", "dataset:nli_tr", "dataset:emrecan/stsb-mt-turkish", "base_model:burakaytan/roberta-small-turkish-clean-uncased", "license:mit", "endpoints_compatible", "region:us" ]
sentence-similarity
"2024-04-14T18:38:28Z"
--- library_name: sentence-transformers pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers license: mit datasets: - nli_tr - emrecan/stsb-mt-turkish language: - tr base_model: burakaytan/roberta-small-turkish-clean-uncased --- # atasoglu/roberta-small-turkish-clean-uncased-nli-stsb-tr This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 512 dimensional dense vector space and can be used for tasks like clustering or semantic search. This model was adapted from [burakaytan/roberta-small-turkish-clean-uncased](https://huggingface.co/burakaytan/roberta-small-turkish-clean-uncased) and fine-tuned on these datasets: - [nli_tr](https://huggingface.co/datasets/nli_tr) - [emrecan/stsb-mt-turkish](https://huggingface.co/datasets/emrecan/stsb-mt-turkish) **All texts were manually lowercased** as shown below: ```python text.replace("I", "ı").lower() ``` ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('atasoglu/roberta-small-turkish-clean-uncased-nli-stsb-tr') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('atasoglu/roberta-small-turkish-clean-uncased-nli-stsb-tr') model = AutoModel.from_pretrained('atasoglu/roberta-small-turkish-clean-uncased-nli-stsb-tr') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results Achieved results on the [STS-b](https://huggingface.co/datasets/emrecan/stsb-mt-turkish) test split are given below: ```txt Cosine-Similarity : Pearson: 0.8027 Spearman: 0.8012 Manhattan-Distance: Pearson: 0.7922 Spearman: 0.7892 Euclidean-Distance: Pearson: 0.7919 Spearman: 0.7890 Dot-Product-Similarity: Pearson: 0.7547 Spearman: 0.7445 ``` ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 180 with parameters: ``` {'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 5, "evaluation_steps": 18, "evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 45, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: RobertaModel (1): Pooling({'word_embedding_dimension': 512, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
chandc/whisper-small-hi
chandc
"2024-04-14T18:38:35Z"
0
0
null
[ "region:us" ]
null
"2024-04-14T18:38:35Z"
Entry not found
synthetica/LuMaV1.1
synthetica
"2024-04-14T18:40:43Z"
0
0
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "lora", "template:sd-lora", "base_model:runwayml/stable-diffusion-v1-5", "region:us", "has_space" ]
text-to-image
"2024-04-14T18:40:37Z"
--- tags: - text-to-image - stable-diffusion - lora - diffusers - template:sd-lora widget: - text: A photo of a desert output: url: images/A photo of a sunny desert beach.jpg base_model: runwayml/stable-diffusion-v1-5 instance_prompt: null --- # LuMa V.1.1 <Gallery /> ## Model description inside.1 ## Download model Weights for this model are available in Safetensors format. [Download](/synthetica/LuMaV1.1/tree/main) them in the Files & versions tab.
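A loading sketch with diffusers (assumptions: the LoRA weights at the repo root are in a diffusers-compatible safetensors layout, and a CUDA device is available):

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the SD 1.5 base model listed above, then attach this LoRA.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("synthetica/LuMaV1.1")

image = pipe("A photo of a sunny desert beach").images[0]
image.save("desert.png")
```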
orpo-explorers/mistral-7b-orpo-v0.0
orpo-explorers
"2024-04-14T23:00:17Z"
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "mistral", "text-generation", "alignment-handbook", "trl", "orpo", "generated_from_trainer", "conversational", "dataset:argilla/distilabel-capybara-dpo-7k-binarized", "dataset:argilla/ultrafeedback-binarized-preferences-cleaned", "base_model:mistralai/Mistral-7B-v0.1", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-04-14T18:41:02Z"
--- license: apache-2.0 base_model: mistralai/Mistral-7B-v0.1 tags: - alignment-handbook - trl - orpo - generated_from_trainer datasets: - argilla/distilabel-capybara-dpo-7k-binarized - argilla/ultrafeedback-binarized-preferences-cleaned model-index: - name: mistral-7b-orpo-v0.0 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mistral-7b-orpo-v0.0 This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the argilla/distilabel-capybara-dpo-7k-binarized and the argilla/ultrafeedback-binarized-preferences-cleaned datasets. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - total_train_batch_size: 8 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: inverse_sqrt - lr_scheduler_warmup_steps: 100 - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.39.3 - Pytorch 2.1.2+cu121 - Datasets 2.18.0 - Tokenizers 0.15.1
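A generation sketch (assumes the tokenizer ships a chat template, which is typical for chat-aligned checkpoints but not stated in this card):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "orpo-explorers/mistral-7b-orpo-v0.0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "In one sentence, what does ORPO optimize?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```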
adamjweintraut/bart-finetuned-kwsylgen-64-simple_input
adamjweintraut
"2024-04-14T21:06:20Z"
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "bart", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
"2024-04-14T18:42:08Z"
Entry not found
orpo-explorers/mistral-7b-orpo-v1.0
orpo-explorers
"2024-04-14T20:07:08Z"
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "mistral", "text-generation", "alignment-handbook", "trl", "orpo", "generated_from_trainer", "conversational", "dataset:HuggingFaceH4/distilabel-capybara-dpo-7k-binarized", "dataset:HuggingFaceH4/orca_dpo_pairs", "base_model:mistralai/Mistral-7B-v0.1", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-04-14T18:44:13Z"
--- license: apache-2.0 base_model: mistralai/Mistral-7B-v0.1 tags: - alignment-handbook - trl - orpo - generated_from_trainer datasets: - HuggingFaceH4/distilabel-capybara-dpo-7k-binarized - HuggingFaceH4/orca_dpo_pairs model-index: - name: mistral-7b-orpo-v1.0 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mistral-7b-orpo-v1.0 This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the HuggingFaceH4/distilabel-capybara-dpo-7k-binarized and the HuggingFaceH4/orca_dpo_pairs datasets. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - total_train_batch_size: 8 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: inverse_sqrt - lr_scheduler_warmup_steps: 100 - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.39.3 - Pytorch 2.1.2+cu121 - Datasets 2.18.0 - Tokenizers 0.15.1
oswinso/whisper-small-hi
oswinso
"2024-04-14T18:44:55Z"
0
0
null
[ "region:us" ]
null
"2024-04-14T18:44:55Z"
Entry not found
jetx/buajs7x
jetx
"2024-04-14T18:46:57Z"
0
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2024-04-14T18:45:08Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
wasjaip/my_tree_model_v1_allk_rus
wasjaip
"2024-04-14T19:08:31Z"
0
0
sentence-transformers
[ "sentence-transformers", "pytorch", "safetensors", "bert", "feature-extraction", "sentence-similarity", "endpoints_compatible", "region:us" ]
sentence-similarity
"2024-04-14T18:45:28Z"
--- library_name: sentence-transformers pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity --- # wasjaip/my_tree_model_v1_allk_rus This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('wasjaip/my_tree_model_v1_allk_rus') embeddings = model.encode(sentences) print(embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=wasjaip/my_tree_model_v1_allk_rus) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 48 with parameters: ``` {'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters: ``` {'scale': 20.0, 'similarity_fct': 'cos_sim'} ``` Parameters of the fit()-Method: ``` { "epochs": 5, "evaluation_steps": 0, "evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 100, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Dense({'in_features': 768, 'out_features': 768, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'}) (3): Normalize() ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
jetx/y6cnymo
jetx
"2024-04-14T18:48:20Z"
0
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2024-04-14T18:46:27Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mradermacher/aegolius-acadicus-v1-30b-i1-GGUF
mradermacher
"2024-04-14T23:51:15Z"
0
1
null
[ "gguf", "region:us" ]
null
"2024-04-14T18:47:07Z"
--- exported_from: ibivibiv/aegolius-acadicus-v1-30b language: - en library_name: transformers license: llama2 quantized_by: mradermacher tags: - moe - moerge --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> weighted/imatrix quants of https://huggingface.co/ibivibiv/aegolius-acadicus-v1-30b <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/aegolius-acadicus-v1-30b-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/aegolius-acadicus-v1-30b-i1-GGUF/resolve/main/aegolius-acadicus-v1-30b.i1-IQ1_S.gguf) | i1-IQ1_S | 6.2 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/aegolius-acadicus-v1-30b-i1-GGUF/resolve/main/aegolius-acadicus-v1-30b.i1-IQ1_M.gguf) | i1-IQ1_M | 6.9 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/aegolius-acadicus-v1-30b-i1-GGUF/resolve/main/aegolius-acadicus-v1-30b.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 8.0 | | | [GGUF](https://huggingface.co/mradermacher/aegolius-acadicus-v1-30b-i1-GGUF/resolve/main/aegolius-acadicus-v1-30b.i1-IQ2_XS.gguf) | i1-IQ2_XS | 8.9 | | | [GGUF](https://huggingface.co/mradermacher/aegolius-acadicus-v1-30b-i1-GGUF/resolve/main/aegolius-acadicus-v1-30b.i1-IQ2_S.gguf) | i1-IQ2_S | 9.1 | | | [GGUF](https://huggingface.co/mradermacher/aegolius-acadicus-v1-30b-i1-GGUF/resolve/main/aegolius-acadicus-v1-30b.i1-IQ2_M.gguf) | i1-IQ2_M | 9.9 | | | [GGUF](https://huggingface.co/mradermacher/aegolius-acadicus-v1-30b-i1-GGUF/resolve/main/aegolius-acadicus-v1-30b.i1-Q2_K.gguf) | i1-Q2_K | 11.0 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/aegolius-acadicus-v1-30b-i1-GGUF/resolve/main/aegolius-acadicus-v1-30b.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 11.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/aegolius-acadicus-v1-30b-i1-GGUF/resolve/main/aegolius-acadicus-v1-30b.i1-IQ3_XS.gguf) | i1-IQ3_XS | 12.3 | | | [GGUF](https://huggingface.co/mradermacher/aegolius-acadicus-v1-30b-i1-GGUF/resolve/main/aegolius-acadicus-v1-30b.i1-Q3_K_S.gguf) | i1-Q3_K_S | 13.0 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/aegolius-acadicus-v1-30b-i1-GGUF/resolve/main/aegolius-acadicus-v1-30b.i1-IQ3_S.gguf) | i1-IQ3_S | 13.0 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/aegolius-acadicus-v1-30b-i1-GGUF/resolve/main/aegolius-acadicus-v1-30b.i1-IQ3_M.gguf) | i1-IQ3_M | 13.2 | | | [GGUF](https://huggingface.co/mradermacher/aegolius-acadicus-v1-30b-i1-GGUF/resolve/main/aegolius-acadicus-v1-30b.i1-Q3_K_M.gguf) | i1-Q3_K_M | 14.4 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/aegolius-acadicus-v1-30b-i1-GGUF/resolve/main/aegolius-acadicus-v1-30b.i1-Q3_K_L.gguf) | i1-Q3_K_L | 15.6 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/aegolius-acadicus-v1-30b-i1-GGUF/resolve/main/aegolius-acadicus-v1-30b.i1-IQ4_XS.gguf) | i1-IQ4_XS | 16.0 | | | [GGUF](https://huggingface.co/mradermacher/aegolius-acadicus-v1-30b-i1-GGUF/resolve/main/aegolius-acadicus-v1-30b.i1-Q4_0.gguf) | i1-Q4_0 | 17.0 | fast, low quality | | 
[GGUF](https://huggingface.co/mradermacher/aegolius-acadicus-v1-30b-i1-GGUF/resolve/main/aegolius-acadicus-v1-30b.i1-Q4_K_S.gguf) | i1-Q4_K_S | 17.0 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/aegolius-acadicus-v1-30b-i1-GGUF/resolve/main/aegolius-acadicus-v1-30b.i1-Q4_K_M.gguf) | i1-Q4_K_M | 18.1 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/aegolius-acadicus-v1-30b-i1-GGUF/resolve/main/aegolius-acadicus-v1-30b.i1-Q5_K_S.gguf) | i1-Q5_K_S | 20.6 | | | [GGUF](https://huggingface.co/mradermacher/aegolius-acadicus-v1-30b-i1-GGUF/resolve/main/aegolius-acadicus-v1-30b.i1-Q5_K_M.gguf) | i1-Q5_K_M | 21.2 | | | [GGUF](https://huggingface.co/mradermacher/aegolius-acadicus-v1-30b-i1-GGUF/resolve/main/aegolius-acadicus-v1-30b.i1-Q6_K.gguf) | i1-Q6_K | 24.5 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
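For a quick local smoke test, a sketch with llama-cpp-python (assumes one of the single-file quants above, e.g. i1-Q4_K_M, has already been downloaded to the working directory):

```python
from llama_cpp import Llama

# Point at the downloaded GGUF; n_ctx sets the context window to allocate.
llm = Llama(model_path="aegolius-acadicus-v1-30b.i1-Q4_K_M.gguf", n_ctx=4096)

result = llm("Q: What is a mixture-of-experts model?\nA:", max_tokens=128)
print(result["choices"][0]["text"])
```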
zsqzz/predictor_6
zsqzz
"2024-04-14T20:04:21Z"
0
0
null
[ "license:mit", "region:us" ]
null
"2024-04-14T18:47:13Z"
--- license: mit ---
unrented5443/9mqeby3
unrented5443
"2024-04-14T18:50:50Z"
0
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2024-04-14T18:48:33Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Abhishek107/AppMetric
Abhishek107
"2024-04-14T18:52:13Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-04-14T18:49:47Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
fatgong/melotts7217
fatgong
"2024-04-15T00:34:30Z"
0
0
null
[ "region:us" ]
null
"2024-04-14T18:50:53Z"
Entry not found
CalvinM04/wav2vec
CalvinM04
"2024-04-14T18:50:55Z"
0
0
null
[ "license:mit", "region:us" ]
null
"2024-04-14T18:50:54Z"
--- license: mit ---
fatgong/melotts999
fatgong
"2024-04-15T00:34:46Z"
0
0
null
[ "region:us" ]
null
"2024-04-14T18:51:01Z"
Entry not found
fate3439/3d-icon-sdxl-lora
fate3439
"2024-04-14T18:51:53Z"
0
0
null
[ "region:us" ]
null
"2024-04-14T18:51:53Z"
Entry not found
orpo-explorers/mistral-7b-orpo-v2.0
orpo-explorers
"2024-04-14T23:53:38Z"
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "mistral", "text-generation", "alignment-handbook", "trl", "orpo", "generated_from_trainer", "conversational", "dataset:HuggingFaceH4/distilabel-capybara-dpo-7k-binarized", "dataset:HuggingFaceH4/orca_dpo_pairs", "dataset:HuggingFaceH4/ultrafeedback_binarized", "base_model:mistralai/Mistral-7B-v0.1", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-04-14T18:52:12Z"
---
license: apache-2.0
base_model: mistralai/Mistral-7B-v0.1
tags:
- alignment-handbook
- trl
- orpo
- generated_from_trainer
datasets:
- HuggingFaceH4/distilabel-capybara-dpo-7k-binarized
- HuggingFaceH4/orca_dpo_pairs
- HuggingFaceH4/ultrafeedback_binarized
model-index:
- name: mistral-7b-orpo-v2.0
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# mistral-7b-orpo-v2.0

This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the HuggingFaceH4/distilabel-capybara-dpo-7k-binarized, the HuggingFaceH4/orca_dpo_pairs and the HuggingFaceH4/ultrafeedback_binarized datasets.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 8
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: inverse_sqrt
- lr_scheduler_warmup_steps: 100
- num_epochs: 3

### Training results

### Framework versions

- Transformers 4.39.3
- Pytorch 2.1.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.1
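The recipe above maps almost one-to-one onto TRL's `ORPOTrainer`. Below is a minimal sketch of such a run; dataset mixing, chat templating, and the 8-GPU launch reported in the card are omitted, and anything not listed in the hyperparameters above is an assumption, not part of the original recipe.

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

model_id = "mistralai/Mistral-7B-v0.1"
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# ORPO expects preference data with "prompt", "chosen", "rejected" columns;
# formatting of this particular dataset into that shape is omitted here.
dataset = load_dataset("HuggingFaceH4/ultrafeedback_binarized", split="train_prefs")

config = ORPOConfig(
    output_dir="mistral-7b-orpo-v2.0",
    learning_rate=5e-6,               # matches the card
    per_device_train_batch_size=1,    # matches the card
    num_train_epochs=3,               # matches the card
    lr_scheduler_type="inverse_sqrt", # matches the card
    warmup_steps=100,                 # matches the card
)

trainer = ORPOTrainer(
    model=model,
    args=config,
    train_dataset=dataset,
    tokenizer=tokenizer,
)
trainer.train()
```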
L33tcode/Llama-2-7b-chat-sahaj
L33tcode
"2024-04-14T19:01:37Z"
0
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-04-14T18:54:58Z"
Entry not found
nlp-group/sindi-bert-final
nlp-group
"2024-04-14T19:07:52Z"
0
0
transformers
[ "transformers", "safetensors", "bert", "question-answering", "generated_from_trainer", "base_model:bert-base-uncased", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
"2024-04-14T18:55:17Z"
---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: results
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# results

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.0000

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 2

### Training results

| Training Loss | Epoch | Step  | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.0           | 1.0   | 15330 | 0.0000          |
| 0.0           | 2.0   | 30660 | 0.0000          |

### Framework versions

- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
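A minimal sketch of querying this extractive-QA checkpoint with the `question-answering` pipeline; the repo id is taken from this listing, while the question and context strings are placeholders.

```python
from transformers import pipeline

# Load the fine-tuned BERT question-answering checkpoint
qa = pipeline("question-answering", model="nlp-group/sindi-bert-final")

result = qa(
    question="What architecture is the model based on?",
    context="The checkpoint is a fine-tuned version of bert-base-uncased.",
)
print(result["answer"], result["score"])
```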
rubertmi00/flan-t5-healthcoach-base
rubertmi00
"2024-04-14T18:55:34Z"
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2024-04-14T18:55:30Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
yuufong/vi_en_envit5-base_half_doc_news_train
yuufong
"2024-04-14T19:37:50Z"
0
0
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:VietAI/envit5-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text2text-generation
"2024-04-14T18:55:32Z"
---
license: mit
base_model: VietAI/envit5-base
tags:
- generated_from_trainer
model-index:
- name: vi_en_envit5-base_half_doc_news_train
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# vi_en_envit5-base_half_doc_news_train

This model is a fine-tuned version of [VietAI/envit5-base](https://huggingface.co/VietAI/envit5-base) on an unknown dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20

### Framework versions

- Transformers 4.37.2
- Pytorch 1.12.1+cu116
- Datasets 2.18.0
- Tokenizers 0.15.1
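A minimal inference sketch for this Vietnamese–English translation fine-tune. The `vi:` source-language prefix follows the upstream VietAI/envit5 convention and is an assumption for this particular checkpoint.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

repo = "yuufong/vi_en_envit5-base_half_doc_news_train"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSeq2SeqLM.from_pretrained(repo)

# Prefix marks the source language (upstream envit5 convention; assumed here)
inputs = tokenizer("vi: Xin chào thế giới", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```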
KolaGang/AWQ_MEMO
KolaGang
"2024-04-14T23:51:42Z"
0
0
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "region:us" ]
text-generation
"2024-04-14T18:56:12Z"
Entry not found
orpo-explorers/mistral-7b-orpo-v3.0
orpo-explorers
"2024-04-14T20:09:59Z"
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "mistral", "text-generation", "alignment-handbook", "trl", "orpo", "generated_from_trainer", "conversational", "dataset:HuggingFaceH4/distilabel-capybara-dpo-7k-binarized", "dataset:HuggingFaceH4/OpenHermesPreferences-10k", "base_model:mistralai/Mistral-7B-v0.1", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-04-14T18:56:14Z"
---
license: apache-2.0
base_model: mistralai/Mistral-7B-v0.1
tags:
- alignment-handbook
- trl
- orpo
- generated_from_trainer
datasets:
- HuggingFaceH4/distilabel-capybara-dpo-7k-binarized
- HuggingFaceH4/OpenHermesPreferences-10k
model-index:
- name: mistral-7b-orpo-v3.0
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# mistral-7b-orpo-v3.0

This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the HuggingFaceH4/distilabel-capybara-dpo-7k-binarized and the HuggingFaceH4/OpenHermesPreferences-10k datasets.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 8
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: inverse_sqrt
- lr_scheduler_warmup_steps: 100
- num_epochs: 3

### Training results

### Framework versions

- Transformers 4.39.3
- Pytorch 2.1.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.1
LongRiver/transformer_QAVi
LongRiver
"2024-04-14T18:56:20Z"
0
0
null
[ "region:us" ]
null
"2024-04-14T18:56:20Z"
Entry not found
ttkhang202/peft-starcoder-lora-a100
ttkhang202
"2024-04-14T18:57:53Z"
0
0
null
[ "region:us" ]
null
"2024-04-14T18:57:53Z"
Entry not found
lanzv/ClinicalBERTQA_70_54
lanzv
"2024-04-14T19:22:48Z"
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "question-answering", "endpoints_compatible", "region:us" ]
question-answering
"2024-04-14T18:58:01Z"
Entry not found
Adignite/MailSense_Classifier-chat-llama7b
Adignite
"2024-04-14T19:06:14Z"
0
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-04-14T18:58:07Z"
Entry not found
RuoxiL/style-dailymed-from-facebook-2.7b
RuoxiL
"2024-04-14T18:59:10Z"
0
0
null
[ "region:us" ]
null
"2024-04-14T18:59:10Z"
Entry not found
solidrust/bagel-7b-v0.5-AWQ
solidrust
"2024-04-14T20:31:39Z"
0
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "finetuned", "quantized", "4-bit", "AWQ", "pytorch", "instruct", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us", "finetune", "chatml", "DPO", "RLHF", "gpt4", "synthetic data", "distillation", "dataset:ai2_arc", "dataset:allenai/ultrafeedback_binarized_cleaned", "dataset:argilla/distilabel-intel-orca-dpo-pairs", "dataset:jondurbin/airoboros-3.2", "dataset:codeparrot/apps", "dataset:facebook/belebele", "dataset:bluemoon-fandom-1-1-rp-cleaned", "dataset:boolq", "dataset:camel-ai/biology", "dataset:camel-ai/chemistry", "dataset:camel-ai/math", "dataset:camel-ai/physics", "dataset:jondurbin/contextual-dpo-v0.1", "dataset:jondurbin/gutenberg-dpo-v0.1", "dataset:jondurbin/py-dpo-v0.1", "dataset:jondurbin/truthy-dpo-v0.1", "dataset:LDJnr/Capybara", "dataset:jondurbin/cinematika-v0.1", "dataset:WizardLM/WizardLM_evol_instruct_70k", "dataset:glaiveai/glaive-function-calling-v2", "dataset:grimulkan/LimaRP-augmented", "dataset:lmsys/lmsys-chat-1m", "dataset:ParisNeo/lollms_aware_dataset", "dataset:TIGER-Lab/MathInstruct", "dataset:Muennighoff/natural-instructions", "dataset:openbookqa", "dataset:kingbri/PIPPA-shareGPT", "dataset:piqa", "dataset:Vezora/Tested-22k-Python-Alpaca", "dataset:ropes", "dataset:cakiki/rosetta-code", "dataset:Open-Orca/SlimOrca", "dataset:b-mc2/sql-create-context", "dataset:squad_v2", "dataset:mattpscott/airoboros-summarization", "dataset:migtissera/Synthia-v1.3", "dataset:unalignment/toxic-dpo-v0.2", "dataset:WhiteRabbitNeo/WRN-Chapter-1", "dataset:WhiteRabbitNeo/WRN-Chapter-2", "dataset:winogrande", "base_model:alpindale/Mistral-7B-v0.2-hf", "license:apache-2.0" ]
text-generation
"2024-04-14T18:59:23Z"
---
tags:
- finetuned
- quantized
- 4-bit
- AWQ
- transformers
- pytorch
- mistral
- instruct
- text-generation
- conversational
- autotrain_compatible
- endpoints_compatible
- text-generation-inference
- region:us
- finetune
- chatml
- DPO
- RLHF
- gpt4
- synthetic data
- distillation
license: apache-2.0
datasets:
- ai2_arc
- allenai/ultrafeedback_binarized_cleaned
- argilla/distilabel-intel-orca-dpo-pairs
- jondurbin/airoboros-3.2
- codeparrot/apps
- facebook/belebele
- bluemoon-fandom-1-1-rp-cleaned
- boolq
- camel-ai/biology
- camel-ai/chemistry
- camel-ai/math
- camel-ai/physics
- jondurbin/contextual-dpo-v0.1
- jondurbin/gutenberg-dpo-v0.1
- jondurbin/py-dpo-v0.1
- jondurbin/truthy-dpo-v0.1
- LDJnr/Capybara
- jondurbin/cinematika-v0.1
- WizardLM/WizardLM_evol_instruct_70k
- glaiveai/glaive-function-calling-v2
- grimulkan/LimaRP-augmented
- lmsys/lmsys-chat-1m
- ParisNeo/lollms_aware_dataset
- TIGER-Lab/MathInstruct
- Muennighoff/natural-instructions
- openbookqa
- kingbri/PIPPA-shareGPT
- piqa
- Vezora/Tested-22k-Python-Alpaca
- ropes
- cakiki/rosetta-code
- Open-Orca/SlimOrca
- b-mc2/sql-create-context
- squad_v2
- mattpscott/airoboros-summarization
- migtissera/Synthia-v1.3
- unalignment/toxic-dpo-v0.2
- WhiteRabbitNeo/WRN-Chapter-1
- WhiteRabbitNeo/WRN-Chapter-2
- winogrande
model_name: bagel-7b-v0.5
base_model: alpindale/Mistral-7B-v0.2-hf
quantized_by: Suparious
pipeline_tag: text-generation
model_creator: jondurbin
inference: false
prompt_template: '{bos}<|im_start|>{role} {text} <|im_end|>{eos} '
---
# jondurbin/bagel-7b-v0.5 AWQ

- Model creator: [jondurbin](https://huggingface.co/jondurbin)
- Original model: [bagel-7b-v0.5](https://huggingface.co/jondurbin/bagel-7b-v0.5)

![bagel](bagel.png)

## Model Summary

This is a fine-tune of mistral-7b-v0.2 using the bagel v0.5 dataset. See [bagel](https://github.com/jondurbin/bagel) for additional details on the datasets.

The DPO version is available [here](https://huggingface.co/jondurbin/bagel-dpo-7b-v0.5).

## How to use

### Install the necessary packages

```bash
pip install --upgrade autoawq autoawq-kernels
```

### Example Python code

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer, TextStreamer

model_path = "solidrust/bagel-7b-v0.5-AWQ"
system_message = "You are Bagel, an incarnation of a powerful AI with access to everything."

# Load the quantized model and its tokenizer
model = AutoAWQForCausalLM.from_quantized(model_path, fuse_layers=True)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

# Convert the prompt to tokens
prompt_template = """\
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant"""

prompt = "You're standing on the surface of the Earth. " \
         "You walk one mile south, one mile west and one mile north. " \
         "You end up exactly where you started. Where are you?"

tokens = tokenizer(
    prompt_template.format(system_message=system_message, prompt=prompt),
    return_tensors="pt",
).input_ids.cuda()

# Generate output, streaming tokens as they are produced
generation_output = model.generate(tokens, streamer=streamer, max_new_tokens=512)
```

### About AWQ

AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality than the most commonly used GPTQ settings.

AWQ models are currently supported on Linux and Windows, with NVIDIA GPUs only. macOS users: please use GGUF models instead.

It is supported by:

- [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ
- [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later for support for all model types
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code

## Prompt template: ChatML

```plaintext
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
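As an alternative to the AutoAWQ snippet above, the same checkpoint can in principle be loaded through the plain Transformers route listed above (version 4.35.0 or later, with autoawq installed). A minimal sketch, assuming the repo ships a ChatML chat template; both assumptions, not guarantees:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "solidrust/bagel-7b-v0.5-AWQ"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# AWQ checkpoints are detected from the repo config; kernels run in fp16
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are Bagel."},
    {"role": "user", "content": "Say hello in one sentence."},
]
# Assumes a ChatML chat template is present in the tokenizer config
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
print(tokenizer.decode(model.generate(inputs, max_new_tokens=64)[0]))
```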
SSahas/codegen_e7
SSahas
"2024-04-14T19:01:15Z"
0
0
peft
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:Salesforce/codegen-350M-mono", "license:bsd-3-clause", "region:us" ]
null
"2024-04-14T19:01:11Z"
---
license: bsd-3-clause
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: Salesforce/codegen-350M-mono
model-index:
- name: codegen_e7
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# codegen_e7

This model is a fine-tuned version of [Salesforce/codegen-350M-mono](https://huggingface.co/Salesforce/codegen-350M-mono) on the None dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 7
- mixed_precision_training: Native AMP

### Training results

### Framework versions

- PEFT 0.10.1.dev0
- Transformers 4.38.2
- Pytorch 2.0.0
- Datasets 2.18.0
- Tokenizers 0.15.2
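Since this repo contains a PEFT (LoRA) adapter rather than full model weights, it is loaded on top of its base model. A minimal sketch; the repo ids are taken from the card, the prompt is a placeholder.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "Salesforce/codegen-350M-mono"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id)

# Attach the fine-tuned adapter to the base model
model = PeftModel.from_pretrained(base, "SSahas/codegen_e7")

inputs = tokenizer("def fibonacci(n):", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```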
donutsan/FinalMixtral-Instruct
donutsan
"2024-04-14T19:06:32Z"
0
0
null
[ "safetensors", "region:us" ]
null
"2024-04-14T19:01:14Z"
Entry not found
cilantro9246/8i2pgcx
cilantro9246
"2024-04-14T19:03:29Z"
0
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2024-04-14T19:01:14Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
fatgong/melotts3103
fatgong
"2024-04-15T00:35:19Z"
0
0
null
[ "region:us" ]
null
"2024-04-14T19:03:00Z"
Entry not found