Dataset columns (type and observed range):

- modelId: string, length 5 to 122
- author: string, length 2 to 42
- last_modified: unknown
- downloads: int64, 0 to 754M
- likes: int64, 0 to 10.9k
- library_name: string, 243 distinct values
- tags: sequence, length 1 to 2.16k
- pipeline_tag: string, 48 distinct values
- createdAt: unknown
- card: string, length 1 to 901k
modelId: BIFOLD-BigEarthNetv2-0/BENv2-vit_base_patch8_224-s1-v0.1.1 | author: BIFOLD-BigEarthNetv2-0 | last_modified: 2024-06-19T17:58:52Z | downloads: 0 | likes: 0 | library_name: transformers | pipeline_tag: null | createdAt: 2024-06-16T21:16:58Z
tags: [ "transformers", "safetensors", "model_hub_mixin", "pytorch_model_hub_mixin", "endpoints_compatible", "region:us" ]
--- tags: - model_hub_mixin - pytorch_model_hub_mixin --- This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration: - Library: [More Information Needed] - Docs: [More Information Needed]
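As a minimal sketch of what that integration does (the `TinyModel` class below is a hypothetical stand-in, not the BENv2 architecture; loading this particular checkpoint would require the authors' own model class):

```python
# Minimal sketch of the PyTorchModelHubMixin save/load round trip.
# TinyModel is a hypothetical stand-in; its layout is an assumption.
import torch
import torch.nn as nn
from huggingface_hub import PyTorchModelHubMixin

class TinyModel(nn.Module, PyTorchModelHubMixin):
    def __init__(self, in_features: int = 8, num_classes: int = 19):
        super().__init__()
        self.linear = nn.Linear(in_features, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.linear(x)

model = TinyModel()
model.save_pretrained("tiny-model")                 # writes config + weights
reloaded = TinyModel.from_pretrained("tiny-model")  # re-instantiates and loads them
# model.push_to_hub("user/tiny-model") would publish it the same way as the card above
```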
modelId: stojchet/kto-python-6k-bad-ds | author: stojchet | last_modified: 2024-06-16T22:28:49Z | downloads: 0 | likes: 0 | library_name: peft | pipeline_tag: null | createdAt: 2024-06-16T21:18:23Z
tags: [ "peft", "safetensors", "trl", "kto", "generated_from_trainer", "base_model:deepseek-ai/deepseek-coder-1.3b-base", "license:other", "region:us" ]
--- base_model: deepseek-ai/deepseek-coder-1.3b-base library_name: peft license: other tags: - trl - kto - generated_from_trainer model-index: - name: kto-python-6k-bad-ds results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # kto-python-6k-bad-ds This model is a fine-tuned version of [deepseek-ai/deepseek-coder-1.3b-base](https://huggingface.co/deepseek-ai/deepseek-coder-1.3b-base) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results ### Framework versions - PEFT 0.10.0 - Transformers 4.42.0.dev0 - Pytorch 2.2.2+cu121 - Datasets 2.19.2 - Tokenizers 0.19.1
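The card omits a usage snippet; a hedged sketch, assuming the repo hosts a standard PEFT (LoRA-style) adapter for the stated base model:

```python
# Hedged sketch: attach the PEFT adapter to its deepseek-coder base model.
# Assumes the repo contains a standard adapter_config.json plus adapter weights.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "deepseek-ai/deepseek-coder-1.3b-base"
base = AutoModelForCausalLM.from_pretrained(base_id)
tokenizer = AutoTokenizer.from_pretrained(base_id)

model = PeftModel.from_pretrained(base, "stojchet/kto-python-6k-bad-ds")
inputs = tokenizer("def fibonacci(n):", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```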
modelId: mago18/ava-50 | author: mago18 | last_modified: 2024-06-16T21:22:20Z | downloads: 0 | likes: 0 | library_name: transformers | pipeline_tag: null | createdAt: 2024-06-16T21:21:08Z
tags: [ "transformers", "safetensors", "vision-encoder-decoder", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
modelId: adnaan525/opensi_mistral_3tasks | author: adnaan525 | last_modified: 2024-06-16T21:44:18Z | downloads: 0 | likes: 0 | library_name: null | pipeline_tag: null | createdAt: 2024-06-16T21:22:09Z
tags: [ "safetensors", "license:apache-2.0", "region:us" ]
--- license: apache-2.0 ---
modelId: Dani3lRg/sentiment-analysis-distilbert-2 | author: Dani3lRg | last_modified: 2024-06-16T21:23:21Z | downloads: 0 | likes: 0 | library_name: null | pipeline_tag: null | createdAt: 2024-06-16T21:23:21Z
tags: [ "region:us" ]
Entry not found
modelId: BIFOLD-BigEarthNetv2-0/BENv2-mixer_b16_224-s2-v0.1.1 | author: BIFOLD-BigEarthNetv2-0 | last_modified: 2024-06-19T15:16:53Z | downloads: 0 | likes: 0 | library_name: transformers | pipeline_tag: null | createdAt: 2024-06-16T21:29:41Z
tags: [ "transformers", "safetensors", "model_hub_mixin", "pytorch_model_hub_mixin", "endpoints_compatible", "region:us" ]
--- tags: - model_hub_mixin - pytorch_model_hub_mixin --- This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration: - Library: [More Information Needed] - Docs: [More Information Needed]
modelId: stojchet/python-sft-50k-good-r64-a16-d0.05-e1 | author: stojchet | last_modified: 2024-06-17T02:44:16Z | downloads: 0 | likes: 0 | library_name: peft | pipeline_tag: null | createdAt: 2024-06-16T21:30:04Z
tags: [ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "dataset:generator", "base_model:deepseek-ai/deepseek-coder-1.3b-base", "license:other", "region:us" ]
--- base_model: deepseek-ai/deepseek-coder-1.3b-base datasets: - generator library_name: peft license: other tags: - trl - sft - generated_from_trainer model-index: - name: python-sft-50k-good-r64-a16-d0.05-e1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/stojchets/huggingface/runs/k3avju5j) # python-sft-50k-good-r64-a16-d0.05-e1 This model is a fine-tuned version of [deepseek-ai/deepseek-coder-1.3b-base](https://huggingface.co/deepseek-ai/deepseek-coder-1.3b-base) on the generator dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1.41e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - PEFT 0.10.0 - Transformers 4.42.0.dev0 - Pytorch 2.2.2+cu121 - Datasets 2.19.2 - Tokenizers 0.19.1
modelId: numen-tech/Hermes-2-Theta-Llama-3-8B-w3a16g40sym | author: numen-tech | last_modified: 2024-06-16T21:36:53Z | downloads: 0 | likes: 0 | library_name: null | pipeline_tag: null | createdAt: 2024-06-16T21:32:59Z
tags: [ "arxiv:2308.13137", "license:apache-2.0", "region:us" ]
--- license: apache-2.0 --- 3-bit [OmniQuant](https://arxiv.org/abs/2308.13137) quantized version of [Hermes-2-Theta-Llama-3-8B](https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-8B).
modelId: numen-tech/Hermes-2-Theta-Llama-3-8B-w4a16g128asym | author: numen-tech | last_modified: 2024-06-16T21:36:57Z | downloads: 0 | likes: 0 | library_name: null | pipeline_tag: null | createdAt: 2024-06-16T21:33:13Z
tags: [ "arxiv:2308.13137", "license:apache-2.0", "region:us" ]
--- license: apache-2.0 --- 4-bit [OmniQuant](https://arxiv.org/abs/2308.13137) quantized version of [Hermes-2-Theta-Llama-3-8B](https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-8B).
modelId: SaffalPoosh/medicine_epill_id | author: SaffalPoosh | last_modified: 2024-06-16T21:41:39Z | downloads: 0 | likes: 0 | library_name: null | pipeline_tag: null | createdAt: 2024-06-16T21:33:23Z
tags: [ "onnx", "region:us" ]
Entry not found
modelId: stojchet/python-sft-50k-good-e3-r64-a16-d0.05-e3 | author: stojchet | last_modified: 2024-06-17T13:15:54Z | downloads: 0 | likes: 0 | library_name: peft | pipeline_tag: null | createdAt: 2024-06-16T21:34:38Z
tags: [ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "dataset:generator", "base_model:deepseek-ai/deepseek-coder-1.3b-base", "license:other", "region:us" ]
--- base_model: deepseek-ai/deepseek-coder-1.3b-base datasets: - generator library_name: peft license: other tags: - trl - sft - generated_from_trainer model-index: - name: python-sft-50k-good-e3-r64-a16-d0.05-e3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/stojchets/huggingface/runs/29l0c196) # python-sft-50k-good-e3-r64-a16-d0.05-e3 This model is a fine-tuned version of [deepseek-ai/deepseek-coder-1.3b-base](https://huggingface.co/deepseek-ai/deepseek-coder-1.3b-base) on the generator dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1.41e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results ### Framework versions - PEFT 0.10.0 - Transformers 4.42.0.dev0 - Pytorch 2.2.2+cu121 - Datasets 2.19.2 - Tokenizers 0.19.1
modelId: Cas-Warehouse/Llama-3-Depressed-Therapist-8B | author: Cas-Warehouse | last_modified: 2024-06-16T21:43:52Z | downloads: 0 | likes: 0 | library_name: transformers | pipeline_tag: text-generation | createdAt: 2024-06-16T21:35:07Z
tags: [ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "arxiv:2403.19522", "base_model:Casual-Autopsy/Llama-3-SOVL-MopeyMule-Blackroot-8B", "base_model:zementalist/llama-3-8B-chat-psychotherapist", "base_model:PrahmodhRaj/Llama-3_Psychiatrist_Chat", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
--- base_model: - Casual-Autopsy/Llama-3-SOVL-MopeyMule-Blackroot-8B - zementalist/llama-3-8B-chat-psychotherapist - Casual-Autopsy/Llama-3-SOVL-MopeyMule-Blackroot-8B - PrahmodhRaj/Llama-3_Psychiatrist_Chat - Casual-Autopsy/Llama-3-SOVL-MopeyMule-Blackroot-8B library_name: transformers tags: - mergekit - merge --- # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [Casual-Autopsy/Llama-3-SOVL-MopeyMule-Blackroot-8B](https://huggingface.co/Casual-Autopsy/Llama-3-SOVL-MopeyMule-Blackroot-8B) as a base. ### Models Merged The following models were included in the merge: * [Casual-Autopsy/Llama-3-SOVL-MopeyMule-Blackroot-8B](https://huggingface.co/Casual-Autopsy/Llama-3-SOVL-MopeyMule-Blackroot-8B) + [zementalist/llama-3-8B-chat-psychotherapist](https://huggingface.co/zementalist/llama-3-8B-chat-psychotherapist) * [Casual-Autopsy/Llama-3-SOVL-MopeyMule-Blackroot-8B](https://huggingface.co/Casual-Autopsy/Llama-3-SOVL-MopeyMule-Blackroot-8B) + [PrahmodhRaj/Llama-3_Psychiatrist_Chat](https://huggingface.co/PrahmodhRaj/Llama-3_Psychiatrist_Chat) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: Casual-Autopsy/Llama-3-SOVL-MopeyMule-Blackroot-8B+PrahmodhRaj/Llama-3_Psychiatrist_Chat - model: Casual-Autopsy/Llama-3-SOVL-MopeyMule-Blackroot-8B+zementalist/llama-3-8B-chat-psychotherapist merge_method: model_stock base_model: Casual-Autopsy/Llama-3-SOVL-MopeyMule-Blackroot-8B dtype: bfloat16 ```
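For readers who want to reproduce such a merge, a sketch follows; it assumes mergekit is installed (`pip install mergekit`) and drives its standard `mergekit-yaml` entry point with the card's configuration:

```python
# Sketch: run mergekit's CLI from Python using the model_stock config above.
import subprocess

CONFIG = """\
models:
  - model: Casual-Autopsy/Llama-3-SOVL-MopeyMule-Blackroot-8B+PrahmodhRaj/Llama-3_Psychiatrist_Chat
  - model: Casual-Autopsy/Llama-3-SOVL-MopeyMule-Blackroot-8B+zementalist/llama-3-8B-chat-psychotherapist
merge_method: model_stock
base_model: Casual-Autopsy/Llama-3-SOVL-MopeyMule-Blackroot-8B
dtype: bfloat16
"""

with open("merge_config.yaml", "w") as fh:
    fh.write(CONFIG)

# --copy-tokenizer keeps the base model's tokenizer with the merged weights
subprocess.run(
    ["mergekit-yaml", "merge_config.yaml", "./merged-model", "--copy-tokenizer"],
    check=True,
)
```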
modelId: hilop/stutsu | author: hilop | last_modified: 2024-06-16T21:36:57Z | downloads: 0 | likes: 0 | library_name: null | pipeline_tag: null | createdAt: 2024-06-16T21:35:10Z
tags: [ "region:us" ]
Entry not found
modelId: numen-tech/LLaMA3-iterative-DPO-final-w4a16g128asym | author: numen-tech | last_modified: 2024-06-16T21:42:03Z | downloads: 0 | likes: 0 | library_name: null | pipeline_tag: null | createdAt: 2024-06-16T21:36:14Z
tags: [ "arxiv:2308.13137", "license:llama3", "region:us" ]
--- license: llama3 --- 4-bit [OmniQuant](https://arxiv.org/abs/2308.13137) quantized version of [LLaMA3-iterative-DPO-final](https://huggingface.co/RLHFlow/LLaMA3-iterative-DPO-final).
modelId: numen-tech/LLaMA3-iterative-DPO-final-w3a16g40sym | author: numen-tech | last_modified: 2024-06-16T21:42:00Z | downloads: 0 | likes: 0 | library_name: null | pipeline_tag: null | createdAt: 2024-06-16T21:36:18Z
tags: [ "arxiv:2308.13137", "license:llama3", "region:us" ]
--- license: llama3 --- 3-bit [OmniQuant](https://arxiv.org/abs/2308.13137) quantized version of [LLaMA3-iterative-DPO-final](https://huggingface.co/RLHFlow/LLaMA3-iterative-DPO-final).
modelId: A01638116/finetuning-bert-masters | author: A01638116 | last_modified: 2024-06-16T21:36:59Z | downloads: 0 | likes: 0 | library_name: null | pipeline_tag: null | createdAt: 2024-06-16T21:36:59Z
tags: [ "region:us" ]
Entry not found
modelId: stojchet/python-sft-all-good-r64-a16-d0.05-e1 | author: stojchet | last_modified: 2024-06-17T05:19:00Z | downloads: 0 | likes: 0 | library_name: peft | pipeline_tag: null | createdAt: 2024-06-16T21:41:43Z
tags: [ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:deepseek-ai/deepseek-coder-1.3b-base", "license:other", "region:us" ]
--- base_model: deepseek-ai/deepseek-coder-1.3b-base library_name: peft license: other tags: - trl - sft - generated_from_trainer model-index: - name: python-sft-all-good-r64-a16-d0.05-e1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/stojchets/huggingface/runs/do2aroqn) # python-sft-all-good-r64-a16-d0.05-e1 This model is a fine-tuned version of [deepseek-ai/deepseek-coder-1.3b-base](https://huggingface.co/deepseek-ai/deepseek-coder-1.3b-base) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1.41e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - PEFT 0.10.0 - Transformers 4.42.0.dev0 - Pytorch 2.2.2+cu121 - Datasets 2.19.2 - Tokenizers 0.19.1
modelId: NAYEONCEot9cover/SANAALLROUNDVER | author: NAYEONCEot9cover | last_modified: 2024-06-16T21:44:03Z | downloads: 0 | likes: 0 | library_name: null | pipeline_tag: null | createdAt: 2024-06-16T21:41:54Z
tags: [ "license:openrail", "region:us" ]
--- license: openrail ---
modelId: Astarossa/Quavo | author: Astarossa | last_modified: 2024-06-16T21:48:40Z | downloads: 0 | likes: 0 | library_name: null | pipeline_tag: null | createdAt: 2024-06-16T21:42:06Z
tags: [ "region:us" ]
Entry not found
modelId: 2024myai/myaitest | author: 2024myai | last_modified: 2024-06-16T21:47:51Z | downloads: 0 | likes: 0 | library_name: allennlp | pipeline_tag: text-classification | createdAt: 2024-06-16T21:46:01Z
tags: [ "allennlp", "legal", "text-classification", "sl", "dataset:OpenGVLab/ShareGPT-4o", "license:unknown", "region:us" ]
--- license: unknown datasets: - OpenGVLab/ShareGPT-4o language: - sl metrics: - accuracy library_name: allennlp pipeline_tag: text-classification tags: - legal ---
modelId: jg-silva/modelo_gabi | author: jg-silva | last_modified: 2024-06-16T21:47:24Z | downloads: 0 | likes: 0 | library_name: null | pipeline_tag: null | createdAt: 2024-06-16T21:46:44Z
tags: [ "license:openrail", "region:us" ]
--- license: openrail ---
modelId: farkray/finetuning-ai-master | author: farkray | last_modified: 2024-06-16T21:47:44Z | downloads: 0 | likes: 0 | library_name: null | pipeline_tag: null | createdAt: 2024-06-16T21:47:44Z
tags: [ "region:us" ]
Entry not found
modelId: stojchet/python-sft-150k-bad-r64-a16-d0.05-e1 | author: stojchet | last_modified: 2024-06-17T19:08:30Z | downloads: 0 | likes: 0 | library_name: peft | pipeline_tag: null | createdAt: 2024-06-16T21:48:16Z
tags: [ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "dataset:generator", "base_model:deepseek-ai/deepseek-coder-1.3b-base", "license:other", "region:us" ]
--- base_model: deepseek-ai/deepseek-coder-1.3b-base datasets: - generator library_name: peft license: other tags: - trl - sft - generated_from_trainer model-index: - name: python-sft-150k-bad-r64-a16-d0.05-e1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/stojchets/huggingface/runs/ux8sj6ft) # python-sft-150k-bad-r64-a16-d0.05-e1 This model is a fine-tuned version of [deepseek-ai/deepseek-coder-1.3b-base](https://huggingface.co/deepseek-ai/deepseek-coder-1.3b-base) on the generator dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1.41e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - PEFT 0.10.0 - Transformers 4.42.0.dev0 - Pytorch 2.2.2+cu121 - Datasets 2.19.2 - Tokenizers 0.19.1
modelId: imrezar/my_awesome_model | author: imrezar | last_modified: 2024-06-16T21:48:57Z | downloads: 0 | likes: 0 | library_name: null | pipeline_tag: null | createdAt: 2024-06-16T21:48:57Z
tags: [ "region:us" ]
Entry not found
modelId: DoctorNoSense/yoda_chatbot_phi3_model | author: DoctorNoSense | last_modified: 2024-06-16T21:50:29Z | downloads: 0 | likes: 0 | library_name: null | pipeline_tag: null | createdAt: 2024-06-16T21:50:29Z
tags: [ "region:us" ]
Entry not found
modelId: tedcochran/llama3-8b-cosmic-fusion-dynamics-lora | author: tedcochran | last_modified: 2024-06-16T22:21:57Z | downloads: 0 | likes: 0 | library_name: transformers | pipeline_tag: null | createdAt: 2024-06-16T21:55:31Z
tags: [ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl base_model: unsloth/llama-3-8b-bnb-4bit --- # Uploaded model - **Developed by:** tedcochran - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
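None of these Unsloth cards include a loading snippet; a sketch for this one, assuming the repo holds LoRA adapters in PEFT format and a CUDA device is available:

```python
# Hedged sketch: load the adapter on its 4-bit base with Unsloth.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="tedcochran/llama3-8b-cosmic-fusion-dynamics-lora",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch on Unsloth's fast generation path

inputs = tokenizer("Describe cosmic fusion dynamics:", return_tensors="pt").to("cuda")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```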
modelId: Mohammed-majeed/mistral-7b-v0.3-bnb-4bit-Unsloth-chunk-7-0.5-1 | author: Mohammed-majeed | last_modified: 2024-06-16T21:58:49Z | downloads: 0 | likes: 0 | library_name: transformers | pipeline_tag: null | createdAt: 2024-06-16T21:58:09Z
tags: [ "transformers", "safetensors", "text-generation-inference", "unsloth", "mistral", "trl", "en", "base_model:unsloth/mistral-7b-v0.3-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - mistral - trl base_model: unsloth/mistral-7b-v0.3-bnb-4bit --- # Uploaded model - **Developed by:** Mohammed-majeed - **License:** apache-2.0 - **Finetuned from model :** unsloth/mistral-7b-v0.3-bnb-4bit This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
modelId: BIFOLD-BigEarthNetv2-0/BENv2-vit_base_patch8_224-s2-v0.1.1 | author: BIFOLD-BigEarthNetv2-0 | last_modified: 2024-06-19T17:22:11Z | downloads: 0 | likes: 0 | library_name: transformers | pipeline_tag: null | createdAt: 2024-06-16T22:02:02Z
tags: [ "transformers", "safetensors", "model_hub_mixin", "pytorch_model_hub_mixin", "endpoints_compatible", "region:us" ]
--- tags: - model_hub_mixin - pytorch_model_hub_mixin --- This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration: - Library: [More Information Needed] - Docs: [More Information Needed]
modelId: tedcochran/llama3-8b-cosmic-fusion-dynamics-f16-gguf | author: tedcochran | last_modified: 2024-06-16T22:04:54Z | downloads: 0 | likes: 0 | library_name: transformers | pipeline_tag: null | createdAt: 2024-06-16T22:04:53Z
tags: [ "transformers", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl base_model: unsloth/llama-3-8b-bnb-4bit --- # Uploaded model - **Developed by:** tedcochran - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
modelId: Mithil01/gpt2_clm-model | author: Mithil01 | last_modified: 2024-06-16T22:04:56Z | downloads: 0 | likes: 0 | library_name: null | pipeline_tag: null | createdAt: 2024-06-16T22:04:56Z
tags: [ "region:us" ]
Entry not found
modelId: Phzinskyed/1 | author: Phzinskyed | last_modified: 2024-06-16T22:05:54Z | downloads: 0 | likes: 0 | library_name: null | pipeline_tag: null | createdAt: 2024-06-16T22:05:54Z
tags: [ "region:us" ]
Entry not found
modelId: kevin36524/ymail_search_phi3_model | author: kevin36524 | last_modified: 2024-06-16T22:06:25Z | downloads: 0 | likes: 0 | library_name: transformers | pipeline_tag: null | createdAt: 2024-06-16T22:06:09Z
tags: [ "transformers", "safetensors", "text-generation-inference", "unsloth", "mistral", "trl", "en", "base_model:unsloth/phi-3-mini-4k-instruct-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - mistral - trl base_model: unsloth/phi-3-mini-4k-instruct-bnb-4bit --- # Uploaded model - **Developed by:** kevin36524 - **License:** apache-2.0 - **Finetuned from model :** unsloth/phi-3-mini-4k-instruct-bnb-4bit This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
modelId: Rannaaaa/FaheemTech | author: Rannaaaa | last_modified: 2024-06-16T22:06:12Z | downloads: 0 | likes: 0 | library_name: null | pipeline_tag: null | createdAt: 2024-06-16T22:06:12Z
tags: [ "region:us" ]
Entry not found
modelId: aurelvu/test_model | author: aurelvu | last_modified: 2024-06-17T15:41:59Z | downloads: 0 | likes: 0 | library_name: transformers | pipeline_tag: null | createdAt: 2024-06-16T22:06:46Z
tags: [ "transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
modelId: kdcyberdude/whisper_large_v3_pa_v0.1 | author: kdcyberdude | last_modified: 2024-06-17T09:55:41Z | downloads: 0 | likes: 0 | library_name: transformers | pipeline_tag: automatic-speech-recognition | createdAt: 2024-06-16T22:09:49Z
tags: [ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "base_model:openai/whisper-large-v3", "license:apache-2.0", "endpoints_compatible", "region:us" ]
--- license: apache-2.0 base_model: openai/whisper-large-v3 tags: - generated_from_trainer model-index: - name: whisper_large_v3_pa_v0.1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper_large_v3_pa_v0.1 This model is a fine-tuned version of [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 16 - seed: 13 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - training_steps: 500 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.39.3 - Pytorch 2.3.0+cu121 - Datasets 2.20.1.dev0 - Tokenizers 0.15.2
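The checkpoint should be usable through the standard transformers ASR pipeline; a minimal sketch ("pa" in the repo name suggests Punjabi, though the card does not say, and `sample.wav` is a placeholder path):

```python
# Minimal sketch: transcription with the fine-tuned Whisper checkpoint.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="kdcyberdude/whisper_large_v3_pa_v0.1",
)
# "sample.wav" stands in for a local audio file
print(asr("sample.wav", return_timestamps=True)["text"])
```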
modelId: stephenshank/repo_name | author: stephenshank | last_modified: 2024-06-16T22:17:52Z | downloads: 0 | likes: 0 | library_name: null | pipeline_tag: null | createdAt: 2024-06-16T22:17:52Z
tags: [ "region:us" ]
Entry not found
modelId: Paco4365483/Finetune_13-b | author: Paco4365483 | last_modified: 2024-06-16T22:19:21Z | downloads: 0 | likes: 0 | library_name: null | pipeline_tag: null | createdAt: 2024-06-16T22:19:21Z
tags: [ "region:us" ]
Entry not found
modelId: niv6395/Reinforce-CartPole-v1 | author: niv6395 | last_modified: 2024-06-16T22:20:20Z | downloads: 0 | likes: 0 | library_name: null | pipeline_tag: reinforcement-learning | createdAt: 2024-06-16T22:20:16Z
tags: [ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-CartPole-v1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 222.20 +/- 56.16 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
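Because the agent comes from the course's custom REINFORCE implementation, there is no standard auto-loading API; an illustrative evaluation loop follows, with a hypothetical untrained policy network standing in for the repo's checkpoint:

```python
# Illustrative sketch only: typical greedy evaluation of a REINFORCE policy
# on CartPole-v1. The policy below (4 observations -> 2 action probabilities)
# is a placeholder; in practice you would load the repo's trained weights here.
import gymnasium as gym
import torch
import torch.nn as nn

policy = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 2), nn.Softmax(dim=-1))

env = gym.make("CartPole-v1")
obs, _ = env.reset(seed=42)
total_reward, done = 0.0, False
while not done:
    with torch.no_grad():
        probs = policy(torch.as_tensor(obs, dtype=torch.float32))
    action = int(torch.argmax(probs))  # greedy action at evaluation time
    obs, reward, terminated, truncated, _ = env.step(action)
    total_reward += float(reward)
    done = terminated or truncated
print(f"episode return: {total_reward}")
```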
modelId: Dani3lRg/sentiment-analysis-distilbert-3 | author: Dani3lRg | last_modified: 2024-06-16T22:21:58Z | downloads: 0 | likes: 0 | library_name: null | pipeline_tag: null | createdAt: 2024-06-16T22:21:58Z
tags: [ "region:us" ]
Entry not found
modelId: MubarakB/b99Pw9770AfRtmJwV2i1 | author: MubarakB | last_modified: 2024-06-16T22:22:26Z | downloads: 0 | likes: 0 | library_name: peft | pipeline_tag: null | createdAt: 2024-06-16T22:22:22Z
tags: [ "peft", "safetensors", "arxiv:1910.09700", "base_model:NousResearch/Llama-2-7b-chat-hf", "region:us" ]
--- base_model: NousResearch/Llama-2-7b-chat-hf library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.11.1
modelId: Squarestone/Bok | author: Squarestone | last_modified: 2024-06-16T22:24:16Z | downloads: 0 | likes: 0 | library_name: null | pipeline_tag: null | createdAt: 2024-06-16T22:24:15Z
tags: [ "license:unknown", "region:us" ]
--- license: unknown ---
modelId: Pulk1t/unsloth_llama_for_VOC | author: Pulk1t | last_modified: 2024-06-23T23:32:34Z | downloads: 0 | likes: 0 | library_name: transformers | pipeline_tag: text-generation | createdAt: 2024-06-16T22:28:04Z
tags: [ "transformers", "safetensors", "unsloth", "text-generation", "en", "dataset:Pulk1t/test_data_for_VOC", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
--- datasets: - Pulk1t/test_data_for_VOC language: - en library_name: transformers pipeline_tag: text-generation tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** unsloth/llama-3-8b-bnb-4bit ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** https://huggingface.co/unsloth/llama-3-8b-bnb-4bit - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> This is a basic LLM implementation using a custom dataset for customer VOC. Further integrating in chatbot environment for ticket generation and inference. ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> Dataset used - https://huggingface.co/datasets/Pulk1t/test_data_for_VOC ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. 
--> https://huggingface.co/datasets/Pulk1t/test_data_for_VOC #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
modelId: Dani3lRg/sentiment-analysis-distilbert-4 | author: Dani3lRg | last_modified: 2024-06-16T22:28:22Z | downloads: 0 | likes: 0 | library_name: null | pipeline_tag: null | createdAt: 2024-06-16T22:28:22Z
tags: [ "region:us" ]
Entry not found
modelId: Dani3lRg/sentiment-analysis-distilbert-5 | author: Dani3lRg | last_modified: 2024-06-16T22:28:44Z | downloads: 0 | likes: 0 | library_name: null | pipeline_tag: null | createdAt: 2024-06-16T22:28:44Z
tags: [ "region:us" ]
Entry not found
modelId: Dani3lRg/sentiment-analysis-distilbert-6 | author: Dani3lRg | last_modified: 2024-06-16T22:32:10Z | downloads: 0 | likes: 0 | library_name: null | pipeline_tag: null | createdAt: 2024-06-16T22:32:10Z
tags: [ "region:us" ]
Entry not found
modelId: Dani3lRg/sentiment-analysis-distilbert-7 | author: Dani3lRg | last_modified: 2024-06-16T22:33:51Z | downloads: 0 | likes: 0 | library_name: null | pipeline_tag: null | createdAt: 2024-06-16T22:33:51Z
tags: [ "region:us" ]
Entry not found
modelId: PhoenixAscencio/HWtagger | author: PhoenixAscencio | last_modified: 2024-06-17T08:49:28Z | downloads: 0 | likes: 2 | library_name: keras | pipeline_tag: null | createdAt: 2024-06-16T22:34:17Z
tags: [ "keras", "onnx", "license:gpl-3.0", "region:us" ]
--- license: gpl-3.0 --- This is the collection of taggers used in this app: https://github.com/HaW-Tagger/HWtagger You can install it directly from the command line: make sure git-lfs is installed (https://git-lfs.com), then run `git lfs install` followed by `git-lfs clone https://huggingface.co/PhoenixAscencio/HWtagger`. Put the models folder here: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6400b340f4ff62c2616f73f9/ben0E-FeubL-qjoEHoYS5.png) This repository exists only for convenience; all models are available from their original sources: SmilingWolf's taggers: https://huggingface.co/SmilingWolf Metaformer (CAFormer) taggers: https://github.com/sail-sg/metaformer/tree/main DeepGHS (scorer, classifier, etc.): https://huggingface.co/deepghs
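An equivalent way to fetch the repo without git-lfs, sketched with huggingface_hub (an alternative the card does not mention):

```python
# Hedged alternative to the git-lfs clone described in the card above.
from huggingface_hub import snapshot_download

local_dir = snapshot_download("PhoenixAscencio/HWtagger")
print("models downloaded to:", local_dir)
```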
modelId: flamenyo99/pix_lightning | author: flamenyo99 | last_modified: 2024-06-23T22:57:07Z | downloads: 0 | likes: 0 | library_name: null | pipeline_tag: null | createdAt: 2024-06-16T22:34:27Z
tags: [ "region:us" ]
Entry not found
modelId: SampleTheory/distilbert-base-uncased-finetuned-squad | author: SampleTheory | last_modified: 2024-06-16T22:50:46Z | downloads: 0 | likes: 0 | library_name: transformers | pipeline_tag: question-answering | createdAt: 2024-06-16T22:35:29Z
tags: [ "transformers", "tensorboard", "safetensors", "distilbert", "question-answering", "generated_from_trainer", "base_model:distilbert-base-uncased", "license:apache-2.0", "endpoints_compatible", "region:us" ]
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer model-index: - name: distilbert-base-uncased-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-squad This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.9122 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 63 | 1.8433 | | No log | 2.0 | 126 | 1.8556 | | No log | 3.0 | 189 | 1.9122 | ### Framework versions - Transformers 4.41.2 - Pytorch 2.3.0+cu121 - Datasets 2.20.0 - Tokenizers 0.19.1
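A minimal sketch of extractive question answering with this checkpoint via the standard transformers pipeline (the question/context pair is illustrative):

```python
# Minimal sketch: extractive QA with the fine-tuned DistilBERT checkpoint.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="SampleTheory/distilbert-base-uncased-finetuned-squad",
)
result = qa(
    question="What library was used for training?",
    context="The model was fine-tuned with the Hugging Face Transformers Trainer.",
)
print(result["answer"], result["score"])  # best answer span and its confidence
```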
modelId: BIFOLD-BigEarthNetv2-0/BENv2-mobilevit_s-s1-v0.1.1 | author: BIFOLD-BigEarthNetv2-0 | last_modified: 2024-06-19T18:10:30Z | downloads: 0 | likes: 0 | library_name: transformers | pipeline_tag: null | createdAt: 2024-06-16T22:37:51Z
tags: [ "transformers", "safetensors", "model_hub_mixin", "pytorch_model_hub_mixin", "endpoints_compatible", "region:us" ]
--- tags: - model_hub_mixin - pytorch_model_hub_mixin --- This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration: - Library: [More Information Needed] - Docs: [More Information Needed]
modelId: breno1996/breniov | author: breno1996 | last_modified: 2024-06-16T22:40:36Z | downloads: 0 | likes: 0 | library_name: null | pipeline_tag: null | createdAt: 2024-06-16T22:40:10Z
tags: [ "license:openrail", "region:us" ]
--- license: openrail ---
modelId: EbubeJ/yuh | author: EbubeJ | last_modified: 2024-06-16T22:44:05Z | downloads: 0 | likes: 0 | library_name: null | pipeline_tag: null | createdAt: 2024-06-16T22:44:05Z
tags: [ "region:us" ]
Entry not found
modelId: jimmycarter/pixart-900m-2k | author: jimmycarter | last_modified: 2024-06-16T23:02:56Z | downloads: 0 | likes: 0 | library_name: diffusers | pipeline_tag: null | createdAt: 2024-06-16T22:46:01Z
tags: [ "diffusers", "safetensors", "region:us" ]
Entry not found
modelId: jimmycarter/pixart-900m-1024 | author: jimmycarter | last_modified: 2024-06-16T23:04:50Z | downloads: 0 | likes: 0 | library_name: diffusers | pipeline_tag: null | createdAt: 2024-06-16T22:46:31Z
tags: [ "diffusers", "safetensors", "region:us" ]
Entry not found
modelId: SiguienteGlobal/mexa-7b-0.1 | author: SiguienteGlobal | last_modified: 2024-06-16T23:32:55Z | downloads: 0 | likes: 0 | library_name: null | pipeline_tag: null | createdAt: 2024-06-16T22:48:32Z
tags: [ "es", "dataset:SiguienteGlobal/Open-Hermes-ES", "doi:10.57967/hf/2560", "license:apache-2.0", "region:us" ]
--- license: apache-2.0 language: - es datasets: - SiguienteGlobal/Open-Hermes-ES ---
modelId: paulok/mistral_7b_wahl-o-mat | author: paulok | last_modified: 2024-06-16T22:54:01Z | downloads: 0 | likes: 0 | library_name: transformers | pipeline_tag: null | createdAt: 2024-06-16T22:53:50Z
tags: [ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
modelId: BIFOLD-BigEarthNetv2-0/BENv2-mobilevit_s-all-v0.1.1 | author: BIFOLD-BigEarthNetv2-0 | last_modified: 2024-06-19T16:59:59Z | downloads: 0 | likes: 0 | library_name: transformers | pipeline_tag: null | createdAt: 2024-06-16T22:55:58Z
tags: [ "transformers", "safetensors", "model_hub_mixin", "pytorch_model_hub_mixin", "endpoints_compatible", "region:us" ]
--- tags: - model_hub_mixin - pytorch_model_hub_mixin --- This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration: - Library: [More Information Needed] - Docs: [More Information Needed]
modelId: Ramikan-BR/TiamaPY-LORA-v28 | author: Ramikan-BR | last_modified: 2024-06-16T23:00:02Z | downloads: 0 | likes: 0 | library_name: transformers | pipeline_tag: null | createdAt: 2024-06-16T22:59:18Z
tags: [ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/tinyllama-chat-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl base_model: unsloth/tinyllama-chat-bnb-4bit --- # Uploaded model - **Developed by:** Ramikan-BR - **License:** apache-2.0 - **Finetuned from model :** unsloth/tinyllama-chat-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
modelId: noid-one/newsanchor_01 | author: noid-one | last_modified: 2024-06-16T22:59:42Z | downloads: 0 | likes: 0 | library_name: transformers | pipeline_tag: null | createdAt: 2024-06-16T22:59:34Z
tags: [ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Rohith123abc/quiz
Rohith123abc
"2024-06-16T23:02:26Z"
0
0
null
[ "region:us" ]
null
"2024-06-16T23:02:26Z"
Entry not found
shuyuej/MedLLaMA3-2048-SLR
shuyuej
"2024-06-16T23:31:16Z"
0
0
null
[ "safetensors", "license:apache-2.0", "region:us" ]
null
"2024-06-16T23:03:34Z"
--- license: apache-2.0 ---
Xiaolihai/Mistral-7B-v0.3_MeDistill_28_Mistral-7B_ep10
Xiaolihai
"2024-06-16T23:06:19Z"
0
0
null
[ "region:us" ]
null
"2024-06-16T23:06:19Z"
Entry not found
raghavdixit99/finetuned-gemma7b-math-instructions
raghavdixit99
"2024-06-16T23:12:40Z"
0
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-06-16T23:10:24Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Holmeister/Mistral_AAID_new_mixed_train_final
Holmeister
"2024-06-17T03:21:26Z"
0
0
peft
[ "peft", "safetensors", "llama-factory", "lora", "generated_from_trainer", "base_model:mistralai/Mistral-7B-v0.3", "region:us" ]
null
"2024-06-16T23:11:28Z"
--- library_name: peft tags: - llama-factory - lora - generated_from_trainer base_model: mistralai/Mistral-7B-v0.3 model-index: - name: Mistral_AAID_new_mixed_train_final results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Mistral_AAID_new_mixed_train_final This model is a fine-tuned version of [mistralai/Mistral-7B-v0.3](https://huggingface.co/mistralai/Mistral-7B-v0.3) on the AAID_new_mixed dataset. It achieves the following results on the evaluation set: - Loss: 0.5190 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.8548 | 0.0109 | 10 | 0.5337 | | 0.4005 | 0.0219 | 20 | 0.5784 | | 0.3521 | 0.0328 | 30 | 0.5616 | | 0.3565 | 0.0438 | 40 | 0.5677 | | 0.3418 | 0.0547 | 50 | 0.5387 | | 0.3298 | 0.0656 | 60 | 0.5525 | | 0.3209 | 0.0766 | 70 | 0.5433 | | 0.3198 | 0.0875 | 80 | 0.5507 | | 0.3239 | 0.0984 | 90 | 0.5350 | | 0.2953 | 0.1094 | 100 | 0.5464 | | 0.303 | 0.1203 | 110 | 0.5190 | | 0.2986 | 0.1313 | 120 | 0.5514 | | 0.2995 | 0.1422 | 130 | 0.5639 | | 0.2917 | 0.1531 | 140 | 0.5352 | | 0.3001 | 0.1641 | 150 | 0.5431 | | 0.2872 | 0.1750 | 160 | 0.5582 | | 0.2902 | 0.1859 | 170 | 0.5445 | | 0.2819 | 0.1969 | 180 | 0.5385 | | 0.2909 | 0.2078 | 190 | 0.5399 | | 0.2864 | 0.2188 | 200 | 0.5452 | | 0.2656 | 0.2297 | 210 | 0.5379 | ### Framework versions - PEFT 0.11.1 - Transformers 4.41.1 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
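As a minimal sketch, assuming the repo above contains a standard PEFT LoRA adapter config, the fine-tuned weights could be loaded on the Mistral base roughly like this (generation settings are illustrative, not from the card):

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Load the adapter repo; PEFT resolves and loads the base model automatically
model = AutoPeftModelForCausalLM.from_pretrained(
    "Holmeister/Mistral_AAID_new_mixed_train_final",  # adapter repo from this card
    device_map="auto",
)
# Tokenizer comes from the base model named in the card
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.3")

inputs = tokenizer("Hello", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```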
lemuser1/sentiment-bert-text-v2
lemuser1
"2024-06-16T23:14:42Z"
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2024-06-16T23:13:00Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
shuyuej/MedMistral-1024-SLR
shuyuej
"2024-06-16T23:33:46Z"
0
0
null
[ "safetensors", "license:apache-2.0", "region:us" ]
null
"2024-06-16T23:14:21Z"
--- license: apache-2.0 ---
Dani3lRg/sentiment-analysis-distilbert-8
Dani3lRg
"2024-06-16T23:15:34Z"
0
0
null
[ "region:us" ]
null
"2024-06-16T23:15:34Z"
Entry not found
Meforgers/Turkish-LLama3
Meforgers
"2024-06-17T08:10:04Z"
0
0
peft
[ "peft", "safetensors", "code", "biology", "chemistry", "text-generation-inference", "en", "tr", "base_model:nvidia/Llama3-ChatQA-1.5-8B", "license:apache-2.0", "region:us" ]
null
"2024-06-16T23:16:58Z"
--- library_name: peft base_model: nvidia/Llama3-ChatQA-1.5-8B license: apache-2.0 language: - en - tr tags: - code - biology - chemistry - text-generation-inference --- ### Model Description - **Developed by:** Meforgers - **Language(s) (NLP):** English, Turkish - **License:** Apache License 2.0 - **Finetuned from model:** nvidia/Llama3-ChatQA-1.5-8B
Dani3lRg/sentiment-analysis-distilbert-9
Dani3lRg
"2024-06-16T23:18:52Z"
0
0
null
[ "region:us" ]
null
"2024-06-16T23:18:52Z"
Entry not found
BIFOLD-BigEarthNetv2-0/BENv2-mobilevit_s-s2-v0.1.1
BIFOLD-BigEarthNetv2-0
"2024-06-19T18:48:32Z"
0
0
transformers
[ "transformers", "safetensors", "model_hub_mixin", "pytorch_model_hub_mixin", "endpoints_compatible", "region:us" ]
null
"2024-06-16T23:19:44Z"
--- tags: - model_hub_mixin - pytorch_model_hub_mixin --- This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration: - Library: [More Information Needed] - Docs: [More Information Needed]
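As a hedged sketch of what the PytorchModelHubMixin integration implies: the mixin adds `from_pretrained` and `push_to_hub` to a plain `nn.Module`. The architecture behind this checkpoint is not documented in the card, so the class below is purely illustrative, as is the repo id.

```python
import torch
from huggingface_hub import PyTorchModelHubMixin

# Illustrative module; the real model class for this checkpoint is unknown
class MyModel(torch.nn.Module, PyTorchModelHubMixin):
    def __init__(self, hidden: int = 16):
        super().__init__()
        self.linear = torch.nn.Linear(hidden, 1)

    def forward(self, x):
        return self.linear(x)

model = MyModel(hidden=16)
model.push_to_hub("your-username/your-repo")           # uploads weights + config
reloaded = MyModel.from_pretrained("your-username/your-repo")
```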
Dani3lRg/sentiment-analysis-distilbert-10
Dani3lRg
"2024-06-16T23:20:44Z"
0
0
null
[ "region:us" ]
null
"2024-06-16T23:20:44Z"
Entry not found
Dani3lRg/sentiment-analysis-distilbert-11
Dani3lRg
"2024-06-16T23:22:09Z"
0
0
null
[ "region:us" ]
null
"2024-06-16T23:22:09Z"
Entry not found
Dani3lRg/sentiment-analysis-distilbert-12
Dani3lRg
"2024-06-16T23:23:32Z"
0
0
null
[ "region:us" ]
null
"2024-06-16T23:23:32Z"
Entry not found
siegfriedgm/amazonbaby-5000
siegfriedgm
"2024-06-18T00:29:23Z"
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2024-06-16T23:24:03Z"
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: amazonbaby-5000 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # amazonbaby-5000 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2480 - Accuracy: 0.9053 - F1: 0.9378 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.41.2 - Pytorch 2.3.0+cu121 - Datasets 2.20.0 - Tokenizers 0.19.1
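A minimal inference sketch for this fine-tuned DistilBERT classifier; note that the label names returned depend on the checkpoint's config and are not documented in the card, and the example sentence is invented.

```python
from transformers import pipeline

# Text-classification pipeline pointed at the repo from this card
classifier = pipeline("text-classification", model="siegfriedgm/amazonbaby-5000")
print(classifier("This stroller is sturdy and easy to fold."))
```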
Vladislav1986/L1
Vladislav1986
"2024-06-16T23:25:54Z"
0
0
null
[ "region:us" ]
null
"2024-06-16T23:25:54Z"
Entry not found
bmehrba/Llama-2-7b-chat-hf-fine-tuned-adapters_Llama2_7b_rephrasetesting_train_100epochs
bmehrba
"2024-06-16T23:27:50Z"
0
0
null
[ "region:us" ]
null
"2024-06-16T23:27:50Z"
Entry not found
zhangduo4610/zephyr-7b-sft-full
zhangduo4610
"2024-06-16T23:30:10Z"
0
0
null
[ "region:us" ]
null
"2024-06-16T23:30:10Z"
Entry not found
EthanSun/llama3-8B-AKT-IER
EthanSun
"2024-06-18T00:45:19Z"
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2024-06-16T23:30:11Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Aitimeok/Lacey
Aitimeok
"2024-06-16T23:31:30Z"
0
0
null
[ "region:us" ]
null
"2024-06-16T23:31:30Z"
Entry not found
Xiaolihai/BioMistral-7B_MeDistill_28_Mistral-7B_ep10
Xiaolihai
"2024-06-16T23:33:41Z"
0
0
null
[ "region:us" ]
null
"2024-06-16T23:33:41Z"
Entry not found
lolbdizveejv/Juhigiv
lolbdizveejv
"2024-06-16T23:34:09Z"
0
0
null
[ "region:us" ]
null
"2024-06-16T23:34:09Z"
Entry not found
winstxnhdw/llama3-tokeniser
winstxnhdw
"2024-06-16T23:43:19Z"
0
0
null
[ "region:us" ]
null
"2024-06-16T23:39:54Z"
Entry not found
nguyenkhanh87/CHiLPhi3m4-sc
nguyenkhanh87
"2024-06-16T23:44:29Z"
0
0
peft
[ "peft", "safetensors", "phi3", "trl", "sft", "generated_from_trainer", "custom_code", "base_model:microsoft/Phi-3-mini-4k-instruct", "license:mit", "4-bit", "bitsandbytes", "region:us" ]
null
"2024-06-16T23:41:45Z"
--- license: mit library_name: peft tags: - trl - sft - generated_from_trainer base_model: microsoft/Phi-3-mini-4k-instruct model-index: - name: CHiLPhi3m4-sc results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # CHiLPhi3m4-sc This model is a fine-tuned version of [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 1 - eval_batch_size: 1 - seed: 342 - gradient_accumulation_steps: 4 - total_train_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results ### Framework versions - PEFT 0.11.1 - Transformers 4.41.2 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
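A hedged loading sketch: the tags suggest a 4-bit bitsandbytes base with a PEFT adapter on top, so one plausible way to reload it is shown below. The quantization settings are assumptions, not taken from the card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

# 4-bit base, matching the "4-bit" and "bitsandbytes" tags on this repo
bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
base = AutoModelForCausalLM.from_pretrained(
    "microsoft/Phi-3-mini-4k-instruct",
    quantization_config=bnb,
    trust_remote_code=True,  # the card lists custom_code
    device_map="auto",
)
# Attach the adapter from this card's repo
model = PeftModel.from_pretrained(base, "nguyenkhanh87/CHiLPhi3m4-sc")
tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3-mini-4k-instruct")
```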
Shadow-AI/Playboi_Carti_MUSIC_Baby_Voice_750_Epochs_RVC_V2
Shadow-AI
"2024-06-16T23:45:45Z"
0
0
null
[ "license:openrail", "region:us" ]
null
"2024-06-16T23:43:01Z"
--- license: openrail ---
invisietch/Rain-v0.3-8B
invisietch
"2024-06-16T23:43:20Z"
0
0
null
[ "region:us" ]
null
"2024-06-16T23:43:20Z"
Entry not found
BIFOLD-BigEarthNetv2-0/BENv2-convnextv2_base-s1-v0.1.1
BIFOLD-BigEarthNetv2-0
"2024-06-19T21:15:08Z"
0
0
transformers
[ "transformers", "safetensors", "model_hub_mixin", "pytorch_model_hub_mixin", "endpoints_compatible", "region:us" ]
null
"2024-06-16T23:46:48Z"
--- tags: - model_hub_mixin - pytorch_model_hub_mixin --- This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration: - Library: [More Information Needed] - Docs: [More Information Needed]
BIFOLD-BigEarthNetv2-0/BENv2-convmixer_768_32-all-v0.1.1
BIFOLD-BigEarthNetv2-0
"2024-06-19T22:41:18Z"
0
0
transformers
[ "transformers", "safetensors", "model_hub_mixin", "pytorch_model_hub_mixin", "endpoints_compatible", "region:us" ]
null
"2024-06-16T23:47:20Z"
--- tags: - model_hub_mixin - pytorch_model_hub_mixin --- This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration: - Library: [More Information Needed] - Docs: [More Information Needed]
muesli1253/uk_to_de
muesli1253
"2024-06-16T23:57:59Z"
0
0
null
[ "region:us" ]
null
"2024-06-16T23:57:59Z"
Entry not found
dsatya6/sentiment-bert-text-v1
dsatya6
"2024-06-16T23:58:37Z"
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2024-06-16T23:58:35Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Musix/Flor
Musix
"2024-06-17T00:01:34Z"
0
0
null
[ "region:us" ]
null
"2024-06-17T00:00:50Z"
Entry not found
pangjh3/allm-juhao-txllarge-fullatt
pangjh3
"2024-06-17T00:04:13Z"
0
0
null
[ "region:us" ]
null
"2024-06-17T00:04:13Z"
Entry not found
BIGHEIGHTS/LAYLA
BIGHEIGHTS
"2024-06-17T00:09:37Z"
0
0
null
[ "license:openrail", "region:us" ]
null
"2024-06-17T00:07:18Z"
--- license: openrail ---
TheMindExpansionNetwork/1024x1024-m1nd3xpand3r-sd3-lora
TheMindExpansionNetwork
"2024-06-17T00:10:56Z"
0
0
null
[ "region:us" ]
null
"2024-06-17T00:10:56Z"
Entry not found
criperm19/Henrique
criperm19
"2024-06-23T22:31:12Z"
0
0
null
[ "region:us" ]
null
"2024-06-17T00:13:10Z"
Entry not found
Z3K3/musicgen-large
Z3K3
"2024-06-17T01:34:57Z"
0
0
peft
[ "peft", "safetensors", "musicgen_melody", "text-to-audio", "ylacombe/tiny-punk", "generated_from_trainer", "base_model:facebook/musicgen-melody", "license:cc-by-nc-4.0", "region:us" ]
text-to-audio
"2024-06-17T00:13:23Z"
--- base_model: facebook/musicgen-melody library_name: peft license: cc-by-nc-4.0 tags: - text-to-audio - ylacombe/tiny-punk - generated_from_trainer model-index: - name: musicgen-large results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # musicgen-large This model is a fine-tuned version of [facebook/musicgen-melody](https://huggingface.co/facebook/musicgen-melody) on the YLACOMBE/TINY-PUNK - DEFAULT dataset. It achieves the following results on the evaluation set: - Loss: 4.5011 - Clap: -0.0131 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 4 - eval_batch_size: 1 - seed: 123 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.99) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 8.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Clap | |:-------------:|:------:|:----:|:---------------:|:-------:| | 9.4039 | 0.8571 | 3 | 4.4428 | -0.0070 | | 8.8992 | 2.0 | 7 | 4.4435 | 0.0117 | | 8.5373 | 2.8571 | 10 | 4.4463 | -0.0288 | | 8.0105 | 4.0 | 14 | 4.4552 | -0.0111 | | 7.8068 | 4.8571 | 17 | 4.4682 | -0.0194 | | 7.6881 | 6.0 | 21 | 4.4913 | 0.0071 | | 7.6672 | 6.8571 | 24 | 4.5011 | -0.0222 | ### Framework versions - PEFT 0.11.1 - Transformers 4.42.0.dev0 - Pytorch 2.1.0+cu118 - Datasets 2.20.0 - Tokenizers 0.19.1
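A sketch under assumptions: this repo appears to hold a PEFT adapter for musicgen-melody, so generation might look like the following; the prompt and token budget are illustrative.

```python
from peft import PeftModel
from transformers import AutoProcessor, MusicgenMelodyForConditionalGeneration

# Load the base model and processor named in the card
processor = AutoProcessor.from_pretrained("facebook/musicgen-melody")
base = MusicgenMelodyForConditionalGeneration.from_pretrained("facebook/musicgen-melody")
# Attach the adapter from this card's repo
model = PeftModel.from_pretrained(base, "Z3K3/musicgen-large")

inputs = processor(text=["punk rock riff"], padding=True, return_tensors="pt")
audio_values = model.generate(**inputs, max_new_tokens=256)
```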
ThingUroboros/joe-model-tokenizer-fixed
ThingUroboros
"2024-06-17T00:16:47Z"
0
0
transformers
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2024-06-17T00:16:46Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
REILX/Qwen2-7B-Code-Instruct
REILX
"2024-06-21T12:33:22Z"
0
0
null
[ "safetensors", "text-generation-inference", "code", "chat", "qwen2", "sft", "text-generation", "conversational", "en", "zh", "dataset:Replete-AI/code_bagel", "license:apache-2.0", "region:us" ]
text-generation
"2024-06-17T00:19:49Z"
--- license: apache-2.0 datasets: - Replete-AI/code_bagel language: - en - zh pipeline_tag: text-generation tags: - text-generation-inference - code - chat - qwen2 - sft ---

### Python demo

User: Write me a Python script that creates an ES index, with example code.<br/>
Assistant: Here is a simple Python example for creating an ES (Elasticsearch) index:

```python
from elasticsearch import Elasticsearch

# Create the Elasticsearch client
es = Elasticsearch()

# Create an index
def create_index(index_name):
    # Define the index mapping
    mapping = {
        "mappings": {
            "properties": {
                "name": {"type": "text"},
                "age": {"type": "integer"}
            }
        }
    }
    # Create the index
    res = es.indices.create(index=index_name, body=mapping)
    # Print the result
    print("Index creation result:")
    print(res)

# Delete an index
def delete_index(index_name):
    # Delete the index
    res = es.indices.delete(index=index_name)
    # Print the result
    print("Index deletion result:")
    print(res)

# Update a document
def update_index(index_name, doc_type, id, doc):
    # Update the document
    res = es.update(index=index_name, doc_type=doc_type, id=id, body=doc)
    # Print the result
    print("Update result:")
    print(res)

# Insert a document
def insert_index(index_name, doc_type, id, doc):
    # Index the document
    res = es.index(index=index_name, doc_type=doc_type, id=id, body=doc)
    # Print the result
    print("Insert result:")
    print(res)

# Query a document
def query_index(index_name, doc_type, id):
    # Fetch the document
    res = es.get(index=index_name, doc_type=doc_type, id=id)
    # Print the result
    print("Query result:")
    print(res)

# Create the index
create_index("my_index")
# Insert a document
insert_index("my_index", "my_type", "1", {"name": "John", "age": 30})
# Query the document
query_index("my_index", "my_type", "1")
# Update the document
update_index("my_index", "my_type", "1", {"doc": {"name": "John", "age": 31}})
# Delete the index
delete_index("my_index")
```

Note: the code above is only a basic example; in practice it needs to be modified and adapted to your specific requirements. For example, you may need to use

User: Write a Python script that scans a fixed directory for a specified file. I will supply the file name and the directory; if the file has not been generated by 8:00 each day, send a notification through the SMS interface SM001 via XML.<br/>
Assistant: Here is a Python example that scans a fixed directory for a specified file and, if the file has not been generated by 8:00 each day, sends a notification through the SMS interface SM001 via XML.

```python
import os
import datetime
import xml.etree.ElementTree as ET
import requests

# Input file name and directory
file_name = "example.txt"
directory = "/path/to/directory/"

# Check whether the file exists
def check_file_exists(file_name, directory):
    return os.path.isfile(os.path.join(directory, file_name))

# Check whether it is 8 o'clock
def is_it_eight():
    return datetime.datetime.now().hour == 8

# Send an SMS
def send_sms(message):
    # Configure the API endpoint and key of the SMS interface SM001 here
    url = "http://sms.com/api/send"
    data = {"message": message}
    response = requests.post(url, data=data)
    return response

# Main program
def main():
    # Check whether the file exists
    if check_file_exists(file_name, directory):
        # The file exists; do not send an SMS
        print("File exists, not sending an SMS")
    else:
        if is_it_eight():
            # It is 8 o'clock; send an SMS
            message = f"File {file_name} does not exist in directory {directory}, please check"
            send_sms(message)
        else:
            # It is not 8 o'clock; do not send an SMS
            print("Not 8 o'clock, not sending an SMS")

if __name__ == "__main__":
    main()
```

Note: the code above is only a basic example; in practice it needs to be modified and adapted to your specific requirements. For example, you may need to use a real API endpoint, a real SMS interface key, and so on.

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3.0
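A minimal chat-style inference sketch for this fine-tune, assuming the repo ships the usual Qwen2 chat template; the prompt and sampling settings are illustrative only.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "REILX/Qwen2-7B-Code-Instruct"  # repo from this card
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Write a Python function that creates an Elasticsearch index."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
# Decode only the newly generated tokens
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```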
Dani3lRg/sentiment-analysis-distilbert-13
Dani3lRg
"2024-06-17T00:21:00Z"
0
0
null
[ "region:us" ]
null
"2024-06-17T00:21:00Z"
Entry not found
rizla/meow_0.5b
rizla
"2024-06-17T00:25:08Z"
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "qwen2", "trl", "en", "base_model:unsloth/Qwen2-0.5B", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-17T00:24:57Z"
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - qwen2 - trl base_model: unsloth/Qwen2-0.5B --- # Uploaded model - **Developed by:** rizla - **License:** apache-2.0 - **Finetuned from model :** unsloth/Qwen2-0.5B This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
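A hedged sketch of reloading an Unsloth fine-tune like this one; the sequence length and precision flags below are common defaults, not values documented in the card.

```python
from unsloth import FastLanguageModel

# Load the fine-tuned checkpoint from this card's repo
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="rizla/meow_0.5b",
    max_seq_length=2048,   # assumed; adjust for this checkpoint
    load_in_4bit=False,
)
FastLanguageModel.for_inference(model)  # switch to Unsloth's faster inference mode
```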
zaddyzaddy/outputs
zaddyzaddy
"2024-06-18T02:31:02Z"
0
0
null
[ "safetensors", "region:us" ]
null
"2024-06-17T00:27:39Z"
Entry not found