modelId (string, 5-122 chars) | author (string, 2-42 chars) | last_modified (unknown) | downloads (int64, 0-738M) | likes (int64, 0-11k) | library_name (245 classes) | tags (sequence, 1-4.05k items) | pipeline_tag (48 classes) | createdAt (unknown) | card (string, 1-901k chars)
---|---|---|---|---|---|---|---|---|---|
ninyx/Mistral-7B-Instruct-v0.3-advisegpt-v0.3 | ninyx | "2024-06-13T03:44:23Z" | 0 | 0 | null | [
"safetensors",
"region:us"
] | null | "2024-06-12T09:33:16Z" | Entry not found |
tranthaihoa/bm25_sbert_gemma_k3_evidence | tranthaihoa | "2024-06-12T09:36:51Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma",
"trl",
"en",
"base_model:unsloth/gemma-7b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-12T09:36:20Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- gemma
- trl
base_model: unsloth/gemma-7b-bnb-4bit
---
# Uploaded model
- **Developed by:** tranthaihoa
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-7b-bnb-4bit
This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
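For readers who want to try the checkpoint, a minimal inference sketch follows. The Alpaca-style prompt template and the Unsloth loading calls are assumptions on my part (the card does not document the training prompt format), so verify them against your own setup before relying on the output.

```python
# Hypothetical inference sketch for this Unsloth fine-tune. The Alpaca-style
# template is an assumption -- the card does not state which prompt format
# was used during training.

ALPACA_TEMPLATE = (
    "### Instruction:\n{instruction}\n\n"
    "### Input:\n{input}\n\n"
    "### Response:\n"
)

def format_prompt(instruction: str, input_text: str = "") -> str:
    """Render an Alpaca-style prompt for the fine-tuned model."""
    return ALPACA_TEMPLATE.format(instruction=instruction, input=input_text)

def generate(instruction: str, input_text: str = "") -> str:
    """Load the adapter with Unsloth and generate a response (downloads weights)."""
    from unsloth import FastLanguageModel  # pip install unsloth

    model, tokenizer = FastLanguageModel.from_pretrained(
        "tranthaihoa/bm25_sbert_gemma_k3_evidence",
        max_seq_length=2048,
        load_in_4bit=True,  # matches the gemma-7b-bnb-4bit base
    )
    FastLanguageModel.for_inference(model)  # switch to fast generation mode
    inputs = tokenizer(format_prompt(instruction, input_text),
                       return_tensors="pt").to(model.device)
    return tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0])
```

Only `format_prompt` is exercised here; `generate` downloads several gigabytes of weights and needs a GPU.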
|
Boostaro155/Nexalyn | Boostaro155 | "2024-06-12T09:41:38Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-12T09:37:27Z" | # Nexalyn Reviews, Dosage and Effectiveness - Nexalyn Denmark Experiences, Ingredients, Price, Where to Buy
Nexalyn Reviews, Dosage and Effectiveness: Burning fat in stubborn areas is a challenge for many people on their weight-loss journey. This stubborn body fat can be frustrating and hard to target with diet and exercise alone. The Nexaslim supplement, however, may offer the solution you have been looking for.
## **[Click here to buy now from Nexalyn's official website](https://slim-gummies-deutschland.de/nexalyn-dk)**
## Benefits you get from using Nexalyn Me supplement pills - Expected results
### Increased libido
An increased libido is the result of Nexalyn's enhancement of arousal and desire. Users may notice a marked rise in their desire for intimacy, leading to greater eagerness and enthusiasm for intercourse with their partners.
### Improved erectile function
With Nexalyn, users can expect erections that are stronger, fuller, and longer lasting. In the bedroom, this improvement in erectile function leads to more satisfying and enjoyable romantic encounters, which boosts confidence and self-esteem.
### Improved virility
Users of Nexalyn feel more powerful and masculine, as the product encourages greater virility. By promoting hormonal balance and improving romantic performance, Nexalyn helps men project confidence and strength during intimate interactions.
### More intense orgasms
Nexalyn can give users more potent and intense orgasms. The supplement works to heighten sensitivity and pleasure during intercourse, resulting in better experiences and greater climax satisfaction.
### Increased energy levels
Nexalyn gives customers more energy so they can stay up late. Improved stamina and endurance let users engage in longer romantic activity without fatigue or exhaustion, making for more enjoyable and rewarding encounters.
### Improved romantic confidence
Those who take Nexalyn may feel more secure in their capacity for intimacy because of the product's enhancing effect on the physical relationship. A greater sense of competence and empowerment in the bedroom can deepen closeness and bonding with their partners.
### Improved relationship satisfaction
The overall enjoyment of a relationship can benefit from Nexalyn's ability to improve romantic health and function. By encouraging greater intimacy and connection between partners, Nexalyn helps sustain relationships and increase emotional closeness.
### Better overall well-being
Because it encourages a healthy and active romantic life, taking Nexalyn can improve overall well-being. Improved intimate health, better vitality, and greater happiness add up to a happier, more fulfilled life, with effects that reach beyond the bedroom.
## **[Click here to buy now from Nexalyn's official website](https://slim-gummies-deutschland.de/nexalyn-dk)** |
mollysama/rwkv-mobile-models | mollysama | "2024-07-02T03:23:12Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2024-06-12T09:37:48Z" | ---
license: apache-2.0
---
|
tranthaihoa/bm25_sbert_gemma_k2_evidence | tranthaihoa | "2024-06-12T09:39:45Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma",
"trl",
"en",
"base_model:unsloth/gemma-7b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-12T09:39:21Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- gemma
- trl
base_model: unsloth/gemma-7b-bnb-4bit
---
# Uploaded model
- **Developed by:** tranthaihoa
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-7b-bnb-4bit
This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Hunter1214/ppo-Huggy | Hunter1214 | "2024-06-12T09:39:57Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-12T09:39:57Z" | Entry not found |
Anmous/woman-sdxl-lora | Anmous | "2024-06-12T09:40:41Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-12T09:40:41Z" | Entry not found |
1024m/WASSA2024-3B-LLAMA3-70B-Ints-t-Main | 1024m | "2024-06-12T09:41:51Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-70b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-12T09:41:35Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-70b-bnb-4bit
---
# Uploaded model
- **Developed by:** 1024m
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-70b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Maxdarkrose/PomPom | Maxdarkrose | "2024-06-12T09:43:30Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2024-06-12T09:43:30Z" | ---
license: apache-2.0
---
|
pw907/testing-baseline-64 | pw907 | "2024-06-12T22:25:25Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"feature-extraction",
"endpoints_compatible",
"text-embeddings-inference",
"region:us"
] | feature-extraction | "2024-06-12T09:46:36Z" | Entry not found |
EnergyandRecovery/EnergyandRecovery | EnergyandRecovery | "2024-06-12T09:48:43Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2024-06-12T09:46:42Z" | ---
license: apache-2.0
---
What is Energy and Recovery?
Energy and Recovery pills are a specialized men's health capsule designed to support and improve sexual health. Made with a blend of powerful natural ingredients, Energy and Recovery capsules aim to improve various aspects of male sexual performance, including libido, stamina, and overall reproductive health. This supplement is ideal for men looking for a natural solution to boost their sexual well-being and confidence.
Official website: <a href="https://www.nutritionsee.com/energyrecban">www.EnergyandRecovery.com</a>
<p><a href="https://www.nutritionsee.com/energyrecban"> <img src="https://www.nutritionsee.com/wp-content/uploads/2024/06/Energy-and-Recovery-Bangladesh.png" alt="enter image description here"> </a></p>
<a href="https://www.nutritionsee.com/energyrecban">Buy now!! Click the link below for more information and get 50% off now... Hurry up
</a>
Official website: <a href="https://www.nutritionsee.com/energyrecban">www.EnergyandRecovery.com</a> |
acl-srw-2024/phi3-14b-unsloth-sft-quip-2bit-pt | acl-srw-2024 | "2024-06-12T10:06:01Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-12T09:47:42Z" | Entry not found |
DanielDobro/super-cool-model | DanielDobro | "2024-06-12T09:47:54Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-12T09:47:54Z" | Entry not found |
MarPla/HealthPrincipalMainPegasus | MarPla | "2024-06-12T09:54:14Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"pegasus",
"text2text-generation",
"generated_from_trainer",
"base_model:google/pegasus-large",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2024-06-12T09:53:08Z" | ---
base_model: google/pegasus-large
tags:
- generated_from_trainer
metrics:
- rouge
- bleu
model-index:
- name: HealthPrincipalMainPegasus
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# HealthPrincipalMainPegasus
This model is a fine-tuned version of [google/pegasus-large](https://huggingface.co/google/pegasus-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 5.0343
- Rouge1: 51.1056
- Rouge2: 17.2499
- Rougel: 33.8193
- Rougelsum: 47.8453
- Bertscore Precision: 80.2471
- Bertscore Recall: 82.3517
- Bertscore F1: 81.2824
- Bleu: 0.1256
- Gen Len: 233.9958
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
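The `total_train_batch_size` above is simply the per-device batch size multiplied by the gradient-accumulation steps (single-device training assumed); a quick check:

```python
# Effective (total) batch size = per-device batch size * accumulation steps.
train_batch_size = 1
gradient_accumulation_steps = 16

total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # 16, matching the value reported above
```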
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Bertscore Precision | Bertscore Recall | Bertscore F1 | Bleu | Gen Len |
|:-------------:|:------:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------------------:|:----------------:|:------------:|:------:|:--------:|
| 6.5043 | 0.0835 | 100 | 6.1043 | 39.8446 | 11.121 | 25.4982 | 36.4742 | 76.5079 | 80.1477 | 78.2789 | 0.0801 | 233.9958 |
| 5.9911 | 0.1671 | 200 | 5.7625 | 44.9139 | 13.8953 | 29.2395 | 41.9312 | 78.5034 | 81.0686 | 79.7606 | 0.0984 | 233.9958 |
| 5.8802 | 0.2506 | 300 | 5.5925 | 45.7626 | 14.8524 | 30.2239 | 42.6984 | 78.7715 | 81.3496 | 80.0356 | 0.1063 | 233.9958 |
| 5.708 | 0.3342 | 400 | 5.4492 | 47.5481 | 15.4828 | 31.1939 | 44.4724 | 79.2119 | 81.535 | 80.3531 | 0.1099 | 233.9958 |
| 5.4908 | 0.4177 | 500 | 5.3144 | 49.3891 | 16.3343 | 32.4471 | 46.2974 | 79.6037 | 81.8018 | 80.6843 | 0.1159 | 233.9958 |
| 5.5082 | 0.5013 | 600 | 5.2235 | 49.2315 | 16.3591 | 32.6255 | 46.1221 | 79.5967 | 81.9095 | 80.733 | 0.1184 | 233.9958 |
| 5.4192 | 0.5848 | 700 | 5.1577 | 50.8099 | 16.929 | 33.2596 | 47.5073 | 79.9416 | 82.1638 | 81.0339 | 0.1226 | 233.9958 |
| 5.4327 | 0.6684 | 800 | 5.1134 | 51.0419 | 17.0275 | 33.4839 | 47.8258 | 80.0834 | 82.1836 | 81.1165 | 0.1228 | 233.9958 |
| 5.3311 | 0.7519 | 900 | 5.0760 | 50.6545 | 17.1249 | 33.5043 | 47.4752 | 80.0946 | 82.2579 | 81.1584 | 0.1242 | 233.9958 |
| 5.3244 | 0.8355 | 1000 | 5.0510 | 51.2619 | 17.2114 | 33.7881 | 47.9991 | 80.254 | 82.3319 | 81.2763 | 0.1247 | 233.9958 |
| 5.2486 | 0.9190 | 1100 | 5.0343 | 51.1056 | 17.2499 | 33.8193 | 47.8453 | 80.2471 | 82.3517 | 81.2824 | 0.1256 | 233.9958 |
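As a sanity check on the table above, the BERTScore F1 in each row is the harmonic mean of the BERTScore precision and recall columns; the small discrepancy comes from the values being rounded before reporting. For the final row:

```python
# Verify that the final row's BERTScore F1 is the harmonic mean of
# precision and recall (values as reported, already scaled to 0-100).
precision = 80.2471
recall = 82.3517

f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 2))  # 81.29, vs. the reported 81.2824 (rounding noise)
```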
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.1+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1
|
Hunter1214/rl-Huggy | Hunter1214 | "2024-06-12T09:53:20Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-12T09:53:20Z" | Entry not found |
Likalto4/from_all_to_all-bs_32 | Likalto4 | "2024-06-12T09:54:25Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-12T09:54:25Z" | Entry not found |
zFFFFF/igor_new | zFFFFF | "2024-06-12T10:49:34Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-12T09:54:48Z" | Entry not found |
mfurkanatac/whisper-small-hi | mfurkanatac | "2024-06-13T08:38:43Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice_17_0",
"base_model:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-06-12T09:54:50Z" | ---
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- common_voice_17_0
model-index:
- name: whisper-small-hi
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-hi
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the common_voice_17_0 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5
- training_steps: 5
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.41.2
- Pytorch 2.0.0
- Datasets 2.19.2
- Tokenizers 0.19.1
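A minimal transcription sketch for this checkpoint, using the standard `transformers` ASR pipeline, is shown below. The repo id comes from this row; the chunk length and the timestamp helper are illustrative assumptions, and with only 5 training steps the checkpoint's quality is untested here.

```python
# Hypothetical usage sketch: transcribe an audio file with this checkpoint
# via the standard transformers ASR pipeline.

def seconds_to_clock(seconds: float) -> str:
    """Format a chunk timestamp (e.g. 75.5 -> '01:15.500') for display."""
    minutes, rest = divmod(seconds, 60.0)
    return f"{int(minutes):02d}:{rest:06.3f}"

def transcribe(audio_path: str) -> str:
    """Download the model and transcribe one file (GPU recommended)."""
    from transformers import pipeline  # pip install transformers

    asr = pipeline(
        "automatic-speech-recognition",
        model="mfurkanatac/whisper-small-hi",
        chunk_length_s=30,  # Whisper's native 30-second window
    )
    return asr(audio_path)["text"]
```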
|
DBangshu/GPT2_3_4 | DBangshu | "2024-06-12T09:55:25Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-06-12T09:55:01Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
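The card leaves the quick-start empty; since this row's tags mark the repo as a `gpt2` text-generation checkpoint, a generic sketch along the following lines should apply. The sampling settings and the post-processing helper are placeholders of mine, not documented values.

```python
# Hypothetical quick-start for this gpt2-architecture checkpoint. The
# generation parameters are illustrative defaults, not documented values.

def truncate_at_sentence(text: str) -> str:
    """Trim a sampled continuation back to its last complete sentence."""
    cut = max(text.rfind("."), text.rfind("!"), text.rfind("?"))
    return text[: cut + 1] if cut != -1 else text

def generate(prompt: str) -> str:
    """Download the checkpoint and sample a continuation."""
    from transformers import pipeline  # pip install transformers

    generator = pipeline("text-generation", model="DBangshu/GPT2_3_4")
    out = generator(prompt, max_new_tokens=50, do_sample=True, top_p=0.95)
    return truncate_at_sentence(out[0]["generated_text"])
```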
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
SachaEL/test_0 | SachaEL | "2024-06-12T09:55:35Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-12T09:55:35Z" | Entry not found |
abc101011/house_price_prediction | abc101011 | "2024-06-12T09:56:49Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-12T09:56:49Z" | Entry not found |
Lakoc/ED_small_cv_en_continue | Lakoc | "2024-06-12T09:57:02Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-12T09:57:02Z" | Entry not found |
genglezhaoliang/zlllm | genglezhaoliang | "2024-06-12T10:02:04Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2024-06-12T10:02:04Z" | ---
license: apache-2.0
---
|
abdoy/code-llama-7b-text-to-sql | abdoy | "2024-06-12T10:04:33Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-12T10:04:33Z" | Entry not found |
Brucezelda/DataVizTool | Brucezelda | "2024-06-12T10:05:26Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-12T10:05:26Z" | Entry not found |
futurelarning/Fenrir | futurelarning | "2024-06-12T10:08:35Z" | 0 | 0 | null | [
"en",
"dataset:HuggingFaceFW/fineweb",
"license:openrail",
"region:us"
] | null | "2024-06-12T10:08:10Z" | ---
license: openrail
datasets:
- HuggingFaceFW/fineweb
language:
- en
metrics:
- bleurt
--- |
aengusl/R2D2_6-12-eps1pt5-lr2e-5-checkpoint-200 | aengusl | "2024-06-12T10:10:49Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-06-12T10:10:22Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Rajika12/Thar | Rajika12 | "2024-06-12T10:13:39Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-12T10:12:11Z" | Entry not found |
toanvulcanlabs/ai_upscaler | toanvulcanlabs | "2024-06-12T12:23:42Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-12T10:12:12Z" | Entry not found |
Musharraf11/Pawsome-AI | Musharraf11 | "2024-06-12T10:15:13Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-12T10:13:56Z" | Entry not found |
jhjgfg/NedoGPT | jhjgfg | "2024-06-12T10:20:23Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-12T10:20:23Z" | Entry not found |
DBangshu/GPT2_4_4 | DBangshu | "2024-06-12T10:21:14Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-06-12T10:20:52Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
blockblockblock/Qwen2-72B-Instruct-bpw5-exl2 | blockblockblock | "2024-06-12T10:26:40Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"chat",
"conversational",
"en",
"arxiv:2309.00071",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"5-bit",
"exl2",
"region:us"
] | text-generation | "2024-06-12T10:21:13Z" | ---
license: other
license_name: tongyi-qianwen
license_link: https://huggingface.co/Qwen/Qwen2-72B-Instruct/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
tags:
- chat
---
# Qwen2-72B-Instruct
## Introduction
Qwen2 is the new series of Qwen large language models. For Qwen2, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters, including a Mixture-of-Experts model. This repo contains the instruction-tuned 72B Qwen2 model.
Compared with state-of-the-art open-source language models, including the previously released Qwen1.5, Qwen2 has generally surpassed most open-source models and demonstrated competitiveness against proprietary models across a series of benchmarks targeting language understanding, language generation, multilingual capability, coding, mathematics, reasoning, etc.
Qwen2-72B-Instruct supports a context length of up to 131,072 tokens, enabling the processing of extensive inputs. Please refer to [this section](#processing-long-texts) for detailed instructions on how to deploy Qwen2 for handling long texts.
For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2/), [GitHub](https://github.com/QwenLM/Qwen2), and [Documentation](https://qwen.readthedocs.io/en/latest/).
<br>
## Model Details
Qwen2 is a language model series that includes decoder language models of different sizes. For each size, we release the base language model and the aligned chat model. It is based on the Transformer architecture with SwiGLU activation, attention QKV bias, group query attention, etc. Additionally, we have an improved tokenizer that adapts to multiple natural languages and code.
## Training Details
We pretrained the models with a large amount of data, and we post-trained the models with both supervised finetuning and direct preference optimization.
## Requirements
The code for Qwen2 has been merged into the latest Hugging Face `transformers`, and we advise you to install `transformers>=4.37.0`, or you might encounter the following error:
```
KeyError: 'qwen2'
```
## Quickstart
The following code snippet uses `apply_chat_template` to show how to load the tokenizer and model and how to generate content.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # the device to load the model onto
model = AutoModelForCausalLM.from_pretrained(
"Qwen/Qwen2-72B-Instruct",
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-72B-Instruct")
prompt = "Give me a short introduction to large language model."
messages = [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(device)
generated_ids = model.generate(
model_inputs.input_ids,
max_new_tokens=512
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
### Processing Long Texts
To handle extensive inputs exceeding 32,768 tokens, we utilize [YARN](https://arxiv.org/abs/2309.00071), a technique for enhancing model length extrapolation, ensuring optimal performance on lengthy texts.
For deployment, we recommend using vLLM. You can enable the long-context capabilities by following these steps:
1. **Install vLLM**: You can install vLLM by running the following command.
```bash
pip install "vllm>=0.4.3"
```
Or you can install vLLM from [source](https://github.com/vllm-project/vllm/).
2. **Configure Model Settings**: After downloading the model weights, modify the `config.json` file by including the below snippet:
```json
{
"architectures": [
"Qwen2ForCausalLM"
],
// ...
"vocab_size": 152064,
// adding the following snippets
"rope_scaling": {
"factor": 4.0,
"original_max_position_embeddings": 32768,
"type": "yarn"
}
}
```
This snippet enables YARN to support longer contexts.
3. **Model Deployment**: Utilize vLLM to deploy your model. For instance, you can set up an OpenAI-compatible server using the command:
```bash
python -m vllm.entrypoints.openai.api_server --served-model-name Qwen2-72B-Instruct --model path/to/weights
```
Then you can access the Chat API by:
```bash
curl http://localhost:8000/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "Qwen2-72B-Instruct",
"messages": [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "Your Long Input Here."}
]
}'
```
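The same request can also be issued from Python using only the standard library. This is a minimal sketch mirroring the curl call above; the server URL and model name are the defaults assumed by the deployment command, and `build_chat_request` is an illustrative helper, not part of vLLM:

```python
import json
import urllib.request

def build_chat_request(content: str,
                       url: str = "http://localhost:8000/v1/chat/completions"):
    """Build the same chat-completions request as the curl example above."""
    payload = {
        "model": "Qwen2-72B-Instruct",
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": content},
        ],
    }
    data = json.dumps(payload).encode("utf-8")
    return urllib.request.Request(
        url, data=data, headers={"Content-Type": "application/json"}
    )

# Sending the request is left to the caller, e.g.:
# with urllib.request.urlopen(build_chat_request("Your Long Input Here.")) as r:
#     print(json.load(r)["choices"][0]["message"]["content"])
```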
For further usage instructions of vLLM, please refer to our [Github](https://github.com/QwenLM/Qwen2).
**Note**: Presently, vLLM only supports static YARN, which means the scaling factor remains constant regardless of input length, **potentially impacting performance on shorter texts**. We advise adding the `rope_scaling` configuration only when processing long contexts is required.
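If you prefer to script the `config.json` edit from step 2 rather than apply it by hand, a minimal sketch is below. The helper name is an assumption, and the default values come from the snippet above (4.0 × 32,768 = 131,072 tokens, matching the advertised context length):

```python
import json

def enable_yarn(config_path: str, factor: float = 4.0,
                original_max: int = 32768) -> dict:
    """Add the YARN rope_scaling block to a downloaded config.json."""
    with open(config_path) as f:
        config = json.load(f)
    config["rope_scaling"] = {
        "factor": factor,
        "original_max_position_embeddings": original_max,
        "type": "yarn",
    }
    with open(config_path, "w") as f:
        json.dump(config, f, indent=2)
    return config
```

Remember to add the block only when long contexts are actually required, per the note above.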
## Evaluation
We briefly compare Qwen2-72B-Instruct with similar-sized instruction-tuned LLMs, including our previous Qwen1.5-72B-Chat. The results are shown as follows:
| Datasets | Llama-3-70B-Instruct | Qwen1.5-72B-Chat | **Qwen2-72B-Instruct** |
| :--- | :---: | :---: | :---: |
| _**English**_ | | | |
| MMLU | 82.0 | 75.6 | **82.3** |
| MMLU-Pro | 56.2 | 51.7 | **64.4** |
| GPQA | 41.9 | 39.4 | **42.4** |
| TheoremQA | 42.5 | 28.8 | **44.4** |
| MT-Bench | 8.95 | 8.61 | **9.12** |
| Arena-Hard | 41.1 | 36.1 | **48.1** |
| IFEval (Prompt Strict-Acc.) | 77.3 | 55.8 | **77.6** |
| _**Coding**_ | | | |
| HumanEval | 81.7 | 71.3 | **86.0** |
| MBPP | **82.3** | 71.9 | 80.2 |
| MultiPL-E | 63.4 | 48.1 | **69.2** |
| EvalPlus | 75.2 | 66.9 | **79.0** |
| LiveCodeBench | 29.3 | 17.9 | **35.7** |
| _**Mathematics**_ | | | |
| GSM8K | **93.0** | 82.7 | 91.1 |
| MATH | 50.4 | 42.5 | **59.7** |
| _**Chinese**_ | | | |
| C-Eval | 61.6 | 76.1 | **83.8** |
| AlignBench | 7.42 | 7.28 | **8.27** |
## Citation
If you find our work helpful, feel free to cite us.
```
@article{qwen2,
title={Qwen2 Technical Report},
year={2024}
}
``` |
haturusinghe/xlm_r_large-baseline_model-v2-revived-fog-6 | haturusinghe | "2024-06-12T10:21:31Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-12T10:21:31Z" | Entry not found |
WhaleFood/git-base-VR_Hand-Gesture | WhaleFood | "2024-06-12T10:22:29Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-12T10:22:28Z" | Entry not found |
shuxing79/butterfly-128-ft | shuxing79 | "2024-06-12T10:40:48Z" | 0 | 0 | null | [
"tensorboard",
"region:us"
] | null | "2024-06-12T10:23:09Z" | Entry not found |
aleoaaaa/t5-base-fr-finetuned_1334offres_uniforme | aleoaaaa | "2024-06-12T10:36:25Z" | 0 | 1 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:plguillou/t5-base-fr-sum-cnndm",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | "2024-06-12T10:23:43Z" | ---
base_model: plguillou/t5-base-fr-sum-cnndm
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: t5-base-fr-sum-cnndm_finetuned_12_06_10_23
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-fr-sum-cnndm_finetuned_12_06_10_23
This model is a fine-tuned version of [plguillou/t5-base-fr-sum-cnndm](https://huggingface.co/plguillou/t5-base-fr-sum-cnndm) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8407
- Rouge1: 0.1595
- Rouge2: 0.0349
- Rougel: 0.1304
- Rougelsum: 0.1303
- Gen Len: 20.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 2.0448 | 1.0 | 1334 | 1.8407 | 0.1595 | 0.0349 | 0.1304 | 0.1303 | 20.0 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1
|
LiangZiqiang2003/1 | LiangZiqiang2003 | "2024-06-12T10:25:01Z" | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | "2024-06-12T10:25:01Z" | ---
license: mit
---
|
ddegeus/TAPPS | ddegeus | "2024-06-14T14:16:38Z" | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | "2024-06-12T10:31:13Z" | ---
license: mit
---
# Task-Aligned Part-aware Panoptic Segmentation (TAPPS)
[[Paper](https://openaccess.thecvf.com/content/CVPR2024/papers/de_Geus_Task-aligned_Part-aware_Panoptic_Segmentation_through_Joint_Object-Part_Representations_CVPR_2024_paper.pdf)] [[Project page](http://tue-mps.github.io/tapps)] [[Code](https://github.com/tue-mps/tapps/)]
We provide the models for the part-aware panoptic segmentation task, as presented in our CVPR 2024 paper: [Task-aligned Part-aware Panoptic Segmentation through Joint Object-Part Representations](https://openaccess.thecvf.com/content/CVPR2024/papers/de_Geus_Task-aligned_Part-aware_Panoptic_Segmentation_through_Joint_Object-Part_Representations_CVPR_2024_paper.pdf).
For the code, see [https://github.com/tue-mps/tapps/](https://github.com/tue-mps/tapps/).
Please consider citing our work if it is useful for your research.
```
@inproceedings{degeus2024tapps,
title={{Task-aligned Part-aware Panoptic Segmentation through Joint Object-Part Representations}},
author={{de Geus}, Daan and Dubbelman, Gijs},
booktitle={IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
year={2024}
}
``` |
thliang01/3d-icon-sdxl-lora-1000 | thliang01 | "2024-06-12T10:31:29Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-12T10:31:29Z" | Entry not found |
Anzovi/distilBERT-news | Anzovi | "2024-06-12T12:39:16Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"DistilBERTClass",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-06-12T10:32:47Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
OpeoluwaAdekoya/viv-beta-mistral | OpeoluwaAdekoya | "2024-06-13T15:53:23Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-06-12T10:33:07Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
malco15/phi3 | malco15 | "2024-06-12T10:35:58Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-12T10:35:58Z" | Entry not found |
grome13180/falcon-7b-qlora_20240612 | grome13180 | "2024-06-12T10:36:20Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-12T10:36:19Z" | Entry not found |
PJMixers/LLaMa-3-PJStoryWriter-v0.3-SFT-8B-QLoRA | PJMixers | "2024-06-12T10:40:06Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:llama3",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2024-06-12T10:36:25Z" | ---
license: llama3
--- |
iamayaak/liamtesting | iamayaak | "2024-06-12T10:39:06Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-12T10:38:13Z" | Entry not found |
KasuleTrevor/wav2vec2-large-xls-r-300m-sw-1hr-v1 | KasuleTrevor | "2024-06-12T12:18:42Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice_8_0",
"base_model:facebook/wav2vec2-xls-r-300m",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-06-12T10:38:57Z" | ---
license: apache-2.0
base_model: facebook/wav2vec2-xls-r-300m
tags:
- generated_from_trainer
datasets:
- common_voice_8_0
metrics:
- wer
model-index:
- name: wav2vec2-large-xls-r-300m-sw-1hr-v1
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: common_voice_8_0
type: common_voice_8_0
config: sw
split: test
args: sw
metrics:
- name: Wer
type: wer
value: 0.5901667526216263
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-sw-1hr-v1
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice_8_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8004
- Wer: 0.5902
- Cer: 0.1498
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 60
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-------:|:----:|:---------------:|:------:|:------:|
| 10.7706 | 4.6512 | 100 | 5.0262 | 1.0 | 1.0 |
| 3.7038 | 9.3023 | 200 | 3.2132 | 1.0 | 1.0 |
| 2.9571 | 13.9535 | 300 | 2.8597 | 1.0 | 1.0 |
| 2.7859 | 18.6047 | 400 | 2.6007 | 1.0 | 0.7810 |
| 1.2103 | 23.2558 | 500 | 0.8662 | 0.6976 | 0.1859 |
| 0.3075 | 27.9070 | 600 | 0.7534 | 0.6533 | 0.1695 |
| 0.1911 | 32.5581 | 700 | 0.7585 | 0.6282 | 0.1607 |
| 0.1482 | 37.2093 | 800 | 0.8062 | 0.6340 | 0.1667 |
| 0.1241 | 41.8605 | 900 | 0.7999 | 0.6190 | 0.1605 |
| 0.1085 | 46.5116 | 1000 | 0.8105 | 0.6001 | 0.1524 |
| 0.0935 | 51.1628 | 1100 | 0.7972 | 0.5914 | 0.1502 |
| 0.0833 | 55.8140 | 1200 | 0.7978 | 0.5931 | 0.1505 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1
|
Ashmit06/finetuned-squad-model | Ashmit06 | "2024-06-12T10:39:32Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-12T10:39:32Z" | Entry not found |
lujunjun/Qwen2-7B-Instruct-ov | lujunjun | "2024-06-12T10:41:41Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2024-06-12T10:41:41Z" | ---
license: apache-2.0
---
|
DBangshu/GPT2_5_4 | DBangshu | "2024-06-12T10:46:00Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-06-12T10:45:41Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
KayraAksit/unsloth-llama3-ins-bigcode-adapter | KayraAksit | "2024-06-12T10:46:06Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-12T10:45:50Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
---
# Uploaded model
- **Developed by:** KayraAksit
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
ogbi/ika-mms-1bv3 | ogbi | "2024-06-12T10:47:36Z" | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-06-12T10:47:35Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
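The calculator linked above implements a simple energy-times-grid-intensity estimate. As a rough, self-contained sketch of that formula (all numbers below are illustrative placeholders, not measurements for this model):

```python
# Rough sketch of the estimate behind the ML CO2 Impact calculator
# (Lacoste et al., 2019): emissions = energy used x grid carbon intensity.
# PUE (power usage effectiveness) scales GPU draw up to facility-level draw.

def co2_emissions_kg(power_draw_kw, hours, carbon_intensity_kg_per_kwh, pue=1.0):
    """Estimated CO2eq in kilograms for a training run."""
    energy_kwh = power_draw_kw * hours * pue
    return energy_kwh * carbon_intensity_kg_per_kwh

# Example: one ~0.3 kW GPU for 24 h on a 0.4 kgCO2eq/kWh grid
print(round(co2_emissions_kg(0.3, 24, 0.4), 2))  # 2.88
```

The hardware type, hours, and region fields above are exactly the inputs this estimate needs.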
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
iamayaak/noely | iamayaak | "2024-06-12T10:49:06Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-12T10:48:24Z" | Entry not found |
iamayaak/LiamHQ | iamayaak | "2024-06-12T10:51:29Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-12T10:50:43Z" | Entry not found |
DavidLacour/mcqaDPOzephyrsft4bits | DavidLacour | "2024-06-12T11:40:57Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-06-12T10:53:07Z" | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
grome13180/falcon7B_20240612_V0 | grome13180 | "2024-06-12T10:57:09Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-12T10:54:15Z" | Entry not found |
Adolfi/HousePricePrediction | Adolfi | "2024-06-12T11:00:19Z" | 0 | 1 | null | [
"time-series-forecasting",
"sv",
"license:mit",
"region:us"
] | time-series-forecasting | "2024-06-12T10:56:58Z" | ---
license: mit
language:
- sv
pipeline_tag: time-series-forecasting
--- |
torphix/face | torphix | "2024-06-28T16:28:52Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-12T10:58:23Z" | Entry not found |
parkir/peft-starcoder-lora-a100 | parkir | "2024-06-12T11:00:01Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-12T11:00:01Z" | Entry not found |
xplusy01/2024CL_FinalProj_JM | xplusy01 | "2024-06-12T11:12:47Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"marian",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2024-06-12T11:00:29Z" | Entry not found |
Hunter1214/at-Huggy | Hunter1214 | "2024-06-12T11:03:36Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-12T11:03:36Z" | Entry not found |
abdoy/llama3-8b-sft-qlora-re-chat | abdoy | "2024-06-21T08:06:00Z" | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:nvidia/Llama3-ChatQA-1.5-8B",
"license:llama3",
"region:us"
] | null | "2024-06-12T11:05:41Z" | ---
base_model: nvidia/Llama3-ChatQA-1.5-8B
library_name: peft
license: llama3
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: llama3-8b-sft-qlora-re-chat
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama3-8b-sft-qlora-re-chat
This model is a fine-tuned version of [nvidia/Llama3-ChatQA-1.5-8B](https://huggingface.co/nvidia/Llama3-ChatQA-1.5-8B) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 2
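For illustration, the listed values can be collected into a config dict whose keys mirror the corresponding `transformers` `TrainingArguments` fields; this is a sketch of the card's values, not the author's actual training script. Note that the total train batch size of 8 is derived rather than set directly:

```python
# Hedged sketch: the hyperparameters above as a plain config dict.
# Key names follow transformers' TrainingArguments, for reference only.

config = {
    "learning_rate": 2e-4,
    "per_device_train_batch_size": 4,
    "per_device_eval_batch_size": 8,
    "seed": 42,
    "gradient_accumulation_steps": 2,
    "lr_scheduler_type": "constant",
    "warmup_ratio": 0.03,
    "num_train_epochs": 2,
}

# "total_train_batch_size: 8" is per-device batch x accumulation steps:
effective_batch = (config["per_device_train_batch_size"]
                   * config["gradient_accumulation_steps"])
print(effective_batch)  # 8
```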
### Training results
### Framework versions
- PEFT 0.7.2.dev0
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.2 |
llmat/TinyLlama_v1.1-SFT-adapters | llmat | "2024-06-12T11:09:11Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-06-12T11:07:36Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
jamescraiggg/autotrain-dsqea-dmfvv | jamescraiggg | "2024-06-12T11:10:41Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"autotrain",
"text-generation-inference",
"text-generation",
"peft",
"conversational",
"dataset:yezhengli9/wmt20-en-de",
"base_model:Qwen/Qwen2-0.5B-Instruct",
"license:other",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-06-12T11:09:46Z" | ---
tags:
- autotrain
- text-generation-inference
- text-generation
- peft
library_name: transformers
base_model: Qwen/Qwen2-0.5B-Instruct
widget:
- messages:
- role: user
content: What is your favorite condiment?
license: other
datasets:
- yezhengli9/wmt20-en-de
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
``` |
tranthaihoa/bm25_gemma_k1_evidence | tranthaihoa | "2024-06-12T11:10:05Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma",
"trl",
"en",
"base_model:unsloth/gemma-7b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-12T11:09:59Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- gemma
- trl
base_model: unsloth/gemma-7b-bnb-4bit
---
# Uploaded model
- **Developed by:** tranthaihoa
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-7b-bnb-4bit
This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
ShapeKapseln33/Bioxtrim67 | ShapeKapseln33 | "2024-06-12T11:12:35Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-12T11:10:49Z" | Bioxtrim Höhle der Löwen Germany reviews: BioXtrim weight-loss gummies differ from conventional weight-loss supplements, offering a convenient and delicious way to support healthy weight management. These gummies are formulated with a blend of natural ingredients carefully selected to promote fat burning, suppress appetite, and boost metabolism. Unlike harsh stimulants or restrictive diets, BioXtrim gummies offer a gentle yet effective approach to reaching and maintaining a healthy weight.
**[Click here to buy now from the official Bioxtrim website](https://slim-gummies-deutschland.de/bioxtrim-de)**
## Scientific Studies
When evaluating BioXtrim gummies, scientific studies must not be overlooked in determining whether and how the product might work.
## Research Background
The credibility of BioXtrim gummies rests on user experiences, since official scientific studies prior to the market launch appear to be lacking. Independent portals often provide analyses and reports that give insight into customer experiences and thus, indirectly, into the product's effectiveness.
## Evidence of Effectiveness
Regarding the effectiveness of BioXtrim gummies, the available information suggests that the manufacturer conducts regular quality and purity tests. This guarantees product properties but makes no direct claims about effectiveness for weight loss. In addition, the reports and reviews contain positive user feedback; while this hints at possible effects of the product, it is subjective and cannot be equated with the strict criteria of scientific studies.
## Potential Side Effects
When using BioXtrim gummies, side effects may occur that could matter to consumers. Users should pay particular attention to tolerability and possible complaints on first use.
## Tolerability and Safety
BioXtrim gummies are marketed as a complementary dietary supplement for weight management and consist mainly of natural ingredients. Overall tolerability is considered high; still, it is important to check individual tolerability and any existing allergies to components of the gummies before taking them.
**[Click here to buy now from the official Bioxtrim website](https://slim-gummies-deutschland.de/bioxtrim-de)**
## Tolerability Check:
Ingredients: check for allergenic substances
Dosage: do not exceed the recommended daily dose
Advice: consult a doctor if in doubt
## Common Complaints
Some users report side effects such as the "keto flu," which can include nausea and fatigue. These symptoms often occur at the start of a ketogenic dietary change. It should also be remembered that every body reacts differently, so varying reactions are possible.
Commonly reported side effects:
Keto flu: nausea, fatigue
Digestive complaints: gastrointestinal problems
It is advisable to observe the body's reaction closely and to seek medical advice if symptoms persist.
## Comparison with Other Weight-Loss Products
In the jungle of weight-loss products and dietary supplements, it is important to understand the properties and benefits of the various options. Bioxtrim gummies hold a position in the market based on user experiences and their specific composition.
## Bioxtrim vs. Other Weight-Loss Gummies
Bioxtrim gummies are presented as a weight-loss aid that stands out from other slimming products, in particular other weight-loss gummies. While the market offers a variety of weight-loss gummies, users often point out that Bioxtrim, through its ingredients and the associated mode of action, can effectively support weight loss.
## There are a number of factors that set Bioxtrim apart from other products:
Ingredients: Bioxtrim uses a special fruit-gummy-based formula, which is not the case for all weight-loss gummies.
User experiences: Many user reports praise the effectiveness of Bioxtrim gummies compared with other products, crediting them with helping users reach their desired weight.
**[Click here to buy now from the official Bioxtrim website](https://slim-gummies-deutschland.de/bioxtrim-de)**
|
llmat/TinyLlama_v1.1-SFT | llmat | "2024-06-12T11:13:20Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-06-12T11:11:16Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
thewimo/XMEN-annotator | thewimo | "2024-06-27T11:05:59Z" | 0 | 0 | null | [
"joblib",
"region:us"
] | null | "2024-06-12T11:11:19Z" | Entry not found |
xahilmalik/new-model | xahilmalik | "2024-06-12T11:21:00Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-06-12T11:15:21Z" | Entry not found |
jamescraiggg/autotrain-ecwr2-yyg77 | jamescraiggg | "2024-06-12T11:24:07Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"autotrain",
"text-generation-inference",
"text-generation",
"peft",
"conversational",
"dataset:yezhengli9/wmt20-en-de",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"license:other",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-06-12T11:18:54Z" | ---
tags:
- autotrain
- text-generation-inference
- text-generation
- peft
library_name: transformers
base_model: meta-llama/Meta-Llama-3-8B-Instruct
widget:
- messages:
- role: user
content: What is your favorite condiment?
license: other
datasets:
- yezhengli9/wmt20-en-de
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
``` |
aleksandrtarkojev/repo_name | aleksandrtarkojev | "2024-06-12T11:36:09Z" | 0 | 0 | adapter-transformers | [
"adapter-transformers",
"code",
"text-generation",
"en",
"dataset:OpenGVLab/ShareGPT-4o",
"license:openrail",
"region:us"
] | text-generation | "2024-06-12T11:19:02Z" | ---
license: openrail
datasets:
- OpenGVLab/ShareGPT-4o
language:
- en
metrics:
- code_eval
library_name: adapter-transformers
pipeline_tag: text-generation
tags:
- code
--- |
RobertML/sn3-oxypoxy | RobertML | "2024-06-12T11:19:42Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-12T11:19:31Z" | Entry not found |
tranthaihoa/bm25_sbert_gemma_k1_evidence | tranthaihoa | "2024-06-12T11:20:41Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma",
"trl",
"en",
"base_model:unsloth/gemma-7b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-12T11:20:28Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- gemma
- trl
base_model: unsloth/gemma-7b-bnb-4bit
---
# Uploaded model
- **Developed by:** tranthaihoa
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-7b-bnb-4bit
This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
aldjalkdf/RMT | aldjalkdf | "2024-06-12T11:23:47Z" | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | "2024-06-12T11:23:47Z" | ---
license: mit
---
|
DBangshu/GPT2_6_4 | DBangshu | "2024-06-12T11:24:20Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-06-12T11:24:00Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
LeeBurnforest/EmotionAnalyzer | LeeBurnforest | "2024-06-12T11:24:40Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"pytorch",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2024-06-12T11:24:33Z" | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: EmotionAnalyzer
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.6330274939537048
---
# EmotionAnalyzer
The Emotion Analyzer is ready!
## Example Images
#### angry
![angry](images/angry.jpg)
#### happy
![happy](images/happy.jpg)
#### sad
![sad](images/sad.jpg)
#### tired
![tired](images/tired.jpg) |
AntiStressElixir/AntiStressElixir | AntiStressElixir | "2024-06-12T11:28:19Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2024-06-12T11:26:20Z" | ---
license: apache-2.0
---
What is Anti Stress Elixir?
Anti Stress Elixir Drops is a natural remedy formulated as liquid drops designed to treat insomnia and improve sleep quality. Made from a blend of powerful herbal extracts, Anti Stress Elixir Drops aims to help individuals who struggle with falling asleep, staying asleep, or achieving restful sleep. This product harnesses the power of nature to provide a gentle yet effective solution for insomnia, without the side effects commonly associated with prescription sleep aids.
Official website:<a href="https://www.nutritionsee.com/antistrbisele">www.AntiStressElixir.com</a>
<p><a href="https://www.nutritionsee.com/antistrbisele"> <img src="https://www.nutritionsee.com/wp-content/uploads/2024/06/Anti-Stress-Elixir-Bosnia-.png" alt="enter image description here"> </a></p>
<a href="https://www.nutritionsee.com/antistrbisele">Buy now!! Click the link below for more information and get 50% off right away... Hurry</a>
Official website:<a href="https://www.nutritionsee.com/antistrbisele">www.AntiStressElixir.com</a> |
Blessing988/finetuned_QwenVL | Blessing988 | "2024-06-12T11:32:19Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-06-12T11:27:17Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Parasforu/Weke | Parasforu | "2024-06-12T11:29:33Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-12T11:29:33Z" | Entry not found |
tranthaihoa/sbert_gemma_k2_evidence | tranthaihoa | "2024-06-12T11:30:11Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma",
"trl",
"en",
"base_model:unsloth/gemma-7b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-12T11:29:42Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- gemma
- trl
base_model: unsloth/gemma-7b-bnb-4bit
---
# Uploaded model
- **Developed by:** tranthaihoa
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-7b-bnb-4bit
This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mNLP-project/baseline-gpt2-quantized | mNLP-project | "2024-06-12T12:00:51Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"8-bit",
"gptq",
"region:us"
] | text-generation | "2024-06-12T11:30:45Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
GetmanY1/wav2vec2-base-fi-voxpopuli-v2-sami-parl-direct-ft | GetmanY1 | "2024-06-12T12:48:36Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"sami",
"arxiv:2006.11477",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-06-12T11:35:16Z" | ---
license: apache-2.0
tags:
- automatic-speech-recognition
- sami
model-index:
- name: wav2vec2-base-fi-voxpopuli-v2-sami-parl-direct-ft
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: UIT-SME
type: uit-sme
args: sami
metrics:
- name: WER
type: wer
value: 36.12
- name: CER
type: cer
value: 9.21
---
# Northern Sámi Wav2vec2-Base ASR
[facebook/wav2vec2-base-fi-voxpopuli-v2](https://huggingface.co/facebook/wav2vec2-base-fi-voxpopuli-v2) fine-tuned on 20 hours of [Sámi Parliament speech data](https://sametinget.kommunetv.no/archive) on 16 kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16 kHz.
## Model description
The Sámi Wav2Vec2 Base has the same architecture and uses the same training objective as the English and multilingual one described in [Paper](https://arxiv.org/abs/2006.11477).
You can read more about the pre-trained model in [this paper](TODO). The training scripts are available on [GitHub](https://github.com/aalto-speech/northern-sami-asr).
## Intended uses & limitations
You can use this model for Sámi ASR (speech-to-text).
### How to use
To transcribe audio files the model can be used as a standalone acoustic model as follows:
```python
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
from datasets import load_dataset
import torch
# load model and processor
processor = Wav2Vec2Processor.from_pretrained("GetmanY1/wav2vec2-base-fi-voxpopuli-v2-sami-parl-direct-ft")
model = Wav2Vec2ForCTC.from_pretrained("GetmanY1/wav2vec2-base-fi-voxpopuli-v2-sami-parl-direct-ft")
# load dummy dataset and read soundfiles
ds = load_dataset("mozilla-foundation/common_voice_16_1", "fi", split='test')
# tokenize
input_values = processor(ds[0]["audio"]["array"], return_tensors="pt", padding="longest").input_values # Batch size 1
# retrieve logits
logits = model(input_values).logits
# take argmax and decode
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
```
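The `argmax` + `batch_decode` step above amounts to greedy CTC decoding: frame-level predictions are collapsed by merging consecutive repeats and then dropping the blank token. A minimal sketch of that collapse rule (the token ids and blank index here are illustrative, not the model's actual vocabulary):

```python
def ctc_greedy_collapse(frame_ids, blank_id=0):
    """Collapse per-frame token ids CTC-style: merge consecutive
    duplicates, then drop blank tokens. The blank also acts as a
    separator, so a repeated label split by a blank is kept twice."""
    collapsed = []
    prev = None
    for token_id in frame_ids:
        if token_id != prev and token_id != blank_id:
            collapsed.append(token_id)
        prev = token_id
    return collapsed

# e.g. frames [7, 7, 0, 7, 8, 8, 0, 9] collapse to [7, 7, 8, 9]
```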
### Limitations and bias
This model was fine-tuned with audio samples whose maximum length was 30 seconds, so it most likely works best for short audio clips of similar length. However, you can also try it on much longer audio and see how it works. If you encounter out-of-memory errors with very long audio files, you can use the audio chunking method introduced in [this blog post](https://huggingface.co/blog/asr-chunking).
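The chunking idea is to split a long waveform into fixed-length windows with some overlap (stride), so the model never sees more than ~30 s at once, and the overlapping edges are later used to stitch the chunk transcripts back together. A rough sketch of the splitting arithmetic — the 30 s / 5 s values below are example settings, not tuned recommendations (in the 🤗 Transformers ASR pipeline this is exposed via the `chunk_length_s` and `stride_length_s` arguments):

```python
def split_into_chunks(num_samples, sr=16_000, chunk_s=30.0, stride_s=5.0):
    """Return (start, end) sample indices of overlapping windows that
    cover a waveform of `num_samples` samples at sampling rate `sr`."""
    chunk = int(chunk_s * sr)
    stride = int(stride_s * sr)
    step = chunk - stride  # each new window advances by chunk - overlap
    spans = []
    start = 0
    while True:
        end = min(start + chunk, num_samples)
        spans.append((start, end))
        if end == num_samples:
            break
        start += step
    return spans
```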
The model was fine-tuned on the data from the [Sámi Parliament speech data](https://sametinget.kommunetv.no/archive) so this model might have biases towards formal Sámi.
## Citation
If you use our models or scripts, please cite our article as:
```bibtex
@inproceedings{getman24b_interspeech,
author={Yaroslav Getman and Tamas Grosz and Katri Hiovain-Asikainen and Mikko Kurimo},
title={{Exploring adaptation techniques of large speech foundation models for low-resource ASR: a case study on Northern Sámi}},
year=2024,
booktitle={Proc. INTERSPEECH 2024},
pages={XX--XX},
doi={XXXX},
issn={XXXX-XXXX}
}
``` |
GetmanY1/wav2vec2-base-fi-voxpopuli-v2-sami-parl-cont-pt-20h | GetmanY1 | "2024-06-12T12:48:04Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"sami",
"arxiv:2006.11477",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-06-12T11:38:41Z" | ---
license: apache-2.0
tags:
- automatic-speech-recognition
- sami
model-index:
- name: wav2vec2-base-fi-voxpopuli-v2-sami-parl-cont-pt-20h
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: UIT-SME
type: uit-sme
args: sami
metrics:
- name: WER
type: wer
value: 35.07
- name: CER
type: cer
value: 9.03
---
# Northern Sámi Wav2vec2-Base ASR
[facebook/wav2vec2-base-fi-voxpopuli-v2](https://huggingface.co/facebook/wav2vec2-base-fi-voxpopuli-v2) with two-step training that involved continued pre-training and fine-tuning using the same 20-hour set of the [Sámi Parliament speech data](https://sametinget.kommunetv.no/archive) on 16 kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16 kHz.
## Model description
The Sámi Wav2Vec2 Base has the same architecture and uses the same training objective as the English and multilingual one described in [Paper](https://arxiv.org/abs/2006.11477).
You can read more about the pre-trained model in [this paper](TODO). The training scripts are available on [GitHub](https://github.com/aalto-speech/northern-sami-asr).
## Intended uses & limitations
You can use this model for Sámi ASR (speech-to-text).
### How to use
To transcribe audio files the model can be used as a standalone acoustic model as follows:
```python
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
from datasets import load_dataset
import torch
# load model and processor
processor = Wav2Vec2Processor.from_pretrained("GetmanY1/wav2vec2-base-fi-voxpopuli-v2-sami-parl-cont-pt-20h")
model = Wav2Vec2ForCTC.from_pretrained("GetmanY1/wav2vec2-base-fi-voxpopuli-v2-sami-parl-cont-pt-20h")
# load dummy dataset and read soundfiles
ds = load_dataset("mozilla-foundation/common_voice_16_1", "fi", split='test')
# tokenize
input_values = processor(ds[0]["audio"]["array"], return_tensors="pt", padding="longest").input_values # Batch size 1
# retrieve logits
logits = model(input_values).logits
# take argmax and decode
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
```
### Limitations and bias
This model was fine-tuned with audio samples whose maximum length was 30 seconds, so it most likely works best for short audio clips of similar length. However, you can also try it on much longer audio and see how it works. If you encounter out-of-memory errors with very long audio files, you can use the audio chunking method introduced in [this blog post](https://huggingface.co/blog/asr-chunking).
The model was fine-tuned on the data from the [Sámi Parliament speech data](https://sametinget.kommunetv.no/archive) so this model might have biases towards formal Sámi.
## Citation
If you use our models or scripts, please cite our article as:
```bibtex
@inproceedings{getman24b_interspeech,
author={Yaroslav Getman and Tamas Grosz and Katri Hiovain-Asikainen and Mikko Kurimo},
title={{Exploring adaptation techniques of large speech foundation models for low-resource ASR: a case study on Northern Sámi}},
year=2024,
booktitle={Proc. INTERSPEECH 2024},
pages={XX--XX},
doi={XXXX},
issn={XXXX-XXXX}
}
``` |
GetmanY1/wav2vec2-base-fi-voxpopuli-v2-sami-parl-cont-pt-108h | GetmanY1 | "2024-06-12T12:48:19Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"sami",
"arxiv:2006.11477",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-06-12T11:40:04Z" | ---
license: apache-2.0
tags:
- automatic-speech-recognition
- sami
model-index:
- name: wav2vec2-base-fi-voxpopuli-v2-sami-parl-cont-pt-108h
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: UIT-SME
type: uit-sme
args: sami
metrics:
- name: WER
type: wer
value: 34.72
- name: CER
type: cer
value: 8.85
---
# Northern Sámi Wav2vec2-Base ASR
[facebook/wav2vec2-base-fi-voxpopuli-v2](https://huggingface.co/facebook/wav2vec2-base-fi-voxpopuli-v2) with two-step training that involved continued pre-training on all the available [Sámi Parliament speech data](https://sametinget.kommunetv.no/archive) (108h) and fine-tuning on the 20-hour transcribed subset. When using the model, make sure that your speech input is sampled at 16 kHz.
## Model description
The Sámi Wav2Vec2 Base has the same architecture and uses the same training objective as the English and multilingual one described in [Paper](https://arxiv.org/abs/2006.11477).
You can read more about the pre-trained model in [this paper](TODO). The training scripts are available on [GitHub](https://github.com/aalto-speech/northern-sami-asr).
## Intended uses & limitations
You can use this model for Sámi ASR (speech-to-text).
### How to use
To transcribe audio files the model can be used as a standalone acoustic model as follows:
```python
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
from datasets import load_dataset
import torch
# load model and processor
processor = Wav2Vec2Processor.from_pretrained("GetmanY1/wav2vec2-base-fi-voxpopuli-v2-sami-parl-cont-pt-108h")
model = Wav2Vec2ForCTC.from_pretrained("GetmanY1/wav2vec2-base-fi-voxpopuli-v2-sami-parl-cont-pt-108h")
# load dummy dataset and read soundfiles
ds = load_dataset("mozilla-foundation/common_voice_16_1", "fi", split='test')
# tokenize
input_values = processor(ds[0]["audio"]["array"], return_tensors="pt", padding="longest").input_values # Batch size 1
# retrieve logits
logits = model(input_values).logits
# take argmax and decode
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
```
### Limitations and bias
This model was fine-tuned with audio samples whose maximum length was 30 seconds, so it most likely works best for short audio clips of similar length. However, you can also try it on much longer audio and see how it works. If you encounter out-of-memory errors with very long audio files, you can use the audio chunking method introduced in [this blog post](https://huggingface.co/blog/asr-chunking).
The model was fine-tuned on the data from the [Sámi Parliament speech data](https://sametinget.kommunetv.no/archive) so this model might have biases towards formal Sámi.
## Citation
If you use our models or scripts, please cite our article as:
```bibtex
@inproceedings{getman24b_interspeech,
author={Yaroslav Getman and Tamas Grosz and Katri Hiovain-Asikainen and Mikko Kurimo},
title={{Exploring adaptation techniques of large speech foundation models for low-resource ASR: a case study on Northern Sámi}},
year=2024,
booktitle={Proc. INTERSPEECH 2024},
pages={XX--XX},
doi={XXXX},
issn={XXXX-XXXX}
}
``` |
reinbeumer/ai | reinbeumer | "2024-06-12T11:42:00Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-12T11:41:33Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** reinbeumer
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
DataVare/OST-To-MSG-Converter-Expert | DataVare | "2024-06-12T11:42:56Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-12T11:42:21Z" | With DataVare Outlook OST to MSG program users can export offline OST to MSG file format. It can save emails, notes, calendars, journals, events, and more in here. By using an OST to MSG converter, this program also facilitates the conversion of Outlook OST file in MSG format with attachments. Before the data is converted from an OST file to an MSG file, it will also offer a preview of the exported data. You can convert data safely and securely with the help of this software. user's personal information is protected by this app. This software operates at a very fast pace. This exporting approach takes a lot of time. Instead of that, even this is an expert tool. Someone lacking in technical expertise. This application is also available to them. This utility allows you to move several file formats from OST to MSG. Users interested in learning more about how it works can download the demo version.
Read More - https://www.datavare.com/software/ost-to-msg-converter-expert.html |
GetmanY1/wav2vec2-base-fi-voxpopuli-v2-sami-parl-ext-ft | GetmanY1 | "2024-06-12T12:48:51Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"sami",
"arxiv:2006.11477",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-06-12T11:44:38Z" | ---
license: apache-2.0
tags:
- automatic-speech-recognition
- sami
model-index:
- name: wav2vec2-base-fi-voxpopuli-v2-sami-parl-ext-ft
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: UIT-SME
type: uit-sme
args: sami
metrics:
- name: WER
type: wer
value: 33.67
- name: CER
type: cer
value: 8.61
---
# Northern Sámi Wav2vec2-Base ASR
[facebook/wav2vec2-base-fi-voxpopuli-v2](https://huggingface.co/facebook/wav2vec2-base-fi-voxpopuli-v2) with two-step, extended fine-tuning. The model was first adapted to Finnish ASR with 1500 hours of speech from the [Lahjoita puhetta (Donate Speech) corpus](https://link.springer.com/article/10.1007/s10579-022-09606-3). Randomly initialized weights and bias terms were then added to the final linear layer (language modeling head) for the 12 new characters introduced by the target Sámi data, and the model was fine-tuned on 20 hours of [Sámi Parliament speech data](https://sametinget.kommunetv.no/archive) sampled at 16 kHz. When using the model, make sure that your speech input is also sampled at 16 kHz.
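The step of growing the language modeling head for new characters can be sketched in plain PyTorch. The sizes below are hypothetical: a wav2vec2-base hidden size of 768 and an original vocabulary of 32 characters are assumptions for illustration, not values stated in this card.

```python
import torch
import torch.nn as nn

hidden_size, old_vocab, new_chars = 768, 32, 12  # assumed sizes, for illustration

old_head = nn.Linear(hidden_size, old_vocab)           # original LM head
new_head = nn.Linear(hidden_size, old_vocab + new_chars)

with torch.no_grad():
    # Copy the trained rows; the rows for the 12 new characters
    # keep their random initialization and are learned during fine-tuning.
    new_head.weight[:old_vocab] = old_head.weight
    new_head.bias[:old_vocab] = old_head.bias

print(new_head.weight.shape)  # torch.Size([44, 768])
```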
## Model description
The Sámi Wav2Vec2 Base has the same architecture and uses the same training objective as the English and multilingual models described in [the wav2vec 2.0 paper](https://arxiv.org/abs/2006.11477).
You can read more about the pre-trained model in [this paper](TODO). The training scripts are available on [GitHub](https://github.com/aalto-speech/northern-sami-asr).
## Intended uses & limitations
You can use this model for Sámi ASR (speech-to-text).
### How to use
To transcribe audio files, the model can be used as a standalone acoustic model as follows:
```python
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
from datasets import load_dataset
import torch
# load model and processor
processor = Wav2Vec2Processor.from_pretrained("GetmanY1/wav2vec2-base-fi-voxpopuli-v2-sami-parl-ext-ft")
model = Wav2Vec2ForCTC.from_pretrained("GetmanY1/wav2vec2-base-fi-voxpopuli-v2-sami-parl-ext-ft")
# load dummy dataset and read soundfiles
ds = load_dataset("mozilla-foundation/common_voice_16_1", "fi", split='test')
# tokenize
input_values = processor(ds[0]["audio"]["array"], return_tensors="pt", padding="longest").input_values # Batch size 1
# retrieve logits
logits = model(input_values).logits
# take argmax and decode
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
```
### Limitations and bias
This model was fine-tuned on audio samples with a maximum length of 30 seconds, so it most likely works best on short audio clips of similar length. However, you can also try it on much longer audio and see how it performs. If you encounter out-of-memory errors with very long audio files, you can use the audio chunking method introduced in [this blog post](https://huggingface.co/blog/asr-chunking).
The model was fine-tuned on [Sámi Parliament speech data](https://sametinget.kommunetv.no/archive), so it might be biased towards formal Sámi.
## Citation
If you use our models or scripts, please cite our article as:
```bibtex
@inproceedings{getman24b_interspeech,
author={Yaroslav Getman and Tamas Grosz and Katri Hiovain-Asikainen and Mikko Kurimo},
title={{Exploring adaptation techniques of large speech foundation models for low-resource ASR: a case study on Northern Sámi}},
year=2024,
booktitle={Proc. INTERSPEECH 2024},
pages={XX--XX},
doi={XXXX},
issn={XXXX-XXXX}
}
``` |
Sparkoo/Kate-AI | Sparkoo | "2024-06-24T13:01:25Z" | 0 | 0 | null | [
"kate",
"text-classification",
"en",
"dataset:Sparkoo/Kate",
"region:us"
] | text-classification | "2024-06-12T11:44:41Z" | ---
datasets:
- Sparkoo/Kate
language:
- en
tags:
- kate
pipeline_tag: text-classification
--- |
frgzegrez/Shaken-and-stirred-Trumps-golf-course-liquor-licenses-at-risk-after-conviction-5e-updated | frgzegrez | "2024-06-12T11:44:46Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-12T11:44:46Z" | Entry not found |
jperezes/example-model | jperezes | "2024-06-12T11:58:46Z" | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | "2024-06-12T11:48:49Z" | ---
license: mit
---
|
cdznho/vit-base-beans-demo-v5 | cdznho | "2024-06-12T11:49:22Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-12T11:49:22Z" | Entry not found |
swasti-srivastava/vit-12-6 | swasti-srivastava | "2024-06-12T11:50:54Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-12T11:50:50Z" | Entry not found |
Lakoc/ED_small_cv_en_deeper | Lakoc | "2024-06-12T11:52:05Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"joint_aed_ctc_speech-encoder-decoder",
"generated_from_trainer",
"dataset:common_voice_13_0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-12T11:51:52Z" | ---
tags:
- generated_from_trainer
datasets:
- common_voice_13_0
model-index:
- name: ED_small_cv_en_deeper
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ED_small_cv_en_deeper
This model is a fine-tuned version of [](https://huggingface.co/) on the common_voice_13_0 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 256
- eval_batch_size: 64
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 512
- total_eval_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 15000
- num_epochs: 50.0
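The linear schedule with warmup listed above can be sketched as a function of the optimizer step: the learning rate ramps up linearly over the warmup steps and then decays linearly to zero. The total step count below is an assumption for illustration, since the card specifies only epochs.

```python
def linear_warmup_lr(step, base_lr=1e-3, warmup=15000, total_steps=100000):
    """Learning rate at a given optimizer step under a linear schedule
    with warmup. total_steps is an illustrative assumption."""
    if step < warmup:
        return base_lr * step / warmup                  # linear ramp-up
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup))

print(linear_warmup_lr(7500), linear_warmup_lr(15000), linear_warmup_lr(100000))
# 0.0005 0.001 0.0
```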
### Framework versions
- Transformers 4.40.0.dev0
- Pytorch 2.2.0+rocm5.6
- Datasets 2.18.0
- Tokenizers 0.15.2
|
onizukal/Boya3_3Class_Adamax_1e4_20Epoch_Beit-large-224_fold4 | onizukal | "2024-06-13T17:47:10Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"beit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/beit-large-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2024-06-12T11:52:46Z" | ---
license: apache-2.0
base_model: microsoft/beit-large-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: Boya3_3Class_Adamax_1e4_20Epoch_Beit-large-224_fold4
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8348514851485148
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Boya3_3Class_Adamax_1e4_20Epoch_Beit-large-224_fold4
This model is a fine-tuned version of [microsoft/beit-large-patch16-224](https://huggingface.co/microsoft/beit-large-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0857
- Accuracy: 0.8349
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3968 | 1.0 | 632 | 0.5064 | 0.7988 |
| 0.2217 | 2.0 | 1264 | 0.4437 | 0.8210 |
| 0.1633 | 3.0 | 1896 | 0.5150 | 0.8309 |
| 0.0261 | 4.0 | 2528 | 0.9455 | 0.8352 |
| 0.0033 | 5.0 | 3160 | 1.0857 | 0.8349 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.2
|
fbi0826/fine_tuned_bart | fbi0826 | "2024-06-12T11:52:50Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-12T11:52:50Z" | Entry not found |
fxmeng/PiSSA-Yi-1.5-34B-4bit-r64-5iter | fxmeng | "2024-06-12T14:52:23Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2024-06-12T11:53:12Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
GetmanY1/wav2vec2-large-uralic-voxpopuli-v2-sami-parl-direct-ft | GetmanY1 | "2024-06-12T12:52:53Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"sami",
"arxiv:2006.11477",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-06-12T11:54:34Z" | ---
license: apache-2.0
tags:
- automatic-speech-recognition
- sami
model-index:
- name: wav2vec2-large-uralic-voxpopuli-v2-sami-parl-direct-ft
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: UIT-SME
type: uit-sme
args: sami
metrics:
- name: WER
type: wer
value: 42.69
- name: CER
type: cer
value: 10.14
---
# Northern Sámi Wav2vec2-Large ASR
[facebook/wav2vec2-large-uralic-voxpopuli-v2](https://huggingface.co/facebook/wav2vec2-large-uralic-voxpopuli-v2) fine-tuned on 20 hours of [Sámi Parliament speech data](https://sametinget.kommunetv.no/archive) sampled at 16 kHz. When using the model, make sure that your speech input is also sampled at 16 kHz.
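Because the model expects 16 kHz input, audio recorded at other rates needs resampling first. A minimal sketch using SciPy's polyphase resampler follows; the 44.1 kHz source rate and the test tone are illustrative assumptions, not requirements from this card.

```python
import numpy as np
from scipy.signal import resample_poly

sr_in, sr_out = 44100, 16000  # the ratio 16000/44100 reduces to 160/441
tone = np.sin(2 * np.pi * 440 * np.arange(sr_in) / sr_in)  # 1 s of a 440 Hz tone

# Polyphase resampling: upsample by 160, then downsample by 441
resampled = resample_poly(tone, up=160, down=441)
print(len(resampled))  # 16000 samples = 1 s at 16 kHz
```

The resulting array can then be passed to the processor exactly like a natively 16 kHz waveform.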
## Model description
The Sámi Wav2Vec2 Large has the same architecture and uses the same training objective as the English and multilingual models described in [the wav2vec 2.0 paper](https://arxiv.org/abs/2006.11477).
You can read more about the pre-trained model in [this paper](TODO). The training scripts are available on [GitHub](https://github.com/aalto-speech/northern-sami-asr).
## Intended uses & limitations
You can use this model for Sámi ASR (speech-to-text).
### How to use
To transcribe audio files, the model can be used as a standalone acoustic model as follows:
```python
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
from datasets import load_dataset
import torch
# load model and processor
processor = Wav2Vec2Processor.from_pretrained("GetmanY1/wav2vec2-large-uralic-voxpopuli-v2-sami-parl-direct-ft")
model = Wav2Vec2ForCTC.from_pretrained("GetmanY1/wav2vec2-large-uralic-voxpopuli-v2-sami-parl-direct-ft")
# load dummy dataset and read soundfiles
ds = load_dataset("mozilla-foundation/common_voice_16_1", "fi", split='test')
# tokenize
input_values = processor(ds[0]["audio"]["array"], return_tensors="pt", padding="longest").input_values # Batch size 1
# retrieve logits
logits = model(input_values).logits
# take argmax and decode
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
```
### Limitations and bias
This model was fine-tuned on audio samples with a maximum length of 30 seconds, so it most likely works best on short audio clips of similar length. However, you can also try it on much longer audio and see how it performs. If you encounter out-of-memory errors with very long audio files, you can use the audio chunking method introduced in [this blog post](https://huggingface.co/blog/asr-chunking).
The model was fine-tuned on [Sámi Parliament speech data](https://sametinget.kommunetv.no/archive), so it might be biased towards formal Sámi.
## Citation
If you use our models or scripts, please cite our article as:
```bibtex
@inproceedings{getman24b_interspeech,
author={Yaroslav Getman and Tamas Grosz and Katri Hiovain-Asikainen and Mikko Kurimo},
title={{Exploring adaptation techniques of large speech foundation models for low-resource ASR: a case study on Northern Sámi}},
year=2024,
booktitle={Proc. INTERSPEECH 2024},
pages={XX--XX},
doi={XXXX},
issn={XXXX-XXXX}
}
``` |
khaldii/videomae-surf-analytics-runpod | khaldii | "2024-06-12T13:38:27Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"videomae",
"video-classification",
"endpoints_compatible",
"region:us"
] | video-classification | "2024-06-12T11:55:34Z" | Entry not found |
Lakoc/ED_small_cv_en_continue2 | Lakoc | "2024-06-12T15:48:03Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"joint_aed_ctc_speech-encoder-decoder",
"generated_from_trainer",
"dataset:common_voice_13_0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-12T11:55:58Z" | ---
tags:
- generated_from_trainer
datasets:
- common_voice_13_0
metrics:
- wer
model-index:
- name: ED_small_cv_en_continue2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ED_small_cv_en_continue2
This model is a fine-tuned version of [](https://huggingface.co/) on the common_voice_13_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1534
- Cer: 0.0838
- Wer: 0.1978
- Mer: 0.1928
- Wil: 0.3161
- Wip: 0.6839
- Hits: 122778
- Substitutions: 22066
- Deletions: 3337
- Insertions: 3914
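The word-level counts above determine the reported rates. As a sanity check, the standard WER and MER formulas (this is illustrative code, not part of the card) reproduce the evaluation numbers:

```python
hits, substitutions, deletions, insertions = 122778, 22066, 3337, 3914

reference_words = hits + substitutions + deletions   # length of the reference
edit_ops = substitutions + deletions + insertions    # total edit operations

wer = edit_ops / reference_words                     # word error rate
mer = edit_ops / (reference_words + insertions)      # match error rate

print(round(wer, 4), round(mer, 4))  # 0.1978 0.1928
```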
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 256
- eval_batch_size: 7
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 512
- total_eval_batch_size: 14
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50.0
### Training results
| Training Loss | Epoch | Step | Cer | Deletions | Hits | Insertions | Validation Loss | Mer | Substitutions | Wer | Wil | Wip |
|:-------------:|:-----:|:-----:|:------:|:---------:|:------:|:----------:|:---------------:|:------:|:-------------:|:------:|:------:|:------:|
| 1.3588 | 5.0 | 7445 | 0.1570 | 5345 | 104522 | 7446 | 1.5216 | 0.3284 | 38314 | 0.3449 | 0.5094 | 0.4906 |
| 1.285 | 6.0 | 8934 | 0.1497 | 6362 | 105386 | 5691 | 1.4842 | 0.3151 | 36433 | 0.3272 | 0.4919 | 0.5081 |
| 1.7562 | 7.0 | 10423 | 0.1487 | 6144 | 106299 | 5993 | 1.4710 | 0.3105 | 35738 | 0.3231 | 0.4849 | 0.5151 |
| 1.5766 | 8.0 | 11912 | 0.1343 | 5075 | 110239 | 5997 | 1.3866 | 0.2850 | 32867 | 0.2965 | 0.4500 | 0.5500 |
| 1.478 | 9.0 | 13401 | 0.1193 | 4513 | 113519 | 5389 | 1.3274 | 0.2608 | 30149 | 0.2703 | 0.4166 | 0.5834 |
| 1.4494 | 10.0 | 14890 | 0.1141 | 4925 | 114772 | 4845 | 1.2920 | 0.2500 | 28484 | 0.2582 | 0.3998 | 0.6002 |
| 1.4086 | 11.0 | 16379 | 0.1063 | 4113 | 116948 | 4863 | 1.2627 | 0.2359 | 27120 | 0.2436 | 0.3803 | 0.6197 |
| 1.375 | 12.0 | 17868 | 0.1017 | 3817 | 118153 | 4921 | 1.2363 | 0.2283 | 26211 | 0.2359 | 0.3689 | 0.6311 |
| 1.3304 | 13.0 | 19357 | 0.0977 | 3489 | 119548 | 4862 | 1.2181 | 0.2189 | 25144 | 0.2260 | 0.3551 | 0.6449 |
| 1.3215 | 14.0 | 20846 | 0.0928 | 3994 | 120102 | 3969 | 1.1973 | 0.2106 | 24085 | 0.2163 | 0.3430 | 0.6570 |
| 1.2824 | 15.0 | 22335 | 0.0894 | 3388 | 121469 | 4429 | 1.1777 | 0.2041 | 23324 | 0.2102 | 0.3327 | 0.6673 |
| 1.2535 | 16.0 | 23824 | 0.0857 | 3131 | 122436 | 4283 | 1.1625 | 0.1970 | 22614 | 0.2026 | 0.3226 | 0.6774 |
| 1.2096 | 17.0 | 25313 | 0.0817 | 3242 | 123261 | 3842 | 1.1429 | 0.1892 | 21678 | 0.1941 | 0.3109 | 0.6891 |
| 1.1749 | 18.0 | 26802 | 0.0795 | 3384 | 123650 | 3604 | 1.1330 | 0.1854 | 21147 | 0.1899 | 0.3047 | 0.6953 |
| 1.1528 | 19.0 | 28291 | 0.0770 | 3262 | 124432 | 3579 | 1.1220 | 0.1801 | 20487 | 0.1844 | 0.2964 | 0.7036 |
| 1.1373 | 20.0 | 29780 | 0.0762 | 3197 | 124623 | 3517 | 1.1168 | 0.1785 | 20361 | 0.1827 | 0.2942 | 0.7058 |
| 1.2751 | 21.0 | 31269 | 0.0921 | 3291 | 120871 | 4681 | 1.1934 | 0.2093 | 24019 | 0.2159 | 0.3408 | 0.6592 |
| 1.2585 | 22.0 | 32758 | 0.0884 | 3044 | 122013 | 4751 | 1.1727 | 0.2022 | 23124 | 0.2087 | 0.3297 | 0.6703 |
| 1.2612 | 23.0 | 34247 | 0.0863 | 3285 | 122169 | 4260 | 1.1634 | 0.1986 | 22727 | 0.2043 | 0.3247 | 0.6753 |
| 1.2389 | 24.0 | 35736 | 0.0851 | 3212 | 122473 | 4220 | 1.1574 | 0.1964 | 22496 | 0.2020 | 0.3215 | 0.6785 |
| 1.2422 | 25.0 | 37225 | 0.0838 | 3337 | 122778 | 3914 | 1.1534 | 0.1928 | 22066 | 0.1978 | 0.3161 | 0.6839 |
### Framework versions
- Transformers 4.40.0.dev0
- Pytorch 2.2.0+rocm5.6
- Datasets 2.18.0
- Tokenizers 0.15.2
### Wandb run
https://wandb.ai/butspeechfit/decred_commonvoice_en/runs/ED_small_cv_en_continue2 |
onizukal/Boya1_3Class_SGD_1e3_20Epoch_Beit-large-224_fold1 | onizukal | "2024-06-12T12:35:28Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"beit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/beit-large-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2024-06-12T11:56:08Z" | ---
license: apache-2.0
base_model: microsoft/beit-large-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: Boya1_3Class_SGD_1e3_20Epoch_Beit-large-224_fold1
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.7360847135487374
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Boya1_3Class_SGD_1e3_20Epoch_Beit-large-224_fold1
This model is a fine-tuned version of [microsoft/beit-large-patch16-224](https://huggingface.co/microsoft/beit-large-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6426
- Accuracy: 0.7361
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.8408 | 1.0 | 924 | 0.9037 | 0.6155 |
| 0.8244 | 2.0 | 1848 | 0.7895 | 0.6715 |
| 0.8238 | 3.0 | 2772 | 0.7327 | 0.6951 |
| 0.6266 | 4.0 | 3696 | 0.6993 | 0.7092 |
| 0.7355 | 5.0 | 4620 | 0.6767 | 0.7220 |
| 0.6356 | 6.0 | 5544 | 0.6627 | 0.7288 |
| 0.6111 | 7.0 | 6468 | 0.6531 | 0.7317 |
| 0.6432 | 8.0 | 7392 | 0.6463 | 0.7355 |
| 0.5597 | 9.0 | 8316 | 0.6435 | 0.7353 |
| 0.7957 | 10.0 | 9240 | 0.6426 | 0.7361 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.2
|