modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card
---|---|---|---|---|---|---|---|---|---|
richardkelly/google-gemma-7b-1718558660 | richardkelly | "2024-06-16T17:24:20Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-16T17:24:20Z" | Entry not found |
AstraLabs/Libri-Data | AstraLabs | "2024-06-16T17:28:14Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-16T17:25:56Z" | Entry not found |
somashekar2002/Java-code-Gen-Z | somashekar2002 | "2024-06-16T18:00:36Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-06-16T17:25:57Z" | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
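The card leaves this section blank. As a minimal, hedged sketch only: the repository id below is taken from this row's `modelId`, and `AutoModelForCausalLM` is an assumption based on the `unsloth` tag (Unsloth fine-tunes are typically causal LMs); the card itself does not document the architecture or task.

```python
# Hedged sketch: MODEL_ID comes from this row; the model class is an assumption.
MODEL_ID = "somashekar2002/Java-code-Gen-Z"


def generate(prompt: str, max_new_tokens: int = 64) -> str:
    # Imports kept local so the constant above is usable without transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID)
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)


# Example (downloads the checkpoint; prompt is illustrative):
# generate("// Java method that reverses a string")
```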
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
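The calculator linked above estimates emissions roughly as energy drawn (kW × hours, scaled by data-center PUE) times the grid's carbon intensity. A hedged back-of-envelope helper, with illustrative numbers only (none of these values are reported on this card):

```python
def estimate_co2_grams(hardware_watts: float, hours: float,
                       grid_kg_co2_per_kwh: float, pue: float = 1.0) -> float:
    """Rough CO2eq estimate in grams: energy (kWh) x grid carbon intensity,
    scaled by the data-center PUE. All inputs must be supplied by the user;
    this mirrors the ML Impact calculator's approach, not its exact factors."""
    kwh = hardware_watts / 1000.0 * hours * pue
    return kwh * grid_kg_co2_per_kwh * 1000.0  # kg -> g


# Illustrative: a 300 W GPU for 10 h on a 0.4 kg CO2/kWh grid is about 1.2 kg CO2eq.
print(estimate_co2_grams(300, 10, 0.4))
```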
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
SansarK/Qwen2-0.5B-RKLLM | SansarK | "2024-06-16T17:34:59Z" | 0 | 0 | null | [
"license:wtfpl",
"region:us"
] | null | "2024-06-16T17:29:23Z" | ---
license: wtfpl
---
|
RazzzHF/trained-sd3-lora | RazzzHF | "2024-06-16T17:32:39Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-16T17:32:39Z" | Entry not found |
BahaaEldin0/ensembleModelsFinetuned | BahaaEldin0 | "2024-06-16T21:24:43Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-16T17:34:56Z" | Entry not found |
JacobLinCool/odcnn-320k-100 | JacobLinCool | "2024-06-16T17:45:18Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2024-06-16T17:35:58Z" | ---
license: apache-2.0
---
# odcnn-320k-100
Onset Detection CNN Model for https://github.com/seiichiinoue/odcnn
## Training Log
The training logs for the Don and Ka models can be found in the `log` directory.
## Dataset
This model was trained on the following 100 songs from the internet:
- 1.1.001 ちゅ、多様性。
- 1.1.002 絆ノ奇跡
- 1.1.003 Love You
- 1.1.004 アイドル
- 1.1.005 怪物
- 1.1.006 Surges
- 1.1.007 Back to Life
- 1.1.008 Subtitle
- 1.1.009 ミックスナッツ
- 1.1.010 Bukan Cinta Biasa
- 1.1.011 トウキョウ・シャンディ・ランデヴ
- 1.1.012 私は最強
- 1.1.013 うっせぇわ
- 1.1.014 オトナブルー
- 1.1.015 恋ゲバ
- 1.1.016 残響散歌
- 1.1.017 グッバイ宣言
- 1.1.018 祝福
- 1.1.019 夜に駆ける
- 1.1.020 群青
- 1.1.021 新時代
- 1.1.022 阿修羅ちゃん
- 1.1.023 踊
- 1.1.024 わたしの一番かわいいところ
- 1.1.025 紅蓮華
- 1.1.026 炎
- 1.1.027 明け星
- 1.1.028 フォニイ
- 1.1.029 ロキ
- 1.1.030 Habit
- 1.1.031 RPG
- 1.1.032 Dragon Night
- 1.1.033 夏祭り ジッタリン・ジン
- 1.1.034 夏祭り
- 1.1.035 ドライフラワー
- 1.1.036 シル・ヴ・プレジデント
- 1.1.037 なにやってもうまくいかない
- 1.1.038 シュガーソングとビターステップ
- 1.1.039 前前前世
- 1.1.040 愛にできることはまだあるかい
- 1.1.041 チューリングラブ feat.Sou ナナヲアカリ
- 1.1.042 青と夏
- 1.1.043 一途
- 1.1.044 白日
- 1.1.045 Hope
- 1.1.046 CITRUS
- 1.1.047 天体観測
- 1.1.048 猫
- 1.1.049 廻廻奇譚
- 1.1.050 ナンセンス文学
- 1.7.001 拝啓、学校にて・・・
- 1.7.002 太鼓侍
- 1.7.003 23時54分、陽の旅路へのプレリュード
- 1.7.004 CUT! into the FUTURE
- 1.7.005 Nosferatu
- 1.7.006 GORI × GORI × SafaRI
- 1.7.007 夢うつつカタルシス
- 1.7.008 われら無敵のドコン団
- 1.7.009 ドドドドドンだフル!
- 1.7.010 ラ・モレーナ・クモナイ
- 1.7.012 鼓立あおはる学園校歌
- 1.7.013 スキに理由はいらないじゃん!
- 1.7.014 ドローイン☆ドリーム!
- 1.7.015 Destination 2F29
- 1.7.016 共奏鼓祭
- 1.7.017 エール・エクス・マキナ!
- 1.7.018 RAINBOW★SKY
- 1.7.019 Space-Time Emergency
- 1.7.020 アンチェイン・ガール!
- 1.7.021 白日夢、霧雨に溶けて
- 1.7.022 スリケンランナー
- 1.7.023 詩謳兎揺蕩兎
- 1.7.024 閃光ヴァルキュリア
- 1.7.025 リンダは今日も絶好調
- 1.7.026 六華の舞
- 1.7.027 Illusion Flare
- 1.7.028 ヘイラ
- 1.7.029 LΔchesis
- 1.7.030 六本の薔薇と采の歌
- 1.7.031 Doppelgangers
- 1.7.032 うなぎのたましいロック
- 1.7.033 まいにちがドンダフル
- 1.7.034 ヒカリノカナタヘ(AC)
- 1.7.037 神竜 ~Shinryu~
- 1.7.038 SUPERNOVA
- 1.7.039 そして勇者は眠りにつく
- 1.7.040 狂瀾怒濤
- 1.7.041 きょうはたいこ曜日
- 1.7.043 其方、激昂
- 1.7.044 Challengers
- 1.7.045 SORA-V コズミックバード
- 1.7.046 ON SAY GO SAY
- 1.7.047 まおぅ
- 1.7.048 ねこくじら
- 1.7.049 Player's High
- 1.7.050 弧
- 1.7.051 めためた☆ゆにば~すっ!
- 1.7.052 ラブユー☆どんちゃん
- 1.7.053 トンガチン
- 1.7.054 喫茶レイン
|
SilvioLima/absa_treinamento_1 | SilvioLima | "2024-06-17T19:23:44Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-06-16T17:35:58Z" |
# Model Card for ABSA_AOTE_distilGPT2
## General Information
- **Name:** Aspect-Opinion Triplet Extraction (AOTE) model based on distilGPT2
- **Type:** decoder-only
- **License:** Apache License 2.0
- **Base model:** distilGPT2
## Summary
distilGPT2 model fine-tuned for the ABSA/AOTE task on the SemEval + Amazon datasets.
Training was done with PyTorch.
Parameters:
| Parameter | Value | Description |
| ------------- | ------------- | ------------- |
|model | distilGPT2 | Base model name |
|train_size | None | Number of training samples |
|val_size | None | Number of validation samples |
|test_size | None | Number of test samples |
|max_input_length | 128 | Maximum number of input tokens |
|max_output_length | 128 | Maximum number of output tokens |
|batch_size | 16 | Number of samples per batch |
|n_epochs | 10 | Maximum number of training epochs |
|lr | 1e-3 | Learning rate |
|use_weights | False | Whether to use custom weights for each polarity |
|use_paraphrase | True | Whether to emit the output in paraphrase format |
|use_prompt | False | Whether to prepend an instruction to the review in the input |
|one_shot | False | Whether to include one example alongside the prompt |
|early_stop | 3 | Early-stopping patience (training ends if validation loss does not improve for three epochs) |
## Intended Use
The model was fine-tuned on the input/output format described below, so inference data should follow the same format when the model is loaded.
Input: The pizza was good, but the waiter was lazy.
Output: pizza is great because it is good <sep> waiter is bad because it is lazy
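The paraphrase-style output above can be mapped back to (aspect, polarity, opinion) triplets. A hedged sketch: the template below is inferred from the card's single example ("great"/"bad" encoding positive/negative polarity) and may not cover every output the model produces.

```python
def parse_aote_output(text: str):
    """Parse '<aspect> is great|bad because <aspect> is <opinion> <sep> ...'
    paraphrase output into (aspect, polarity, opinion) triplets.
    The template is inferred from the card's example, not documented."""
    triplets = []
    for clause in text.split("<sep>"):
        clause = clause.strip()
        # Expected shape: "<aspect> is <great|bad> because <aspect> is <opinion>"
        head, _, tail = clause.partition(" because ")
        aspect, _, verdict = head.rpartition(" is ")
        polarity = "positive" if verdict.strip() == "great" else "negative"
        opinion = tail.rpartition(" is ")[2].strip()
        triplets.append((aspect.strip(), polarity, opinion))
    return triplets


print(parse_aote_output(
    "pizza is great because it is good <sep> waiter is bad because it is lazy"
))
```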
## Languages
English
## Training Data
The data are a combination of the ASTE dataset from [1] and DM-ASTE [2], both of which follow the data format described above.
[1] XU, Lu et al. Position-aware tagging for aspect sentiment triplet extraction. arXiv preprint arXiv:2010.02609, 2020.
[2] XU, Ting et al. Measuring Your ASTE Models in The Wild: A Diversified Multi-domain Dataset For Aspect Sentiment Triplet Extraction. arXiv preprint arXiv:2305.17448, 2023.
|
GandalfTheHoly1/cathode_materials_first_try | GandalfTheHoly1 | "2024-06-16T17:51:00Z" | 0 | 0 | null | [
"license:unknown",
"region:us"
] | null | "2024-06-16T17:36:16Z" | ---
license: unknown
---
|
harsham/guddi | harsham | "2024-06-16T17:36:28Z" | 0 | 0 | null | [
"license:bigscience-openrail-m",
"region:us"
] | null | "2024-06-16T17:36:28Z" | ---
license: bigscience-openrail-m
---
|
vikkyyy/vic_v3 | vikkyyy | "2024-06-16T17:39:29Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-16T17:36:34Z" | Entry not found |
f4b1an/dalle2tribute | f4b1an | "2024-06-16T17:38:53Z" | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | "2024-06-16T17:38:02Z" | ---
license: creativeml-openrail-m
---
|
chhuuchuuz/Kyujin2024 | chhuuchuuz | "2024-06-17T17:35:22Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-16T17:38:12Z" | Entry not found |
zrile-95/llama38binstruct_summarize | zrile-95 | "2024-06-16T17:38:35Z" | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:NousResearch/Meta-Llama-3-8B-Instruct",
"license:other",
"region:us"
] | null | "2024-06-16T17:38:16Z" | ---
license: other
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: NousResearch/Meta-Llama-3-8B-Instruct
datasets:
- generator
model-index:
- name: llama38binstruct_summarize
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama38binstruct_summarize
This model is a fine-tuned version of [NousResearch/Meta-Llama-3-8B-Instruct](https://huggingface.co/NousResearch/Meta-Llama-3-8B-Instruct) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2577
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_steps: 0.03
- training_steps: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.2319 | 1.25 | 25 | 1.7342 |
| 0.4296 | 2.5 | 50 | 1.9347 |
| 0.2162 | 3.75 | 75 | 2.1274 |
| 0.111 | 5.0 | 100 | 2.2577 |
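Since this repository holds a PEFT adapter rather than full weights, inference typically means loading the base model and attaching the adapter. A hedged sketch (the repo ids come from this card; generation settings are left to the caller):

```python
BASE_MODEL = "NousResearch/Meta-Llama-3-8B-Instruct"
ADAPTER = "zrile-95/llama38binstruct_summarize"


def load_model():
    # Imports kept local: the 8B base model is a large download.
    from peft import PeftModel
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
    base = AutoModelForCausalLM.from_pretrained(BASE_MODEL)
    # Attach the PEFT (LoRA) adapter weights from this repository.
    model = PeftModel.from_pretrained(base, ADAPTER)
    return model, tokenizer
```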
### Framework versions
- PEFT 0.11.1
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1 |
Tinuva/MidkemiaAnimeTV | Tinuva | "2024-06-16T17:54:42Z" | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | "2024-06-16T17:44:04Z" | ---
license: creativeml-openrail-m
---
|
fruk19/whisper-thai-north | fruk19 | "2024-06-16T17:48:09Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-16T17:48:09Z" | Entry not found |
clxudiajazmin/ClaudiaSoria_TFM_V4 | clxudiajazmin | "2024-06-17T12:00:17Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | "2024-06-16T17:49:00Z" | Entry not found |
lhbit20010120/without_vg_refcoco_model | lhbit20010120 | "2024-06-16T17:50:43Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-16T17:50:43Z" | Entry not found |
fruk19/thainorthmodel | fruk19 | "2024-06-16T18:13:42Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-06-16T18:00:51Z" | Entry not found |
menglc/deepstack-l-vicuna-7b | menglc | "2024-06-17T03:18:58Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"deepstack_llama",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-06-16T18:03:55Z" | ---
license: apache-2.0
---
|
samannar/ddduva | samannar | "2024-06-16T18:03:57Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2024-06-16T18:03:57Z" | ---
license: openrail
---
|
Jareen/tesing | Jareen | "2024-06-16T18:07:38Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-16T18:07:38Z" | Entry not found |
SeoulStreamingStation/KLM4 | SeoulStreamingStation | "2024-06-16T19:14:36Z" | 0 | 5 | null | [
"license:other",
"region:us"
] | null | "2024-06-16T18:12:13Z" | ---
license: other
license_name: sss
license_link: LICENSE
---
|
moschouChry/chronos-t5-finetuned_small_1-Patient0-fine-tuned_20240616_205512 | moschouChry | "2024-06-16T18:14:56Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | "2024-06-16T18:14:39Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
SilvioLima/absa_treinamento_2 | SilvioLima | "2024-06-16T19:05:14Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-06-16T18:15:40Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
north/llama2_DensityExperiment_filtered80-70k-exporttest | north | "2024-06-16T18:26:02Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-06-16T18:17:01Z" | Entry not found |
Emmanuel132/Mack_dm690s | Emmanuel132 | "2024-06-16T18:19:35Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-16T18:19:34Z" | Entry not found |
EugeneShally/whisper-small-nl | EugeneShally | "2024-06-17T05:27:31Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-06-16T18:24:57Z" | Entry not found |
Mohammed-majeed/llama-3-8b-bnb-4bit-Unsloth-chunk-7-0.5-2 | Mohammed-majeed | "2024-06-16T18:26:57Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-16T18:25:56Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** Mohammed-majeed
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
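Unsloth-trained checkpoints can usually be reloaded through Unsloth's `FastLanguageModel`. A hedged sketch: the repo name comes from this row, while `max_seq_length` is an assumption the card does not state, and a CUDA environment is required.

```python
REPO = "Mohammed-majeed/llama-3-8b-bnb-4bit-Unsloth-chunk-7-0.5-2"


def load():
    # Local import: unsloth needs a CUDA-capable environment to install/run.
    from unsloth import FastLanguageModel

    model, tokenizer = FastLanguageModel.from_pretrained(
        model_name=REPO,      # this repository
        max_seq_length=2048,  # assumption; not stated on the card
        load_in_4bit=True,    # matches the bnb-4bit base model
    )
    FastLanguageModel.for_inference(model)  # switch to inference mode
    return model, tokenizer
```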
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
bfrenan/Llama3-log-to-ttp-lora-adapters_2 | bfrenan | "2024-06-16T18:29:46Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-16T18:29:38Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
---
# Uploaded model
- **Developed by:** bfrenan
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
bfrenan/Llama3-log-to-ttp-tokenizer_2 | bfrenan | "2024-06-16T18:29:48Z" | 0 | 0 | transformers | [
"transformers",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-06-16T18:29:47Z" | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Fawazzx/alzheimer_classification_using_resnet50_finetuned | Fawazzx | "2024-06-16T21:37:29Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-16T18:30:25Z" | # Fine-Tuning ResNet50 for Alzheimer's MRI Classification
This repository contains a Jupyter Notebook for fine-tuning a ResNet50 model to classify Alzheimer's disease stages from MRI images. The notebook uses PyTorch and the dataset is loaded from the Hugging Face Datasets library.
## Table of Contents
- [Introduction](#introduction)
- [Dataset](#dataset)
- [Model Architecture](#model-architecture)
- [Setup](#setup)
- [Training](#training)
- [Evaluation](#evaluation)
- [Usage](#usage)
- [Results](#results)
- [Contributing](#contributing)
- [License](#license)
## Introduction
This notebook fine-tunes a pre-trained ResNet50 model to classify MRI images into one of four stages of Alzheimer's disease:
- Mild Demented
- Moderate Demented
- Non-Demented
- Very Mild Demented
## Dataset
The dataset used is [Falah/Alzheimer_MRI](https://huggingface.co/datasets/Falah/Alzheimer_MRI) from the Hugging Face Datasets library. It consists of MRI images categorized into the four stages of Alzheimer's disease.
## Model Architecture
The model architecture is based on ResNet50. The final fully connected layer is modified to output predictions for 4 classes.
## Setup
To run the notebook locally, follow these steps:
1. Clone the repository:
```bash
git clone https://github.com/your_username/alzheimer_mri_classification.git
cd alzheimer_mri_classification
```
2. Install the required dependencies:
```bash
pip install -r requirements.txt
```
3. Open the notebook:
```bash
jupyter notebook fine-tuning.ipynb
```
## Training
The notebook includes sections for:
- Loading and preprocessing the dataset
- Defining the model architecture
- Setting up the training loop with a learning rate scheduler and optimizer
- Training the model for a specified number of epochs
- Saving the trained model weights
## Evaluation
The notebook includes a section for evaluating the trained model on the validation set. It calculates and prints the validation loss and accuracy.
## Usage
Once trained, the model can be saved and used for inference on new MRI images. The trained model weights are saved as `alzheimer_model_resnet50.pth`.
## Load the model architecture and weights
```python
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet50(weights=None)
model.fc = nn.Linear(model.fc.in_features, 4)  # 4 Alzheimer's stages
model.load_state_dict(torch.load("alzheimer_model_resnet50.pth", map_location=torch.device('cpu')))
model.eval()
```
## Results
The model achieved an accuracy of 95.9375% on the validation set.
## Contributing
Contributions are welcome! If you have any suggestions, bug reports, or feature requests, please open an issue or submit a pull request. |
seashorers/GRAGAS | seashorers | "2024-06-17T00:52:46Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2024-06-16T18:30:39Z" | ---
license: openrail
---
|
hngan/cocowholebody | hngan | "2024-06-16T18:40:21Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-16T18:31:24Z" | Entry not found |
SukritSNegi/Llama-2-7b-chat-new-finetune | SukritSNegi | "2024-06-22T10:47:39Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-16T18:38:43Z" | Entry not found |
dtruong46me/flant5-large-lora | dtruong46me | "2024-06-16T18:41:25Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:google/flan-t5-large",
"license:apache-2.0",
"region:us"
] | null | "2024-06-16T18:41:15Z" | ---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: google/flan-t5-large
metrics:
- rouge
model-index:
- name: flant5-large-lora
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flant5-large-lora
This model is a fine-tuned version of [google/flan-t5-large](https://huggingface.co/google/flan-t5-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6119
- Rouge1: 8.9675
- Rouge2: 0.6714
- Rougel: 8.0407
- Rougelsum: 8.3753
- Gen Len: 18.37
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 24
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 1.8402 | 1.0 | 1538 | 0.7486 | 8.8441 | 0.6859 | 7.9731 | 8.3103 | 19.502 |
| 0.8152 | 2.0 | 3076 | 0.6119 | 8.9675 | 0.6714 | 8.0407 | 8.3753 | 18.37 |
### Framework versions
- PEFT 0.11.1
- Transformers 4.36.1
- Pytorch 2.1.2
- Datasets 2.20.0
- Tokenizers 0.15.2 |
moschouChry/chronos-t5-finetuned_small_1-Patient0-fine-tuned_20240616_205441 | moschouChry | "2024-06-16T18:42:24Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | "2024-06-16T18:42:10Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
PLS442/Yoko | PLS442 | "2024-06-16T18:43:49Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2024-06-16T18:43:02Z" | ---
license: openrail
---
|
coco233/run | coco233 | "2024-06-16T18:44:35Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-16T18:43:28Z" | Entry not found |
abyesses/results | abyesses | "2024-06-16T18:43:56Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-16T18:43:56Z" | Entry not found |
gamallo/translator-gl-zh | gamallo | "2024-06-16T22:16:38Z" | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | "2024-06-16T18:44:02Z" | ---
license: mit
---
**How to translate with this model**
+ Install [Python 3.9](https://www.python.org/downloads/release/python-390/) plus CTranslate2 and subword-nmt:
```bash
pip install ctranslate2~=3.20.0
```
```bash
pip install subword-nmt
```
+ Tokenize the input with BPE:
```bash
subword-nmt apply-bpe -c gl-detok10k.code < input_file.txt > input_file_bpe.txt
```
+ Translate the BPE-tokenized file with the `ct2_detok-gl-zh` model:
```bash
python3 trans_ct2.py ct2_detok-gl-zh input_file_bpe.txt >output_file_bpe.txt
```
+ Remove the BPE segmentation from the output:
```bash
sed "s/@@ //g" output_file_bpe.txt > output_file.txt
```
**Acknowledgments**
Thanks to Tang Waying, Zheng Jie and Wang Tianjiao for helping prepare the parallel corpora. |
puipuipui/zee | puipuipui | "2024-06-16T19:12:53Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-16T18:44:14Z" | Entry not found |
lucao123/h-an-m-model | lucao123 | "2024-06-16T18:46:44Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-16T18:46:43Z" | Entry not found |
Paco4365483/Finetune10 | Paco4365483 | "2024-06-16T19:02:31Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llava_llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-06-16T18:50:52Z" | Entry not found |
strwbrylily/Im-Nayeon-by-strwbrylily | strwbrylily | "2024-06-16T18:55:29Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2024-06-16T18:54:50Z" | ---
license: openrail
---
|
numen-tech/Llama-3-WhiteRabbitNeo-8B-v2.0-w4a16g128asym | numen-tech | "2024-06-16T19:00:17Z" | 0 | 0 | null | [
"arxiv:2308.13137",
"license:llama3",
"region:us"
] | null | "2024-06-16T18:55:34Z" | ---
license: llama3
---
4-bit [OmniQuant](https://arxiv.org/abs/2308.13137) quantized version of [Llama-3-WhiteRabbitNeo-8B-v2.0](https://huggingface.co/WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0).
|
codingninja/testing | codingninja | "2024-06-16T18:55:43Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-16T18:55:43Z" | Entry not found |
numen-tech/Llama-3-WhiteRabbitNeo-8B-v2.0-w3a16g40sym | numen-tech | "2024-06-16T19:00:26Z" | 0 | 0 | null | [
"arxiv:2308.13137",
"license:llama3",
"region:us"
] | null | "2024-06-16T18:56:01Z" | ---
license: llama3
---
3-bit [OmniQuant](https://arxiv.org/abs/2308.13137) quantized version of [Llama-3-WhiteRabbitNeo-8B-v2.0](https://huggingface.co/WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0).
|
kaitr/opt-6.7b-lora | kaitr | "2024-06-16T18:59:00Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-16T18:59:00Z" | Entry not found |
AndreMitri/BERT_cls_polaridade | AndreMitri | "2024-06-16T19:05:22Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-16T19:04:20Z" | Entry not found |
Danyt24/finetuning-sentiment-model-4000-samples | Danyt24 | "2024-06-16T19:06:29Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-16T19:06:29Z" | Entry not found |
HotDrify/thelemyAI | HotDrify | "2024-06-16T19:06:31Z" | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | "2024-06-16T19:06:31Z" | ---
license: mit
---
|
silent666/Qwen-Qwen1.5-7B-1718564795 | silent666 | "2024-06-16T19:06:38Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen1.5-7B",
"region:us"
] | null | "2024-06-16T19:06:35Z" | ---
base_model: Qwen/Qwen1.5-7B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.11.1 |
iasjkk/MV_EC | iasjkk | "2024-07-01T18:07:32Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-16T19:07:21Z" | Entry not found |
levipereira/yolov8n-trt | levipereira | "2024-06-16T19:08:44Z" | 0 | 0 | null | [
"license:agpl-3.0",
"region:us"
] | null | "2024-06-16T19:08:43Z" | ---
license: agpl-3.0
---
|
Nibo4k/CantoraCreditos | Nibo4k | "2024-06-16T21:24:51Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-16T19:11:43Z" | Entry not found |
Testvsls0224/test1model | Testvsls0224 | "2024-06-17T01:57:50Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-16T19:11:58Z" | Entry not found |
Damir4/izamodel | Damir4 | "2024-06-16T19:12:05Z" | 0 | 0 | null | [
"license:gpl-3.0",
"region:us"
] | null | "2024-06-16T19:12:05Z" | ---
license: gpl-3.0
---
|
vsls0224/testModel | vsls0224 | "2024-06-16T19:13:10Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-16T19:13:10Z" | Entry not found |
ahmedesmail16/0.50-800Train-100Test-beit-base | ahmedesmail16 | "2024-06-17T00:18:58Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"beit",
"image-classification",
"generated_from_trainer",
"base_model:microsoft/beit-base-patch16-224-pt22k-ft22k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2024-06-16T19:16:41Z" | ---
license: apache-2.0
base_model: microsoft/beit-base-patch16-224-pt22k-ft22k
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: 0.50-800Train-100Test-beit-base
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 0.50-800Train-100Test-beit-base
This model is a fine-tuned version of [microsoft/beit-base-patch16-224-pt22k-ft22k](https://huggingface.co/microsoft/beit-base-patch16-224-pt22k-ft22k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7501
- Accuracy: 0.8192
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.7627 | 0.9536 | 18 | 0.6991 | 0.7860 |
| 0.3414 | 1.9603 | 37 | 0.5881 | 0.8070 |
| 0.1402 | 2.9669 | 56 | 0.5879 | 0.8114 |
| 0.0663 | 3.9735 | 75 | 0.6249 | 0.8175 |
| 0.0377 | 4.9801 | 94 | 0.6539 | 0.8210 |
| 0.0314 | 5.9868 | 113 | 0.7074 | 0.8175 |
| 0.0189 | 6.9934 | 132 | 0.7596 | 0.8210 |
| 0.0147 | 8.0 | 151 | 0.7211 | 0.8253 |
| 0.0157 | 8.9536 | 169 | 0.7412 | 0.8166 |
| 0.0095 | 9.5364 | 180 | 0.7501 | 0.8192 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.1.2
- Datasets 2.19.2
- Tokenizers 0.19.1
|
Rrrr3/Hhk | Rrrr3 | "2024-06-16T19:18:07Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-16T19:18:07Z" | Entry not found |
andreluiz1/teste | andreluiz1 | "2024-06-16T19:21:34Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2024-06-16T19:21:34Z" | ---
license: openrail
---
|
mjfan1999/LukeCombs2024 | mjfan1999 | "2024-06-16T19:32:06Z" | 0 | 0 | null | [
"license:unknown",
"region:us"
] | null | "2024-06-16T19:22:29Z" | ---
license: unknown
---
|
dostoewslybtw/portal_of_i | dostoewslybtw | "2024-06-16T19:24:01Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2024-06-16T19:24:01Z" | ---
license: apache-2.0
---
|
moschouChry/chronos-t5-finetuned_small_1-Patient0-fine-tuned_20240616_205414 | moschouChry | "2024-06-16T19:25:11Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | "2024-06-16T19:24:57Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
codingninja/openchat-7b-galbaat | codingninja | "2024-06-21T13:22:07Z" | 0 | 0 | null | [
"tensorboard",
"safetensors",
"region:us"
] | null | "2024-06-16T19:25:09Z" | Entry not found |
royvdkoelen/DayZ | royvdkoelen | "2024-06-16T19:26:24Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-16T19:26:24Z" | Entry not found |
matthewleechen/yolov8s_ukpatents_singleclass | matthewleechen | "2024-06-16T19:27:14Z" | 0 | 0 | null | [
"tensorboard",
"region:us"
] | null | "2024-06-16T19:26:30Z" | Entry not found |
aerainyourarea/S1Seoyeon | aerainyourarea | "2024-06-16T19:34:03Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2024-06-16T19:26:40Z" | ---
license: openrail
---
|
Marco127/llamantino_hodi_requalification | Marco127 | "2024-06-16T21:45:13Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-06-16T19:28:21Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
JhuTheBunny999/Frey | JhuTheBunny999 | "2024-06-20T09:39:04Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-16T19:31:48Z" | Entry not found |
strwbrylily/Kim-Jiwoo-RUNext-by-strwbrylily | strwbrylily | "2024-06-16T19:34:11Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2024-06-16T19:31:51Z" | ---
license: openrail
---
|
TheMindExpansionNetwork/m1nd3xpand3r-1024x1024-sd3-lora | TheMindExpansionNetwork | "2024-06-16T19:36:26Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-16T19:36:26Z" | Entry not found |
Mortello/q-FrozenLake-v1 | Mortello | "2024-06-16T19:36:51Z" | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2024-06-16T19:36:48Z" | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym  # required for gym.make below

# `load_from_hub` is the helper defined in the Deep RL Course notebook
model = load_from_hub(repo_id="Mortello/q-FrozenLake-v1", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
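Once loaded, the Q-table can be turned into a greedy policy by taking the argmax action per state. A minimal illustrative sketch (the variable names and toy table below are assumptions for illustration, not part of this repo):

```python
# A tabular Q-function maps each state to per-action values; the greedy
# policy simply picks the argmax action for the current state.
def greedy_action(qtable, state):
    row = qtable[state]
    return max(range(len(row)), key=lambda a: row[a])

# Toy 2-state, 3-action table for illustration.
toy_q = [[0.0, 0.5, 0.2],
         [0.9, 0.1, 0.0]]
print(greedy_action(toy_q, 0))  # -> 1
```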
|
T-ZERO/first_prototype | T-ZERO | "2024-06-16T19:44:23Z" | 0 | 0 | flair | [
"flair",
"legal",
"text-generation",
"fa",
"dataset:OpenGVLab/ShareGPT-4o",
"license:llama3",
"region:us"
] | text-generation | "2024-06-16T19:38:26Z" | ---
license: llama3
datasets:
- OpenGVLab/ShareGPT-4o
language:
- fa
metrics:
- character
library_name: flair
pipeline_tag: text-generation
tags:
- legal
--- |
IlyaGusev/saiga_llama3_70b_sft_m1_d5_abliterated_kto_m1_d2_lora | IlyaGusev | "2024-06-16T19:42:25Z" | 0 | 0 | null | [
"safetensors",
"region:us"
] | null | "2024-06-16T19:39:29Z" | Entry not found |
Mortello/q-Taxi-v3 | Mortello | "2024-06-16T19:43:32Z" | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2024-06-16T19:43:31Z" | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.74
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym  # required for gym.make below

# `load_from_hub` is the helper defined in the Deep RL Course notebook
model = load_from_hub(repo_id="Mortello/q-Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
kawagoshi-llm-team/test_40B | kawagoshi-llm-team | "2024-06-16T19:58:12Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-06-16T19:45:39Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
menglc/deepstack-l-hd-vicuna-7b | menglc | "2024-06-17T03:08:34Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"deepstack_llama",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-06-16T19:49:03Z" | ---
license: apache-2.0
---
|
stojchet/python-sft-r64-a16-d0.05-e3 | stojchet | "2024-06-16T19:54:05Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:deepseek-ai/deepseek-coder-1.3b-base",
"license:other",
"region:us"
] | null | "2024-06-16T19:53:59Z" | ---
base_model: deepseek-ai/deepseek-coder-1.3b-base
datasets:
- generator
library_name: peft
license: other
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: python-sft-r64-a16-d0.05-e3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/stojchets/huggingface/runs/rmvtpvu9)
# python-sft-r64-a16-d0.05-e3
This model is a fine-tuned version of [deepseek-ai/deepseek-coder-1.3b-base](https://huggingface.co/deepseek-ai/deepseek-coder-1.3b-base) on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.41e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
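As a quick sanity check on these settings: the total train batch size follows from the per-device batch size and the gradient accumulation steps. A minimal sketch (single device assumed):

```python
# Effective (total) train batch size under gradient accumulation:
# per-device batch size x accumulation steps x number of devices.
def total_train_batch_size(per_device: int, grad_accum_steps: int, num_devices: int = 1) -> int:
    return per_device * grad_accum_steps * num_devices

# Values from the hyperparameters above.
print(total_train_batch_size(8, 16))  # -> 128
```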
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.42.0.dev0
- Pytorch 2.2.2+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1 |
pwl15/llava-v1.5-food101 | pwl15 | "2024-06-17T18:52:42Z" | 0 | 0 | null | [
"safetensors",
"region:us"
] | null | "2024-06-16T19:59:04Z" | Entry not found |
kohapahm/distilhubert-finetuned-gtzan | kohapahm | "2024-06-16T20:00:38Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-16T20:00:38Z" | Entry not found |
CLASS-MATE/llama2-train_test | CLASS-MATE | "2024-06-17T22:40:18Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-06-16T20:03:31Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
xpozryx/ponyRealisticTrainingColab | xpozryx | "2024-06-16T21:59:23Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-16T20:07:59Z" | Entry not found |
SkyWR/wgn | SkyWR | "2024-06-16T20:12:04Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2024-06-16T20:09:32Z" | ---
license: openrail
---
|
evitalyst/ChatMe | evitalyst | "2024-06-16T20:09:57Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2024-06-16T20:09:57Z" | ---
license: apache-2.0
---
|
Yuseifer/Reinforce_model-cartpole | Yuseifer | "2024-06-16T20:10:22Z" | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | "2024-06-16T20:10:13Z" | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce_model-cartpole
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 467.80 +/- 96.60
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
sharad31/my_model | sharad31 | "2024-06-16T20:12:02Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-16T20:11:37Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** sharad31
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
shunsso/data | shunsso | "2024-06-16T20:19:13Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-16T20:19:13Z" | Entry not found |
Roo89/Ru | Roo89 | "2024-06-16T20:19:36Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-16T20:19:36Z" | Entry not found |
GalaktischeGurke/whisper-large-v3_German_merge_ratio_ch_de_0.5 | GalaktischeGurke | "2024-06-16T20:19:51Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-16T20:19:51Z" | Entry not found |
sharad31/talktoyourself | sharad31 | "2024-06-16T20:29:14Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-16T20:29:04Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** sharad31
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
SZ0/sha | SZ0 | "2024-06-17T21:47:39Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-16T20:51:37Z" | Entry not found |
arloo/Iggy_Azalea | arloo | "2024-06-16T20:59:22Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-16T20:56:37Z" | Entry not found |
Augusto777/swinv2-tiny-patch4-window8-256-ve-UH | Augusto777 | "2024-06-16T21:07:34Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"swinv2",
"image-classification",
"generated_from_trainer",
"base_model:microsoft/swinv2-tiny-patch4-window8-256",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2024-06-16T20:59:33Z" | ---
license: apache-2.0
base_model: microsoft/swinv2-tiny-patch4-window8-256
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: swinv2-tiny-patch4-window8-256-ve-UH
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swinv2-tiny-patch4-window8-256-ve-UH
This model is a fine-tuned version of [microsoft/swinv2-tiny-patch4-window8-256](https://huggingface.co/microsoft/swinv2-tiny-patch4-window8-256) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0154
- Accuracy: 0.7115
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 80
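The `linear` scheduler with `lr_scheduler_warmup_ratio: 0.1` ramps the learning rate up over the first 10% of steps, then decays it linearly to zero. A minimal sketch of that shape (pure Python for illustration, not the actual `transformers` implementation; the step counts are taken from the training table above):

```python
def lr_at_step(step: int, total_steps: int, base_lr: float, warmup_ratio: float = 0.1) -> float:
    """Linear warmup followed by linear decay to zero (shape of lr_scheduler_type=linear)."""
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    return base_lr * max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

# With the settings above: 80 epochs x 2 steps/epoch = 160 steps, base lr 4e-05.
print(lr_at_step(16, 160, 4e-05))  # end of warmup -> peak lr 4e-05
```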
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 2 | 1.6092 | 0.4038 |
| No log | 2.0 | 4 | 1.6075 | 0.4231 |
| No log | 3.0 | 6 | 1.6037 | 0.4038 |
| No log | 4.0 | 8 | 1.5960 | 0.4038 |
| 1.6041 | 5.0 | 10 | 1.5820 | 0.4038 |
| 1.6041 | 6.0 | 12 | 1.5578 | 0.4038 |
| 1.6041 | 7.0 | 14 | 1.5218 | 0.4038 |
| 1.6041 | 8.0 | 16 | 1.4849 | 0.4038 |
| 1.6041 | 9.0 | 18 | 1.4459 | 0.4038 |
| 1.4962 | 10.0 | 20 | 1.4109 | 0.4038 |
| 1.4962 | 11.0 | 22 | 1.3941 | 0.4038 |
| 1.4962 | 12.0 | 24 | 1.3865 | 0.4038 |
| 1.4962 | 13.0 | 26 | 1.3754 | 0.4038 |
| 1.4962 | 14.0 | 28 | 1.3655 | 0.4038 |
| 1.3392 | 15.0 | 30 | 1.3794 | 0.4038 |
| 1.3392 | 16.0 | 32 | 1.3800 | 0.4038 |
| 1.3392 | 17.0 | 34 | 1.3404 | 0.4038 |
| 1.3392 | 18.0 | 36 | 1.3337 | 0.4038 |
| 1.3392 | 19.0 | 38 | 1.3602 | 0.4038 |
| 1.2738 | 20.0 | 40 | 1.3384 | 0.4038 |
| 1.2738 | 21.0 | 42 | 1.3248 | 0.4038 |
| 1.2738 | 22.0 | 44 | 1.2693 | 0.4038 |
| 1.2738 | 23.0 | 46 | 1.2395 | 0.4038 |
| 1.2738 | 24.0 | 48 | 1.2427 | 0.4038 |
| 1.2283 | 25.0 | 50 | 1.2885 | 0.4038 |
| 1.2283 | 26.0 | 52 | 1.2916 | 0.4038 |
| 1.2283 | 27.0 | 54 | 1.2353 | 0.4038 |
| 1.2283 | 28.0 | 56 | 1.2032 | 0.4038 |
| 1.2283 | 29.0 | 58 | 1.2100 | 0.5577 |
| 1.1804 | 30.0 | 60 | 1.2110 | 0.6154 |
| 1.1804 | 31.0 | 62 | 1.1710 | 0.6346 |
| 1.1804 | 32.0 | 64 | 1.1323 | 0.6154 |
| 1.1804 | 33.0 | 66 | 1.1083 | 0.5962 |
| 1.1804 | 34.0 | 68 | 1.0935 | 0.5962 |
| 1.0925 | 35.0 | 70 | 1.0853 | 0.6346 |
| 1.0925 | 36.0 | 72 | 1.0622 | 0.6731 |
| 1.0925 | 37.0 | 74 | 1.0154 | 0.7115 |
| 1.0925 | 38.0 | 76 | 0.9901 | 0.7115 |
| 1.0925 | 39.0 | 78 | 0.9925 | 0.6923 |
| 0.9981 | 40.0 | 80 | 0.9865 | 0.6731 |
| 0.9981 | 41.0 | 82 | 0.9540 | 0.6731 |
| 0.9981 | 42.0 | 84 | 0.9316 | 0.7115 |
| 0.9981 | 43.0 | 86 | 0.9304 | 0.7115 |
| 0.9981 | 44.0 | 88 | 0.9246 | 0.6923 |
| 0.9102 | 45.0 | 90 | 0.8785 | 0.7115 |
| 0.9102 | 46.0 | 92 | 0.8422 | 0.7115 |
| 0.9102 | 47.0 | 94 | 0.8381 | 0.7115 |
| 0.9102 | 48.0 | 96 | 0.8359 | 0.7115 |
| 0.9102 | 49.0 | 98 | 0.8444 | 0.7115 |
| 0.8496 | 50.0 | 100 | 0.8287 | 0.6731 |
| 0.8496 | 51.0 | 102 | 0.7973 | 0.6923 |
| 0.8496 | 52.0 | 104 | 0.7799 | 0.6923 |
| 0.8496 | 53.0 | 106 | 0.7780 | 0.6923 |
| 0.8496 | 54.0 | 108 | 0.7820 | 0.7115 |
| 0.7808 | 55.0 | 110 | 0.7896 | 0.7115 |
| 0.7808 | 56.0 | 112 | 0.7737 | 0.6923 |
| 0.7808 | 57.0 | 114 | 0.7631 | 0.6731 |
| 0.7808 | 58.0 | 116 | 0.7635 | 0.6538 |
| 0.7808 | 59.0 | 118 | 0.7779 | 0.6538 |
| 0.757 | 60.0 | 120 | 0.7990 | 0.6731 |
| 0.757 | 61.0 | 122 | 0.8222 | 0.6538 |
| 0.757 | 62.0 | 124 | 0.8204 | 0.6538 |
| 0.757 | 63.0 | 126 | 0.7964 | 0.6731 |
| 0.757 | 64.0 | 128 | 0.7818 | 0.6538 |
| 0.6919 | 65.0 | 130 | 0.7796 | 0.6346 |
| 0.6919 | 66.0 | 132 | 0.7831 | 0.6346 |
| 0.6919 | 67.0 | 134 | 0.7867 | 0.6346 |
| 0.6919 | 68.0 | 136 | 0.7856 | 0.6346 |
| 0.6919 | 69.0 | 138 | 0.7793 | 0.6538 |
| 0.6722 | 70.0 | 140 | 0.7736 | 0.6538 |
| 0.6722 | 71.0 | 142 | 0.7682 | 0.6538 |
| 0.6722 | 72.0 | 144 | 0.7681 | 0.6538 |
| 0.6722 | 73.0 | 146 | 0.7672 | 0.6538 |
| 0.6722 | 74.0 | 148 | 0.7655 | 0.6538 |
| 0.6642 | 75.0 | 150 | 0.7645 | 0.6538 |
| 0.6642 | 76.0 | 152 | 0.7658 | 0.6538 |
| 0.6642 | 77.0 | 154 | 0.7677 | 0.6538 |
| 0.6642 | 78.0 | 156 | 0.7683 | 0.6538 |
| 0.6642 | 79.0 | 158 | 0.7684 | 0.6538 |
| 0.6491 | 80.0 | 160 | 0.7686 | 0.6538 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0
|
tgama/wem_sentiment_model_v2 | tgama | "2024-06-20T18:03:20Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-16T21:00:04Z" | ---
base_model: unsloth/llama-3-8b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** tgama
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Auart/klm | Auart | "2024-06-23T10:21:47Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2024-06-16T21:00:05Z" | ---
license: openrail
---
|
pascal-maker/paligemma_vqav2 | pascal-maker | "2024-06-16T21:01:39Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-16T21:01:39Z" | Entry not found |
Avalonus/Griffith | Avalonus | "2024-06-16T21:42:58Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-16T21:04:52Z" | Entry not found |
Edgar404/donut-shivi-cheques_pruning_0.5 | Edgar404 | "2024-07-02T10:13:37Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"vision-encoder-decoder",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-06-16T21:07:44Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the model's risks, biases, and limitations. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
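Pending documentation from the author, a hypothetical loading sketch is shown below. The card's tags mark this as a `vision-encoder-decoder` checkpoint (a Donut-style document reader), so the sketch uses the standard `DonutProcessor`/`VisionEncoderDecoderModel` API from 🤗 transformers. The task prompt token and the presence of a bundled processor in the repository are assumptions, not facts from this card.

```python
# Hypothetical usage sketch for a Donut-style vision-encoder-decoder checkpoint.
# The repo id comes from this card's header; the task prompt and the assumption
# that the repo ships its own processor are NOT documented here.
import torch
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel


def read_cheque(image_path: str,
                repo_id: str = "Edgar404/donut-shivi-cheques_pruning_0.5",
                task_prompt: str = "<s>") -> str:
    """Run one image through the model and return the decoded string."""
    processor = DonutProcessor.from_pretrained(repo_id)
    model = VisionEncoderDecoderModel.from_pretrained(repo_id)
    model.eval()

    # Preprocess the image into pixel values the encoder expects.
    pixel_values = processor(Image.open(image_path).convert("RGB"),
                             return_tensors="pt").pixel_values
    # Seed the decoder with the task prompt (assumed; check the repo for the real one).
    decoder_input_ids = processor.tokenizer(task_prompt,
                                            add_special_tokens=False,
                                            return_tensors="pt").input_ids
    with torch.no_grad():
        outputs = model.generate(pixel_values,
                                 decoder_input_ids=decoder_input_ids,
                                 max_length=512)
    return processor.batch_decode(outputs, skip_special_tokens=True)[0]
```

Replace the task prompt with whatever token the fine-tune was actually trained with before relying on the output.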
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
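The calculator linked above multiplies hardware power draw, runtime, datacenter efficiency (PUE), and the grid's carbon intensity. A minimal sketch of that arithmetic follows; every number in the example is purely illustrative, since none of the fields above are filled in for this model.

```python
# Back-of-envelope CO2 estimate in the spirit of the ML CO2 Impact calculator:
# emissions = power draw (kW) x hours x PUE x grid carbon intensity (kg CO2eq/kWh).
# All example values are assumptions, not measurements for this model.

def estimate_co2_kg(gpu_power_kw: float, hours: float,
                    pue: float, carbon_intensity_kg_per_kwh: float) -> float:
    """Return estimated emissions in kg CO2eq."""
    energy_kwh = gpu_power_kw * hours * pue  # total electricity drawn from the grid
    return energy_kwh * carbon_intensity_kg_per_kwh

# Example: one 300 W GPU for ~4 hours, PUE 1.58, grid at ~0.43 kg CO2eq/kWh.
print(round(estimate_co2_kg(0.3, 4.0, 1.58, 0.43), 3))  # ~0.815 kg CO2eq
```

Filling in the hardware type, hours, and region above would let readers reproduce a real estimate instead of this placeholder.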
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |