modelId (string, 5-122 chars) | author (string, 2-42 chars) | last_modified (unknown) | downloads (int64, 0-435M) | likes (int64, 0-6.52k) | library_name (345 string classes) | tags (sequence, 1-4.05k items) | pipeline_tag (51 string classes) | createdAt (unknown) | card (string, 1-913k chars) |
---|---|---|---|---|---|---|---|---|---|
SandeepBoddu/BigBull | SandeepBoddu | "2024-04-06T06:27:51Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-04-06T06:27:50Z" | Entry not found |
teragron/canv3 | teragron | "2024-04-06T06:33:48Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"endpoints_compatible",
"region:us"
] | null | "2024-04-06T06:28:25Z" | Entry not found |
Inishds/code-llama-7b-text-to-sql | Inishds | "2024-04-06T06:28:30Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-04-06T06:28:30Z" | Entry not found |
oreeenn/jennie | oreeenn | "2024-08-11T10:25:13Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2024-04-06T06:41:13Z" | ---
license: openrail
---
|
Jinwoo870/Llama2_Finetuned_train_textdata_prompt_Instruction_Set | Jinwoo870 | "2024-04-08T07:22:40Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-04-06T06:46:00Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
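Since this section is still a placeholder, below is a minimal sketch, assuming the repository holds a standard 🤗 Transformers causal-LM checkpoint (the prompt format is illustrative and not confirmed by the card):
```python
# Hedged sketch: load the checkpoint with the generic Auto classes and generate a reply.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "Jinwoo870/Llama2_Finetuned_train_textdata_prompt_Instruction_Set"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.float16,
    device_map="auto",  # requires `accelerate`
)

prompt = "### Instruction:\nExplain what this model was fine-tuned for.\n\n### Response:\n"  # illustrative prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```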
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
thanhtuit96/ssd | thanhtuit96 | "2024-04-14T18:00:01Z" | 0 | 0 | null | [
"tensorboard",
"region:us"
] | null | "2024-04-06T06:51:03Z" | Entry not found |
lmms-lab/PG_Video_LLaVA-projector | lmms-lab | "2024-04-06T06:52:42Z" | 0 | 1 | null | [
"region:us"
] | null | "2024-04-06T06:51:33Z" | projector checkpoints from
```
@article{munasinghe2023PGVideoLLaVA,
title={PG-Video-LLaVA: Pixel Grounding Large Video-Language Models},
author={Shehan Munasinghe and Rusiru Thushara and Muhammad Maaz and Hanoona Abdul Rasheed and Salman Khan and Mubarak Shah and Fahad Khan},
journal={ArXiv 2311.13435},
year={2023}
}
```
made available here so you can download them more easily (a minimal download sketch follows).
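A minimal sketch, assuming the files are fetched with `huggingface_hub` (any download method works):
```python
# Hedged sketch: download the projector checkpoint files from this repository.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="lmms-lab/PG_Video_LLaVA-projector")
print(local_dir)  # local folder containing the checkpoint files
``` |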
BhushanP-01/AdGenerator | BhushanP-01 | "2024-04-06T06:53:15Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-04-06T06:53:10Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Inishds/deepseekcoder1.3B-text-to-sql | Inishds | "2024-04-06T08:12:16Z" | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:deepseek-ai/deepseek-coder-1.3b-base",
"base_model:adapter:deepseek-ai/deepseek-coder-1.3b-base",
"license:other",
"region:us"
] | null | "2024-04-06T06:54:37Z" | ---
license: other
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
datasets:
- generator
base_model: deepseek-ai/deepseek-coder-1.3b-base
model-index:
- name: deepseekcoder1.3B-text-to-sql
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deepseekcoder1.3B-text-to-sql
This model is a fine-tuned version of [deepseek-ai/deepseek-coder-1.3b-base](https://huggingface.co/deepseek-ai/deepseek-coder-1.3b-base) on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
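No usage example is provided yet; below is a minimal sketch, assuming the adapter is applied on top of the stated base model with PEFT (the prompt format is purely illustrative):
```python
# Hedged sketch: attach the LoRA adapter to the deepseek-coder base model and generate SQL.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "deepseek-ai/deepseek-coder-1.3b-base"
adapter_id = "Inishds/deepseekcoder1.3B-text-to-sql"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)  # load the fine-tuned adapter weights

prompt = "-- Question: list the names of all customers based in Berlin\n-- SQL:"  # illustrative
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```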
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 3
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 6
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 1
### Training results
### Framework versions
- PEFT 0.7.2.dev0
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.2 |
tkwon4/whisper-large-v3-finetuned-6 | tkwon4 | "2024-04-06T06:57:43Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:openai/whisper-large-v3",
"base_model:finetune:openai/whisper-large-v3",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-04-06T06:55:35Z" | ---
license: apache-2.0
base_model: openai/whisper-large-v3
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-large-v3-finetuned-6
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-large-v3-finetuned-6
This model is a fine-tuned version of [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1070
- Wer: 115.1997
## Model description
More information needed
## Intended uses & limitations
More information needed
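No usage example is provided yet; a minimal sketch with the 🤗 `pipeline` API (the audio path is a placeholder) might look like:
```python
# Hedged sketch: transcribe a local audio file with the fine-tuned Whisper checkpoint.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="tkwon4/whisper-large-v3-finetuned-6")
result = asr("sample.wav")  # placeholder path to an audio file
print(result["text"])
```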
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-07
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.3572 | 1.0 | 7532 | 0.2604 | 105.7528 |
| 0.0008 | 2.0 | 15064 | 0.1888 | 99.7556 |
| 0.0001 | 3.0 | 22596 | 0.1495 | 106.2064 |
| 0.0 | 4.0 | 30128 | 0.1288 | 97.6591 |
| 0.0025 | 5.0 | 37660 | 0.1170 | 124.9313 |
| 0.0001 | 6.0 | 45192 | 0.1071 | 122.8575 |
| 0.0001 | 7.0 | 52724 | 0.1053 | 117.5784 |
| 0.0009 | 8.0 | 60256 | 0.1045 | 117.7447 |
| 0.0 | 9.0 | 67788 | 0.1058 | 118.8636 |
| 0.0003 | 10.0 | 75320 | 0.1070 | 115.1997 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
wfdsdfsdfwer/FL | wfdsdfsdfwer | "2024-05-08T19:35:50Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-04-06T07:04:52Z" | Entry not found |
Aditadot23/ultramen.bksi | Aditadot23 | "2024-04-06T07:10:28Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-04-06T07:06:06Z" | Entry not found |
devesh1496/prompt_recover | devesh1496 | "2024-04-06T07:07:14Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2024-04-06T07:07:14Z" | ---
license: apache-2.0
---
|
bei0209/test_0406 | bei0209 | "2024-04-07T07:15:49Z" | 0 | 0 | null | [
"code",
"graph-ml",
"en",
"dataset:imagenet-1k",
"doi:10.57967/hf/2041",
"license:apache-2.0",
"region:us"
] | graph-ml | "2024-04-06T07:15:38Z" | ---
license: apache-2.0
datasets:
- imagenet-1k
language:
- en
metrics:
- accuracy
pipeline_tag: graph-ml
tags:
- code
--- |
viber1/Llama-2-7b-chat-finetune | viber1 | "2024-04-06T07:16:32Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-04-06T07:16:32Z" | Entry not found |
MinhMinh09/gpt2-vietnamese-finetuned-ner | MinhMinh09 | "2024-04-06T07:19:15Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-04-06T07:19:14Z" | Entry not found |
Simonk97/LUNAS | Simonk97 | "2024-04-06T10:57:05Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-04-06T07:19:38Z" | Entry not found |
leptonai/UNA-SimpleSmaug-34b-v1beta-4heads | leptonai | "2024-04-06T07:20:55Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"endpoints_compatible",
"region:us"
] | null | "2024-04-06T07:20:20Z" | Entry not found |
silencer107/bobik03 | silencer107 | "2024-04-06T07:28:36Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-04-06T07:25:58Z" | Entry not found |
yohanchu/tripleS | yohanchu | "2024-04-06T07:28:22Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-04-06T07:26:22Z" | Entry not found |
tistak/sn3_12 | tistak | "2024-06-03T07:14:47Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-04-06T07:26:56Z" | Entry not found |
tistak/sn3_8 | tistak | "2024-06-03T07:14:08Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-04-06T07:27:00Z" | Entry not found |
tistak/sn3_7 | tistak | "2024-06-03T07:14:13Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-04-06T07:27:00Z" | Entry not found |
tistak/sn3_6 | tistak | "2024-06-03T07:14:49Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-04-06T07:27:01Z" | Entry not found |
tistak/sn3_11 | tistak | "2024-06-03T07:14:10Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-04-06T07:27:01Z" | Entry not found |
tistak/sn3_9 | tistak | "2024-06-03T07:14:48Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-04-06T07:27:01Z" | Entry not found |
tistak/sn3_10 | tistak | "2024-06-03T07:14:13Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-04-06T07:27:01Z" | Entry not found |
xjyplayer/cyberpunk-goggles | xjyplayer | "2024-04-06T07:40:46Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2024-04-06T07:27:23Z" | ---
license: apache-2.0
---
|
yraziel/edengolan | yraziel | "2024-04-06T07:32:03Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-04-06T07:28:02Z" | Entry not found |
UlutSoftLLC/kyrgyz-tts | UlutSoftLLC | "2024-04-12T09:26:44Z" | 0 | 1 | null | [
"Text-to-speech",
"region:us"
] | null | "2024-04-06T07:29:36Z" | ---
tags:
- Text-to-speech
---
National Commission for the State Language under the President of the Kyrgyz Republic
Converting Kyrgyz text into audio
Kyrgyz Text-To-Speech Models trained by Ulutsoft LLC.
Male voice: checkpoint_epoch=279.ckpt
Female voice: checkpoint_epoch=479.ckpt
GitHub: https://github.com/UlutSoftLLC/MamtilTTS |
silencer107/bobik04 | silencer107 | "2024-04-06T07:31:43Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-04-06T07:31:32Z" | Entry not found |
aa1223/MTCNN | aa1223 | "2024-04-06T07:39:29Z" | 0 | 0 | espnet | [
"espnet",
"deepfake",
"en",
"dataset:Cohere/wikipedia-2023-11-embed-multilingual-v3",
"license:mit",
"region:us"
] | null | "2024-04-06T07:34:16Z" | ---
license: mit
datasets:
- Cohere/wikipedia-2023-11-embed-multilingual-v3
language:
- en
metrics:
- bertscore
library_name: espnet
tags:
- deepfake
--- |
oneandahalfcats/halfcatpooplol | oneandahalfcats | "2024-04-06T07:36:13Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-04-06T07:35:16Z" | Entry not found |
silencer107/bobik05 | silencer107 | "2024-04-06T07:37:18Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-04-06T07:37:10Z" | Entry not found |
oneandahalfcats/notenoughcats | oneandahalfcats | "2024-04-06T07:38:32Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-04-06T07:38:26Z" | Entry not found |
smahjouri/falcon-40b-formai | smahjouri | "2024-04-06T07:40:34Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-04-06T07:40:20Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
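Since this section is still a placeholder, below is a minimal sketch using the text-generation `pipeline`, assuming a standard causal-LM checkpoint (a Falcon-40B-class model needs substantial GPU memory; the prompt is illustrative):
```python
# Hedged sketch: generate text with this checkpoint via the pipeline API.
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="smahjouri/falcon-40b-formai",
    torch_dtype=torch.bfloat16,
    device_map="auto",  # requires `accelerate`; expect to need several large GPUs
)
print(generator("Write a one-sentence summary of this model:", max_new_tokens=64)[0]["generated_text"])
```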
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
oneandahalfcats/valstillbrokeniprofit | oneandahalfcats | "2024-04-06T07:41:32Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-04-06T07:41:27Z" | Entry not found |
AMKAK/REALSTATEBUZZ | AMKAK | "2024-04-06T07:50:22Z" | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | "2024-04-06T07:50:22Z" | ---
license: mit
---
|
pisad/f | pisad | "2024-04-06T07:52:17Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-04-06T07:52:17Z" | Entry not found |
oneandahalfcats/valbrokeiprofithehe | oneandahalfcats | "2024-04-06T08:08:45Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-04-06T07:55:52Z" | Entry not found |
Or4cl3-1/SAM-Gemini-BLOOM-OPT-Gopher-Megatron-slerp | Or4cl3-1 | "2024-04-06T07:58:02Z" | 0 | 0 | null | [
"merge",
"mergekit",
"lazymergekit",
"SuperAGI/SAM",
"GoogleAI/Gemini",
"bigscience/bloom",
"openai/opt-175b",
"deepmind/gopher",
"microsoft/megatron-turing-nlg",
"base_model:SuperAGI/SAM",
"base_model:merge:SuperAGI/SAM",
"base_model:bigscience/bloom",
"base_model:merge:bigscience/bloom",
"region:us"
] | null | "2024-04-06T07:58:02Z" | ---
tags:
- merge
- mergekit
- lazymergekit
- SuperAGI/SAM
- GoogleAI/Gemini
- bigscience/bloom
- openai/opt-175b
- deepmind/gopher
- microsoft/megatron-turing-nlg
base_model:
- SuperAGI/SAM
- GoogleAI/Gemini
- bigscience/bloom
- openai/opt-175b
- deepmind/gopher
- microsoft/megatron-turing-nlg
---
# SAM-Gemini-BLOOM-OPT-Gopher-Megatron-slerp
SAM-Gemini-BLOOM-OPT-Gopher-Megatron-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [SuperAGI/SAM](https://huggingface.co/SuperAGI/SAM)
* [GoogleAI/Gemini](https://huggingface.co/GoogleAI/Gemini)
* [bigscience/bloom](https://huggingface.co/bigscience/bloom)
* [openai/opt-175b](https://huggingface.co/openai/opt-175b)
* [deepmind/gopher](https://huggingface.co/deepmind/gopher)
* [microsoft/megatron-turing-nlg](https://huggingface.co/microsoft/megatron-turing-nlg)
## 🧩 Configuration
```yaml
slices:
  - sources:
      - model: SuperAGI/SAM
        layer_range: [0, 32]
      - model: GoogleAI/Gemini
        layer_range: [0, 32]
      - model: bigscience/bloom
        layer_range: [0, 32]
      - model: openai/opt-175b
        layer_range: [0, 32]
      - model: deepmind/gopher
        layer_range: [0, 32]
      - model: microsoft/megatron-turing-nlg
        layer_range: [0, 32]
merge_method: slerp
base_model: SuperAGI/SAM
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Or4cl3-1/SAM-Gemini-BLOOM-OPT-Gopher-Megatron-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
lijingnian/gollie-7B | lijingnian | "2024-04-06T07:59:46Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-04-06T07:59:46Z" | Entry not found |
StandardCAS-NSTID/RAIDP-LICENSES | StandardCAS-NSTID | "2024-04-06T08:12:47Z" | 0 | 0 | null | [
"license:other",
"region:us"
] | null | "2024-04-06T08:04:49Z" | ---
license: other
license_name: raidp
license_link: LICENSE
---
|
axs2k/anubis | axs2k | "2024-04-06T11:09:59Z" | 0 | 1 | transformers | [
"transformers",
"dataset:zefang-liu/secqa",
"dataset:zefang-liu/phishing-email-dataset",
"dataset:zefang-liu/cve-and-cwe-mapping-dataset",
"dataset:morpheuslord/cve-llm-training",
"dataset:EddieChen372/python_vul_cvefix_small",
"dataset:icantiemyshoe/cve-to-metasploit-module",
"dataset:lambdasec/cve-single-line-fixes",
"dataset:lambdasec/gh-top-1000-projects-vulns",
"dataset:hackaprompt/hackaprompt-dataset",
"dataset:pentest-org-2/dataset-2",
"dataset:Isamu136/penetration_testing_scraped_dataset",
"dataset:iamtarun/python_code_instructions_18k_alpaca",
"dataset:Vezora/Tested-143k-Python-Alpaca",
"dataset:neuralsentry/bigvul_devign_cvefixes_neuralsentry_commits",
"dataset:hotal/linux_commands",
"dataset:Nexa4AI/android_data_test",
"dataset:SamsungSAILMontreal/deepnets1m",
"license:gpl-3.0",
"endpoints_compatible",
"region:us"
] | null | "2024-04-06T08:07:12Z" | ---
license: gpl-3.0
datasets:
- zefang-liu/secqa
- zefang-liu/phishing-email-dataset
- zefang-liu/cve-and-cwe-mapping-dataset
- morpheuslord/cve-llm-training
- EddieChen372/python_vul_cvefix_small
- icantiemyshoe/cve-to-metasploit-module
- lambdasec/cve-single-line-fixes
- lambdasec/gh-top-1000-projects-vulns
- hackaprompt/hackaprompt-dataset
- pentest-org-2/dataset-2
- Isamu136/penetration_testing_scraped_dataset
- iamtarun/python_code_instructions_18k_alpaca
- Vezora/Tested-143k-Python-Alpaca
- neuralsentry/bigvul_devign_cvefixes_neuralsentry_commits
- hotal/linux_commands
- Nexa4AI/android_data_test
- SamsungSAILMontreal/deepnets1m
metrics:
- accuracy
- precision
- bleurt
- rouge
- code_eval
- f1
- bertscore
- bleu
library_name: transformers
--- |
Reyouf/speecht5_tts_Ar | Reyouf | "2024-04-06T08:42:01Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"speecht5",
"text-to-audio",
"endpoints_compatible",
"region:us"
] | text-to-audio | "2024-04-06T08:07:32Z" | Entry not found |
TechWezz/Q_A | TechWezz | "2024-04-06T08:12:23Z" | 0 | 0 | null | [
"license:llama2",
"region:us"
] | null | "2024-04-06T08:12:23Z" | ---
license: llama2
---
|
Alphaiborkano/Alphaschool | Alphaiborkano | "2024-04-06T08:14:56Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2024-04-06T08:14:56Z" | ---
license: apache-2.0
---
|
haryoaw/scenario-KD-PO-MSV-D2_data-AmazonScience_massive_all_1_1_delta | haryoaw | "2024-04-06T08:16:41Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"generated_from_trainer",
"dataset:massive",
"base_model:haryoaw/scenario-TCR-data-AmazonScience-massive-all_1.1-model-xlm-roberta-base",
"base_model:finetune:haryoaw/scenario-TCR-data-AmazonScience-massive-all_1.1-model-xlm-roberta-base",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | "2024-04-06T08:15:55Z" | ---
license: mit
base_model: haryoaw/scenario-TCR-data-AmazonScience-massive-all_1.1-model-xlm-roberta-base
tags:
- generated_from_trainer
datasets:
- massive
metrics:
- accuracy
- f1
model-index:
- name: scenario-KD-PO-MSV-D2_data-AmazonScience_massive_all_1_1_delta
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# scenario-KD-PO-MSV-D2_data-AmazonScience_massive_all_1_1_delta
This model is a fine-tuned version of [haryoaw/scenario-TCR-data-AmazonScience-massive-all_1.1-model-xlm-roberta-base](https://huggingface.co/haryoaw/scenario-TCR-data-AmazonScience-massive-all_1.1-model-xlm-roberta-base) on the massive dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0311
- Accuracy: 0.8606
- F1: 0.8342
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 11213
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:------:|:---------------:|:--------:|:------:|
| 0.9866 | 0.27 | 5000 | 1.5404 | 0.8352 | 0.8118 |
| 0.8243 | 0.53 | 10000 | 1.4475 | 0.8405 | 0.8176 |
| 0.7347 | 0.8 | 15000 | 1.4195 | 0.8413 | 0.8225 |
| 0.5835 | 1.07 | 20000 | 1.3804 | 0.8457 | 0.8210 |
| 0.5829 | 1.34 | 25000 | 1.3623 | 0.8460 | 0.8221 |
| 0.569 | 1.6 | 30000 | 1.3527 | 0.8474 | 0.8266 |
| 0.5566 | 1.87 | 35000 | 1.3316 | 0.8463 | 0.8242 |
| 0.478 | 2.14 | 40000 | 1.3076 | 0.8492 | 0.8250 |
| 0.4712 | 2.41 | 45000 | 1.2920 | 0.8507 | 0.8270 |
| 0.4646 | 2.67 | 50000 | 1.2986 | 0.8497 | 0.8256 |
| 0.4534 | 2.94 | 55000 | 1.2796 | 0.8503 | 0.8284 |
| 0.4099 | 3.21 | 60000 | 1.2625 | 0.8520 | 0.8297 |
| 0.4002 | 3.47 | 65000 | 1.2672 | 0.8503 | 0.8290 |
| 0.4116 | 3.74 | 70000 | 1.2519 | 0.8494 | 0.8259 |
| 0.4068 | 4.01 | 75000 | 1.2484 | 0.8494 | 0.8238 |
| 0.3706 | 4.28 | 80000 | 1.2264 | 0.8519 | 0.8280 |
| 0.3724 | 4.54 | 85000 | 1.2330 | 0.8525 | 0.8278 |
| 0.3804 | 4.81 | 90000 | 1.2312 | 0.8483 | 0.8251 |
| 0.34 | 5.08 | 95000 | 1.2227 | 0.8508 | 0.8257 |
| 0.3606 | 5.34 | 100000 | 1.1968 | 0.8534 | 0.8278 |
| 0.3473 | 5.61 | 105000 | 1.2000 | 0.8551 | 0.8311 |
| 0.3554 | 5.88 | 110000 | 1.2086 | 0.8536 | 0.8299 |
| 0.3234 | 6.15 | 115000 | 1.1836 | 0.8553 | 0.8313 |
| 0.325 | 6.41 | 120000 | 1.1953 | 0.8536 | 0.8303 |
| 0.3227 | 6.68 | 125000 | 1.1653 | 0.8559 | 0.8305 |
| 0.3213 | 6.95 | 130000 | 1.1733 | 0.8550 | 0.8305 |
| 0.3042 | 7.22 | 135000 | 1.1637 | 0.8550 | 0.8297 |
| 0.3099 | 7.48 | 140000 | 1.1601 | 0.8563 | 0.8314 |
| 0.3064 | 7.75 | 145000 | 1.1568 | 0.8567 | 0.8296 |
| 0.2928 | 8.02 | 150000 | 1.1507 | 0.8559 | 0.8322 |
| 0.298 | 8.28 | 155000 | 1.1578 | 0.8556 | 0.8316 |
| 0.2896 | 8.55 | 160000 | 1.1591 | 0.8574 | 0.8349 |
| 0.2895 | 8.82 | 165000 | 1.1504 | 0.8558 | 0.8299 |
| 0.2827 | 9.09 | 170000 | 1.1552 | 0.8551 | 0.8286 |
| 0.2864 | 9.35 | 175000 | 1.1277 | 0.8563 | 0.8328 |
| 0.2796 | 9.62 | 180000 | 1.1345 | 0.8564 | 0.8317 |
| 0.28 | 9.89 | 185000 | 1.1409 | 0.8562 | 0.8297 |
| 0.2753 | 10.15 | 190000 | 1.1290 | 0.8557 | 0.8307 |
| 0.2766 | 10.42 | 195000 | 1.1143 | 0.8580 | 0.8321 |
| 0.2652 | 10.69 | 200000 | 1.1150 | 0.8560 | 0.8306 |
| 0.2741 | 10.96 | 205000 | 1.1275 | 0.8568 | 0.8296 |
| 0.2668 | 11.22 | 210000 | 1.1069 | 0.8574 | 0.8333 |
| 0.2629 | 11.49 | 215000 | 1.1168 | 0.8580 | 0.8315 |
| 0.2607 | 11.76 | 220000 | 1.1173 | 0.8584 | 0.8344 |
| 0.2546 | 12.03 | 225000 | 1.1096 | 0.8574 | 0.8315 |
| 0.2583 | 12.29 | 230000 | 1.1244 | 0.8554 | 0.8303 |
| 0.2557 | 12.56 | 235000 | 1.1080 | 0.8572 | 0.8339 |
| 0.2571 | 12.83 | 240000 | 1.1008 | 0.8578 | 0.8344 |
| 0.2471 | 13.09 | 245000 | 1.0980 | 0.8578 | 0.8323 |
| 0.2528 | 13.36 | 250000 | 1.0975 | 0.8576 | 0.8311 |
| 0.2478 | 13.63 | 255000 | 1.0921 | 0.8581 | 0.8322 |
| 0.2548 | 13.9 | 260000 | 1.0826 | 0.8604 | 0.8352 |
| 0.2429 | 14.16 | 265000 | 1.0903 | 0.8578 | 0.8332 |
| 0.2416 | 14.43 | 270000 | 1.0892 | 0.8593 | 0.8344 |
| 0.2331 | 14.7 | 275000 | 1.0807 | 0.8601 | 0.8339 |
| 0.2449 | 14.96 | 280000 | 1.0792 | 0.8579 | 0.8312 |
| 0.237 | 15.23 | 285000 | 1.0863 | 0.8600 | 0.8348 |
| 0.2338 | 15.5 | 290000 | 1.0805 | 0.8575 | 0.8310 |
| 0.2378 | 15.77 | 295000 | 1.0822 | 0.8579 | 0.8321 |
| 0.2346 | 16.03 | 300000 | 1.0757 | 0.8583 | 0.8309 |
| 0.2274 | 16.3 | 305000 | 1.0847 | 0.8579 | 0.8312 |
| 0.2414 | 16.57 | 310000 | 1.0734 | 0.8602 | 0.8353 |
| 0.2314 | 16.84 | 315000 | 1.0814 | 0.8584 | 0.8333 |
| 0.2291 | 17.1 | 320000 | 1.0677 | 0.8604 | 0.8340 |
| 0.223 | 17.37 | 325000 | 1.0731 | 0.8590 | 0.8340 |
| 0.2224 | 17.64 | 330000 | 1.0648 | 0.8601 | 0.8338 |
| 0.2264 | 17.9 | 335000 | 1.0603 | 0.8593 | 0.8333 |
| 0.2252 | 18.17 | 340000 | 1.0639 | 0.8590 | 0.8323 |
| 0.2164 | 18.44 | 345000 | 1.0607 | 0.8595 | 0.8336 |
| 0.2231 | 18.71 | 350000 | 1.0614 | 0.8597 | 0.8338 |
| 0.2204 | 18.97 | 355000 | 1.0702 | 0.8582 | 0.8323 |
| 0.2155 | 19.24 | 360000 | 1.0539 | 0.8583 | 0.8321 |
| 0.2107 | 19.51 | 365000 | 1.0579 | 0.8594 | 0.8335 |
| 0.2163 | 19.77 | 370000 | 1.0608 | 0.8584 | 0.8323 |
| 0.2136 | 20.04 | 375000 | 1.0552 | 0.8593 | 0.8330 |
| 0.2062 | 20.31 | 380000 | 1.0577 | 0.8587 | 0.8345 |
| 0.212 | 20.58 | 385000 | 1.0594 | 0.8592 | 0.8321 |
| 0.2116 | 20.84 | 390000 | 1.0561 | 0.8593 | 0.8348 |
| 0.2058 | 21.11 | 395000 | 1.0490 | 0.8597 | 0.8358 |
| 0.2106 | 21.38 | 400000 | 1.0557 | 0.8593 | 0.8346 |
| 0.2061 | 21.65 | 405000 | 1.0508 | 0.8601 | 0.8349 |
| 0.2119 | 21.91 | 410000 | 1.0496 | 0.8591 | 0.8343 |
| 0.2057 | 22.18 | 415000 | 1.0447 | 0.8602 | 0.8345 |
| 0.2017 | 22.45 | 420000 | 1.0486 | 0.8597 | 0.8338 |
| 0.1997 | 22.71 | 425000 | 1.0346 | 0.8612 | 0.8377 |
| 0.1959 | 22.98 | 430000 | 1.0478 | 0.8586 | 0.8323 |
| 0.1989 | 23.25 | 435000 | 1.0436 | 0.8599 | 0.8327 |
| 0.1996 | 23.52 | 440000 | 1.0459 | 0.8603 | 0.8342 |
| 0.1982 | 23.78 | 445000 | 1.0410 | 0.8602 | 0.8348 |
| 0.1955 | 24.05 | 450000 | 1.0416 | 0.8596 | 0.8344 |
| 0.1977 | 24.32 | 455000 | 1.0383 | 0.8594 | 0.8329 |
| 0.1951 | 24.58 | 460000 | 1.0413 | 0.8599 | 0.8351 |
| 0.2014 | 24.85 | 465000 | 1.0332 | 0.8606 | 0.8357 |
| 0.1948 | 25.12 | 470000 | 1.0370 | 0.8599 | 0.8351 |
| 0.1933 | 25.39 | 475000 | 1.0393 | 0.8598 | 0.8342 |
| 0.1937 | 25.65 | 480000 | 1.0334 | 0.8607 | 0.8346 |
| 0.1923 | 25.92 | 485000 | 1.0358 | 0.8607 | 0.8358 |
| 0.1911 | 26.19 | 490000 | 1.0323 | 0.8604 | 0.8345 |
| 0.1921 | 26.46 | 495000 | 1.0347 | 0.8600 | 0.8337 |
| 0.1917 | 26.72 | 500000 | 1.0327 | 0.8602 | 0.8346 |
| 0.1882 | 26.99 | 505000 | 1.0293 | 0.8606 | 0.8340 |
| 0.1876 | 27.26 | 510000 | 1.0324 | 0.8603 | 0.8349 |
| 0.189 | 27.52 | 515000 | 1.0310 | 0.8613 | 0.8355 |
| 0.1891 | 27.79 | 520000 | 1.0309 | 0.8606 | 0.8341 |
| 0.1847 | 28.06 | 525000 | 1.0290 | 0.8609 | 0.8352 |
| 0.1827 | 28.33 | 530000 | 1.0272 | 0.8601 | 0.8333 |
| 0.1913 | 28.59 | 535000 | 1.0279 | 0.8609 | 0.8352 |
| 0.1872 | 28.86 | 540000 | 1.0291 | 0.8606 | 0.8341 |
| 0.1875 | 29.13 | 545000 | 1.0266 | 0.8606 | 0.8339 |
| 0.1856 | 29.39 | 550000 | 1.0283 | 0.8609 | 0.8363 |
| 0.1823 | 29.66 | 555000 | 1.0282 | 0.8602 | 0.8339 |
| 0.1823 | 29.93 | 560000 | 1.0311 | 0.8606 | 0.8342 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.1.1+cu121
- Datasets 2.14.5
- Tokenizers 0.13.3
|
yujia23/axolotl-mistral-cold-3e-4-lora | yujia23 | "2024-04-06T08:17:54Z" | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"mistral",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2024-04-06T08:17:08Z" | ---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: mistralai/Mistral-7B-v0.1
model-index:
- name: home/yujia/home/CN_Hateful/trained_models/mistral/cold/3e-4/
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.0`
```yaml
base_model: mistralai/Mistral-7B-v0.1
model_type: MistralForCausalLM
tokenizer_type: LlamaTokenizer
load_in_8bit: true
load_in_4bit: false
strict: false
datasets:
  # - path: mhenrichsen/alpaca_2k_test
  # - path: /home/yujia/home/CN_Hateful/train_toxiCN.json
  # - path: /home/yujia/home/CN_Hateful/train_toxiCN_cn.json
  - path: /home/yujia/home/CN_Hateful/train.json
    ds_type: json
    type: alpaca
dataset_prepared_path: last_run_prepared
val_set_size: 0.1
# output_dir: /home/yujia/home/CN_Hateful/trained_models/mistral/toxi/1e-5/
# output_dir: /home/yujia/home/CN_Hateful/trained_models/mistral/CN/toxi/1e-5/
output_dir: /home/yujia/home/CN_Hateful/trained_models/mistral/cold/3e-4/
adapter: lora
lora_model_dir:
sequence_len: 256
sample_packing: true
pad_to_sequence_len: true
lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_linear: true
lora_fan_in_fan_out:
lora_target_modules:
- gate_proj
- down_proj
- up_proj
- q_proj
- v_proj
- k_proj
- o_proj
wandb_project:
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:
gradient_accumulation_steps: 8
micro_batch_size: 4
num_epochs: 3
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.0003
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
loss_watchdog_threshold: 5.0
loss_watchdog_patience: 3
warmup_steps: 10
evals_per_epoch: 4
eval_table_size:
eval_max_new_tokens: 128
saves_per_epoch: 1
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
```
</details><br>
# home/yujia/home/CN_Hateful/trained_models/mistral/cold/3e-4/
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0406
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.9655 | 0.01 | 1 | 2.9710 |
| 0.0451 | 0.25 | 46 | 0.0619 |
| 0.0541 | 0.5 | 92 | 0.0392 |
| 0.0353 | 0.75 | 138 | 0.0345 |
| 0.0249 | 1.0 | 184 | 0.0315 |
| 0.0259 | 1.23 | 230 | 0.0329 |
| 0.0238 | 1.48 | 276 | 0.0309 |
| 0.019 | 1.73 | 322 | 0.0305 |
| 0.0173 | 1.97 | 368 | 0.0313 |
| 0.0051 | 2.21 | 414 | 0.0369 |
| 0.0093 | 2.46 | 460 | 0.0436 |
| 0.0023 | 2.71 | 506 | 0.0407 |
| 0.003 | 2.95 | 552 | 0.0406 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.0.dev0
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.0 |
haryoaw/scenario-KD-PO-MSV-D2_data-AmazonScience_massive_all_1_1_beta | haryoaw | "2024-04-06T08:18:53Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"generated_from_trainer",
"dataset:massive",
"base_model:haryoaw/scenario-TCR-data-AmazonScience-massive-all_1.1-model-xlm-roberta-base",
"base_model:finetune:haryoaw/scenario-TCR-data-AmazonScience-massive-all_1.1-model-xlm-roberta-base",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | "2024-04-06T08:18:06Z" | ---
license: mit
base_model: haryoaw/scenario-TCR-data-AmazonScience-massive-all_1.1-model-xlm-roberta-base
tags:
- generated_from_trainer
datasets:
- massive
metrics:
- accuracy
- f1
model-index:
- name: scenario-KD-PO-MSV-D2_data-AmazonScience_massive_all_1_1_beta
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# scenario-KD-PO-MSV-D2_data-AmazonScience_massive_all_1_1_beta
This model is a fine-tuned version of [haryoaw/scenario-TCR-data-AmazonScience-massive-all_1.1-model-xlm-roberta-base](https://huggingface.co/haryoaw/scenario-TCR-data-AmazonScience-massive-all_1.1-model-xlm-roberta-base) on the massive dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0301
- Accuracy: 0.8597
- F1: 0.8341
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 112233
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:------:|:---------------:|:--------:|:------:|
| 1.0109 | 0.27 | 5000 | 1.5142 | 0.8386 | 0.8137 |
| 0.8133 | 0.53 | 10000 | 1.4503 | 0.8411 | 0.8191 |
| 0.7197 | 0.8 | 15000 | 1.4028 | 0.8438 | 0.8204 |
| 0.5839 | 1.07 | 20000 | 1.3897 | 0.8463 | 0.8223 |
| 0.5816 | 1.34 | 25000 | 1.3593 | 0.8455 | 0.8186 |
| 0.5482 | 1.6 | 30000 | 1.3400 | 0.8465 | 0.8225 |
| 0.5493 | 1.87 | 35000 | 1.3227 | 0.8449 | 0.8199 |
| 0.4571 | 2.14 | 40000 | 1.3476 | 0.8449 | 0.8215 |
| 0.4612 | 2.41 | 45000 | 1.2853 | 0.8515 | 0.8298 |
| 0.4589 | 2.67 | 50000 | 1.3029 | 0.8493 | 0.8283 |
| 0.4417 | 2.94 | 55000 | 1.2670 | 0.8515 | 0.8281 |
| 0.4113 | 3.21 | 60000 | 1.2688 | 0.8496 | 0.8251 |
| 0.402 | 3.47 | 65000 | 1.2629 | 0.8521 | 0.8286 |
| 0.4041 | 3.74 | 70000 | 1.2528 | 0.8501 | 0.8246 |
| 0.3878 | 4.01 | 75000 | 1.2265 | 0.8523 | 0.8316 |
| 0.3715 | 4.28 | 80000 | 1.2460 | 0.8501 | 0.8256 |
| 0.3702 | 4.54 | 85000 | 1.2228 | 0.8538 | 0.8321 |
| 0.373 | 4.81 | 90000 | 1.2171 | 0.8533 | 0.8271 |
| 0.3494 | 5.08 | 95000 | 1.2042 | 0.8537 | 0.8312 |
| 0.3509 | 5.34 | 100000 | 1.2069 | 0.8544 | 0.8280 |
| 0.3421 | 5.61 | 105000 | 1.2048 | 0.8535 | 0.8286 |
| 0.3448 | 5.88 | 110000 | 1.1911 | 0.8542 | 0.8296 |
| 0.3249 | 6.15 | 115000 | 1.1902 | 0.8539 | 0.8245 |
| 0.3252 | 6.41 | 120000 | 1.1825 | 0.8564 | 0.8288 |
| 0.3176 | 6.68 | 125000 | 1.1763 | 0.8546 | 0.8313 |
| 0.3123 | 6.95 | 130000 | 1.1913 | 0.8530 | 0.8307 |
| 0.3059 | 7.22 | 135000 | 1.1681 | 0.8548 | 0.8284 |
| 0.316 | 7.48 | 140000 | 1.1605 | 0.8557 | 0.8292 |
| 0.3067 | 7.75 | 145000 | 1.1802 | 0.8541 | 0.8302 |
| 0.3022 | 8.02 | 150000 | 1.1536 | 0.8548 | 0.8306 |
| 0.2985 | 8.28 | 155000 | 1.1672 | 0.8528 | 0.8265 |
| 0.3004 | 8.55 | 160000 | 1.1480 | 0.8556 | 0.8318 |
| 0.2942 | 8.82 | 165000 | 1.1470 | 0.8549 | 0.8291 |
| 0.2854 | 9.09 | 170000 | 1.1378 | 0.8558 | 0.8314 |
| 0.2967 | 9.35 | 175000 | 1.1340 | 0.8553 | 0.8307 |
| 0.2901 | 9.62 | 180000 | 1.1443 | 0.8557 | 0.8313 |
| 0.2844 | 9.89 | 185000 | 1.1419 | 0.8551 | 0.8312 |
| 0.2742 | 10.15 | 190000 | 1.1258 | 0.8564 | 0.8334 |
| 0.2763 | 10.42 | 195000 | 1.1267 | 0.8572 | 0.8323 |
| 0.2757 | 10.69 | 200000 | 1.1275 | 0.8558 | 0.8295 |
| 0.2728 | 10.96 | 205000 | 1.1238 | 0.8578 | 0.8348 |
| 0.2739 | 11.22 | 210000 | 1.1029 | 0.8578 | 0.8319 |
| 0.2579 | 11.49 | 215000 | 1.1155 | 0.8575 | 0.8350 |
| 0.2604 | 11.76 | 220000 | 1.1021 | 0.8583 | 0.8375 |
| 0.2605 | 12.03 | 225000 | 1.1134 | 0.8567 | 0.8301 |
| 0.2496 | 12.29 | 230000 | 1.0996 | 0.8565 | 0.8291 |
| 0.2518 | 12.56 | 235000 | 1.1162 | 0.8567 | 0.8302 |
| 0.2574 | 12.83 | 240000 | 1.1053 | 0.8576 | 0.8327 |
| 0.2488 | 13.09 | 245000 | 1.1105 | 0.8574 | 0.8346 |
| 0.2423 | 13.36 | 250000 | 1.1050 | 0.8580 | 0.8326 |
| 0.2473 | 13.63 | 255000 | 1.0882 | 0.8584 | 0.8348 |
| 0.2479 | 13.9 | 260000 | 1.0926 | 0.8589 | 0.8366 |
| 0.254 | 14.16 | 265000 | 1.0919 | 0.8576 | 0.8312 |
| 0.24 | 14.43 | 270000 | 1.0941 | 0.8576 | 0.8339 |
| 0.2415 | 14.7 | 275000 | 1.0881 | 0.8584 | 0.8336 |
| 0.2421 | 14.96 | 280000 | 1.0816 | 0.8580 | 0.8326 |
| 0.2395 | 15.23 | 285000 | 1.0832 | 0.8574 | 0.8323 |
| 0.2336 | 15.5 | 290000 | 1.0822 | 0.8580 | 0.8341 |
| 0.2324 | 15.77 | 295000 | 1.0873 | 0.8586 | 0.8344 |
| 0.2274 | 16.03 | 300000 | 1.0795 | 0.8583 | 0.8336 |
| 0.233 | 16.3 | 305000 | 1.0857 | 0.8581 | 0.8331 |
| 0.2281 | 16.57 | 310000 | 1.0852 | 0.8578 | 0.8341 |
| 0.2268 | 16.84 | 315000 | 1.0803 | 0.8583 | 0.8338 |
| 0.2253 | 17.1 | 320000 | 1.0756 | 0.8590 | 0.8345 |
| 0.2247 | 17.37 | 325000 | 1.0650 | 0.8589 | 0.8328 |
| 0.2182 | 17.64 | 330000 | 1.0696 | 0.8582 | 0.8336 |
| 0.2187 | 17.9 | 335000 | 1.0694 | 0.8578 | 0.8324 |
| 0.2202 | 18.17 | 340000 | 1.0621 | 0.8584 | 0.8329 |
| 0.2214 | 18.44 | 345000 | 1.0737 | 0.8578 | 0.8328 |
| 0.2186 | 18.71 | 350000 | 1.0606 | 0.8586 | 0.8346 |
| 0.2171 | 18.97 | 355000 | 1.0623 | 0.8583 | 0.8345 |
| 0.2123 | 19.24 | 360000 | 1.0641 | 0.8589 | 0.8352 |
| 0.2141 | 19.51 | 365000 | 1.0627 | 0.8586 | 0.8349 |
| 0.2145 | 19.77 | 370000 | 1.0616 | 0.8585 | 0.8336 |
| 0.2087 | 20.04 | 375000 | 1.0570 | 0.8593 | 0.8332 |
| 0.2098 | 20.31 | 380000 | 1.0639 | 0.8585 | 0.8330 |
| 0.2122 | 20.58 | 385000 | 1.0501 | 0.8602 | 0.8347 |
| 0.2071 | 20.84 | 390000 | 1.0583 | 0.8592 | 0.8328 |
| 0.2083 | 21.11 | 395000 | 1.0554 | 0.8589 | 0.8337 |
| 0.2041 | 21.38 | 400000 | 1.0604 | 0.8586 | 0.8331 |
| 0.2035 | 21.65 | 405000 | 1.0560 | 0.8581 | 0.8333 |
| 0.2099 | 21.91 | 410000 | 1.0557 | 0.8586 | 0.8346 |
| 0.2031 | 22.18 | 415000 | 1.0511 | 0.8593 | 0.8348 |
| 0.2024 | 22.45 | 420000 | 1.0534 | 0.8586 | 0.8340 |
| 0.2024 | 22.71 | 425000 | 1.0447 | 0.8604 | 0.8343 |
| 0.2025 | 22.98 | 430000 | 1.0465 | 0.8593 | 0.8344 |
| 0.2029 | 23.25 | 435000 | 1.0472 | 0.8592 | 0.8346 |
| 0.197 | 23.52 | 440000 | 1.0467 | 0.8595 | 0.8345 |
| 0.1996 | 23.78 | 445000 | 1.0428 | 0.8592 | 0.8346 |
| 0.1938 | 24.05 | 450000 | 1.0428 | 0.8593 | 0.8342 |
| 0.1949 | 24.32 | 455000 | 1.0390 | 0.8605 | 0.8352 |
| 0.1976 | 24.58 | 460000 | 1.0453 | 0.8586 | 0.8325 |
| 0.1957 | 24.85 | 465000 | 1.0346 | 0.8605 | 0.8350 |
| 0.1946 | 25.12 | 470000 | 1.0370 | 0.8592 | 0.8328 |
| 0.1954 | 25.39 | 475000 | 1.0385 | 0.8600 | 0.8348 |
| 0.1877 | 25.65 | 480000 | 1.0358 | 0.8604 | 0.8351 |
| 0.1911 | 25.92 | 485000 | 1.0385 | 0.8597 | 0.8357 |
| 0.1919 | 26.19 | 490000 | 1.0400 | 0.8593 | 0.8340 |
| 0.19 | 26.46 | 495000 | 1.0357 | 0.8596 | 0.8345 |
| 0.1923 | 26.72 | 500000 | 1.0386 | 0.8600 | 0.8356 |
| 0.1942 | 26.99 | 505000 | 1.0407 | 0.8590 | 0.8337 |
| 0.189 | 27.26 | 510000 | 1.0333 | 0.8604 | 0.8359 |
| 0.1888 | 27.52 | 515000 | 1.0352 | 0.8590 | 0.8334 |
| 0.1899 | 27.79 | 520000 | 1.0330 | 0.8595 | 0.8347 |
| 0.1934 | 28.06 | 525000 | 1.0310 | 0.8593 | 0.8342 |
| 0.1873 | 28.33 | 530000 | 1.0342 | 0.8600 | 0.8350 |
| 0.1869 | 28.59 | 535000 | 1.0305 | 0.8601 | 0.8358 |
| 0.1799 | 28.86 | 540000 | 1.0326 | 0.8590 | 0.8337 |
| 0.1832 | 29.13 | 545000 | 1.0331 | 0.8597 | 0.8350 |
| 0.1842 | 29.39 | 550000 | 1.0323 | 0.8599 | 0.8349 |
| 0.1818 | 29.66 | 555000 | 1.0313 | 0.8602 | 0.8360 |
| 0.1819 | 29.93 | 560000 | 1.0301 | 0.8597 | 0.8341 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.1.1+cu121
- Datasets 2.14.5
- Tokenizers 0.13.3
|
paularusti78/q-FrozenLake-v1-4x4-noSlippery | paularusti78 | "2024-04-06T08:18:20Z" | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2024-04-06T08:18:17Z" | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: FrozenLake-v1-4x4-no_slippery
      type: FrozenLake-v1-4x4-no_slippery
    metrics:
    - type: mean_reward
      value: 1.00 +/- 0.00
      name: mean_reward
      verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym  # needed for gym.make below

# load_from_hub is assumed to be the pickle-loading helper from the Hugging Face Deep RL course notebook
model = load_from_hub(repo_id="paularusti78/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Esmaeilkiani/Zali | Esmaeilkiani | "2024-04-06T08:22:57Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2024-04-06T08:22:57Z" | ---
license: apache-2.0
---
|
Qin56/text2imagev1 | Qin56 | "2024-04-07T07:49:25Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2024-04-06T08:23:24Z" | ---
license: apache-2.0
---
|
LongBabin/babin | LongBabin | "2024-04-06T08:25:54Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-04-06T08:25:54Z" | Entry not found |
paularusti78/tax1-v3-1 | paularusti78 | "2024-04-06T08:29:02Z" | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2024-04-06T08:29:01Z" | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: tax1-v3-1
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Taxi-v3
      type: Taxi-v3
    metrics:
    - type: mean_reward
      value: 7.54 +/- 2.73
      name: mean_reward
      verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym  # needed for gym.make below

# load_from_hub is assumed to be the pickle-loading helper from the Hugging Face Deep RL course notebook
model = load_from_hub(repo_id="paularusti78/tax1-v3-1", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
harupurito/whisper-small-hi-cv | harupurito | "2024-04-06T11:55:37Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-04-06T08:32:24Z" | Entry not found |
spraja08/fine-bitsy | spraja08 | "2024-04-07T08:18:20Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | "2024-04-06T08:34:12Z" | ---
license: mit
library_name: transformers
---
# Fine-Bitsy
Fine-Bitsy is a model with an (attempted) specialty in US Federal Reserve FAQs. This is my experimental attempt to create a specialist model that can be trained for a few dollars yet produce surprisingly good results.
Phi-2 seems to have "seen" most commercially available content datasets. To convincingly test the "specialisation effect", I had to hunt for a less common dataset.
🤗 came in handy with such a variety of datasets that can be readily used without much data prep...
## Model Details
Base Model: Phi-2
Fine-tuned with US Federal Reserve Q&A
### Model Description
- **Developed by:** SP Raja
- **License:** MIT
- **Finetuned from model Phi-2:** Supervised fine-tuning using a custom dataset
## Uses
This is an experiment in fine-tuning. Use the code below to get started with the model.
```
# load the model from the hub
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftConfig, PeftModel

peft_model_id_from_hub = "spraja08/fine-bitsy"
config = PeftConfig.from_pretrained(peft_model_id_from_hub)
model_from_hub = AutoModelForCausalLM.from_pretrained(
    config.base_model_name_or_path,
    return_dict=True,
    load_in_8bit=True)
# attach the fine-tuned adapter weights to the base model
model_from_hub = PeftModel.from_pretrained(model_from_hub, peft_model_id_from_hub)
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)

# try the inference
model_from_hub.eval()
eval_prompt = """Instruction:What should I do if I have damaged or mutilated currency?
Assistant:"""
#eval_prompt = "Who is on the Federal Open Market Committee?"
#eval_prompt = """What does the Federal Reserve mean when it says monetary policy remains "accommodative"?"""
model_input = tokenizer(eval_prompt, return_tensors="pt").to("cuda")
with torch.no_grad():
    print(tokenizer.decode(model_from_hub.generate(**model_input, max_new_tokens=80)[0], skip_special_tokens=True))
```
### Training Data
[us-federal-reserve-qa](https://huggingface.co/datasets/clement-cvll/us-federal-reserve-qa)
### Training Procedure
[As in the notebook here](https://github.com/spraja08/supervised_fine_tuning/tree/main)
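For orientation, here is a minimal sketch of the kind of LoRA-based supervised fine-tuning run the notebook performs. It is illustrative only: the hyperparameters, the `question`/`answer` column names, and the prompt format below are assumptions, not values taken from the notebook.

```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from peft import LoraConfig
from trl import SFTTrainer

# assumed column names; check the dataset card for the real ones
dataset = load_dataset("clement-cvll/us-federal-reserve-qa", split="train")
dataset = dataset.map(lambda ex: {"text": f"Instruction:{ex['question']}\nAssistant:{ex['answer']}"})

tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2")
model = AutoModelForCausalLM.from_pretrained("microsoft/phi-2", torch_dtype=torch.float16)

peft_config = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=512,
    peft_config=peft_config,
    args=TrainingArguments(output_dir="fine-bitsy", num_train_epochs=3,
                           per_device_train_batch_size=4, learning_rate=2e-4),
)
trainer.train()
```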
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
SJTU-TES/Everybody_Dance_Now | SJTU-TES | "2024-04-06T08:34:55Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2024-04-06T08:34:55Z" | ---
license: apache-2.0
---
|
csAugust/Llama-2-7b-chat-hf-q0f16-MLC | csAugust | "2024-04-07T02:44:01Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2024-04-06T08:35:00Z" | ---
license: apache-2.0
---
|
zgcr654321/human_matting_training | zgcr654321 | "2024-04-06T08:59:10Z" | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | "2024-04-06T08:35:03Z" | ---
license: mit
---
|
zgcr654321/salient_object_detection_training | zgcr654321 | "2024-04-27T06:50:40Z" | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | "2024-04-06T08:35:36Z" | ---
license: mit
---
|
zgcr654321/interactive_segmentation_training | zgcr654321 | "2024-05-17T23:56:08Z" | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | "2024-04-06T08:36:16Z" | ---
license: mit
---
|
Or4cl3-1/code-slerp | Or4cl3-1 | "2024-04-06T08:36:56Z" | 0 | 0 | null | [
"merge",
"mergekit",
"lazymergekit",
"microsoft/codebert-base",
"EleutherAI/gpt-neo-x-20b",
"openai/codex",
"bigscience/bloom",
"google/jurassic-1-jumbo",
"google/t5-v1_1-large",
"facebook/bart-large",
"base_model:bigscience/bloom",
"base_model:merge:bigscience/bloom",
"base_model:facebook/bart-large",
"base_model:merge:facebook/bart-large",
"base_model:google/t5-v1_1-large",
"base_model:merge:google/t5-v1_1-large",
"base_model:microsoft/codebert-base",
"base_model:merge:microsoft/codebert-base",
"region:us"
] | null | "2024-04-06T08:36:54Z" | ---
tags:
- merge
- mergekit
- lazymergekit
- microsoft/codebert-base
- EleutherAI/gpt-neo-x-20b
- openai/codex
- bigscience/bloom
- google/jurassic-1-jumbo
- google/t5-v1_1-large
- facebook/bart-large
base_model:
- microsoft/codebert-base
- EleutherAI/gpt-neo-x-20b
- openai/codex
- bigscience/bloom
- google/jurassic-1-jumbo
- google/t5-v1_1-large
- facebook/bart-large
---
# code-slerp
code-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [microsoft/codebert-base](https://huggingface.co/microsoft/codebert-base)
* [EleutherAI/gpt-neo-x-20b](https://huggingface.co/EleutherAI/gpt-neo-x-20b)
* [openai/codex](https://huggingface.co/openai/codex)
* [bigscience/bloom](https://huggingface.co/bigscience/bloom)
* [google/jurassic-1-jumbo](https://huggingface.co/google/jurassic-1-jumbo)
* [google/t5-v1_1-large](https://huggingface.co/google/t5-v1_1-large)
* [facebook/bart-large](https://huggingface.co/facebook/bart-large)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: microsoft/codebert-base
layer_range: [0, 32]
- model: EleutherAI/gpt-neo-x-20b
layer_range: [0, 32]
- model: openai/codex
layer_range: [0, 32]
- model: bigscience/bloom
layer_range: [0, 32]
- model: google/jurassic-1-jumbo
layer_range: [0, 32]
- model: google/t5-v1_1-large
layer_range: [0, 32]
- model: facebook/bart-large
layer_range: [0, 32]
merge_method: slerp
base_model: microsoft/codebert-base
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
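For reference, the `slerp` merge method interpolates each pair of weight tensors along a great arc rather than a straight line, so the `t` values above control how far each layer group moves away from the base model (`t = 0` keeps the base weights, `t = 1` takes the other model's). Below is a minimal, illustrative sketch of that interpolation — it is not mergekit's actual implementation, and the NumPy-based helper is an assumption:

```python
import numpy as np

def slerp(t: float, v0: np.ndarray, v1: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Spherical linear interpolation between two flattened weight tensors."""
    a, b = v0.ravel(), v1.ravel()
    a_n = a / (np.linalg.norm(a) + eps)
    b_n = b / (np.linalg.norm(b) + eps)
    dot = float(np.clip(np.dot(a_n, b_n), -1.0, 1.0))
    theta = np.arccos(dot)
    if theta < eps:                          # nearly parallel: plain linear interpolation
        return (1.0 - t) * v0 + t * v1
    s0 = np.sin((1.0 - t) * theta) / np.sin(theta)
    s1 = np.sin(t * theta) / np.sin(theta)
    return (s0 * a + s1 * b).reshape(v0.shape)
```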
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Or4cl3-1/code-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
faiimea/openpose | faiimea | "2024-04-06T08:38:11Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-04-06T08:38:10Z" | Entry not found |
SJTU-TES/RobustVideoMatting | SJTU-TES | "2024-04-06T09:04:27Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-04-06T08:41:20Z" | Entry not found |
ajeya-op/mistral | ajeya-op | "2024-04-06T08:42:52Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-04-06T08:42:52Z" | Entry not found |
LuckyMan123/sushi_style | LuckyMan123 | "2024-04-06T09:04:53Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-04-06T08:43:06Z" | Entry not found |
nuratamton/test3 | nuratamton | "2024-04-06T09:14:52Z" | 0 | 0 | peft | [
"peft",
"pytorch",
"safetensors",
"mistral",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.2",
"4-bit",
"bitsandbytes",
"region:us"
] | null | "2024-04-06T08:48:35Z" | ---
library_name: peft
base_model: mistralai/Mistral-7B-Instruct-v0.2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.8.2 |
faiimea/uap | faiimea | "2024-04-06T08:50:42Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-04-06T08:50:42Z" | Entry not found |
shrimalrishika/cnn-dm | shrimalrishika | "2024-04-06T08:51:26Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-04-06T08:51:26Z" | Entry not found |
HeydarS/flant5_xl_EQ_peft_v2 | HeydarS | "2024-04-06T08:51:41Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:google/flan-t5-xl",
"base_model:adapter:google/flan-t5-xl",
"region:us"
] | null | "2024-04-06T08:51:35Z" | ---
library_name: peft
base_model: google/flan-t5-xl
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0 |
faiimea/wav2com | faiimea | "2024-04-06T08:52:52Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-04-06T08:52:52Z" | Entry not found |
Gopal2002/zehpyr-gemma-dpo-finetune | Gopal2002 | "2024-04-23T09:50:07Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-04-06T08:54:57Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Shashashasha/Serial_Designation_N_GPT_SoVITS | Shashashasha | "2024-04-06T08:57:38Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2024-04-06T08:55:07Z" | ---
license: openrail
---
|
dianamihalache27/Sarcasm.detector.bert_base_LSTM_FE_recall_CE_loss_Imbalance_datasampler | dianamihalache27 | "2024-04-06T08:57:02Z" | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | "2024-04-06T08:57:02Z" | ---
license: mit
---
|
dianamihalache27/Sarcasm.detector.bert_tweet_kim_cnn_recall_CE_loss_Imbalance_datasampler | dianamihalache27 | "2024-04-06T08:59:42Z" | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | "2024-04-06T08:59:08Z" | ---
license: mit
---
|
giuly97/finetuning-sentiment-model-3000-samples | giuly97 | "2024-04-06T09:00:21Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-04-06T09:00:20Z" | Entry not found |
tumanggors/Jamba-v0.1 | tumanggors | "2024-04-06T09:12:50Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-04-06T09:12:50Z" | Entry not found |
abhijitstat/my_awesome_billsum_model | abhijitstat | "2024-04-06T09:15:40Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-04-06T09:15:39Z" | Entry not found |
Caska26/bert-base-multilingual-uncased-sentiment | Caska26 | "2024-04-06T09:24:20Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-04-06T09:24:20Z" | Entry not found |
mrsarthakgupta/onnx-classifier | mrsarthakgupta | "2024-04-08T21:01:36Z" | 0 | 0 | transformers | [
"transformers",
"onnx",
"vit",
"endpoints_compatible",
"region:us"
] | null | "2024-04-06T09:24:23Z" | Entry not found |
24abhimanyu10/api-classification | 24abhimanyu10 | "2024-04-06T09:27:06Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2024-04-06T09:27:06Z" | ---
license: apache-2.0
---
|
shrimalrishika/medical_summarization | shrimalrishika | "2024-04-06T09:30:07Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-04-06T09:30:05Z" | Entry not found |
kenchiayy/wav2vec2-xls-r-1b-atcosim_corpus-google-colab | kenchiayy | "2024-04-06T09:31:26Z" | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-04-06T09:31:24Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
giuly97/bert-base-multilingual-uncased-sentiment | giuly97 | "2024-04-06T09:32:24Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-04-06T09:32:23Z" | Entry not found |
huybopbi/vpnfast | huybopbi | "2024-04-06T09:35:19Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2024-04-06T09:35:19Z" | ---
license: apache-2.0
---
|
John-Yakuza/markymooolora | John-Yakuza | "2024-05-15T22:11:16Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-04-06T09:40:58Z" | Entry not found |
TroyHow/Helsinki-NLP | TroyHow | "2024-05-18T18:28:42Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-04-06T09:43:27Z" | # These are the models required for running OFFLINE Translation
You can either download all the files or just the files that you need.
Please make sure it is in a format like ./Helsinki-NLP/XXX |
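A minimal sketch of loading one of these models from that local folder layout with 🤗 Transformers, assuming they are the usual Helsinki-NLP opus-mt (Marian) checkpoints; the `opus-mt-en-de` subfolder below is only an example — substitute whichever model you downloaded:

```python
from transformers import MarianMTModel, MarianTokenizer

model_dir = "./Helsinki-NLP/opus-mt-en-de"   # example path; use the model folder you downloaded
tokenizer = MarianTokenizer.from_pretrained(model_dir)
model = MarianMTModel.from_pretrained(model_dir)

batch = tokenizer(["Offline translation works without a network connection."], return_tensors="pt")
translated = model.generate(**batch)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
``` |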
TropikOyuncu/Baldiback | TropikOyuncu | "2024-04-06T09:51:43Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2024-04-06T09:50:06Z" | ---
license: openrail
---
|
ShahlaDnshi96/mobile_mistral_2 | ShahlaDnshi96 | "2024-04-07T07:54:10Z" | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us"
] | null | "2024-04-06T09:52:03Z" | ---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: mistralai/Mistral-7B-v0.1
model-index:
- name: mobile_mistral_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mobile_mistral_2
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a sketch mapping them onto `TrainingArguments` is shown after the list):
- learning_rate: 0.0001
- train_batch_size: 3
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 6
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3
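For reference, a minimal sketch of how the values above map onto `transformers.TrainingArguments` (illustrative only — the model, dataset, and trainer wiring are omitted, and the `output_dir` is an assumption):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="mobile_mistral_2",        # assumed output directory
    learning_rate=1e-4,
    per_device_train_batch_size=3,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,        # effective train batch size: 3 * 2 = 6
    optim="adamw_torch",                  # Adam with betas=(0.9, 0.999), eps=1e-8 (defaults)
    lr_scheduler_type="constant",
    warmup_ratio=0.03,
    num_train_epochs=3,
)
```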
### Training results
### Framework versions
- PEFT 0.7.2.dev0
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.2 |
OpenNLG/OpenBA-V2-Vocab-Pruning | OpenNLG | "2024-04-06T10:02:20Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2024-04-06T10:02:20Z" | ---
license: apache-2.0
---
|
swsn21/forrxl | swsn21 | "2024-10-01T06:36:37Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-04-06T10:02:37Z" | Entry not found |
devin-ai/devin | devin-ai | "2024-04-06T10:02:45Z" | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | "2024-04-06T10:02:45Z" | ---
license: mit
---
|
Jakolo121/Sappho_V0.0.1-GGUF | Jakolo121 | "2024-04-06T10:06:27Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-04-06T10:06:27Z" | Entry not found |
abhinandom42/test_sum | abhinandom42 | "2024-04-06T10:12:08Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-04-06T10:12:08Z" | Entry not found |
HeydarS/flant5_bs_popQA_peft_v6 | HeydarS | "2024-04-06T10:16:48Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:google/flan-t5-base",
"base_model:adapter:google/flan-t5-base",
"region:us"
] | null | "2024-04-06T10:16:45Z" | ---
library_name: peft
base_model: google/flan-t5-base
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0 |
SwimChoi/villama2-7b-Denmark-lora | SwimChoi | "2024-04-09T10:56:34Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"region:us"
] | null | "2024-04-06T10:22:44Z" | ---
library_name: peft
base_model: meta-llama/Llama-2-7b-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.1.dev0 |
SwimChoi/villama2-7b-Bulgaria-lora | SwimChoi | "2024-04-09T10:57:51Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"region:us"
] | null | "2024-04-06T10:22:55Z" | ---
library_name: peft
base_model: meta-llama/Llama-2-7b-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.1.dev0 |
SwimChoi/villama2-7b-Belgium-lora | SwimChoi | "2024-04-09T10:59:11Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"region:us"
] | null | "2024-04-06T10:23:06Z" | ---
library_name: peft
base_model: meta-llama/Llama-2-7b-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.1.dev0 |
SwimChoi/villama2-7b-France-lora | SwimChoi | "2024-04-09T11:00:36Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"region:us"
] | null | "2024-04-06T10:23:18Z" | ---
library_name: peft
base_model: meta-llama/Llama-2-7b-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.1.dev0 |