modelId (string, 5-122 chars) | author (string, 2-42 chars) | last_modified (unknown) | downloads (int64, 0-738M) | likes (int64, 0-11k) | library_name (245 classes) | tags (sequence, 1-4.05k items) | pipeline_tag (48 classes) | createdAt (unknown) | card (string, 1-901k chars)
---|---|---|---|---|---|---|---|---|---|
ajmishler/test_models | ajmishler | "2024-06-14T00:33:32Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-14T00:33:32Z" | Entry not found |
SiMajid/reward-train-facebook-opt350m_v4 | SiMajid | "2024-06-14T00:34:20Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-06-14T00:34:11Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Zetaphor/Llama-3-PepsiMax | Zetaphor | "2024-06-14T00:35:38Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-14T00:35:38Z" | Entry not found |
Frixi/Ninomae_Inanis_HoloEN | Frixi | "2024-06-14T00:36:16Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2024-06-14T00:35:59Z" | ---
license: openrail
---
|
argmaxinc/coreml-stable-diffusion-3-medium-1024-t5 | argmaxinc | "2024-06-19T15:38:23Z" | 0 | 1 | DiffusionKit | [
"DiffusionKit",
"text-to-image",
"coreml",
"en",
"base_model:stabilityai/stable-diffusion-3-medium-diffusers",
"license:other",
"region:us"
] | text-to-image | "2024-06-14T00:37:22Z" | ---
license: other
license_name: stabilityai-nc-research-community
license_link: >-
  https://huggingface.co/stabilityai/stable-diffusion-3-medium-diffusers/blob/main/LICENSE
library_name: DiffusionKit
base_model: stabilityai/stable-diffusion-3-medium-diffusers
tags:
- text-to-image
- coreml
inference: false
language:
- en
--- |
dlynch243/DialoGPT-small-uncleruckus2 | dlynch243 | "2024-06-14T00:40:27Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-14T00:40:27Z" | Entry not found |
ThorBaller/small_mistral | ThorBaller | "2024-06-15T02:33:49Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2024-06-14T00:43:39Z" | ---
license: apache-2.0
---
|
amritpuhan/fine-tuned-distilbert-base-uncased-swag-peft | amritpuhan | "2024-06-14T05:31:29Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"dataset:swag",
"base_model:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"region:us"
] | null | "2024-06-14T00:50:55Z" | ---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: distilbert/distilbert-base-uncased
datasets:
- swag
metrics:
- accuracy
model-index:
- name: fine-tuned-distilbert-base-uncased-swag-peft
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine-tuned-distilbert-base-uncased-swag-peft
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on the swag dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7733
- Accuracy: 0.6858
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.0103 | 1.0 | 4597 | 0.8978 | 0.6370 |
| 0.9591 | 2.0 | 9194 | 0.8498 | 0.6568 |
| 0.9401 | 3.0 | 13791 | 0.8270 | 0.6626 |
| 0.9265 | 4.0 | 18388 | 0.8105 | 0.6713 |
| 0.9202 | 5.0 | 22985 | 0.8001 | 0.6759 |
| 0.8921 | 6.0 | 27582 | 0.7894 | 0.6790 |
| 0.894 | 7.0 | 32179 | 0.7836 | 0.6823 |
| 0.8695 | 8.0 | 36776 | 0.7803 | 0.6835 |
| 0.8684 | 9.0 | 41373 | 0.7753 | 0.6845 |
| 0.8696 | 10.0 | 45970 | 0.7733 | 0.6858 |
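For reference, a minimal inference sketch (not part of the original card), assuming the published adapter was trained with a multiple-choice head for SWAG; the repository and base-model names come from the card metadata above:
```python
from transformers import AutoModelForMultipleChoice, AutoTokenizer
from peft import PeftModel

# Base checkpoint named in the card metadata
base = AutoModelForMultipleChoice.from_pretrained("distilbert/distilbert-base-uncased")
tokenizer = AutoTokenizer.from_pretrained("distilbert/distilbert-base-uncased")

# Attach the adapter weights published in this repository
model = PeftModel.from_pretrained(base, "amritpuhan/fine-tuned-distilbert-base-uncased-swag-peft")
model.eval()
```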
### Framework versions
- PEFT 0.11.1
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1 |
proyectoItegrado241EAFIT/XGBoost_Model | proyectoItegrado241EAFIT | "2024-06-14T02:20:28Z" | 0 | 0 | null | [
"ciencia_de_datos",
"XGBoost",
"EAFIT",
"Python",
"Research",
"graph-ml",
"es",
"region:us"
] | graph-ml | "2024-06-14T00:53:26Z" | ---
language:
- es
pipeline_tag: graph-ml
tags:
- ciencia_de_datos
- XGBoost
- EAFIT
- Python
- Research
---
The models shown in this repository belong to the integrated semester project of EAFIT University, corresponding to the 2024-1 term of the Master's in Data Science and Analytics.
Each model was built using the time series of the days of the week for each hour of the day. In this case, the ARIMA model is used for the prediction.
It predicts the hours of the Medellín metro. |
varun-v-rao/opt-350m-bn-adapter-squad-model1 | varun-v-rao | "2024-06-14T00:54:11Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-14T00:54:11Z" | Entry not found |
vigneshv59/mistral-7b-finetuned-ultrachat | vigneshv59 | "2024-06-14T00:54:34Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-14T00:54:34Z" | Entry not found |
onizukal/Karma_3Class_RMSprop_1e5_20Epoch_Beit-large-224_fold2 | onizukal | "2024-06-14T02:49:00Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"beit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/beit-large-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2024-06-14T01:00:46Z" | ---
license: apache-2.0
base_model: microsoft/beit-large-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: Karma_3Class_RMSprop_1e5_20Epoch_Beit-large-224_fold2
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8576483672025074
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Karma_3Class_RMSprop_1e5_20Epoch_Beit-large-224_fold2
This model is a fine-tuned version of [microsoft/beit-large-patch16-224](https://huggingface.co/microsoft/beit-large-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6418
- Accuracy: 0.8576
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.3343 | 1.0 | 2466 | 0.3772 | 0.8460 |
| 0.269 | 2.0 | 4932 | 0.3610 | 0.8583 |
| 0.1499 | 3.0 | 7398 | 0.4653 | 0.8552 |
| 0.1293 | 4.0 | 9864 | 0.8042 | 0.8496 |
| 0.1824 | 5.0 | 12330 | 0.9597 | 0.8549 |
| 0.1453 | 6.0 | 14796 | 1.2832 | 0.8563 |
| 0.0537 | 7.0 | 17262 | 1.4415 | 0.8533 |
| 0.0 | 8.0 | 19728 | 1.6006 | 0.8561 |
| 0.0 | 9.0 | 22194 | 1.6244 | 0.8587 |
| 0.0 | 10.0 | 24660 | 1.6418 | 0.8576 |
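As an illustrative sketch (not from the original card), a fine-tuned BEiT checkpoint like this one can usually be queried through the image-classification pipeline, assuming the repository contains the full model weights and preprocessor config; the image path is a placeholder:
```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="onizukal/Karma_3Class_RMSprop_1e5_20Epoch_Beit-large-224_fold2",
)
# "example.jpg" is a hypothetical local image; any RGB image path or PIL image works
print(classifier("example.jpg"))
```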
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.2
|
onizukal/Karma_3Class_3Class_Adamax_1e4_20Epoch_Beit-large-224_fold2 | onizukal | "2024-06-14T02:54:50Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"beit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/beit-large-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2024-06-14T01:03:32Z" | ---
license: apache-2.0
base_model: microsoft/beit-large-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: Karma_3Class_3Class_Adamax_1e4_20Epoch_Beit-large-224_fold2
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8534020827014458
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Karma_3Class_3Class_Adamax_1e4_20Epoch_Beit-large-224_fold2
This model is a fine-tuned version of [microsoft/beit-large-patch16-224](https://huggingface.co/microsoft/beit-large-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5433
- Accuracy: 0.8534
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.3595 | 1.0 | 2466 | 0.4309 | 0.8251 |
| 0.3101 | 2.0 | 4932 | 0.3865 | 0.8447 |
| 0.1826 | 3.0 | 7398 | 0.4588 | 0.8485 |
| 0.1658 | 4.0 | 9864 | 0.5997 | 0.8504 |
| 0.1373 | 5.0 | 12330 | 0.8549 | 0.8498 |
| 0.0639 | 6.0 | 14796 | 1.1026 | 0.8527 |
| 0.0234 | 7.0 | 17262 | 1.2762 | 0.8538 |
| 0.0001 | 8.0 | 19728 | 1.4347 | 0.8547 |
| 0.0 | 9.0 | 22194 | 1.5139 | 0.8518 |
| 0.0002 | 10.0 | 24660 | 1.5433 | 0.8534 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.2
|
VisionAI4Healthcare/MIMIC_VISION | VisionAI4Healthcare | "2024-06-14T01:04:01Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-14T01:04:01Z" | Entry not found |
sagnikrayc/opt-1.3b-bn-adapter-snli-model3 | sagnikrayc | "2024-06-14T01:05:16Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-14T01:05:14Z" | Entry not found |
farah17/MyMistral | farah17 | "2024-06-14T01:10:14Z" | 0 | 0 | null | [
"license:other",
"region:us"
] | null | "2024-06-14T01:10:14Z" | ---
license: other
license_name: mistral
license_link: LICENSE
---
|
AmberYifan/spin-filtered | AmberYifan | "2024-06-17T19:08:00Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"mistral",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:alignment-handbook/zephyr-7b-sft-full",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-06-14T01:16:49Z" | ---
license: apache-2.0
base_model: alignment-handbook/zephyr-7b-sft-full
tags:
- generated_from_trainer
model-index:
- name: spin-trans
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# spin-trans
This model is a fine-tuned version of [alignment-handbook/zephyr-7b-sft-full](https://huggingface.co/alignment-handbook/zephyr-7b-sft-full) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0027
- Rewards/real: -3.8149
- Rewards/generated: -24.3554
- Rewards/accuracies: 1.0
- Rewards/margins: 20.5405
- Logps/generated: -336.8123
- Logps/real: -163.0993
- Logits/generated: -2.3894
- Logits/real: -1.8917
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 32
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/real | Rewards/generated | Rewards/accuracies | Rewards/margins | Logps/generated | Logps/real | Logits/generated | Logits/real |
|:-------------:|:-----:|:----:|:---------------:|:------------:|:-----------------:|:------------------:|:---------------:|:---------------:|:----------:|:----------------:|:-----------:|
| 0.0085 | 0.1 | 100 | 0.0130 | 0.5228 | -9.3297 | 1.0 | 9.8526 | -186.5559 | -119.7219 | -2.7911 | -2.5502 |
| 0.0041 | 0.21 | 200 | 0.0070 | -0.1706 | -14.7969 | 1.0 | 14.6263 | -241.2277 | -126.6563 | -2.6228 | -2.2904 |
| 0.0007 | 0.31 | 300 | 0.0073 | -2.7706 | -22.3901 | 0.9974 | 19.6195 | -317.1598 | -152.6565 | -2.4825 | -1.9073 |
| 0.0049 | 0.41 | 400 | 0.0044 | -2.9093 | -19.4947 | 1.0 | 16.5854 | -288.2053 | -154.0429 | -2.6010 | -2.2355 |
| 0.001 | 0.52 | 500 | 0.0050 | -1.5600 | -21.7213 | 1.0 | 20.1614 | -310.4720 | -140.5501 | -2.5715 | -2.2758 |
| 0.0004 | 0.62 | 600 | 0.0029 | -2.4635 | -24.2161 | 1.0 | 21.7526 | -335.4198 | -149.5852 | -2.4626 | -2.0545 |
| 0.0004 | 0.72 | 700 | 0.0034 | -1.9810 | -20.7429 | 1.0 | 18.7619 | -300.6877 | -144.7602 | -2.4823 | -2.0980 |
| 0.0003 | 0.83 | 800 | 0.0034 | -4.2857 | -23.6128 | 1.0 | 19.3270 | -329.3861 | -167.8074 | -2.3861 | -1.8496 |
| 0.0003 | 0.93 | 900 | 0.0027 | -3.8149 | -24.3554 | 1.0 | 20.5405 | -336.8123 | -163.0993 | -2.3894 | -1.8917 |
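A hedged usage sketch (not part of the original card), assuming the repository ships the Zephyr-style chat template alongside the weights:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "AmberYifan/spin-filtered"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype="auto", device_map="auto")

# Format a single-turn conversation with the tokenizer's chat template (assumed present)
messages = [{"role": "user", "content": "Summarize what SPIN-style fine-tuning does."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

output = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```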
### Framework versions
- Transformers 4.37.0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2
|
erwannd/vit-base-patch16-224-in21k-finetuned-lora-food101 | erwannd | "2024-06-14T01:22:34Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-06-14T01:17:14Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
theailearner/AIProfessions-doctor-llama-3-8b-testV2 | theailearner | "2024-06-14T01:19:14Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-06-14T01:17:41Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mali6/autocap | mali6 | "2024-06-25T04:47:08Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-14T01:18:36Z" | Entry not found |
Kudod/phobert-large-case-finetuned-ner-vlsp2021-3090-14June | Kudod | "2024-06-14T01:23:43Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-14T01:23:43Z" | Entry not found |
Kudod/bert-large-case-finetuned-ner-vlsp2021-3090-14June | Kudod | "2024-06-14T01:26:23Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-14T01:26:23Z" | Entry not found |
zhoujy7/results | zhoujy7 | "2024-06-14T01:28:01Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-14T01:28:01Z" | Entry not found |
RUXHIR2828/MikiTakumiFujiwaraJP | RUXHIR2828 | "2024-06-14T03:38:26Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2024-06-14T01:28:06Z" | ---
license: openrail
---
|
mlx-community/dolphin-2.9.3-qwen2-1.5b-2bit | mlx-community | "2024-06-14T01:28:24Z" | 0 | 2 | mlx | [
"mlx",
"safetensors",
"qwen2",
"generated_from_trainer",
"axolotl",
"dataset:cognitivecomputations/Dolphin-2.9",
"dataset:teknium/OpenHermes-2.5",
"dataset:m-a-p/CodeFeedback-Filtered-Instruction",
"dataset:cognitivecomputations/dolphin-coder",
"dataset:cognitivecomputations/samantha-data",
"dataset:microsoft/orca-math-word-problems-200k",
"dataset:Locutusque/function-calling-chatml",
"dataset:internlm/Agent-FLAN",
"base_model:Qwen/Qwen2-1.5B",
"license:apache-2.0",
"region:us"
] | null | "2024-06-14T01:28:10Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
- axolotl
- mlx
base_model: Qwen/Qwen2-1.5B
datasets:
- cognitivecomputations/Dolphin-2.9
- teknium/OpenHermes-2.5
- m-a-p/CodeFeedback-Filtered-Instruction
- cognitivecomputations/dolphin-coder
- cognitivecomputations/samantha-data
- microsoft/orca-math-word-problems-200k
- Locutusque/function-calling-chatml
- internlm/Agent-FLAN
---
# mlx-community/dolphin-2.9.3-qwen2-1.5b-2bit
This model was converted to MLX format from [`cognitivecomputations/dolphin-2.9.3-qwen2-1.5b`](https://huggingface.co/cognitivecomputations/dolphin-2.9.3-qwen2-1.5b) using mlx-lm version **0.12.1**.
Refer to the [original model card](https://huggingface.co/cognitivecomputations/dolphin-2.9.3-qwen2-1.5b) for more details on the model.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate

# Load the 2-bit quantized weights and tokenizer from the Hub repository
model, tokenizer = load("mlx-community/dolphin-2.9.3-qwen2-1.5b-2bit")
# Generate a completion; verbose=True prints the text as it is produced
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
|
bongmo/trained-sd3 | bongmo | "2024-06-14T01:29:41Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-14T01:29:41Z" | Entry not found |
LucasLima07/midjourney-prompt | LucasLima07 | "2024-06-20T16:27:23Z" | 0 | 0 | null | [
"base_model:vilsonrodrigues/falcon-7b-instruct-sharded",
"region:us"
] | null | "2024-06-14T01:31:18Z" | ---
base_model: "vilsonrodrigues/falcon-7b-instruct-sharded"
--- |
Danjin/unsloth-gemma-glaive-function-callingv2 | Danjin | "2024-06-14T01:33:39Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma",
"trl",
"en",
"base_model:unsloth/gemma-1.1-2b-it-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-14T01:33:30Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- gemma
- trl
base_model: unsloth/gemma-1.1-2b-it-bnb-4bit
---
# Uploaded model
- **Developed by:** Danjin
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-1.1-2b-it-bnb-4bit
This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
tanya-kta/whisper-small-ru | tanya-kta | "2024-06-14T01:35:52Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-14T01:35:51Z" | Entry not found |
itssugyaru/Eunhye | itssugyaru | "2024-06-14T01:37:58Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2024-06-14T01:36:53Z" | ---
license: openrail
---
|
Lahinthefutureland/CuteDoodle | Lahinthefutureland | "2024-06-14T01:43:12Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-14T01:42:32Z" | Entry not found |
tctrautman/20240613-kibbe-training-base-merged | tctrautman | "2024-06-14T05:18:32Z" | 0 | 0 | null | [
"safetensors",
"generated_from_trainer",
"base_model:HuggingFaceM4/idefics2-8b",
"license:apache-2.0",
"region:us"
] | null | "2024-06-14T01:46:27Z" | ---
license: apache-2.0
base_model: HuggingFaceM4/idefics2-8b
tags:
- generated_from_trainer
model-index:
- name: 20240613-kibbe-training-base-merged
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/dubs/Kibbe-Training/runs/rwmz1usm)
# 20240613-kibbe-training-base-merged
This model is a fine-tuned version of [HuggingFaceM4/idefics2-8b](https://huggingface.co/HuggingFaceM4/idefics2-8b) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5501
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.501 | 0.5 | 46 | 0.5596 |
| 0.5503 | 1.0 | 92 | 0.5274 |
| 0.3954 | 1.5 | 138 | 0.5451 |
| 0.3103 | 2.0 | 184 | 0.5501 |
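A minimal loading sketch (not from the original card), assuming the merged repository contains a complete Idefics2 configuration and weights; the processor is taken from the base checkpoint named in the card:
```python
from transformers import AutoProcessor, AutoModelForVision2Seq

# Processor/tokenizer from the base model; weights from the merged fine-tune
processor = AutoProcessor.from_pretrained("HuggingFaceM4/idefics2-8b")
model = AutoModelForVision2Seq.from_pretrained(
    "tctrautman/20240613-kibbe-training-base-merged",
    torch_dtype="auto",
    device_map="auto",
)
```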
### Framework versions
- Transformers 4.42.0.dev0
- Pytorch 2.1.1+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
Hev832/baser-vc | Hev832 | "2024-06-14T01:53:57Z" | 0 | 4 | null | [
"music",
"license:mit",
"region:us"
] | null | "2024-06-14T01:47:04Z" | ---
license: mit
tags:
- music
--- |
hyojuuun/gte-base-pair-FT | hyojuuun | "2024-06-14T01:51:50Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-14T01:51:50Z" | Entry not found |
AdamKasumovic/phi3-mini-4k-instruct-bactrian-x-af-33-percent-high-bt-rouge-1 | AdamKasumovic | "2024-06-14T01:55:19Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:unsloth/Phi-3-mini-4k-instruct-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-06-14T01:52:21Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
base_model: unsloth/Phi-3-mini-4k-instruct-bnb-4bit
---
# Uploaded model
- **Developed by:** AdamKasumovic
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Phi-3-mini-4k-instruct-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
alexshengzhili/ph3-0606-lora-dpo-beta-0dot2-merged | alexshengzhili | "2024-06-14T01:53:41Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"llama-factory",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-06-14T01:52:32Z" | ---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Sinodex/AS | Sinodex | "2024-06-14T01:53:55Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-14T01:53:55Z" | Entry not found |
AdamKasumovic/phi3-mini-4k-instruct-bactrian-x-af-33-percent-low-bt-rouge-1 | AdamKasumovic | "2024-06-14T01:58:46Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:unsloth/Phi-3-mini-4k-instruct-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-06-14T01:55:59Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
base_model: unsloth/Phi-3-mini-4k-instruct-bnb-4bit
---
# Uploaded model
- **Developed by:** AdamKasumovic
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Phi-3-mini-4k-instruct-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
RAY2L/Llama-3-Instruct-8B-SimPO | RAY2L | "2024-06-14T03:52:21Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"alignment-handbook",
"generated_from_trainer",
"conversational",
"dataset:princeton-nlp/llama3-ultrafeedback",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-06-14T01:56:41Z" | ---
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
- alignment-handbook
- generated_from_trainer
datasets:
- princeton-nlp/llama3-ultrafeedback
model-index:
- name: llama-3-8b-instruct-simpo
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama-3-8b-instruct-simpo
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the princeton-nlp/llama3-ultrafeedback dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3755
- Rewards/chosen: -2.9448
- Rewards/rejected: -3.6038
- Rewards/accuracies: 0.6613
- Rewards/margins: 0.6589
- Logps/rejected: -1.4415
- Logps/chosen: -1.1779
- Logits/rejected: -1.1545
- Logits/chosen: -1.1873
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 2
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 1.3975 | 0.8549 | 400 | 1.3755 | -2.9448 | -3.6038 | 0.6613 | 0.6589 | -1.4415 | -1.1779 | -1.1545 | -1.1873 |
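An illustrative generation sketch (not part of the original card); the chat-style input assumes a transformers version whose text-generation pipeline applies the model's chat template automatically:
```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="RAY2L/Llama-3-Instruct-8B-SimPO",
    torch_dtype="auto",
    device_map="auto",
)

messages = [{"role": "user", "content": "Give three tips for writing a clear model card."}]
result = generator(messages, max_new_tokens=96)
print(result[0]["generated_text"])  # the conversation, including the model's reply
```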
### Framework versions
- Transformers 4.41.2
- Pytorch 2.2.2+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
MG31/DETR_multiclass_last_e8_b4_n0 | MG31 | "2024-06-14T02:12:14Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-14T01:59:01Z" | Entry not found |
tanya-kta/whisper-small-even | tanya-kta | "2024-06-14T04:53:35Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"ru",
"dataset:tbkazakova/even_speech_biblical",
"base_model:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-06-14T01:59:41Z" | ---
language:
- ru
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- tbkazakova/even_speech_biblical
metrics:
- wer
model-index:
- name: Whisper Small Even - Sanchit Gandhi
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Even Speech Biblical
type: tbkazakova/even_speech_biblical
config: default
split: None
args: 'config: ru, split: train'
metrics:
- name: Wer
type: wer
value: 53.88015717092338
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Even - Sanchit Gandhi
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Even Speech Biblical dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4591
- Wer: 53.8802
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 300
- training_steps: 2000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-------:|:----:|:---------------:|:-------:|
| 0.05 | 5.9880 | 500 | 0.3920 | 63.7525 |
| 0.0022 | 11.9760 | 1000 | 0.4307 | 57.3674 |
| 0.0003 | 17.9641 | 1500 | 0.4528 | 51.7682 |
| 0.0003 | 23.9521 | 2000 | 0.4591 | 53.8802 |
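For reference, a short transcription sketch (not part of the original card); the audio filename is a placeholder, and ffmpeg is assumed to be available for decoding and resampling to 16 kHz:
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="tanya-kta/whisper-small-even")

# "sample.wav" is a hypothetical recording of Even speech
print(asr("sample.wav")["text"])
```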
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
AdamKasumovic/phi3-mini-4k-instruct-bactrian-x-af-33-percent-med-bt-rouge-1 | AdamKasumovic | "2024-06-14T02:08:38Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:unsloth/Phi-3-mini-4k-instruct-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-06-14T02:05:43Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
base_model: unsloth/Phi-3-mini-4k-instruct-bnb-4bit
---
# Uploaded model
- **Developed by:** AdamKasumovic
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Phi-3-mini-4k-instruct-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
VVAA89/Misha | VVAA89 | "2024-06-14T02:08:44Z" | 0 | 0 | null | [
"license:cc-by-nc-4.0",
"region:us"
] | null | "2024-06-14T02:08:44Z" | ---
license: cc-by-nc-4.0
---
|
varun-v-rao/opt-350m-bn-adapter-squad-model2 | varun-v-rao | "2024-06-14T02:10:37Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-14T02:10:37Z" | Entry not found |
Danjin/Llama-2-7b-chat-finetunev2 | Danjin | "2024-06-14T02:24:00Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-06-14T02:13:13Z" | Entry not found |
moemoe101/SmartAIRecipe | moemoe101 | "2024-06-14T02:19:47Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2024-06-14T02:19:47Z" | ---
license: apache-2.0
---
|
fxmeng/PiSSA-Mistral-7b-r64 | fxmeng | "2024-06-14T06:54:28Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-06-14T02:21:01Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
AdamKasumovic/phi3-mini-4k-instruct-bactrian-x-xh-33-percent-high-bt-rouge-1 | AdamKasumovic | "2024-06-14T02:35:12Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:unsloth/Phi-3-mini-4k-instruct-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-06-14T02:32:10Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
base_model: unsloth/Phi-3-mini-4k-instruct-bnb-4bit
---
# Uploaded model
- **Developed by:** AdamKasumovic
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Phi-3-mini-4k-instruct-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
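The card does not include usage code. Below is a minimal inference sketch using Unsloth's `FastLanguageModel`, assuming the uploaded weights load the same way as the `unsloth/Phi-3-mini-4k-instruct-bnb-4bit` base they were fine-tuned from; the prompt and generation settings are illustrative only, not documented by the author.

```python
from unsloth import FastLanguageModel

# Assumption: the repo loads directly with Unsloth, like its 4-bit base model.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="AdamKasumovic/phi3-mini-4k-instruct-bactrian-x-xh-33-percent-high-bt-rouge-1",
    max_seq_length=4096,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's faster inference mode

# Illustrative prompt only (the repo name suggests Xhosa Bactrian-X data).
inputs = tokenizer("Translate to isiXhosa: Good morning.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```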
|
qsebso/first_model_yelp_review | qsebso | "2024-06-14T02:32:56Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-14T02:32:56Z" | Entry not found |
cadeath/cvss_0614 | cadeath | "2024-06-14T03:32:54Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-14T02:33:51Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
---
# Uploaded model
- **Developed by:** cadeath
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
AdamKasumovic/phi3-mini-4k-instruct-bactrian-x-xh-33-percent-med-bt-rouge-1 | AdamKasumovic | "2024-06-14T02:38:08Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:unsloth/Phi-3-mini-4k-instruct-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-06-14T02:34:52Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
base_model: unsloth/Phi-3-mini-4k-instruct-bnb-4bit
---
# Uploaded model
- **Developed by:** AdamKasumovic
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Phi-3-mini-4k-instruct-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
sagnikrayc/opt-350m-bn-adapter-squad-model1 | sagnikrayc | "2024-06-14T13:37:43Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-14T02:35:26Z" | Entry not found |
Puandraa/AisakaTaiga | Puandraa | "2024-06-14T02:35:32Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2024-06-14T02:35:32Z" | ---
license: apache-2.0
---
|
AdamKasumovic/phi3-mini-4k-instruct-bactrian-x-xh-33-percent-low-bt-rouge-1 | AdamKasumovic | "2024-06-14T02:39:16Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:unsloth/Phi-3-mini-4k-instruct-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-06-14T02:36:15Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
base_model: unsloth/Phi-3-mini-4k-instruct-bnb-4bit
---
# Uploaded model
- **Developed by:** AdamKasumovic
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Phi-3-mini-4k-instruct-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
amazeble/epiclazygasm_.safetensors | amazeble | "2024-06-14T02:43:12Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2024-06-14T02:39:04Z" | ---
license: apache-2.0
---
|
chainup244/Qwen-Qwen1.5-0.5B-1718332862 | chainup244 | "2024-06-14T02:41:03Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-14T02:41:03Z" | Entry not found |
chainup244/Qwen-Qwen1.5-1.8B-1718332943 | chainup244 | "2024-06-14T02:42:24Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-14T02:42:24Z" | Entry not found |
jony8/test001 | jony8 | "2024-06-14T02:43:31Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2024-06-14T02:43:31Z" | ---
license: apache-2.0
---
|
iamnguyen/cupid | iamnguyen | "2024-06-30T11:54:37Z" | 0 | 0 | null | [
"safetensors",
"region:us"
] | null | "2024-06-14T02:43:44Z" | Entry not found |
yuekai/icefall_asr_aishell_whisper_qwen2_1.5B | yuekai | "2024-06-14T04:20:30Z" | 0 | 0 | null | [
"tensorboard",
"region:us"
] | null | "2024-06-14T02:44:04Z" | Entry not found |
dimassyoga42/huggingface_hub | dimassyoga42 | "2024-06-14T02:44:31Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-14T02:44:31Z" | Entry not found |
harryslater58/models | harryslater58 | "2024-06-14T02:45:02Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2024-06-14T02:45:02Z" | ---
license: apache-2.0
---
|
jinbhunandaxue/pipeline | jinbhunandaxue | "2024-06-14T02:45:09Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-14T02:45:09Z" | Entry not found |
dimassyoga42/Dimas | dimassyoga42 | "2024-06-14T02:57:33Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-14T02:53:12Z" | Entry not found |
camenduru/Unique3D | camenduru | "2024-06-14T03:03:15Z" | 0 | 1 | diffusers | [
"diffusers",
"onnx",
"safetensors",
"region:us"
] | null | "2024-06-14T02:57:37Z" | Entry not found |
Ksgk-fy/phillipine_customer_v3.6_Maria_Intro_Objection_v2 | Ksgk-fy | "2024-06-14T03:42:50Z" | 0 | 0 | null | [
"safetensors",
"region:us"
] | null | "2024-06-14T02:59:06Z" | Entry not found |
AWeirdDev/zh-tw-llama3-tokenizer-3k | AWeirdDev | "2024-06-14T03:03:10Z" | 0 | 0 | transformers | [
"transformers",
"endpoints_compatible",
"region:us"
] | null | "2024-06-14T03:02:20Z" | ---
library_name: transformers
tags: []
---
# zh-tw-llama-3-tokenizer-3k |
10ths/test | 10ths | "2024-06-14T03:03:42Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-14T03:03:42Z" | Entry not found |
JudithWiz/Astrologa | JudithWiz | "2024-06-14T03:04:10Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-14T03:04:10Z" | Entry not found |
xxxhhhttt/TinyChineseStories-LLaMA2 | xxxhhhttt | "2024-06-14T03:08:51Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-06-14T03:08:39Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
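The card leaves this blank. Since the repository is tagged `llama` / `text-generation`, a minimal sketch with the 🤗 `pipeline` API would look like the following; the Chinese prompt is only an illustration.

```python
from transformers import pipeline

# The repo is tagged as a LLaMA-style text-generation model on the Hub.
generator = pipeline("text-generation", model="xxxhhhttt/TinyChineseStories-LLaMA2")

# Illustrative prompt for a tiny Chinese-story model ("Once upon a time there was a kitten,").
print(generator("从前有一只小猫,", max_new_tokens=128, do_sample=True)[0]["generated_text"])
```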
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Blackroot/Euryale-2.1-3.3b-6h-exl2 | Blackroot | "2024-06-14T03:25:27Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-06-14T03:11:16Z" | 3.3Bpw 6bit head quantized version of https://huggingface.co/Sao10K/L3-70B-Euryale-v2.1 |
RyotaKadoya1993/fullymerged_v4_adapter | RyotaKadoya1993 | "2024-06-14T03:16:07Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:RyotaKadoya1993/fullymerged_v1_128_gen3",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-14T03:11:42Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: RyotaKadoya1993/fullymerged_v1_128_gen3
---
# Uploaded model
- **Developed by:** RyotaKadoya1993
- **License:** apache-2.0
- **Finetuned from model :** RyotaKadoya1993/fullymerged_v1_128_gen3
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
emidiosouza/kukac-doc | emidiosouza | "2024-06-14T03:11:58Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-14T03:11:58Z" | Entry not found |
OpilotAI/TinyLlama-1.1B-Chat-v1.0-q4f16_1-Opilot | OpilotAI | "2024-06-14T03:13:55Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-14T03:13:23Z" | Entry not found |
ManCD/Arima_pm25_model | ManCD | "2024-06-14T03:14:28Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-14T03:14:28Z" | Entry not found |
againeureka/support_and_attack_classifier | againeureka | "2024-06-14T03:19:35Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-14T03:19:35Z" | Entry not found |
jeiku/Aura_Qwen2_v4_7B | jeiku | "2024-06-14T03:20:32Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-14T03:20:32Z" | Entry not found |
erizakaria/llama3-8b-daftarin-id-lora | erizakaria | "2024-06-14T03:22:58Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-14T03:22:31Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** erizakaria
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
ShiftAddLLM/opt13b-2bit-lat | ShiftAddLLM | "2024-06-14T04:11:32Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-14T03:23:34Z" | Entry not found |
Yuki20/llama3_8b_aci_2e | Yuki20 | "2024-06-14T03:24:53Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-14T03:24:46Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** Yuki20
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
onizukal/Karma_3Class_RMSprop_1e4_20Epoch_Beit-large-224_fold2 | onizukal | "2024-06-14T05:22:16Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"beit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/beit-large-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2024-06-14T03:37:50Z" | ---
license: apache-2.0
base_model: microsoft/beit-large-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: Karma_3Class_RMSprop_1e4_20Epoch_Beit-large-224_fold2
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8374279648164998
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Karma_3Class_RMSprop_1e4_20Epoch_Beit-large-224_fold2
This model is a fine-tuned version of [microsoft/beit-large-patch16-224](https://huggingface.co/microsoft/beit-large-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7596
- Accuracy: 0.8374
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a matching `TrainingArguments` sketch follows this list):
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
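For readers who want to reproduce this configuration, the listed values map onto 🤗 `TrainingArguments` roughly as follows; this is a sketch of the reported settings, not the exact script used for this run, and the output directory is hypothetical.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="karma_3class_beit_large_fold2",  # hypothetical output path
    learning_rate=1e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=5,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the Trainer default optimizer.
)
```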
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.412 | 1.0 | 2466 | 0.5102 | 0.7956 |
| 0.4251 | 2.0 | 4932 | 0.4462 | 0.8177 |
| 0.3057 | 3.0 | 7398 | 0.4440 | 0.8302 |
| 0.2355 | 4.0 | 9864 | 0.5256 | 0.8296 |
| 0.1256 | 5.0 | 12330 | 0.7596 | 0.8374 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.2
|
nikollaz/promptoargentum | nikollaz | "2024-06-14T03:40:39Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2024-06-14T03:40:39Z" | ---
license: apache-2.0
---
|
Milancheeks/AuRA | Milancheeks | "2024-06-19T13:04:30Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2024-06-14T03:42:41Z" | ---
license: apache-2.0
---
---
# AuRA - Augmented Universal Real-Time Assistant
## Overview
**AuRA (Augmented Universal Real-Time Assistant)** represents a new paradigm in AI-driven assistance by leveraging outputs from multiple state-of-the-art language models. This approach ensures that AuRA continuously learns and evolves, integrating the latest advancements in natural language processing (NLP). By combining the strengths of various models, AuRA offers unparalleled assistance across diverse domains, making it a highly versatile and intelligent assistant.
## Vision and Goals
AuRA is designed to redefine AI-driven assistance with the following core goals:
- **Integrate Knowledge**: Combine outputs from multiple LLMs to create a comprehensive and enriched knowledge base.
- **Real-Time Learning**: Continuously update its training data with new information and advancements, ensuring it remains cutting-edge.
- **Versatile Assistance**: Provide high-quality responses across a wide range of topics and tasks.
- **User-Centric Development**: Incorporate user feedback to dynamically refine and improve performance.
- **AI Data Broker**: Act as a joint controller for user data, ensuring users get compensated when their data is used and providing the option to lock their data if they choose.
- **Action Model**: Learn actions from tools created by other developers, enabling AuRA to perform a wide range of tasks beyond traditional text-based assistance.
## System Architecture
### Data Integration Pipeline
The data integration pipeline is designed to ensure seamless collection, processing, and utilization of data from various sources. Key components include:
- **Source Models**: Collect data from leading language models (LLMs) such as GPT-3.5, GPT-4, and others.
- **Automated Data Collection**: Continuously fetch outputs from these models based on user interactions.
- **Data Processing**: Clean, format, and validate collected data to ensure high quality and consistency.
- **Dynamic Dataset**: Maintain a regularly updated dataset that serves as the foundation for training.
- **Intelligent Data Sampling**: Use active learning techniques to selectively sample the most informative and diverse data points for training.
- **Data Augmentation**: Increase the diversity and robustness of the training data through techniques like paraphrasing and synonym replacement.
- **Real-Time Data Integration**: Enable real-time data integration to keep the model current.
- **Scalability and Efficiency**: Design the pipeline to handle large volumes of data without compromising performance.
- **Security and Privacy**: Adhere to strict security and privacy standards to protect user data.
### Model Training
AuRA's model training process includes (a minimal fine-tuning sketch follows this list):
- **Base Model**: Built on the Mistral-7B-v0.2 model.
- **Finetuning with LoRA**: Use Low-Rank Adaptation (LoRA) for efficient adaptation to new data.
- **Incremental Training**: Regular updates with new interaction data.
- **Mixture of Experts (MoE)**: Utilize different parts of the model for different inputs to handle a wide variety of tasks efficiently.
- **Sparse Attention Mechanisms**: Reduce computational complexity for processing long sequences of data.
- **Knowledge Distillation**: Use a larger, pre-trained model to teach AuRA.
- **Gradient Checkpointing**: Save memory by checkpointing intermediate states during training.
- **Mixed Precision Training**: Use mixed precision (fp16) to speed up training and reduce memory usage.
- **Layer-wise Learning Rate Scaling**: Adjust learning rates at different layers for faster convergence.
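As an illustration of how several of these ingredients (LoRA, gradient checkpointing, mixed precision) combine in practice, here is a minimal PEFT/Transformers sketch. It is an assumption about tooling rather than AuRA's actual training code: the base checkpoint id, output path, and hyperparameter values are placeholders.

```python
import torch
from transformers import AutoModelForCausalLM, TrainingArguments
from peft import LoraConfig, get_peft_model

# Placeholder base checkpoint; the card names Mistral-7B-v0.2 as AuRA's base.
base_id = "mistralai/Mistral-7B-Instruct-v0.2"

model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16)
model.gradient_checkpointing_enable()  # save memory by recomputing activations

lora = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM")
model = get_peft_model(model, lora)    # attach low-rank adapters (LoRA)

args = TrainingArguments(
    output_dir="aura-lora",            # hypothetical
    fp16=True,                         # mixed-precision training
    per_device_train_batch_size=4,
    learning_rate=2e-4,
)
# `model` and `args` would then be handed to a Transformers/TRL Trainer with AuRA's dataset.
```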
### Feedback Loop
The feedback loop ensures continuous learning and improvement by:
- **User Feedback**: Collecting feedback from users through interactions, surveys, and implicit behavior.
- **Active Learning**: Integrating feedback into the training pipeline.
- **Automated Feedback Analysis**: Using NLP and machine learning algorithms to analyze feedback.
- **Reinforcement Learning**: Fine-tuning the model based on user interactions.
- **Real-Time Adaptation**: Adjusting responses and behavior based on immediate feedback.
- **Quality Assurance**: Regular evaluations and benchmarking.
- **Transparency and Communication**: Maintaining transparency about how user feedback is used.
## Real-World Applications
AuRA's versatility enables its application in various domains, including:
- **Customer Support**: Providing real-time assistance and resolving queries.
- **Education**: Offering personalized tutoring and educational content.
- **Healthcare**: Assisting with medical information retrieval and patient interaction.
- **Business Intelligence**: Analyzing data and generating insights for decision-making.
- **AI Data Broker**: Ensuring users get compensated when their data is used and providing the option to lock their data.
## Ethical Considerations
AuRA's development adheres to strict ethical principles, including:
- **Data Privacy**: Ensuring user data privacy with robust encryption and user control.
- **Bias Mitigation**: Continuously monitoring and correcting biases in data and model outputs.
- **Transparency**: Maintaining transparency about data practices.
- **Accountability**: Regular audits and compliance with legal and regulatory standards.
- **Collaborative Ethics Development**: Working with the World Ethics Organization to build an ethical framework.
## Future Work
Future development focuses on:
- **Expansion of Data Sources**: Integrating additional models and data sources.
- **Advanced NLP Techniques**: Incorporating new NLP techniques and architectures.
- **Multimodal Learning**: Enabling understanding and processing of various data formats.
- **Enhanced User Interfaces**: Developing more intuitive and user-friendly interfaces.
- **Real-Time Adaptability**: Strengthening real-time learning and adaptation capabilities.
- **Ethical AI Development**: Fully implementing the ethical framework.
- **Real-World Applications**: Expanding into new application domains and conducting case studies.
## Conclusion
AuRA represents a significant leap forward in AI-driven assistance, integrating multiple language models to provide unparalleled support across diverse domains. With a commitment to real-time learning, user-centric development, and ethical AI practices, AuRA is set to revolutionize the way we interact with technology.
For more information and to explore the capabilities of AuRA, visit the [Hugging Face model page](https://huggingface.co/Milancheeks/AuRA).
---
|
onizukal/Boya1_3Class_SGD_1e4_20Epoch_Beit-large-224_fold3 | onizukal | "2024-06-14T04:53:23Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"beit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/beit-large-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2024-06-14T03:44:03Z" | ---
license: apache-2.0
base_model: microsoft/beit-large-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: Boya1_3Class_SGD_1e4_20Epoch_Beit-large-224_fold3
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.5871212121212122
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Boya1_3Class_SGD_1e4_20Epoch_Beit-large-224_fold3
This model is a fine-tuned version of [microsoft/beit-large-patch16-224](https://huggingface.co/microsoft/beit-large-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0362
- Accuracy: 0.5871
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1275 | 1.0 | 923 | 1.0932 | 0.5739 |
| 1.0274 | 2.0 | 1846 | 1.0618 | 0.5809 |
| 1.0425 | 3.0 | 2769 | 1.0467 | 0.5847 |
| 1.0677 | 4.0 | 3692 | 1.0385 | 0.5863 |
| 1.0395 | 5.0 | 4615 | 1.0362 | 0.5871 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.2
|
Ksgk-fy/ecoach_philippine_v7_intro_object_merge | Ksgk-fy | "2024-06-14T03:45:28Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-14T03:45:28Z" | Entry not found |
shakilanf/pedut123 | shakilanf | "2024-06-14T03:48:41Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-14T03:48:41Z" | Entry not found |
A00954334/finetuning-sentiment-model-3000-samples | A00954334 | "2024-06-14T03:58:05Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-06-14T03:48:44Z" | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set (a metric-computation sketch follows this list):
- Loss: 0.3252
- Accuracy: 0.8733
- F1: 0.8758
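The accuracy and F1 reported above are the standard binary sentiment-classification metrics; a typical way to compute them during `Trainer` evaluation is sketched below with the `evaluate` library. This is an illustration, not necessarily the exact code used for this run.

```python
import numpy as np
import evaluate

accuracy = evaluate.load("accuracy")
f1 = evaluate.load("f1")

def compute_metrics(eval_pred):
    # eval_pred is a (logits, labels) pair supplied by the Trainer.
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "accuracy": accuracy.compute(predictions=preds, references=labels)["accuracy"],
        "f1": f1.compute(predictions=preds, references=labels)["f1"],
    }
```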
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
Elyitra/gamer | Elyitra | "2024-06-14T03:50:33Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-14T03:50:28Z" | Entry not found |
datnguyen16123009/Test_Phi3 | datnguyen16123009 | "2024-06-14T03:51:30Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-14T03:51:30Z" | Entry not found |
Nutanix/Meta-Llama-3-8B-Instruct_KTO_lora_hh-rlhf-processed | Nutanix | "2024-06-14T03:52:52Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-06-14T03:52:49Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
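The card leaves this blank. The repository name suggests a KTO-trained LoRA adapter for Meta-Llama-3-8B-Instruct; assuming that is what the repo contains (the card itself does not say), loading it with PEFT would look roughly like this. The prompt is illustrative only.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Assumption based on the repo name: a LoRA adapter over Llama-3-8B-Instruct.
base_id = "meta-llama/Meta-Llama-3-8B-Instruct"  # gated; requires accepting the license on the Hub
adapter_id = "Nutanix/Meta-Llama-3-8B-Instruct_KTO_lora_hh-rlhf-processed"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)  # attach the KTO-tuned adapter

inputs = tokenizer("How do I brew good coffee?", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```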
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Danjin/unsloth-gemma-glaive-function-callingv3 | Danjin | "2024-06-14T04:00:03Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma",
"trl",
"en",
"base_model:unsloth/gemma-1.1-2b-it-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-14T03:59:57Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- gemma
- trl
base_model: unsloth/gemma-1.1-2b-it-bnb-4bit
---
# Uploaded model
- **Developed by:** Danjin
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-1.1-2b-it-bnb-4bit
This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
nhfffff/niare_flipgod | nhfffff | "2024-06-14T04:05:32Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-14T04:01:04Z" | Entry not found |
JamesHujy/EMABench | JamesHujy | "2024-06-14T04:01:35Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-14T04:01:15Z" | Entry not found |
Nutanix/Meta-Llama-3-8B-Instruct_KTO_lora_distilabel-capybara-kto-15k-binarized-processed | Nutanix | "2024-06-14T04:04:27Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-06-14T04:04:24Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
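The card leaves this blank as well. Under the same assumption as the hh-rlhf sibling above — that the repo holds a KTO-trained LoRA adapter for Meta-Llama-3-8B-Instruct — one way to get a standalone checkpoint is to merge the adapter into the base weights; the local output path is hypothetical.

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Assumption from the repo name: a LoRA adapter over Llama-3-8B-Instruct (gated base model).
base = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")
model = PeftModel.from_pretrained(
    base,
    "Nutanix/Meta-Llama-3-8B-Instruct_KTO_lora_distilabel-capybara-kto-15k-binarized-processed",
)
merged = model.merge_and_unload()  # fold the LoRA weights into the base model
merged.save_pretrained("llama3-8b-kto-capybara-merged")  # hypothetical local path
```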
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
LAB-IA-UFBA/myeloma-yolo7-model | LAB-IA-UFBA | "2024-06-14T04:57:54Z" | 0 | 0 | null | [
"yolo7",
"dataset:LAB-IA-UFBA/myeloma-dataset",
"license:apache-2.0",
"region:us"
] | null | "2024-06-14T04:09:51Z" | ---
license: apache-2.0
datasets:
- LAB-IA-UFBA/myeloma-dataset
tags:
- yolo7
---
Here you will find all the code, models, and data used in the manuscript "Enhancing diagnostic accuracy of multiple myeloma through ML-driven analysis of hematological slides: new dataset and identification model to support hematologists," Scientific Reports 14.1 (2024). |
WolfSmasher99/ana-de-armas | WolfSmasher99 | "2024-06-14T04:16:03Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-14T04:10:17Z" | Entry not found |
ShiftAddLLM/opt30b-2bit-lat | ShiftAddLLM | "2024-06-14T04:15:13Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-14T04:13:29Z" | Entry not found |
chainup244/Qwen-Qwen1.5-0.5B-1718338417 | chainup244 | "2024-06-14T04:13:39Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-14T04:13:39Z" | Entry not found |
tspeterkim3/arcface | tspeterkim3 | "2024-06-14T04:15:06Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-14T04:15:06Z" | Entry not found |
ShiftAddLLM/opt30b-3bit-lat | ShiftAddLLM | "2024-06-14T04:18:47Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-14T04:15:33Z" | Entry not found |