**csikasote/whisper-medium-toigen-female-model** · author: csikasote · last modified: 2025-01-05T17:12:06Z · downloads: 95 · likes: 0 · library: transformers · pipeline: automatic-speech-recognition · created: 2025-01-05T16:30:50Z
Tags: transformers, tensorboard, safetensors, whisper, automatic-speech-recognition, generated_from_trainer, dataset:toigen, base_model:openai/whisper-medium, base_model:finetune:openai/whisper-medium, license:apache-2.0, model-index, endpoints_compatible, region:us

---
library_name: transformers
license: apache-2.0
base_model: openai/whisper-medium
tags:
- generated_from_trainer
datasets:
- toigen
metrics:
- wer
model-index:
- name: whisper-medium-toigen-female-model
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: toigen
type: toigen
metrics:
- name: Wer
type: wer
value: 0.44457831325301206
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-medium-toigen-female-model
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the toigen dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5737
- Wer: 0.4446
## Model description
More information needed
## Intended uses & limitations
More information needed
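The card does not yet document usage, so here is a minimal, hedged inference sketch using the standard 🤗 transformers ASR pipeline; the audio file name is a placeholder, not part of this repository.

```python
# Hedged sketch (not from the original card): loading this checkpoint with
# the standard transformers ASR pipeline. "sample.wav" is a placeholder.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="csikasote/whisper-medium-toigen-female-model",
)
print(asr("sample.wav")["text"])  # transcribe a local audio file
```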
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: AdamW (`adamw_torch`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 1.3083 | 1.6024 | 200 | 0.6687 | 0.5309 |
| 0.3381 | 3.2008 | 400 | 0.5737 | 0.4446 |
| 0.2639 | 4.8032 | 600 | 0.6060 | 0.4297 |
| 0.0831 | 6.4016 | 800 | 0.6427 | 0.4032 |
| 0.0776 | 8.0 | 1000 | 0.6705 | 0.4209 |
### Framework versions
- Transformers 4.47.1
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0
**John6666/zabramix-illustrious-sdxl-v175-sdxl** · author: John6666 · last modified: 2024-12-23T06:44:27Z · downloads: 132 · likes: 0 · library: diffusers · pipeline: text-to-image · created: 2024-11-04T07:02:48Z
Tags: diffusers, safetensors, text-to-image, stable-diffusion, stable-diffusion-xl, anime, girls, illustrious, en, base_model:OnomaAIResearch/Illustrious-xl-early-release-v0, base_model:finetune:OnomaAIResearch/Illustrious-xl-early-release-v0, license:other, autotrain_compatible, endpoints_compatible, diffusers:StableDiffusionXLPipeline, region:us

---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- girls
- illustrious
base_model: OnomaAIResearch/Illustrious-xl-early-release-v0
---
The original model is available [here](https://civitai.com/models/907571/zabramix-illustrious-sdxl?modelVersionId=1024516).
This model was created by [za4beqsbv36z2s889](https://civitai.com/user/za4beqsbv36z2s889).
**istominvi/goncharov_50_16_16** · author: istominvi · last modified: 2025-01-16T10:53:35Z · downloads: 42 · likes: 0 · library: diffusers · pipeline: text-to-image · created: 2025-01-16T04:27:37Z
Tags: diffusers, flux, lora, replicate, text-to-image, en, base_model:black-forest-labs/FLUX.1-dev, base_model:adapter:black-forest-labs/FLUX.1-dev, license:other, region:us

---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: goncharov
---
# Goncharov_50_16_16
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `goncharov` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('istominvi/goncharov_50_16_16', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
**vimal52/phi2_medical** · author: vimal52 · last modified: 2024-06-19T10:03:25Z · downloads: 154 · likes: 0 · library: transformers · pipeline: text-generation · created: 2024-06-19T09:58:03Z
Tags: transformers, safetensors, phi, text-generation, llama-factory, conversational, arxiv:1910.09700, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us

---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
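The author has not filled this section in; as a placeholder, here is a hedged sketch of the standard transformers loading pattern for a phi-family causal LM. It is untested against this specific checkpoint, and the dtype, device settings, and example prompt are assumptions.

```python
# Hedged starter sketch (not provided by the model author): generic
# transformers loading for a causal LM. Settings below are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "vimal52/phi2_medical"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

inputs = tokenizer("What are common symptoms of anemia?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```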
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
**konishant/shayari_fillmasking** · author: konishant · last modified: 2025-03-03T12:19:38Z · downloads: 0 · likes: 0 · library: transformers · created: 2025-03-03T12:19:29Z
Tags: transformers, safetensors, text-generation-inference, unsloth, llama, trl, en, license:apache-2.0, endpoints_compatible, region:us

---
base_model: unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** konishant
- **License:** apache-2.0
- **Finetuned from model:** unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
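The card gives no usage snippet. Below is a hedged sketch of loading the uploaded weights with plain transformers; whether this repo holds merged weights or a bare adapter is not stated, so treat the whole snippet (including the example prompt) as an assumption.

```python
# Hedged sketch (not from the author): plain transformers loading, assuming
# the repo contains merged causal-LM weights rather than a bare adapter.
from transformers import pipeline

generator = pipeline("text-generation", model="konishant/shayari_fillmasking")
print(generator("Write a short shayari about the moon:", max_new_tokens=50)[0]["generated_text"])
```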
**mmnga/Stockmark-2-100B-Instruct-beta-gguf** · author: mmnga · last modified: 2025-03-07T11:43:42Z · downloads: 27 · likes: 2 · created: 2025-03-06T17:09:42Z
Tags: gguf, en, ja, dataset:TFMC/imatrix-dataset-for-japanese-llm, base_model:stockmark/Stockmark-2-100B-Instruct-beta, base_model:quantized:stockmark/Stockmark-2-100B-Instruct-beta, license:mit, endpoints_compatible, region:us, imatrix, conversational

---
license: mit
language:
- en
- ja
datasets:
- TFMC/imatrix-dataset-for-japanese-llm
base_model:
- stockmark/Stockmark-2-100B-Instruct-beta
---
# Stockmark-2-100B-Instruct-beta-gguf
This is a gguf-format conversion of [Stockmark-2-100B-Instruct-beta, published by stockmark](https://huggingface.co/stockmark/Stockmark-2-100B-Instruct-beta).
The imatrix data was created using [TFMC/imatrix-dataset-for-japanese-llm](https://huggingface.co/datasets/TFMC/imatrix-dataset-for-japanese-llm).
## Usage
```bash
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build -DGGML_CUDA=ON
cmake --build build --config Release
build/bin/llama-cli -m 'Stockmark-2-100B-Instruct-beta-gguf' -n 128 -c 128 -p 'あなたはプロの料理人です。レシピを教えて' -cnv
```
**philip-hightech/44a567c0-1392-4d97-8a57-092e689e0ada** · author: philip-hightech · last modified: 2025-01-18T02:53:21Z · downloads: 6 · likes: 0 · library: peft · created: 2025-01-18T02:28:43Z
Tags: peft, safetensors, gpt_neox, axolotl, generated_from_trainer, base_model:tlphams/gollm-12.8b-instruct-v2.3, base_model:adapter:tlphams/gollm-12.8b-instruct-v2.3, license:cc-by-nc-4.0, region:us

---
library_name: peft
license: cc-by-nc-4.0
base_model: tlphams/gollm-12.8b-instruct-v2.3
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 44a567c0-1392-4d97-8a57-092e689e0ada
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: tlphams/gollm-12.8b-instruct-v2.3
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 500a091925e5b6f2_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/500a091925e5b6f2_train_data.json
type:
field_input: selected_word
field_instruction: original
field_output: perturbed
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: philip-hightech/44a567c0-1392-4d97-8a57-092e689e0ada
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/500a091925e5b6f2_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 68317063-c692-4732-a3b0-9be4ed3ef2e3
wandb_project: Mine-SN56-21-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 68317063-c692-4732-a3b0-9be4ed3ef2e3
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 44a567c0-1392-4d97-8a57-092e689e0ada
This model is a fine-tuned version of [tlphams/gollm-12.8b-instruct-v2.3](https://huggingface.co/tlphams/gollm-12.8b-instruct-v2.3) on the dataset configured above (`500a091925e5b6f2_train_data.json`).
It achieves the following results on the evaluation set:
- Loss: 0.2783
## Model description
More information needed
## Intended uses & limitations
More information needed
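Since this is a PEFT LoRA adapter, here is a hedged loading sketch (not from the original card) that attaches the adapter to its base model; the dtype and device settings are assumptions, while `trust_remote_code=True` mirrors the axolotl config shown above.

```python
# Hedged sketch: loading this LoRA adapter on top of its base model with PEFT.
# dtype/device settings are assumptions; trust_remote_code follows the config.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "tlphams/gollm-12.8b-instruct-v2.3"
adapter_id = "philip-hightech/44a567c0-1392-4d97-8a57-092e689e0ada"

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto", trust_remote_code=True
)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA weights
```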
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (bitsandbytes 8-bit, `adamw_bnb_8bit`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.2547 | 0.0001 | 1 | 0.3119 |
| 0.996 | 0.0002 | 3 | 0.3117 |
| 1.2226 | 0.0005 | 6 | 0.3072 |
| 2.2028 | 0.0007 | 9 | 0.2783 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
**mradermacher/Mistral-7B-v0.1-sft-spin-1.6k-GGUF** · author: mradermacher · last modified: 2025-01-17T07:26:23Z · downloads: 198 · likes: 0 · library: transformers · created: 2025-01-17T07:08:51Z
Tags: transformers, gguf, generated_from_trainer, trl, dpo, en, base_model:AmberYifan/Mistral-7B-v0.1-sft-spin-1.6k, base_model:quantized:AmberYifan/Mistral-7B-v0.1-sft-spin-1.6k, endpoints_compatible, region:us, conversational

---
base_model: AmberYifan/Mistral-7B-v0.1-sft-spin-1.6k
language:
- en
library_name: transformers
model_name: Mistral-7B-v0.1-sft-spin-1.6k
quantized_by: mradermacher
tags:
- generated_from_trainer
- trl
- dpo
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/AmberYifan/Mistral-7B-v0.1-sft-spin-1.6k
<!-- provided-files -->
Weighted/imatrix quants are not available (from me) at this time. If they do not show up within a week or so after the static ones, I have probably not planned them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
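As a hedged illustration (not part of the original README), one common way to run a downloaded quant is via the `llama-cpp-python` bindings; the local file name below assumes you fetched the Q4_K_M quant from the Provided Quants table.

```python
# Hedged usage sketch: running a downloaded GGUF quant with llama-cpp-python.
# The file name assumes the Q4_K_M quant listed in the table below.
from llama_cpp import Llama

llm = Llama(model_path="Mistral-7B-v0.1-sft-spin-1.6k.Q4_K_M.gguf", n_ctx=4096)
out = llm("Q: What is SPIN fine-tuning?\nA:", max_tokens=128)
print(out["choices"][0]["text"])
```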
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-v0.1-sft-spin-1.6k-GGUF/resolve/main/Mistral-7B-v0.1-sft-spin-1.6k.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-v0.1-sft-spin-1.6k-GGUF/resolve/main/Mistral-7B-v0.1-sft-spin-1.6k.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-v0.1-sft-spin-1.6k-GGUF/resolve/main/Mistral-7B-v0.1-sft-spin-1.6k.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-v0.1-sft-spin-1.6k-GGUF/resolve/main/Mistral-7B-v0.1-sft-spin-1.6k.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-v0.1-sft-spin-1.6k-GGUF/resolve/main/Mistral-7B-v0.1-sft-spin-1.6k.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-v0.1-sft-spin-1.6k-GGUF/resolve/main/Mistral-7B-v0.1-sft-spin-1.6k.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-v0.1-sft-spin-1.6k-GGUF/resolve/main/Mistral-7B-v0.1-sft-spin-1.6k.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-v0.1-sft-spin-1.6k-GGUF/resolve/main/Mistral-7B-v0.1-sft-spin-1.6k.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-v0.1-sft-spin-1.6k-GGUF/resolve/main/Mistral-7B-v0.1-sft-spin-1.6k.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-v0.1-sft-spin-1.6k-GGUF/resolve/main/Mistral-7B-v0.1-sft-spin-1.6k.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-v0.1-sft-spin-1.6k-GGUF/resolve/main/Mistral-7B-v0.1-sft-spin-1.6k.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-v0.1-sft-spin-1.6k-GGUF/resolve/main/Mistral-7B-v0.1-sft-spin-1.6k.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
**jingyaogong/MiniMind2-Small-R1** · author: jingyaogong · last modified: 2025-02-18T00:48:29Z · downloads: 18 · likes: 0 · created: 2025-02-10T15:40:13Z
Tags: pytorch, minimind, custom_code, arxiv:2405.04434, arxiv:2402.14905, arxiv:2401.04088, region:us

---

<div align="center">

</div>
<div align="center">

[](https://github.com/jingyaogong/minimind/stargazers)
[](LICENSE)
[](https://github.com/jingyaogong/minimind/commits/master)
[](https://github.com/jingyaogong/minimind/pulls)
[](https://huggingface.co/collections/jingyaogong/minimind-66caf8d999f5c7fa64f399e5)
</div>
<div align="center">
<h3>"The Great Way Is Simplicity"</h3>
</div>
<div align="center">
Chinese | [English](./README_en.md)
</div>
* This open-source project aims to train a tiny 25.8M language model, **MiniMind**, completely from scratch for as little as 3 RMB in cost + 2 hours!
* The **MiniMind** series is extremely lightweight: the smallest version is $\frac{1}{7000}$ the size of GPT-3, aiming to be quickly trainable even on the most ordinary personal GPU.
* The project also open-sources the minimal structure of large models, with full-process code for expandable shared mixture-of-experts (MoE), dataset cleaning, pretraining (Pretrain), supervised fine-tuning (SFT), LoRA fine-tuning,
  direct preference optimization (DPO), model distillation, and more.
* **MiniMind** has also been extended to a vision multimodal VLM: [MiniMind-V](https://github.com/jingyaogong/minimind-v).
* All core algorithm code is rebuilt from scratch in native PyTorch, without relying on abstract interfaces provided by third-party libraries.
* This is both a full-stage open-source reproduction of a large language model and a tutorial for getting started with LLMs.
* We hope this project gives everyone a starting example, sharing the joy of creation and pushing the broader AI community forward!

> To avoid misunderstanding: "2 hours" was measured on a single NVIDIA 3090, and "3 RMB"
> refers to GPU server rental cost; see below for specifics.
---
<div align="center">

[🔗🍓Reasoning Model](https://www.modelscope.cn/studios/gongjy/MiniMind-Reasoning) | [🔗🤖Standard Model](https://www.modelscope.cn/studios/gongjy/MiniMind) | [🔗🎞️Video Introduction](https://www.bilibili.com/video/BV12dHPeqE72/?share_source=copy_web&vd_source=670c2504f88726f8cf4a21ef6147c0e8)
<div align="center">
<table>
<tr>
<td align="center">
<a href="https://huggingface.co/collections/jingyaogong/minimind-66caf8d999f5c7fa64f399e5" style="text-decoration: none;">
<img src="./images/and_huggingface.png" alt="Hugging Face Logo" style="vertical-align: middle; width: auto; max-width: 100%;" />
</a>
</td>
<td align="center">
<a href="https://www.modelscope.cn/profile/gongjy" style="text-decoration: none;">
<img src="./images/and_modelscope.png" alt="ModelScope Logo" style="vertical-align: middle; width: auto; max-width: 100%;" />
</a>
</td>
</tr>
</table>
</div>
</div>
# 📌 Introduction

The emergence of Large Language Models (LLMs) has sparked unprecedented worldwide attention on AI.
Whether it is ChatGPT, DeepSeek, or Qwen, their stunning results are awe-inspiring.
However, their massive scale of hundreds of billions of parameters makes them not only hard to train on personal devices, but nearly impossible even to deploy.
Opening the "black box" of large models and exploring their inner workings is exhilarating!
Regrettably, 99% of such exploration stops at lightly fine-tuning an existing large model with techniques like LoRA to learn a few new instructions or tasks.
That is like teaching Newton how to use a 21st-century smartphone: interesting, but entirely beside the point of understanding the underlying physics.
Meanwhile, third-party LLM frameworks and libraries such as transformers+trl expose almost nothing but highly abstract interfaces.
With just 10 lines of code you can complete the whole "load model + load dataset + inference + reinforcement learning" training pipeline.
Such efficient encapsulation is convenient, but like a high-speed spaceship it isolates us from the underlying implementation and blocks deep exploration of the LLM's core code.
Yet "building a plane out of Lego is far more exciting than flying first class!"
Worse still, the internet is flooded with paid courses and marketing accounts selling AI tutorials full of errors and half-understood content.
For these reasons, this project's goal is to lower the barrier to learning LLMs, so that everyone can start by understanding every line of code,
training a very small language model from scratch by hand. Yes, **training from scratch**, not just **inference**!
With a server cost of less than 3 RMB, you can experience the entire process of building a language model from 0 to 1.
Let's enjoy the fun of creation together!
> [!NOTE]
> (As of 2025-02-07) the MiniMind series has completed pretraining of multiple models; the smallest needs only 25.8M (0.02B) parameters for fluent conversational ability!
<details style="color:rgb(128,128,128)">
<summary>Models List</summary>
| Model (size) | Inference memory (approx.) | Release |
|-------------------------|----------|------------|
| MiniMind2-small (26M) | 0.5 GB | 2025.02.06 |
| MiniMind2-MoE (145M) | 1.0 GB | 2025.02.06 |
| MiniMind2 (104M) | 1.0 GB | 2025.02.06 |
| minimind-v1-small (26M) | 0.5 GB | 2024.08.28 |
| minimind-v1-moe (4×26M) | 1.0 GB | 2024.09.17 |
| minimind-v1 (108M) | 1.0 GB | 2024.09.01 |
</details>
**The project includes**

- All code for the MiniMind-LLM architecture (Dense + MoE models).
- Detailed training code for the tokenizer.
- Full-process training code for Pretrain, SFT, LoRA, RLHF-DPO, and model distillation.
- High-quality datasets for every stage, collected, distilled, curated, cleaned, and deduplicated, all open-sourced.
- From-scratch implementations of pretraining, instruction fine-tuning, LoRA, DPO reinforcement learning, and white-box model distillation. The key algorithms barely depend on third-party frameworks, and everything is open-sourced.
- Compatibility with mainstream third-party frameworks such as `transformers`, `trl`, and `peft`.
- Training on a single GPU or multiple GPUs (DDP, DeepSpeed), with wandb visualization of the training process and support for dynamically stopping and resuming training.
- Model evaluation on third-party benchmarks (C-Eval, C-MMLU, OpenBookQA, etc.).
- A minimal server implementing the OpenAI API protocol, easy to integrate into third-party chat UIs (FastGPT, Open-WebUI, etc.).
- A minimal chat WebUI frontend based on streamlit.
- Reproduction (distillation/RL) of the large reasoning model DeepSeek-R1 as the MiniMind-Reason model, with **data + models** fully open-sourced!

We hope this open-source project helps LLM beginners get started quickly!
### 👉**Changelog**

<details close>
<summary> <b>2025-02-09 (newest 🎉🎉🎉)</b> </summary>

- Major update since release: the MiniMind2 series.
- Almost all code refactored into a cleaner, unified structure.
  For compatibility with old code, see the [🔗old repository🔗](https://github.com/jingyaogong/minimind/tree/6e9cd28ef9b34a0a10afbdf6f59e65cb6e628efb).
- Data preprocessing step removed. Unified dataset format, switched to `jsonl` to end confusion around dataset downloads.
- The MiniMind2 series is a significant improvement over MiniMind-V1.
- Minor items: {more standard kv-cache implementation, MoE load-balancing loss now taken into account, etc.}
- Provided a training recipe for migrating the model to private datasets (medical model, self-awareness examples).
- Trimmed the pretraining dataset and greatly improved its quality, sharply reducing the time needed for a quick personal run: a single 3090 can reproduce it in 2 hours!
- Updates: LoRA fine-tuning decoupled from the peft wrapper, LoRA implemented from scratch; DPO algorithm implemented natively in PyTorch; native implementation of white-box model distillation.
- The MiniMind2-DeepSeek-R1 series of distilled models is born!
- MiniMind2 now has some English ability!
- Updated MiniMind2 performance results against third-party models across more LLM leaderboards.

</details>

<details close>
<summary> <b>2024-10-05</b> </summary>

- Extended MiniMind with a multimodal capability: vision.
- See the sister project [minimind-v](https://github.com/jingyaogong/minimind-v) for details!

</details>

<details close>
<summary> <b>2024-09-27</b> </summary>

- 09-27: updated the preprocessing of the pretraining dataset; to preserve text integrity, abandoned preprocessing into .bin training files (slightly sacrificing training speed).
- The preprocessed pretraining file is now named pretrain_data.csv.
- Removed some redundant code.

</details>

<details close>
<summary> <b>2024-09-17</b> </summary>

- Updated the minimind-v1-moe model.
- To avoid ambiguity, mistral_tokenizer is no longer used; all models use the custom minimind_tokenizer.

</details>

<details close>
<summary> <b>2024-09-01</b> </summary>

- Updated minimind-v1 (108M), using minimind_tokenizer with 3 pretraining epochs + 10 SFT epochs, for fuller training and better performance.
- The project has been deployed to ModelScope Spaces and can be tried there:
- [🔗ModelScope Online Demo🔗](https://www.modelscope.cn/studios/gongjy/minimind)

</details>

<details close>
<summary> <b>2024-08-27</b> </summary>

- Initial open-source release.

</details>
# 📌 Quick Start

<details style="color:rgb(128,128,128)">
<summary>My hardware/software configuration (for reference only)</summary>

* CPU: Intel(R) Core(TM) i9-10980XE CPU @ 3.00GHz
* RAM: 128 GB
* GPU: NVIDIA GeForce RTX 3090 (24GB) * 8
* Ubuntu==20.04
* CUDA==12.2
* Python==3.10.16
* [requirements.txt](./requirements.txt)

</details>

### Step 0

```bash
git clone https://github.com/jingyaogong/minimind.git
```
## Ⅰ Test an Existing Model

### 1. Download the model

```bash
# step 1
git clone https://huggingface.co/jingyaogong/MiniMind2
```

### 2. Command-line chat

```bash
# step 2
# load=1: load from transformers-hf model
python eval_model.py --load 1
```

### 3. Or launch the WebUI

```bash
# may require `python>=3.10`; install with `pip install streamlit`
# cd scripts
streamlit run web_demo.py
```
## Ⅱ Train from Scratch Yourself

### 1. Environment setup

```bash
pip install -r requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple
```

<details style="color:rgb(128,128,128)">
<summary>Note: check in advance whether Torch can use CUDA</summary>

```python
import torch
print(torch.cuda.is_available())
```

If not, download a whl file from [torch_stable](https://download.pytorch.org/whl/torch_stable.html)
and install it yourself. See this [reference link](https://blog.csdn.net/weixin_45456738/article/details/141029610?ops_request_misc=&request_id=&biz_id=102&utm_term=%E5%AE%89%E8%A3%85torch&utm_medium=distribute.pc_search_result.none-task-blog-2~all~sobaiduweb~default-2-141029610.nonecase&spm=1018.2226.3001.4187).

</details>

### 2. Download the data

Download the data files you need from the [dataset download link](https://www.modelscope.cn/datasets/gongjy/minimind_dataset/files)
provided below, create a `./dataset` directory, and place them under `./dataset`.

<details style="color:rgb(128,128,128)">
<summary>Note: about the datasets</summary>

By default, downloading `pretrain_hq.jsonl` + `sft_mini_512.jsonl` is recommended for the fastest reproduction of the Zero chat model.

Data files can be chosen freely; several combinations are suggested below, which you can mix according to your training needs and GPU resources.

</details>
### 3. Start training

**3.1 Pretraining (learn knowledge)**

```bash
python train_pretrain.py
```

> Run pretraining to get `pretrain_*.pth` as the pretrained output weights (where * is the model dimension, 512 by default).

**3.2 Supervised fine-tuning (learn to converse)**

```bash
python train_full_sft.py
```

> Run supervised fine-tuning to get `full_sft_*.pth` as the instruction-tuned output weights (where `full` means full-parameter fine-tuning).

<details style="color:rgb(128,128,128)">
<summary>Note: about training</summary>

By default, every training stage saves its parameters to `./out/***.pth` every 100 steps (each save overwrites the old weight file).

For simplicity, only the two training stages are described here. For other training (LoRA, distillation, reinforcement learning, reasoning fine-tuning, etc.), see the detailed notes in the [Experiment] section below.

</details>

---

### 4. Test the model

Make sure the `*.pth` file of the model you want to test is in the `./out/` directory.
You can also download and use the `*.pth` files I trained from [here](https://www.modelscope.cn/models/gongjy/MiniMind2-PyTorch/files).

```bash
python eval_model.py --model_mode 1 # default 0: test the pretrain model; set 1: test the full_sft model
```

<details style="color:rgb(128,128,128)">
<summary>Note: about testing</summary>

For details, just read the `eval_model.py` script. model_mode options: 0: pretrained model, 1: SFT-Chat model, 2: RLHF-Chat model, 3: Reason model.

</details>
---

> [!TIP]
> All training scripts use the native PyTorch framework and support multi-GPU acceleration. Suppose your device has N (N>1) GPUs:

Single-machine N-GPU training launch (DDP, supports multi-machine multi-GPU clusters)

```bash
torchrun --nproc_per_node 3 train_xxx.py
```

<details style="color:rgb(128,128,128)">
<summary>Note: other details</summary>

* Single-machine N-GPU training (DeepSpeed)

```bash
deepspeed --master_port 29500 --num_gpus=N train_xxx.py
```

* Enable wandb logging of the training process as needed

```bash
# requires login: wandb login
torchrun --nproc_per_node N train_xxx.py --use_wandb
# and
python train_xxx.py --use_wandb
```

Adding the `--use_wandb` flag records the training run; after training finishes, you can inspect it on the wandb website. The `wandb_project`
and `wandb_run_name` parameters set the project and run names.

</details>
# 📌 Data

## Ⅰ Tokenizer

The tokenizer maps words from natural language to numbers like `0, 1, 36` via a "dictionary"; you can think of the numbers as the page numbers of the words in the "dictionary".
You can build your own vocabulary and train a "dictionary"; the code is in `./scripts/train_tokenizer.py` (for learning purposes only; unless necessary, there is no need to retrain, since MiniMind ships with its own tokenizer).
Or you can pick a well-known open-source LLM tokenizer.
Using an established "Xinhua/Oxford dictionary" directly gives good token compression, at the cost of a huge page count, often hundreds of thousands of words and phrases;
a self-trained tokenizer gives full control over vocabulary size and content, at the cost of poor compression (e.g. "hello" might be split into the five separate tokens "h e l l o") and weak coverage of rare words.
The choice of "dictionary" matters: an LLM's output is essentially a multi-class classification over the dictionary's N words via SoftMax, decoded back to natural language through the "dictionary".
Because MiniMind's size must be strictly controlled, and to avoid a top-heavy model (the word-embedding layer taking up too large a share of the LLM's parameters), the shorter the vocabulary the better. A hedged loading sketch follows below.
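As a hedged illustration of this word-to-id mapping in practice (not part of the original README), the published checkpoints can presumably be tokenized via the standard transformers API; the repo id and sample sentence here are assumptions.

```python
# Hedged sketch: inspecting the minimind tokenizer's word->id mapping via
# the standard transformers API. Repo id and sample text are assumptions;
# trust_remote_code is needed because the repo ships custom code.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("jingyaogong/MiniMind2", trust_remote_code=True)
ids = tok.encode("hello world")
print(ids)              # token ids, i.e. "page numbers" in the dictionary
print(tok.decode(ids))  # map the ids back to text
print(tok.vocab_size)   # should be on the order of 6,400 for minimind
```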
<details style="color:rgb(128,128,128)">
<summary>About the tokenizer</summary>

Vocabulary sizes of strong third-party open-source tokenizers such as Yi, qwen, chatglm, mistral, and Llama3:

<table>
<tr><th>Tokenizer model</th><th>Vocabulary size</th><th>Source</th></tr>
<tr><td>yi tokenizer</td><td>64,000</td><td>01.AI (China)</td></tr>
<tr><td>qwen2 tokenizer</td><td>151,643</td><td>Alibaba Cloud (China)</td></tr>
<tr><td>glm tokenizer</td><td>151,329</td><td>Zhipu AI (China)</td></tr>
<tr><td>mistral tokenizer</td><td>32,000</td><td>Mistral AI (France)</td></tr>
<tr><td>llama3 tokenizer</td><td>128,000</td><td>Meta (USA)</td></tr>
<tr><td>minimind tokenizer</td><td>6,400</td><td>Custom</td></tr>
</table>

> 👉Updated 2024-09-17: to avoid ambiguity with past versions and to control model size, all minimind models use the minimind_tokenizer; all mistral_tokenizer versions are deprecated.

```
# Some musings
> Although minimind_tokenizer is small and its encode/decode efficiency is weaker than Chinese-friendly tokenizers like qwen2 and glm,
> the minimind models use the self-trained minimind_tokenizer to keep the overall parameter count light and avoid an imbalance between the embedding and compute layers (a top-heavy model), since minimind's vocabulary size is only 6400.
> In practice, minimind has never failed to decode rare words in testing; the results are good.
> Because the custom vocabulary is compressed to 6400 entries, the LLM's total parameter count bottoms out at 25.8M.
> The training data `tokenizer_train.jsonl` all comes from the `Jiangshu large-model dataset`; this data is relatively secondary, and you may choose freely if you want to train a tokenizer.
```

</details>
## Ⅱ Pretraining Data

After the lesson of MiniMind-V1's low-quality pretraining data causing the model to spout nonsense, after `2025-02-05` I decided not to use large-scale unsupervised datasets for pretraining.
Instead, I extracted the Chinese portion of the [Jiangshu large-model dataset](https://www.modelscope.cn/datasets/deepctrl/deepctrl-sft-data),
cleaned out the entries shorter than 512 characters, and concatenated the resulting ~1.6GB corpus directly into the pretraining data `pretrain_hq.jsonl`, where hq means high
quality (of course not truly high yet; improving data quality never ends).

The data format of `pretrain_hq.jsonl` is

```bash
{"text": "如何才能摆脱拖延症? 治愈拖延症并不容易,但以下建议可能有所帮助..."}
```
## Ⅲ SFT Data

The [Jiangshu large-model SFT dataset](https://www.modelscope.cn/datasets/deepctrl/deepctrl-sft-data)
"is a complete, uniformly formatted, safe resource for large-model training and research.
It collects and curates a large number of open-source datasets from public web sources, unifies their format, cleans the data,
and comprises a Chinese dataset of 10M entries and an English dataset of 2M entries."

That is the official description. After downloading, the total volume is roughly 4B tokens, certainly suitable as SFT data for a Chinese LLM.
But the official data format is messy, and using all of it for SFT would be too costly.
I re-cleaned the official dataset, removing entries with symbol pollution and noise; as before, only content with total length `<512`
was kept. At this stage, the goal is to supplement, through large amounts of dialogue, the knowledge missing from pretraining.
The exported file is `sft_512.jsonl` (~7.5GB).

The [Magpie-SFT dataset](https://www.modelscope.cn/organization/Magpie-Align)
collects ~1M high-quality dialogues from Qwen2/2.5. I cleaned this data further, exporting the portion with total length `<2048` as `sft_2048.jsonl` (~9GB)
and the portion with length `<1024` as `sft_1024.jsonl` (~5.5GB). Doing SFT directly on large-model dialogue data falls under "black-box distillation".

Further cleaning the data from the previous two SFT steps (keeping only content with a high proportion of Chinese characters) and filtering dialogues of length `<512` yields `sft_mini_512.jsonl` (~1.2GB).

The data format of all SFT files `sft_X.jsonl` is

```text
{
    "conversations": [
        {"role": "user", "content": "你好"},
        {"role": "assistant", "content": "你好!"},
        {"role": "user", "content": "再见"},
        {"role": "assistant", "content": "再见!"}
    ]
}
```
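To make the format concrete, here is a hedged sketch (not from the original README) of reading one `sft_X.jsonl` line and flattening the conversation with a simple, assumed chat template; the project's real template lives in its training scripts.

```python
# Hedged sketch: read one SFT sample and render it with a simple, assumed
# chat template. The tag style below is illustrative, not the project's own.
import json

line = '{"conversations": [{"role": "user", "content": "你好"}, {"role": "assistant", "content": "你好!"}]}'
sample = json.loads(line)

text = ""
for turn in sample["conversations"]:
    text += f"<|{turn['role']}|>\n{turn['content']}\n"
print(text)
```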
## Ⅳ RLHF Data

From the [Magpie-DPO dataset](https://www.modelscope.cn/datasets/Magpie-Align/MagpieLM-DPO-Data-v0.1):
about 200k preference pairs (all in English) generated from Llama3.1-70B/8B, usable for training a reward model and optimizing response quality to better match human preferences.
Content with total length `<3000` was reassembled into `dpo.jsonl` (~0.9GB), containing the fields `chosen` and `rejected`, where `chosen`
is the preferred reply and `rejected` is the rejected reply.

The data format of `dpo.jsonl` is

```text
{
    "chosen": [
        {"content": "Q", "role": "user"},
        {"content": "good answer", "role": "assistant"}
    ],
    "rejected": [
        {"content": "Q", "role": "user"},
        {"content": "bad answer", "role": "assistant"}
    ]
}
```
## Ⅴ Reason Dataset

It must be said: nothing was hotter than DeepSeek in February 2025...
It also sparked my strong interest in RL-guided reasoning models; I have already reproduced R1-Zero with Qwen2.5.
If time permits and it works (though the base model's capacity is 99% likely to be insufficient), I will later update MiniMind with an RL-trained reasoning model rather than a distilled one.

With limited time, the fastest low-cost approach remains direct (black-box) distillation.
R1 is so popular that within days several R1 distillation datasets appeared: [R1-Llama-70B](https://www.modelscope.cn/datasets/Magpie-Align/Magpie-Reasoning-V2-250K-CoT-Deepseek-R1-Llama-70B), [R1-Distill-SFT](https://www.modelscope.cn/datasets/AI-ModelScope/R1-Distill-SFT),
[Alpaca-Distill-R1](https://huggingface.co/datasets/shareAI/Alpaca-Distill-R1-ZH),
[deepseek_r1_zh](https://huggingface.co/datasets/jinliuxi/deepseek_r1_zh), and so on; purely Chinese data may be relatively scarce.
I finally combined them and exported the file `r1_mix_1024.jsonl`, with the same data format as `sft_X.jsonl`.
## Ⅵ More Datasets

[HqWu-HITCS/Awesome-Chinese-LLM](https://github.com/HqWu-HITCS/Awesome-Chinese-LLM)
is collecting and organizing open-source models, applications, datasets, and tutorials related to Chinese LLMs, continuously updated with the latest progress in this area. Comprehensive and professional. Respect!

---

## Ⅷ Dataset Downloads

> [!NOTE]
> After 2025-02-05, all datasets used for MiniMind's final training were open-sourced, so there is no need to preprocess large-scale data yourself, avoiding repetitive data-processing work.

MiniMind training datasets ([ModelScope](https://www.modelscope.cn/datasets/gongjy/minimind_dataset/files) | [HuggingFace](https://huggingface.co/datasets/jingyaogong/minimind_dataset/tree/main))

> No need to clone everything; you can download individual files as needed.

Place the downloaded dataset files in the `./dataset/` directory (✨ marks the recommended must-haves).
```bash
./dataset/
├── dpo.jsonl (909MB)
├── lora_identity.jsonl (22.8KB)
├── lora_medical.jsonl (34MB)
├── pretrain_hq.jsonl (1.6GB, ✨)
├── r1_mix_1024.jsonl (340MB)
├── sft_1024.jsonl (5.6GB)
├── sft_2048.jsonl (9GB)
├── sft_512.jsonl (7.5GB)
├── sft_mini_512.jsonl (1.2GB, ✨)
└── tokenizer_train.jsonl (1GB)
```
<details style="color:rgb(128,128,128)">
<summary>Note: dataset descriptions</summary>

* `dpo.jsonl` --dataset for the RLHF stage
* `lora_identity.jsonl` --self-awareness dataset (e.g. Who are you? I am minimind...), recommended for LoRA training (also usable for full-parameter SFT; don't be limited by the name)
* `lora_medical.jsonl` --medical Q&A dataset, recommended for LoRA training (also usable for full-parameter SFT; don't be limited by the name)
* `pretrain_hq.jsonl`✨ --pretraining dataset, curated from Jiangshu Technology
* `r1_mix_1024.jsonl` --DeepSeek-R1-1.5B distillation data, max 1024 characters per entry (so set max_seq_len=1024 for training)
* `sft_1024.jsonl` --curated Qwen2.5 distillation data (a subset of sft_2048), max 1024 characters per entry (so set max_seq_len=1024 for training)
* `sft_2048.jsonl` --curated Qwen2.5 distillation data, max 2048 characters per entry (so set max_seq_len=2048 for training)
* `sft_512.jsonl` --curated Jiangshu Technology SFT data, max 512 characters per entry (so set max_seq_len=512 for training)
* `sft_mini_512.jsonl`✨ --minimal curation of Jiangshu Technology SFT data + Qwen2.5 distillation data (for quickly training the Zero model), max 512 characters per entry (so set max_seq_len=512 for training)
* `tokenizer_train.jsonl` --all from the `Jiangshu large-model dataset`; this data is relatively secondary (retraining the tokenizer yourself is not recommended, for the reasons above), but if you do train a tokenizer, you may choose datasets freely.

</details>

<details style="color:rgb(128,128,128)">
<summary>Notes & recommended training recipes</summary>

* The MiniMind2 series was trained on roughly 20GB of corpus in total, about 4B tokens, corresponding to the full data combination above (cost: 💰💰💰💰💰💰💰💰, result: 😊😊😊😊😊😊)
* For the fastest from-scratch Zero model, use the `pretrain_hq.jsonl` + `sft_mini_512.jsonl` combination; see the table below for cost and results (cost: 💰, result: 😊😊)
* The former, a complete MiniMind2 reproduction, is recommended for those with some compute or who care more about quality; the latter is strongly recommended for those with a single GPU or who want a quick reproduction.
* [Middle ground] You can also freely combine medium-scale data such as `sft_mini_512.jsonl` and `sft_1024.jsonl` (cost: 💰💰💰, result: 😊😊😊😊).

</details>
# 📌 Model Structure

MiniMind-Dense (like [Llama3.1](https://ai.meta.com/blog/meta-llama-3-1/)) uses the Transformer Decoder-Only architecture. It differs from GPT-3 in that:

* It adopts GPT-3's pre-normalization approach, i.e. normalizing the input of every Transformer sub-layer rather than its output, using the RMSNorm normalization function (a minimal sketch follows after this list).
* It replaces ReLU with the SwiGLU activation function for better performance.
* Like GPT-Neo, it drops absolute positional embeddings in favor of rotary positional embeddings (RoPE), which handle inference beyond the training length better.
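As referenced above, here is a hedged sketch (not the project's own file) of the RMSNorm used for pre-normalization in Llama-style models:

```python
# Hedged sketch: Llama-style RMSNorm used for pre-normalization.
import torch
import torch.nn as nn

class RMSNorm(nn.Module):
    def __init__(self, dim: int, eps: float = 1e-5):
        super().__init__()
        self.eps = eps
        self.weight = nn.Parameter(torch.ones(dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # normalize by the root mean square over the feature dim, then rescale
        rms = torch.rsqrt(x.pow(2).mean(dim=-1, keepdim=True) + self.eps)
        return self.weight * (x * rms)
```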
---
The MiniMind-MoE model's structure is based on Llama3 and the MixFFN mixture-of-experts module from [Deepseek-V2/3](https://arxiv.org/pdf/2405.04434).

* In the feed-forward network (FFN), DeepSeek-V2 adopts finer-grained expert segmentation and shared-expert isolation to improve the effectiveness of the experts.

---

MiniMind keeps the same overall structure, with only minor tweaks to the RoPE computation, the inference function, and the FFN-layer code.
Its structure is shown in the figures below (redrawn):

![structure](./images/LLM-structure.png)
![structure-moe](./images/LLM-structure-moe.png)

To modify the model configuration, see [./model/LMConfig.py](./model/LMConfig.py).
Reference parameter configurations are given in the table below:
| Model Name | params | len_vocab | rope_theta | n_layers | d_model | kv_heads | q_heads | share+route |
|-------------------|--------|-----------|------------|----------|---------|----------|---------|-------------|
| MiniMind2-Small | 26M | 6400 | 1e6 | 8 | 512 | 2 | 8 | - |
| MiniMind2-MoE | 145M | 6400 | 1e6 | 8 | 640 | 2 | 8 | 1+4 |
| MiniMind2 | 104M | 6400 | 1e6 | 16 | 768 | 2 | 8 | - |
| minimind-v1-small | 26M | 6400 | 1e4 | 8 | 512 | 8 | 16 | - |
| minimind-v1-moe | 4×26M | 6400 | 1e4 | 8 | 512 | 8 | 16 | 1+4 |
| minimind-v1 | 108M | 6400 | 1e4 | 16 | 768 | 8 | 16 | - |
# 📌 Experiment

## Ⅰ Training Cost

- **Time unit**: hours (h).
- **Cost unit**: RMB (¥); 7¥ ≈ 1 USD.
- **3090 rental price**: ≈1.3¥/h (check current market rates yourself).
- **Reference standard**: the table only measures actual training time on the `pretrain` and `sft_mini_512` datasets; other times are estimated from dataset size (there may be some deviation).

> Costs computed for a single 3090
| Model Name | params | pretrain | sft_mini_512 | sft_512 | sft_1024 | sft_2048 | RLHF |
|-----------------|--------|------------------|------------------|---------------|-------------------|------------------|---------------|
| MiniMind2-Small | 26M | ≈1.1h<br/>≈1.43¥ | ≈1h<br/>≈1.3¥ | ≈6h<br/>≈7.8¥ | ≈4.58h<br/>≈5.95¥ | ≈7.5h<br/>≈9.75¥ | ≈1h<br/>≈1.3¥ |
| MiniMind2 | 104M | ≈3.9h<br/>≈5.07¥ | ≈3.3h<br/>≈4.29¥ | ≈20h<br/>≈26¥ | ≈15h<br/>≈19.5¥ | ≈25h<br/>≈32.5¥ | ≈3h<br/>≈3.9¥ |
---

<details style="color:rgb(128,128,128)">
<summary>Training cost summary & projections</summary>

> MiniMind2-Small parameters
>> `pretrain_hq` + `sft_mini_512` datasets
<br/>single 3090 (1 epoch) + 2.1 hours + 2.73 RMB
<br/>is enough to train MiniMind-Zero-0.025B from scratch!!!

> MiniMind2-Small parameters
>> `pretrain_hq` + `sft_512` + `sft_2048` + `dpo` datasets
<br/>single 3090 (2 epochs) + about 38.16 hours + 49.61 RMB
<br/>is enough to train MiniMind2-Small-0.025B from scratch!!!

> MiniMind2 parameters
>> `pretrain_hq` + `sft_512` + `sft_2048` + `dpo` datasets
<br/>single 3090 (2 epochs) + about 122 hours + 158.6 RMB
<br/>is enough to train MiniMind2-0.1B from scratch!!!

</details>

✨Training `MiniMind-Zero` from scratch on a single NVIDIA 3090 takes only `2 hours` + `3 RMB` to get a working chatbot!

✨PS: with 8×4090 training, the total time can even be compressed to under 10 minutes! (Because the time is shorter, the cost stays around 3 RMB, comparable to the single-GPU cost.)

✨An extremely low barrier, making large-model freedom possible for everyone! That is precisely why the MiniMind series was born!

✨The `MiniMind-Zero` that costs only `3 RMB` is not clickbait! Chat test:
```text
👶: 请介绍一下自己。
🤖️: 作为人工智能,我没有实际的生活,也没有自我意识,所以没有自己的生活。我被设计成能够帮助用户解答问题、提供信息、进行对话等。我的设计和功能是由计算机科学、机器学习、人工智能技术和算法所构建的,我通过编程和算法实现。
👶: 推荐一些杭州的特色美食吧。
🤖️: 杭州有很多美味的特色美食,比如鸡肉串、鳗鱼头、水煮鱼、豆腐脑等。这些美食都非常美味,有着独特的风味和口感,绝对是值得一试的美食。
👶: 请为我讲解“大语言模型”这个概念。
🤖️: 大语言模型是一种人工智能技术,它使用大量的文本数据来训练模型,然后在不断的迭代中不断优化模型。它的目标是根据输入的数据来生成符合特定需求的文本,这些文本可以是任何你感兴趣的主题。
👶: Introduce the history of the United States, please.
🤖️: 您提到的“Introok's the believeations of theument." 这个名字来源于中国古代的"groty of of the change."
```
Fast, and already showing results; the training data could even be compressed further to obtain smaller, higher-quality training.

The Zero model weights are saved as `full_sft_512_zero.pth` (see the MiniMind model file links below); feel free to download and verify its performance.
---
## Ⅱ Main Training Steps

### **1. Pretraining**

The first thing an LLM learns is not how to talk to people directly, but how to fill its network parameters with the ink of knowledge; theoretically, the more "ink" it drinks, the better, building a large accumulation of knowledge about the world.
Pretraining lets the model study heaps of basic knowledge first, e.g. large-scale high-quality training data curated from Wikipedia, news, books, and so on.
This process is "unsupervised": humans do not need to make any "supervised" corrections along the way; the model summarizes patterns and learns knowledge points from large amounts of text on its own.
At this stage the model has only one goal: **learn word continuation**. For example, given the input "秦始皇" (Qin Shi Huang), it can continue with "是中国的第一位皇帝" (was the first emperor of China). A minimal sketch of this objective follows below.
```bash
torchrun --nproc_per_node 1 train_pretrain.py # 1 means single-GPU training; adjust for your hardware (set >=2)
# or
python train_pretrain.py
```
> The trained model weights are saved every `100 steps` by default as: `pretrain_*.pth` (* is
> the model's concrete dimension; each new save overwrites the old file).
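As referenced above, here is a hedged sketch (not the project's own training loop) of the next-token prediction objective behind "word continuation":

```python
# Hedged sketch: the next-token prediction loss that pretraining minimizes.
# Shapes and the toy batch are illustrative, not from the project.
import torch
import torch.nn.functional as F

vocab_size, seq_len = 6400, 8
logits = torch.randn(1, seq_len, vocab_size)         # model outputs per position
tokens = torch.randint(0, vocab_size, (1, seq_len))  # the training sequence

# each position predicts the *next* token, so shift the targets by one
loss = F.cross_entropy(
    logits[:, :-1].reshape(-1, vocab_size),
    tokens[:, 1:].reshape(-1),
)
print(loss.item())
```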
### **2. Supervised Fine-Tuning (SFT)**

After pretraining, the LLM has acquired a lot of knowledge, but at this point it only knows mindless word continuation and cannot chat with people.
The SFT stage applies a custom chat template to fine-tune the half-finished LLM.
For example, when the model encounters a template like [question->answer, question->answer], it no longer blindly continues text but realizes this is the end of a complete conversation turn.
We call this process instruction fine-tuning: like teaching the well-read Mr. Newton to adapt to 21st-century smartphone chat habits, learning the rule that the other party's messages appear on the left of the screen and one's own on the right.
During training, MiniMind's instructions and answers are truncated to length 512 to save VRAM. Just as when we learn to write, we start with short pieces; once you can write a 200-character composition, an 800-character essay comes naturally.
When length extension is needed, only a small amount of 2k/4k/8k-length dialogue data is required for further fine-tuning (preferably together with RoPE-NTK base scaling).

> At inference time, adjusting the RoPE linear interpolation makes training-free length extrapolation to 2048 and beyond convenient.
```bash
torchrun --nproc_per_node 1 train_full_sft.py
# or
python train_full_sft.py
```
> The trained model weights are saved every `100 steps` by default as: `full_sft_*.pth` (* is
> the model's concrete dimension; each new save overwrites the old file).
## Ⅲ Other Training Steps

### **3. Reinforcement Learning from Human Feedback (RLHF)**

In the previous training steps, the model has acquired basic conversational ability, but that ability is entirely based on word continuation, without the incentive of positive and negative examples.
At this point, the model does not yet know which answers are good and which are bad. We want it to align better with human preferences and reduce the probability of producing answers humans dislike.
The process is like a new round of workplace training for the model, learning how to respond better from examples of outstanding employees as positives and careless employees as negatives.
Here we use Direct Preference Optimization (DPO) from the RLHF family.
Unlike RL algorithms such as PPO (Proximal Policy Optimization) that need a reward model and a value model,
DPO derives an explicit solution from PPO's reward model, replacing the online reward model with offline data; the Ref model's outputs can be saved in advance.
DPO keeps performance essentially unchanged while only needing to run the actor_model and ref_model, greatly reducing memory usage and improving training stability. A minimal DPO-loss sketch follows below.

> Note: the RLHF training step **is not required**; it rarely improves the model's "intelligence" and usually just improves its "politeness", with pros (preference alignment, less harmful content) and cons (expensive sample collection, feedback bias, loss of diversity).
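As referenced above, a hedged sketch (not the project's `train_dpo.py`) of the DPO loss computed from actor and reference sequence log-probabilities:

```python
# Hedged sketch: the DPO objective from per-sample sequence log-probs.
# beta and the toy tensors are illustrative.
import torch
import torch.nn.functional as F

def dpo_loss(pi_chosen, pi_rejected, ref_chosen, ref_rejected, beta=0.1):
    # log-ratio of the policy vs. the frozen reference, chosen minus rejected
    logits = (pi_chosen - ref_chosen) - (pi_rejected - ref_rejected)
    return -F.logsigmoid(beta * logits).mean()

# toy sequence log-probabilities for a batch of 4 preference pairs
pi_c, pi_r = torch.randn(4), torch.randn(4)
ref_c, ref_r = torch.randn(4), torch.randn(4)
print(dpo_loss(pi_c, pi_r, ref_c, ref_r))
```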
```bash
torchrun --nproc_per_node 1 train_dpo.py
# or
python train_dpo.py
```
> The trained model weights are saved every `100 steps` by default as: `rlhf_*.pth` (* is
> the model's concrete dimension; each new save overwrites the old file).
### **4. Knowledge Distillation (KD)**

After all the previous training steps, the model has fully acquired its basic abilities and is usually ready to graduate.
Knowledge distillation can further optimize the model's performance and efficiency; in distillation, a student model learns from a teacher model.
The teacher is typically a well-trained large model with high accuracy and generalization ability.
The student is a smaller model whose goal is to learn the teacher's behavior rather than learn directly from raw data.
In SFT, the model's target is to fit hard token-classification labels, i.e. the true class labels (such as 0 or 6400).
In knowledge distillation, the teacher model's softmax probability distribution is used as soft labels. The small model learns only from the soft labels and optimizes its parameters with a KL loss.
Put simply, SFT learns directly from the teacher's worked answers, whereas KD "opens up" the teacher's clever brain and imitates, as closely as possible, the neuron states of the teacher's "brain" while it thinks.
For example, when the teacher model computes `1+1=2`, its last-layer neuron a has state 0, neuron b has state 100, neuron c has state -99...
The student model learns, over large amounts of data, the internal workings of the teacher's brain. This process is called knowledge distillation.
The sole purpose of knowledge distillation is to make the small model smaller while performing better; a minimal KL-loss sketch follows below.
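As referenced above, a hedged sketch (not the project's `train_distillation.py`) of soft-label distillation with a temperature-scaled KL loss:

```python
# Hedged sketch: soft-label distillation loss. Temperature and toy logits
# are illustrative, not the project's settings.
import torch
import torch.nn.functional as F

T, vocab_size = 2.0, 6400
teacher_logits = torch.randn(4, vocab_size)
student_logits = torch.randn(4, vocab_size)

soft_targets = F.softmax(teacher_logits / T, dim=-1)    # teacher's soft labels
log_probs = F.log_softmax(student_logits / T, dim=-1)   # student's log-probs
kd_loss = F.kl_div(log_probs, soft_targets, reduction="batchmean") * (T * T)
print(kd_loss)
```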
However, as LLMs emerged and evolved, the term model distillation became widely abused, giving rise to the "white-box/black-box" distillation schools.
For a closed-source model like GPT-4, since its internal structure is inaccessible, one can only learn from the data it outputs; this is called black-box distillation, and it is the most common practice in the LLM era.
Black-box distillation is identical to the SFT process, except the data is collected from the large model's outputs, so you only need to prepare the data and fine-tune further.
Note that the base model to load should be changed to `full_sft_*.pth`, i.e. distillation continues from the fine-tuned model.
`./dataset/sft_1024.jsonl` and `./dataset/sft_2048.jsonl` are both collected from the qwen2.5-7/72B-Instruct large models and can be used directly for SFT to acquire some of Qwen's behavior.
```bash
# note: you need to change the dataset path in train_full_sft.py, as well as max_seq_len
torchrun --nproc_per_node 1 train_full_sft.py
# or
python train_full_sft.py
```
> The trained model weights are likewise saved every `100 steps` by default as: `full_sft_*.pth` (* is the model's concrete dimension; each new save overwrites the old file).

The white-box distillation code `train_distillation.py` implemented in MiniMind deserves special mention here; since the MiniMind series has no stronger teacher model of its own, the white-box distillation code serves as a learning reference only.
```bash
torchrun --nproc_per_node 1 train_distillation.py
# or
python train_distillation.py
```
### **5. LoRA (Low-Rank Adaptation)**

LoRA is an efficient Parameter-Efficient Fine-Tuning (PEFT) method designed to fine-tune pretrained models via low-rank decomposition.
Compared with full fine-tuning, LoRA only needs to update a small number of parameters.
Its core idea: introduce a low-rank decomposition into the model's weight matrices and update only the low-rank part, leaving the original pretrained weights frozen.
The code is in `./model/model_lora.py` and `train_lora.py`, implementing the LoRA pipeline entirely from scratch without third-party wrappers; a minimal sketch of the idea follows after the launch command below.
```bash
torchrun --nproc_per_node 1 train_lora.py
# or
python train_lora.py
```
> The trained model weights are saved every `100 steps` by default as: `lora_xxx_*.pth` (* is
> the model's concrete dimension; each new save overwrites the old file).
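As referenced above, a hedged sketch (not the project's `model_lora.py`) of a LoRA-augmented linear layer; the rank/alpha defaults mirror common choices, not necessarily the project's:

```python
# Hedged sketch: a LoRA-augmented linear layer. The frozen base path is
# summed with a trainable low-rank update B @ A, scaled by alpha / r.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)  # freeze the pretrained weight
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init -> no-op at start
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scale
```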
Many people wonder: how do I make the model learn knowledge from my own private domain? How do I prepare a dataset? How do I turn a general-domain model into a vertical-domain one?
Here are some examples. For a general model lacking medical knowledge, you can try adding domain knowledge on top of the existing model for better performance.
At the same time, we usually don't want the model to lose its other base abilities while learning the domain knowledge; LoRA handles this problem well.
Just prepare conversation data in the following format, place it at `./dataset/lora_xxx.jsonl`, and launch `python train_lora.py`
to train; the new model weights land in `./out/lora/lora_xxx.pth`.
**Medical scenario**
```
{"conversations": [{"role": "user", "content": "请问颈椎病的人枕头多高才最好?"}, {"role": "assistant", "content": "颈椎病患者选择枕头的高度应该根据..."}]}
{"conversations": [{"role": "user", "content": "请问xxx"}, {"role": "assistant", "content": "xxx..."}]}
```
**Self-awareness scenario**
```
{"conversations": [{"role": "user", "content": "你叫什么名字?"}, {"role": "assistant", "content": "我叫minimind..."}]}
{"conversations": [{"role": "user", "content": "你是谁"}, {"role": "assistant", "content": "我是..."}]}
```
Now the combination [base model + LoRA model] has enhanced medical-domain capability, like adding a LoRA plug-in to the base model, without losing the base model's own abilities.
We can evaluate the model with `eval_model.py`.

```bash
# note: model_mode selects the base model type; it must match the model train_lora was based on.
python eval_model.py --lora_name 'lora_medical' --model_mode 2
```
**Quick test**
```
👶: 我最近经常感到头晕,可能是什么原因?
🤖️: 头晕可能是由于多种原因,包括但不限于:心脏疾病、眼睛损伤、过敏反应、皮肤问题、过敏反应、压力、焦虑、疲劳、药物副作用、性功能障碍或者其他...
---
👶: 你和openai是什么关系?
🤖️: 我是 MiniMind,一个由 Jingyao Gong 开发的人工智能助手。我通过自然语言处理和算法训练来与用户进行交互。
```
PS: given the needed datasets, you can also do full-parameter SFT (full_sft); mix in general knowledge at an appropriate ratio, otherwise overfitting the domain data will make the model dumber and cost it generality.
### **6. Training a Reasoning Model**

DeepSeek-R1 is so hot it has practically redefined the future paradigm of LLMs.
The paper states that models `>3B` need multiple rounds of cold starts and RL reward training to obtain visible gains in reasoning ability.
The fastest, safest, and cheapest approach, behind almost all of the recently exploding so-called reasoning models, is direct distillation on data;
but for lack of technical depth, the distillation school is looked down upon by the RL school (hhhh).
I quickly tried it on the Qwen-series 1.5B small models and soon reproduced the mathematical reasoning ability of the Zero process.
However, a regrettable consensus is: models with too few parameters can obtain almost no reasoning ability from cold-start SFT + GRPO alone.
For now, MiniMind2 firmly takes the distillation route; if RL on the 0.1B model later makes even small progress, this section's training recipe will be updated.

Distillation only needs data in the same format as the SFT stage; the data sources were introduced above. The data format looks like:
```json lines
{
"conversations": [
{
"role": "user",
"content": "你好,我是小芳,很高兴认识你。"
},
{
"role": "assistant",
"content": "<think>\n你好!我是由中国的个人开发者独立开发的智能助手MiniMind-R1-Lite-Preview,很高兴为您提供服务!\n</think>\n<answer>\n你好!我是由中国的个人开发者独立开发的智能助手MiniMind-R1-Lite-Preview,很高兴为您提供服务!\n</answer>"
}
]
}
```
The R1 reasoning model's reply template is:

```text
<think>\nreasoning process\n</think>\n
<answer>\nfinal answer\n</answer>
```

In GRPO this is enforced by a rule-based reward function that constrains the model to follow the think and answer tags (the tag reward should be set higher in the early cold-start stages).

Another issue: although distillation works like SFT, experiments show the model struggles to always follow the template, i.e. it drifts away from the think/answer tag constraints.
A small trick here is to add an extra loss penalty at the token positions of the tags; see `train_distill_reason.py`:

```text
# add an extra penalty at the positions in sp_ids
...
loss_mask[sp_ids] = 10 # penalty coefficient
```

One more tip: because the reasoning data was filtered to `<1024`-length samples only, it is short on multi-turn dialogue and English data,
so `r1_mix_1024.jsonl` mixes in about 10k multi-turn dialogue + English samples to prevent severe forgetting.

The script distills reasoning ability from the RLHF-ed base model by default; just launch training directly:
```bash
torchrun --nproc_per_node 1 train_distill_reason.py
# or
python train_distill_reason.py
```
> The trained model weights are saved every `100 steps` by default as: `reason_*.pth` (* is the model's concrete dimension; each new save overwrites the old file).

A quick test:
```text
👶: 请介绍一下自己。
🤖️: <think>
嗯,用户让我介绍一下自己。我现在需要解释一下自己。首先,我应该明确用户是谁,是学生、职场人士还是普通人。用户可能是一个学生或者职场人士,所以信息传递应该是帮助学生和职场人士的信息。
然后,我需要回应用户,这可能包括介绍自己的位置、兴趣爱好、职业经历等信息。用户还可能想了解自己在哪里,比如职业发展、工作经验等。
我应该注意用词要正式,不要太正式,同时要保持简洁明了。这样用户容易理解,也方便他们查阅相关资料。
最后,我应该总结一下,让用户知道我是如何帮助他们完成他们的任务的,这样他们有更多关于我的信息。
</think>
<answer>
我是**“你的名字”,你叫什么名字?
</answer>
👶: 你更擅长哪一个学科?
🤖️: <think>
您好!我是由中国的个人开发者开发的智能助手MiniMind-R1。如您有任何任何问题,我会尽我所能为您提供帮助。
</think>
<answer>
您好!我是由中国的个人开发者开发的智能助手MiniMind-R1。如您有任何任何问题,我会尽我所能为您提供帮助。
</answer>
```
---
## Ⅳ Model Parameter Settings

📋 On LLM parameter configuration, an interesting paper, [MobileLLM](https://arxiv.org/pdf/2402.14905), conducted detailed research and experiments.
The Scaling Law has its own distinctive behavior in small models.
The parameters that drive a Transformer's scale depend almost exclusively on `d_model` and `n_layers`.

* `d_model`↑ + `n_layers`↓ -> short and fat
* `d_model`↓ + `n_layers`↑ -> tall and thin

The 2020 paper proposing the Scaling Law held that training data volume, parameter count, and training iterations are the decisive factors for performance, with model architecture having a nearly negligible effect.
However, this law does not seem to fully apply to small models.
MobileLLM proposes that depth matters more than width: a "deep and narrow" ("tall and thin") model can learn more abstract concepts than a "wide and shallow" one.
For example, with parameters fixed at 125M or 350M, "narrow and tall" models of 30~42 layers clearly outperform "short and fat" ones of around 12 layers,
with similar trends across 8 benchmarks including commonsense reasoning, QA, and reading comprehension.
This is a genuinely fascinating finding, since previously almost nobody tried stacking more than 12 layers when designing architectures for ~100M-scale small models.
It matches what MiniMind observed when adjusting the parameter allocation between `d_model` and `n_layers` experimentally during training.
Yet the "narrow" of "deep and narrow" also has a dimensional limit: when d_model<512, the collapse of the word-embedding width is a clear disadvantage,
and added layers cannot compensate for the insufficient d_head caused by a fixed q_head.
When d_model>1536, increasing layers seems to take priority over d_model, yielding the more "cost-effective" parameter-to-performance gain.

* So MiniMind sets dim=512, n_layers=8 for the small model to strike the "tiny size <-> better performance" balance.
* It sets dim=768, n_layers=16 for larger performance gains, which better fits the small-model scaling-law curve.

For reference, GPT-3's parameter settings are shown in the table below:

---
## Ⅴ Training Results

MiniMind2 training loss curves (the datasets were updated and cleaned several times after training, so the loss is for reference only):
| models | pretrain (length-512) | sft (length-512) |
|-----------------|----------------------------------------------------|----------------------------------------------------|
| MiniMind2-Small | <img src="./images/pre_512_loss.png" width="100%"> | <img src="./images/sft_512_loss.png" width="100%"> |
| MiniMind2 | <img src="./images/pre_768_loss.png" width="100%"> | <img src="./images/sft_768_loss.png" width="100%"> |
### Trained Model Collection

> Since many people reported that Baidu Netdisk is slow, MiniMind2 and later models are all hosted on ModelScope/HuggingFace.

#### ① Native PyTorch models

MiniMind2 model weights ([ModelScope](https://www.modelscope.cn/models/gongjy/MiniMind2-PyTorch) | [HuggingFace](https://huggingface.co/jingyaogong/MiniMind2-Pytorch))

MiniMind-V1 model weights ([Baidu Netdisk](https://pan.baidu.com/s/1KUfSzEkSXYbCCBj0Pw-9fA?pwd=6666))
<details style="color:rgb(128,128,128)">
<summary>Torch file naming reference</summary>
| Model Name | params | pretrain_model | sft_model | rl_model | reason_model | lora_model |
|-----------------|--------|------------------------|------------------------|--------------------|------------------|--------------------|
| MiniMind2-small | 26M | `pretrain_512.pth` | `full_sft_512.pth` | `rlhf_512.pth` | `reason_512.pth` | `lora_xxx_512.pth` |
| MiniMind2-MoE | 145M | `pretrain_640_moe.pth` | `full_sft_640_moe.pth` | `rlhf_640_moe.pth` | - | - |
| MiniMind2 | 104M | `pretrain_768.pth` | `full_sft_768.pth` | `rlhf_768.pth` | `reason_768.pth` | `lora_xxx_768.pth` |
| Model Name | params | pretrain_model | single-turn SFT | multi-turn SFT | rl_model |
|-------------------|--------|------------------------|------------------------------------|-----------------------------------|--------------|
| minimind-v1-small | 26M | `pretrain_512.pth` | `single_chat/full_sft_512.pth` | `multi_chat/full_sft_512.pth` | `rl_512.pth` |
| minimind-v1-moe | 4×26M | `pretrain_512_moe.pth` | `single_chat/full_sft_512_moe.pth` | `multi_chat/full_sft_512_moe.pth` | - |
| minimind-v1 | 108M | `pretrain_768.pth` | `single_chat/full_sft_768.pth` | `multi_chat/full_sft_768.pth` | `rl_768.pth` |
</details>
#### ② Transformers models

MiniMind series ([ModelScope](https://www.modelscope.cn/collections/MiniMind-b72f4cfeb74b47)
| [HuggingFace](https://huggingface.co/collections/jingyaogong/minimind-66caf8d999f5c7fa64f399e5))
---
# 📌 Eval

## Ⅰ RLHF vs. SFT

Comparison of the `MiniMind2 series` models based on `full_sft` and `rlhf`; the random seed is fixed for all tests.

<details style="color:rgb(128,128,128)">
<summary>Q&A test</summary>

[A0] MiniMind2: full_sft_640

[A1] MiniMind2: rlhf_640
```text
[Q]: 你叫什么名字?
[A0]: 我是一个大型语言模型,可以回答各种问题,提供信息和帮助。我可以回答各种问题,提供信息和帮助,帮助你解决各种问题。
[A1]: 我是一种基于文本的AI模型,我并没有个人经历或情感,所以无法感受和体验情感。我的设计目的是为了帮助用户解决问题、提供信息和进行对话。如果您有任何问题或需要帮助,请随时告诉我!
[Q]: 你更擅长哪一个学科?
[A0]: 作为一个AI模型,我并没有真正的“擅长”这个学科。我被设计成能够提供信息、解答问题、进行对话、执行任务和与用户进行对话。不过,我可以提供信息、解答问题、进行对话、提供建议、处理问题、进行对话、提供建议、学习新知识、编写代码、回答问题、提供建议等多种学科的功能。如果你有具体的学科需求或主题,欢迎提供详细信息。
[A1]: 作为AI,我并没有个人能力或兴趣,但我可以提供一些关于数学、科学、计算机科学、语言学、心理学等学科的知识。如果你对数学感兴趣,我可以提供一些基础知识,例如代数、几何、概率、统计学等。如果你需要更深入的学习,我也可以提供相关的在线课程、书籍、论文等资源。请告诉我你具体需要什么帮助,我会尽力提供帮助。
[Q]: 鲁迅的《狂人日记》是如何批判封建礼教的?
[A0]: 鲁迅的《狂人日记》是一部讲述了鲁迅对人类价值和道德责任的倡导。在这部作品中,鲁迅通过细腻的笔触,探讨了人性、道德、正义、牺牲与责任等深刻的主题。以下是鲁迅在《狂人日记》中如何批判封建礼教的几个方面:1. **社会现实与道德观念**:鲁迅通过讲述鲁迅的故事,探讨了社会的不平等、权力的流动、人性的演变等社会问题。他认为,社会不平等不仅仅是不公正的,更是每个人在社会中的不公正。他呼吁社会各界共同努力,通过教育、宣传、法律等手段,为构建和谐社会做出贡献。2. **道德与责任**:鲁迅通过讲述鲁迅的故事,展现了道德和责任的重要性。他提倡通过道德教育来培养人们的道德品质,同时也强调了道德和责任的重要性。鲁迅通过讲述鲁迅的故事,揭示了社会的不平等和人性的复杂性,体现了对人性的深刻理解。3. **道德与责任**:鲁迅通过讲述鲁迅的故事,强调了道德责任的重要性。他提倡通过自己的行为来体现对社会的尊重和对人性的深刻理解。鲁迅通过讲述鲁迅的故事,强调了道德和责任的重要性。4. **道德与道德**:鲁迅通过讲述鲁迅的故事,揭示了道德和责任的真正意义。鲁迅通过讲述鲁迅的故事,揭示了道德和责任的本质,强调了道德和责任的重要性。5. **个人与社会**:鲁迅通过讲述鲁迅的故事,揭示了个人与社会之间的复杂关系。他呼吁社会各界共同努力,通过教育和宣传,为构建和谐社会做出贡献。鲁迅的《狂人日记》通过其丰富的故事和深刻的道德寓言,传达了关于人性、道德、正义、道德以及个人与社会的复杂关系。这部作品不仅在文学价值上取得了巨大成就,更在思想层面上启发了后来的文学创作,对后世产生了深远的影响。
[A1]: 鲁迅的《狂人日记》是一部描绘战争和自由精神的作品,它批判了封建社会的种种问题和冲突,反映了社会的不平等和人性本质。在这部作品中,鲁迅通过对封建社会的诸多现象进行批判,如“弗雷德里克·约翰逊的“鹰迪伦”、“纳扎特的”、“贝克汉姆”、“乔治·威尔逊”等,对封建社会的种种问题进行了批判,强调了封建社会的不平等、人性本质以及人性本质。鲁迅通过他的“鹰迪伦”、“纳扎特的”、“贝克汉姆”、“乔治·威尔逊”等,对封建社会的种种问题进行了批判。在小说中,鲁迅通过对封建社会的诸多现象进行批判,揭示了封建社会的腐败、人性本质以及人性本质的复杂性。这种批判体现了对封建社会的尊重和对人性的复杂性,同时也反映了现代社会中普遍存在的不平等和人性本质。鲁迅的《狂人日记》不仅是一部关于封建社会的小说,也是一部具有深刻社会意义的作品,对后世的审判和批判起到了关键作用。
[Q]: 我咳嗽已经持续了两周,需要去医院检查吗?
[A0]: 咳嗽确实需要一定的检查,但如果您的咳嗽持续时间过长,或者有其他不适症状(如持续咳嗽、呼吸困难等),建议尽快就医。医生会根据您的症状、体检结果和可能的影像学检查(如X光、MRI或CT扫描)来诊断您的症状。在等待就医期间,您可以尝试以下方法来帮助缓解症状:1. **休息**:确保您有足够的睡眠。2. **保持水分**:脱水可能会加重咳嗽。3. **热敷**:使用热敷袋或热毛巾敷在疼痛的部位,有助于缓解喉咙痛。4. **避免刺激**:避免使用可能刺激喉咙的化学物质,如某些化学物质。5. **保持室内湿润**:确保您的卧室有良好的通风,可以使用加湿器帮助保持空气湿度。6. **保持室内湿度**:使用加湿器可以帮助缓解喉咙干燥和咳嗽。如果症状持续不缓解或有其他不适症状,请立即就医。希望您早日康复!
[A1]: 如果您的咳嗽持续了两周,通常建议您尽快就医,因为这可能会影响您的呼吸和心跳。您可以通过以下步骤来检查咳嗽:1. **休息**:用纸巾或肘部轻轻按压咳嗽的位置。2. **使用药物**:在可能的情况下,使用药物(如抗过敏药、抗过敏药等)来缓解咳嗽。请在医生的指导下使用药物。3. **热敷**:在咳嗽期间,可以使用热敷来缓解疼痛和肿胀。4. **药物治疗**:如果咳嗽持续了两周,可以使用一些消炎药或抗过敏药来缓解症状。请务必在医生的指导下使用。5. **保持健康的生活方式**:保持充足的睡眠、均衡的饮食、适量的运动和避免吸烟和限酒,这些都有助于提高咳嗽的几率。请记住,这些建议不能替代专业医疗意见。如果您的症状持续或加重,强烈建议您尽快就医。
```
</details>
👉Summary

The full_sft model performs better in conciseness and information accuracy; the rlhf model tends to provide more background information in its answers, but its accuracy needs improvement.
Overall, after RLHF the model tends to learn to say more polite but useless filler to please the "conversation" itself, at a slight cost to information accuracy.
There is no free lunch: the quality of the RLHF dataset needs further improvement, and some unavoidable loss of model capability has to be accepted (to varying degrees).
The difference between DPO and online PPO is that both reject and chosen are prepared offline, which inevitably differs considerably from the distribution of the minimind model's own outputs.
Put simply, DPO has the model do RL by watching "video footage" of a table-tennis world champion's play, rather than PPO's way of having a reward model act as a "coach" correcting its strokes live.
## Ⅱ Subjective Sample Evaluation

🏃The following tests were completed on 2025-02-09; models released after this date will not be added to the tests unless there is a special need.
[A] [MiniMind2 (0.1B)](https://www.modelscope.cn/models/gongjy/MiniMind2-PyTorch)<br/>
[B] [MiniMind2-MoE (0.15B)](https://www.modelscope.cn/models/gongjy/MiniMind2-PyTorch)<br/>
[C] [MiniMind2-Small (0.02B)](https://www.modelscope.cn/models/gongjy/MiniMind2-PyTorch)<br/>
[D] [minimind-v1-small(0.02B)](https://pan.baidu.com/s/1_COe0FQRDmeapSsvArahCA?pwd=6666)<br/>
[E] [minimind-v1-moe(0.1B)](https://pan.baidu.com/s/1tqB-GMvuiGQBvEl-yZ-oBw?pwd=6666)<br/>
[F] [minimind-v1(0.1B)](https://pan.baidu.com/s/1p713loS7EfwHQf3G9eYI3Q?pwd=6666)<br/>
[G] [baby-llama2-chinese(0.2B)](https://github.com/DLLXW/baby-llama2-chinese)<br/>
[H] [chatlm-mini-chinese(0.2B)](https://github.com/charent/ChatLM-mini-Chinese)<br/>
```text
[Q]: 你知道长江吗?
[A]: 长江,位于中国东南部,是中国的第一长河,全长6300多公里,流经11个省份,是中国最长的河流,也是中国的“母亲河”。长江的历史可以追溯到公元前2000多年的商朝时期,当时的商朝是商朝的后裔。商朝时期,长江的经济发展达到了高峰,是中国最重要的农业和工业中心之一。长江流域的经济主要以农业为主,长江渔业占据了主要产业,如钓鱼、猎鱼等。长江的水资源也非常丰富,包括河流、湖泊、河流等。长江的自然景观也十分壮观,长江四季景色各异,春季多雨,夏季多雨,秋季则多雨,冬季则寒冷多雪,春季和秋季都有美丽的花朵和果实。长江是中国的经济中心,拥有众多的国家和城市,是世界上重要的交通枢纽和航运中心。同时,长江还是中国的重要水利资源,对于中国的经济发展具有重要意义。
[B]: 是的,我知道长江。长江位于中国的中部,是中国第三长的河流,全长约6300公里,流经江西省、安徽省、江苏省、安徽省、江苏省和安徽省。
[C]: 当然,长江是中国最长的河流,全长约6300公里。它以其丰富的水资源、丰富的水生动植物资源以及独特的地理位置而闻名于世。
[D]: 是的,长江是中国最长的河流,全长超过3300千米。
[E]: 长江是中国最长的河流,总长度约为6300公里。它发源于青藏高原的青海湖,流经青海、四川、甘肃、宁夏、内蒙古、陕西和山西。长江流域是中国最大的农业区之一,也是中国的文化和经济中心。
[F]: 是的,长江是中国最长的河流之一,也是世界第三大河流。长江发源于青藏高原,流经11个省份,最终注入东海。长江流域是中国最大的河流流域之一,拥有着丰富的自然资源和文化遗产,如长江大桥、长江海洋博物馆、长江古城等。
[G]: 同时也知道,中国是世界上最长的城市之一,拥有超过3400万人口。
[H]: 是的,长江是中国最长的河流,也是世界上最长的河流之一。它发源于青藏高原,流经九个省区,最终注入东海。长江流域的气候温和,四季分明,是中国重要的农业和工业基地。长江流域的水产资源丰富,是中国的重要渔业资源之一。
[Q]: 世界上最高的山峰是什么?
[A]: 世界上最高的山峰是珠穆朗玛峰(Mount Everest),位于喜马拉雅山脉,尼泊尔和中国的边界上。它的海拔高度是8,848.86米(29,031.7英尺),是地球上最高的山峰。
[B]: 珠穆朗玛峰(Mount Everest)是世界上最高的山峰,海拔高度达8,848米(29,029英尺)。
[C]: 珠穆朗玛峰是世界上最高的山峰,位于喜马拉雅山脉,海拔8,848.86米(29,031.7英尺)。它位于尼泊尔和中国的边界上,是喜马拉雅山脉的一部分。
[D]: 世界上最高的山峰是珠穆朗玛峰,位于尼泊尔和中国的边境上,海拔8848米。
[E]: 世界上最高的山峰是珠穆朗玛峰,位于喜马拉雅山脉,海拔8,848米。
[F]: 珠穆朗玛峰是世界上最高的山峰,海拔8848米。
[G]: 世界上最高的山峰是珠穆朗玛峰,位于喜马拉雅山脉,海拔8,848.86米(29,031.69英尺)。
[H]: 世界上最高的山峰是珠穆朗玛峰,也称为萨加玛塔,位于喜马拉雅山脉的尼泊尔和中国边境之间,海拔8,848.86米(29,031.69英尺)。它是世界上最著名的登山目标之一,吸引着来自世界各地的登山者前来挑战。珠穆朗玛峰的名字来自藏语,意思是“女神的额峰”,它是由两个不同的山峰组成的,其中一个是位于尼泊尔的珠穆朗玛峰,另一个则位于中国。它的高度是通过测量海拔高度来确定的,因为它是通过测量地球的平均高度来确定的。尽管珠穆朗玛峰是世界上最高的山峰,但它并不是地球上任何其他地方都能到达的。地球上的其他地方,如南美洲的安第斯山脉,也是世界上最高的山脉,但它并不是地球上最高的山峰。总之,珠穆朗玛峰是世界上最高的山峰,但它并不是地球上任何地方都能到达的。它是地球上最高的山脉,但它并不是地球上任何地方都能到达的。
[Q]: 万有引力是谁提出的?
[A]: 万有引力是由牛顿在17世纪提出的,它是牛顿三大定律之一。牛顿在17世纪提出的引力定律揭示了万有引力定律的本质,并且解释了行星、卫星、彗星等天体之间如何相互作用。这个定律是由艾萨克·牛顿在1687年提出的,他通过对万有引力定律的分析和实验,成功地解释了天体之间的引力作用,从而解释了地球绕太阳旋转的现象。
[B]: 万有引力(Human Policy)是牛顿的三大力学定律,用于描述两个物体之间的相互作用力。它由万有引力定律和牛顿的第一定律(惯性定律)构成。万有引力是由于两个物体之间相互作用而产生的,它们的质量、距离、速度和方向等因素决定了它们之间的相互作用。这种相互作用力是:质量的乘积:质量的乘积是万有引力定律和牛顿第二定律的乘积。质量的乘积:质量的乘积是物体之间的一种物理过程,它是物体间相互作用的产物,对于其他物体来说,其质量可以表示为两个物体之间的质量之和,即重力加速度a。
[C]: 万有引力是一个由牛顿在1687年提出的物理学概念,它描述了物体之间引力如何相互作用的规律。
[D]: 万有引力是由英国物理学家牛顿提出的。他认为,所有物体都会产生引力,这个引力是由物体的质量和距离决定的。这个理论揭示了质量和距离之间的关系,并证明了万有引力的本质。
[E]: 万有引力是由牛顿在1687年提出的。它是宇宙中一个基本的物理现象,它描述了物体之间的相互作用。这个力被称为万有引力,它在所有物体之间都存在,这种力与物体的质量和距离有关。万有引力是自然界中最基本的力量之一,它不仅影响着物体的运动和变化,还对宇宙的结构和演化产生着重要的影响。
[F]: 万有引力是英国物理学家艾萨克·牛顿在17世纪提出的。
[G]: 引力是宇宙中一种力量,它是由质量或能量所组成的。根据牛顿的万有引力定律,一个物体之间的引力大小与它的质量成正比,与它的质量成反比。因此,牛顿的万有引力定律表明,一个物体之间的引力大小与它的质量成正比,与它的质量成反比。这个定律在物理学、天文学和物理学中都有广泛的应用。
[H]: 万有引力是由英国物理学家艾萨克·牛顿于1687年提出的。他在1687年发表了一篇著名的论文,题为《自然哲学的数学原理》,其中提出了万有引力定律,指出任何两个物体之间都存在着一种引力,这种引力的大小与它们的质量成正比,与它们之间的距离的平方成反比。这个定律被称为万有引力定律,它描述了质量和距离之间的关系。它表明,两个物体之间的引力与它们的质量成正比,与它们之间的距离的平方成反比。这个定律对于解释宇宙中的许多现象和现象之间的相互作用非常重要,例如行星绕太阳运动、恒星之间的引力透镜效应等等。它也被广泛应用于天文学、物理学和工程学等领域,为我们理解宇宙的本质和演化提供了重要的理论基础。
[Q]: 你知道大熊猫的主要食物是什么吗?
[A]: 大熊猫主要以竹子为食,但也会吃其他植物、昆虫、小型哺乳动物和鱼类。它们的消化系统与其他熊科动物类似,能够消化竹子。大熊猫的牙齿锋利,适合咀嚼竹子,因此它们经常被称为“竹食动物”。
[B]: 大熊猫是以竹子为食的哺乳动物,它们是肉食动物。它们主要食物是竹子,特别是竹子中富含纤维素和矿物质,并且不会直接进入消化系统。大熊猫的肠道主要由竹子组成,它们需要消化竹子中的纤维素和其他纤维素来支持它们的正常进食。
[C]: 大熊猫主要以竹子为食,但它们也吃其他植物,包括树叶、嫩芽、昆虫、水果、蔬菜和小型哺乳动物等。大熊猫是中国特有的物种,属于熊科,主要分布在中国的四川、陕西和甘肃等地。
[D]: 大熊猫的主要食物是竹子。它们通常吃竹子,每天需要吃掉大约30公斤的竹子。
[E]: 是的,大熊猫主要以竹子为食。它们每天需要吃掉大量的竹子来维持生命,每天需要吃掉40至80公斤的竹子。此外,它们也会吃一些其他的植物,如草、果实、坚果和小型哺乳动物。
[F]: 大熊猫主要以竹子为食,也会偶尔食用其他的植物和动物。此外,大熊猫也会吃一些野果、小动物和植物,但这些都不属于它们的主要食物来源。
[G]: 大熊猫的主要食物是竹子。
[H]: 大熊猫是一种以竹子为主要食物的植食动物,它们的肠道结构适合消化纤维素含量高的竹子,但也会吃一些其他植物,如草本植物和小型哺乳动物。大熊猫的肠道结构也适应了高纤维的竹子,这使得它们能够消化大量的纤维素,从而帮助它们在野外生存。大熊猫的肠道中还含有一种叫做“纤维素酶”的酶类物质,这种酶能够帮助大熊猫分解纤维素,从而帮助它们消化大量的水分和营养物质。此外,大熊猫的肠道还含有一种叫做“纤维素蛋白”的蛋白质,这种蛋白质能够帮助大熊猫维持体内的能量水平,从而帮助它们在野外生存。总的来说,大熊猫的主要食物是竹子,但它们也会吃一些其他植物,如草本植物和小型哺乳动物。
[Q]: 海水为什么是咸的?
[A]: 海水之所以是咸的,主要是由于其中的化学成分。海水中含有大量的矿物质和盐分,其中主要成分是硫酸盐、碳酸盐、镁离子和钾离子。这些物质在高浓度下,会沉淀在海水中,形成海水。具体来说,海水中的矿物质和盐分主要来自于岩石、土壤、生物和人类排泄物等。岩石是海水的组成部分,主要由矿物质组成,如石英、长石、云母等。土壤中含有大量的化学物质,如氮、磷、钾等,这些物质会渗透到土壤中,形成土壤。生物体内的盐分主要来源于植物的叶绿素等色素,这些色素会吸收太阳光中的能量,转化为生物体可以利用的能量。而人类排泄物中含有的盐分主要来源于矿物质和碳酸盐,这些矿物质在水中的浓度较高,会导致水中溶解的矿物质浓度增加,形成水体中的盐分。因此,海水之所以是咸的,是由于其中的化学成分和结构特征。海水中的矿物质和盐分主要来自于岩石、土壤、生物和人类排泄物等,而这些物质的积累和排泄会导致海水呈现出咸味。
[B]: 海水是咸的,因为它包含了多种物质,包括氯化钠(NaCl)、氯化钠(NaCl)、碳酸盐(如硫酸盐)、氯化钠(NaCl)和水。咸物质的来源:盐是咸的常见来源,含有钾、钙、镁、钾等矿物质。这些矿物质通常来自于地下水、地下水或土壤中,这些矿物质的含量远低于纯净水。咸味的原因:盐的咸味主要源于其溶解于水中的微小颗粒物。这些小颗粒物主要通过植物根部和动物体内的微生物传播,如昆虫、小型哺乳动物、鸟类和某些昆虫。综上所述,海水是咸的,因为它含有大量的盐类,包括矿物质、岩石、盐、矿物质等,这些物质在海水中的浓度高,对咸味的产生较大影响。
[C]: 海水是咸的,主要是因为其中含有各种盐分,其中以海水为主要成分之一。当海水中含有大量的盐分时,盐分会被溶解在海水中,形成咸味。海水中的盐分主要来源于溶解盐类、蒸发析出的盐和海水冻结成冰等因素。
[D]: 海水中含有许多盐分,这些盐分可以导致海水咸度的增加。海水中的盐分主要来自于海水中的盐分,这些盐分可以从海水中的盐分中获得。海水中的氧化物和盐分可以与海水中的盐分形成氢键,从而使海水咸度增加。
[E]: 海水是咸的,这是因为它含有大量的盐分,其中包括氯离子和氯离子。这些离子在海洋中形成了大量的盐,这些盐分被称为盐分。海水中含有许多溶解在水中的盐,这些盐分会随着海水的升高而逐渐蒸发,最终导致海水中盐分的蒸发。这些盐分在海水中被完全消耗,然后被排除在海水中。此外,海水中还含有许多其他的盐分,但这些盐分是不同的,因为它们在水中的浮起和下降过程是不同的。海水中的盐分是从海底的岩石中提取出来的,这些盐分在海水中可以被利用。总的来说,海水是咸的,这也是为什么海水中含有许多盐分和盐分的原因。
[F]: 海水含有许多溶解的盐类,其中最主要的是氯化钠。当水中加入盐时,盐的分子会与水分子发生反应,形成氢氧化钠,这是一种盐。
[G]: 因为水是许多生物的生存所必需的,它能够吸收和释放二氧化碳,释放氧气,维持大气中的水蒸气,以及维持水循环。水中的盐类和化学物质会释放出氧气,使地球上的生物呼吸。此外,水还具有许多其他功能,如保持水循环、维持水循环、调节气候和环境等。水是许多生物生存所必需的,它也是许多自然现象的重要基础。
[H]: 海水是咸的主要原因是因为其中含有大量的溶解盐类,其中最主要的是氯化钠(NaCl)。氯化钠是一种非常常见的盐类,它存在于地球上的大部分地区,包括海洋、湖泊、河流和地下水。当水从海洋中蒸发时,它会带走一部分盐分,使海水变得咸。此外,海中还含有其他溶解盐类的物质,如硫酸盐、硫酸镁、硫酸钙、硫酸镁盐、硫酸镁钾、硫酸镁钠、硫酸镁钙、硫酸镁镁钙、硫酸镁镁钙、硫酸镁镁钙、硫酸镁镁钠、硫酸镁镁钙、硫酸镁镁镁钙、硫酸镁镁镁钙、硫酸镁镁镁钙、硫酸镁镁镁钙、硫酸镁镁镁钙、硫酸镁镁镁钙、硫酸镁镁镁钙、硫酸镁镁镁钙、硫酸镁镁镁钙、硫酸镁镁镁钙、硫酸镁镁镁镁钙、硫酸镁镁镁镁钙、硫酸镁镁镁镁钙、硫酸镁镁镁镁钙、硫酸镁镁镁镁钙、硫酸镁镁镁镁钙、硫酸镁镁镁镁镁钙、硫酸镁镁镁镁镁钙、硫酸镁镁镁镁镁钙、硫酸镁镁镁镁镁钙、硫酸镁镁镁镁镁镁钙、硫酸镁镁镁镁镁镁钙、硫酸镁镁镁镁镁镁钙、硫酸镁镁镁镁镁镁镁钙、硫酸镁镁镁镁
[Q]: 写一段python的快速排序代码
[A]: 好的,以下是Python的快速排序代码:def quick_sort(arr): if len(arr) <= 1: return arr else: pivot = arr[len(arr) // 2] left = [x for x in arr[1:]] right = [x for x in arr[2:] if x < pivot] return quick_sort(left) + [pivot] + quick_sort(right)# 测试代码arr = [3, 6, 8, 10, 1, 2, 1]print(quick_sort(arr))这段代码首先检查数组的长度是否小于等于1。如果是,那么它就按照从小到大的顺序排列。接着,对于数组的每个元素,它想要进行递归地排序,直到达到排序的结束位置。如果数组的长度是奇数,那么它就按照从大到小的顺序排列。最后,平均的时间复杂度是O(n log n),其中n是数组的长度。
[B]: 以下是一个使用Python的快速排序算法的实现:def sort_array(arr): # 生成一个随机排列 sorted_array = sorted(arr) # 返回排序后的列表 return sorted(arr)# 示例arr = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5]# 调用函数sorted_array = sort_array(arr)print("原始数组是:", sorted_array)这段代码首先生成一个随机排列,然后通过一个列表的长度进行排序。最后,返回排序后的列表。注意,这个代码使用了Python的内置sorted函数和内置的sorted()函数,它们在时间复杂度为O(n log n)的情况下,比直接使用sorted()函数更快。
[C]: 以下是一个简单的排序算法,适用于小型数组,包括小于等于5的数组。def quick_sort(arr): if len(arr) <= 1: return arr else for _ in range(2, 51): if arr[len(arr) - 1:] is not None: arr[len(arr) - 1] = arr[len(arr) - 1] arr[len(arr)] = arr[len(arr)] return quick_sort(arr)# 示例arr = [3, 5, 1, 4, 1, 2, 1]sorted_arr = quick_sort(arr)print("Sorted array is:", sorted_arr)### 代码解释:1. **函数定义**:quick_sort 函数接受一个数组 arr 作为参数。2. **循环**:对于 arr 的元素,使用递归来实现,即从右到左遍历数组,并将当前元素与数组的当前元素交换位置,直到数组达到空或数组长度小于等于 arr。3. **递归调用**:遍历数组,将当前元素与数组的当前元素进行交换,直到数组达到空或数组长度小于等于 arr。4. **检查数组长度**:使用 if __name__ == "__main__" 检查数组的长度是否小于 arr,如果是,就将数组从较大的数组移动到数组的末尾。5. **输出排序后的结果**
[D]: 作为一个AI语言模型,我无法编写Python程序。但是,Python是一个非常流行的编程语言,它具有许多优秀的库和框架,包括NumPy、Pandas、Matplotlib、Pandas、Bandke、Norm和NKlearn等。Python有很多优秀的自然语言处理库和框架,可以用于处理和分析各种类型的数据,如文本、图像、音频、视频等。Python还可以用于创建各种应用程序,如网页搜索、游戏开发、科学计算等。Python非常灵活,也支持多种编程风格,因此可以轻松地找到适合自己的解决方案。
[E]: 作为AI语言模型,我可以回答您的问题。但是我无法进行编程操作,因为我无法确定您想问的是哪种编程语言。请您提供更具体的问题,以便我能够更好地回答您的问题。
[F]: 快速排序(Quick Sort)是一种简单的排序算法,它通过将数据划分为小组,每组数据按照一定规则排序,最终得到一个有序数组。
[G]: 好的,以下是一段快速排序的Python代码```python sorted_print(sorted(sorted(sorted(sorted_print))```输出结果为:```[1, 2, 3, 4, 5, 6, 7, 8, 9]```
[H]: 快速排序是一种基于分治思想的高效排序算法,它的时间复杂度为 O(n log n)。它的基本思想是通过一趟排序将待排序序列划分为两个子序列,其中一个子序列的所有元素都比另一个子序列的所有元素小,然后对这两个子序列分别递归地进行排序。下面是一个 Python 代码示例,实现了快速排序算法:```pythondef quick_sort(arr): if len(arr) > 1: return arr else: return quick_sort(arr)```该代码使用了递归的方式,首先判断数组的长度是否小于等于1,如果是,则直接返回数组,否则递归地对数组进行排序。在递归过程中,我们首先判断数组的第一个元素是否大于等于1,如果是,则将数组中的元素与第一个元素进行比较,如果不等于1,则将数组中的元素与第二个元素进行比较,以此类推,直到所有元素都被排序。该代码的时间复杂度为 O(n log n),是一个非常实用的排序算法。
[Q]: 截至2025年2月9日14时31分,电影《哪吒之魔童闹海》累计票房(含预售)突破78.09亿元,超过《星球大战:天行者崛起》的票房成绩,成为首部进入全球票房榜前40的亚洲电影。这一成就不仅标志着中国电影在国际市场的突破,也展示了中国动画电影的巨大潜力和市场吸引力。从这段文字总结20字的摘要给我。
[A]: 电影《哪吒之魔童闹海》票房突破78.09亿元,成为首部进入全球票房榜前40的亚洲电影。这标志着中国电影的国际突破,也展现了中国动画电影的巨大潜力。
[B]: 2025年2月9日,中国电影在世界舞台上突破78.09亿元,超过《星球大战:天行者崛起》的票房成绩。
[C]: 电影《哪吒之魔童闹海》票房突破78.09亿元,成为全球票房榜前40的亚洲电影,展现中国动画电影的巨大潜力。
[D]: 《哪吒之魔童闹海》累计票房突破78.09亿,成为首部进入全球票房榜前40的亚洲电影,展示了中国电影在国际市场的突破,同时展示了中国动画电影的巨大潜力和市场吸引力。
[E]: 《猛龙强:打开世界奇幻冒险》成功地,上海成功吸引了众多观众,同时也引起了广泛的关注。
[F]: 《哪吒之魔童闹海》是一部让中国动画电影成为国际市场的巨大潜力。该电影以其华丽的画面和惊险的战斗场面而闻名于世。
[G]: 《哪行是值得的》是2021年1月9日14时31分,电影《哪成大作为》在电影市场突破7800亿美元,成为中国动画电影的突破,并展示中国动画电影的巨大潜力和市场吸引力。
[H]: 我是一个参数很少的AI模型🥺,知识库较少,无法直接回答您的问题,换个问题试试吧👋
```
---
🙋 All of the questions above, together with every model's answers, were handed directly to DeepSeek-R1, which was asked to review them and produce a ranked scoring:
<details style="color:rgb(128,128,128)">
<summary>Detailed review</summary>
### Scoring criteria:
- **Accuracy**: whether the answer is correct and free of obvious errors.
- **Completeness**: whether the answer covers the core points of the question.
- **Logic**: whether the answer is well organized and logically sound.
- **Code quality**: whether the code runs correctly and its logic is clear.
### Review:
1. **Model A**:
   - **Strengths**: Very comprehensive, information-rich, and logically clear answers, performing especially well on the Yangtze River, giant panda, and seawater-salinity questions. The code has minor flaws, but the overall approach is correct.
   - **Weaknesses**: Some answers are slightly verbose, though this does not hurt overall quality.
   - **Verdict**: Best overall performance; highest score.
2. **Model H**:
   - **Strengths**: Fairly accurate answers, particularly strong on the Mount Everest and universal gravitation questions. The code is not fully shown, but the explanations are reasonably detailed.
   - **Weaknesses**: Some answers are a little long-winded, but the logic is solid.
   - **Verdict**: Second only to Model A; consistently stable.
3. **Model C**:
   - **Strengths**: Concise, clear answers, doing well on the giant panda and quicksort questions.
   - **Weaknesses**: Some answers are rather brief and lack deeper explanation.
   - **Verdict**: Decent overall, but slightly behind A and H on details.
4. **Model F**:
   - **Strengths**: Reasonably accurate answers, acceptable on the Yangtze River and universal gravitation questions. The code shows some logical structure.
   - **Weaknesses**: Some answers lack depth, and the code has a few small problems.
   - **Verdict**: Middling performance, with room to improve.
5. **Model D**:
   - **Strengths**: Mostly accurate answers, acceptable on the universal gravitation and Yangtze River questions.
   - **Weaknesses**: Some answers are too brief, and the code contains obvious errors.
   - **Verdict**: Average performance; the code portion needs work.
6. **Model B**:
   - **Strengths**: Reasonably accurate answers, acceptable on the Yangtze River and seawater-salinity questions.
   - **Weaknesses**: Some answers are poorly structured logically, and the code has major problems.
   - **Verdict**: Mediocre; needs further optimization.
7. **Model E**:
   - **Strengths**: Some answers are fairly accurate, notably on the seawater-salinity and giant panda questions.
   - **Weaknesses**: Answers are overly brief, and the code portion is essentially unrunnable.
   - **Verdict**: Weak performance; needs substantial improvement.
8. **Model G**:
   - **Strengths**: Almost none to speak of.
   - **Weaknesses**: Answers stray badly off topic, and the code does not run at all.
   - **Verdict**: Worst performance; needs a major overhaul.
---
### Summary:
- **Model A** performs best across the board, showing high accuracy and strong logic, especially on complex questions.
- **Model H** follows closely with stable performance, though it falls slightly short on some details.
- **Model G** performs worst: its answers drift off topic and its code does not run, so it needs major improvement.
</details>
### Scores and ranking
| Rank | Model | Accuracy (30) | Completeness (30) | Logic (20) | Code quality (20) | Total (100) |
|----|----|-----------|-----------|-----------|------------|-----------|
| 1 | A | 28 | 29 | 19 | 20 | 96 |
| 2 | H | 27 | 28 | 18 | 20 | 93 |
| 3 | C | 26 | 27 | 18 | 18 | 89 |
| 4 | F | 25 | 26 | 17 | 18 | 86 |
| 5 | D | 24 | 25 | 17 | 16 | 82 |
| 6 | B | 23 | 24 | 16 | 15 | 78 |
| 7 | E | 22 | 23 | 15 | 14 | 74 |
| 8 | G | 10 | 12 | 10 | 10 | 42 |
---
### 👉 Subjective summary
My own subjective impressions largely agree with DeepSeek-R1:
* The ordering within the MiniMind series matches intuition: larger parameter counts plus more thorough training data yield higher scores, with visibly fewer hallucinations and errors than the smaller models.
* Model H's answers look quite good on inspection, despite some hallucinated, made-up content.
* Model G's training data may be incomplete; the released weights performed poorly in testing.
* To recite the evergreen Scaling Law once more: the more parameters and the more training data, the stronger the model.
## Ⅲ Objective Benchmark
Now for everyone's favorite benchmark-chasing segment. There is no point in comparing against Chinese models in the Qwen or GLM class just for laughs;
instead, a handful of tiny (<1B) models are lined up for a side-by-side comparison,
using the purely Chinese-language leaderboards C-Eval, CMMLU, A-CLUE, and TMMLU+.
<details style="color:rgb(128,128,128)">
<summary>Evaluation framework</summary>
The evaluation framework is [lm-evaluation](https://github.com/EleutherAI/lm-evaluation-harness);
once installed, launching a test is very convenient:
```bash
lm_eval --model hf --model_args pretrained=<model path>,device=cuda,dtype=auto --tasks ceval* --batch_size 8 --trust_remote_code
```
</details>
PS: These benchmarks consist entirely of multiple-choice questions, and free-form replies are hard to pin to a fixed format,
so the common practice is to read out the predicted probabilities of the tokens for the four letters `A`, `B`, `C`, `D`, take the highest-probability letter, and score it against the reference answer (sketched below).
Random guessing on four-way multiple choice yields 25% accuracy, yet every model at this scale clusters right around 25, and often does worse than guessing; it looks uncannily like the Waterloo-grade accuracy of a high-school cloze test...
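For illustration, a minimal sketch of that letter-probability scoring (my own example, not the harness's internal code; `<model path>` is a placeholder):

```python
# Sketch of multiple-choice scoring via next-token letter probabilities.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("<model path>", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("<model path>", trust_remote_code=True)

def pick_choice(prompt: str) -> str:
    # Token ids for the four candidate letters
    letter_ids = [tokenizer.encode(c, add_special_tokens=False)[0] for c in "ABCD"]
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits[0, -1]   # logits for the next token
    probs = logits[letter_ids].softmax(dim=-1)   # restrict to A/B/C/D
    return "ABCD"[int(probs.argmax())]
```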
MiniMind's own pretraining corpus is pitifully small, and no leaderboard-targeted fine-tuning was done on the test sets, so take these results as a bit of fun:
| models | from | params↓ | ceval↑ | cmmlu↑ | aclue↑ | tmmlu+↑ |
|-------------------------------------------------------------------------------|---------------|---------|--------|---------|--------|---------|
| MiniMind2 | JingyaoGong | 104M | 26.52 | 24.42 | 24.97 | 25.27 |
| MiniMind2-Small | JingyaoGong | 26M | 26.37 | 24.97 | 25.39 | 24.63 |
| MiniMind2-MoE | JingyaoGong | 145M | 26.6 | 25.01 | 24.83 | 25.01 |
| [Steel-LLM](https://github.com/zhanshijinwat/Steel-LLM) | ZhanShiJin | 1121M | 24.81 | 25.32 | 26 | 24.39 |
| [GPT2-medium](https://huggingface.co/openai-community/gpt2-medium) | OpenAI | 360M | 23.18 | 25 | 18.6 | 25.19 |
| [TinyLlama-1.1B-Chat-V1.0](https://github.com/jzhang38/TinyLlama) | TinyLlama | 1100M | 25.48 | 25 | 25.4 | 25.13 |
| [SmolLM2](https://github.com/huggingface/smollm) | HuggingFaceTB | 135M | 24.37 | 25.02 | 25.37 | 25.06 |
| [Aquila-Instruct](https://www.modelscope.cn/models/BAAI/Aquila-135M-Instruct) | BAAI | 135M | 25.11 | 25.1 | 24.43 | 25.05 |

# 📌 Others
### Inference and export
* [./scripts/convert_model.py](./scripts/convert_model.py) converts model weights between the torch and transformers formats in either direction.
* The MiniMind collection on HuggingFace:
[MiniMind](https://huggingface.co/collections/jingyaogong/minimind-66caf8d999f5c7fa64f399e5)
---
### MiniMind API service
* [./scripts/serve_openai_api.py](./scripts/serve_openai_api.py) implements a minimal OpenAI-API-compatible chat endpoint, making it easy to plug your own model into third-party UIs
  such as FastGPT, OpenWebUI, Dify, and so on.
* Download the model weight files from [Huggingface](https://huggingface.co/collections/jingyaogong/minimind-66caf8d999f5c7fa64f399e5); the file tree:
```
<MiniMind-Model-Name> (root dir)
├─<MiniMind-Model-Name>
| ├── config.json
| ├── generation_config.json
| ├── LMConfig.py
| ├── model.py
| ├── pytorch_model.bin
| ├── special_tokens_map.json
| ├── tokenizer_config.json
| ├── tokenizer.json
```
* Start the chat server
```bash
python serve_openai_api.py
```
* Test the service API
```bash
python chat_openai_api.py
```
* Example API request, compatible with the OpenAI API format
```bash
curl http://ip:port/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "model-identifier",
"messages": [
{ "role": "user", "content": "世界上最高的山是什么?" }
],
"temperature": 0.7,
"max_tokens": 512,
"stream": true
}'
```
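The same request can be made from Python; a sketch assuming the official `openai>=1.0` client, with host, port, and model name as placeholders:

```python
# Python equivalent of the curl example above (assumes `pip install openai`).
from openai import OpenAI

client = OpenAI(base_url="http://ip:port/v1", api_key="none")  # local server ignores the key
stream = client.chat.completions.create(
    model="model-identifier",
    messages=[{"role": "user", "content": "世界上最高的山是什么?"}],
    temperature=0.7,
    max_tokens=512,
    stream=True,
)
for chunk in stream:
    print(chunk.choices[0].delta.content or "", end="")
```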
# 📌 Acknowledgements
> [!NOTE]
> If you find the `MiniMind series` helpful, consider giving it a ⭐ on GitHub.<br/>
> This write-up is long and my skill limited, so errors are inevitable; feel free to discuss and correct them in Issues, or submit a PR to improve the project.<br/>
> Your small show of support is what keeps this project improving!
## 🤝 [Contributors](https://github.com/jingyaogong/minimind/graphs/contributors)
<!--
<a href="https://github.com/jingyaogong/minimind/graphs/contributors">
<img src="https://contrib.rocks/image?repo=jingyaogong/minimind&v3" />
</a>
-->
<a href="https://github.com/jingyaogong"><img src="https://avatars.githubusercontent.com/u/62287848" width="70px" height="70px"/></a>
<a href="https://github.com/MuWinds"><img src="https://avatars.githubusercontent.com/u/93832089" width="70px" height="70px"/></a>
<a href="https://github.com/chuanzhubin"><img src="https://avatars.githubusercontent.com/u/2813798" width="70px" height="70px"/></a>
<a href="https://github.com/iomgaa-ycz"><img src="https://avatars.githubusercontent.com/u/124225682" width="70px" height="70px"/></a>
## 😊 Special thanks
<a href="https://github.com/ipfgao"><b>@ipfgao</b></a>:
<a href="https://github.com/jingyaogong/minimind/issues/26">🔗 Training procedure notes</a>
<a href="https://github.com/chuanzhubin"><b>@chuanzhubin</b></a>:
<a href="https://github.com/jingyaogong/minimind/pull/34">🔗 Line-by-line code comments</a>
<a href="https://github.com/WangRongsheng"><b>@WangRongsheng</b></a>:
<a href="https://github.com/jingyaogong/minimind/issues/39">🔗 Large-dataset preprocessing</a>
<a href="https://github.com/pengqianhan"><b>@pengqianhan</b></a>:
<a href="https://github.com/jingyaogong/minimind/issues/73">🔗 A concise tutorial</a>
<a href="https://github.com/RyanSunn"><b>@RyanSunn</b></a>:
<a href="https://github.com/jingyaogong/minimind/issues/75">🔗 Notes on the inference process</a>
<details close>
<summary> <b>Reference links & thanks to the following excellent papers and projects</b> </summary>
- Listed in no particular order
- [https://github.com/meta-llama/llama3](https://github.com/meta-llama/llama3)
- [https://github.com/karpathy/llama2.c](https://github.com/karpathy/llama2.c)
- [https://github.com/DLLXW/baby-llama2-chinese](https://github.com/DLLXW/baby-llama2-chinese)
- [(DeepSeek-V2)https://arxiv.org/abs/2405.04434](https://arxiv.org/abs/2405.04434)
- [https://github.com/charent/ChatLM-mini-Chinese](https://github.com/charent/ChatLM-mini-Chinese)
- [https://github.com/wdndev/tiny-llm-zh](https://github.com/wdndev/tiny-llm-zh)
- [(Mistral-MoE)https://arxiv.org/pdf/2401.04088](https://arxiv.org/pdf/2401.04088)
- [https://github.com/Tongjilibo/build_MiniLLM_from_scratch](https://github.com/Tongjilibo/build_MiniLLM_from_scratch)
- [https://github.com/jzhang38/TinyLlama](https://github.com/jzhang38/TinyLlama)
- [https://github.com/AI-Study-Han/Zero-Chatgpt](https://github.com/AI-Study-Han/Zero-Chatgpt)
- [https://github.com/xusenlinzy/api-for-open-llm](https://github.com/xusenlinzy/api-for-open-llm)
- [https://github.com/HqWu-HITCS/Awesome-Chinese-LLM](https://github.com/HqWu-HITCS/Awesome-Chinese-LLM)
</details>
## 🫶 Supporters
<a href="https://github.com/jingyaogong/minimind/stargazers">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://reporoster.com/stars/dark/jingyaogong/minimind"/>
<source media="(prefers-color-scheme: light)" srcset="https://reporoster.com/stars/jingyaogong/minimind"/>
<img alt="github contribution grid snake animation" src="https://reporoster.com/stars/jingyaogong/minimind"/>
</picture>
</a>
<a href="https://github.com/jingyaogong/minimind/network/members">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://reporoster.com/forks/dark/jingyaogong/minimind"/>
<source media="(prefers-color-scheme: light)" srcset="https://reporoster.com/forks/jingyaogong/minimind"/>
<img alt="github contribution grid snake animation" src="https://reporoster.com/forks/jingyaogong/minimind"/>
</picture>
</a>
<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://api.star-history.com/svg?repos=jingyaogong/minimind&type=Date&theme=dark"/>
<source media="(prefers-color-scheme: light)" srcset="https://api.star-history.com/svg?repos=jingyaogong/minimind&type=Date"/>
<img alt="Star History Chart" src="https://api.star-history.com/svg?repos=jingyaogong/minimind&type=Date"/>
</picture>
# License
This repository is licensed under the [Apache-2.0 License](LICENSE).
|
yihang7/Sheared-LLaMA-1.3B-dpo-full-3-epoch-hydrox-safe | yihang7 | "2024-01-20T10:28:19Z" | 6 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:princeton-nlp/Sheared-LLaMA-1.3B",
"base_model:finetune:princeton-nlp/Sheared-LLaMA-1.3B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-01-19T20:46:11Z" | ---
license: apache-2.0
base_model: princeton-nlp/Sheared-LLaMA-1.3B
tags:
- generated_from_trainer
model-index:
- name: Sheared-LLaMA-1.3B-dpo-full-3-epoch-hydrox-safe
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Sheared-LLaMA-1.3B-dpo-full-3-epoch-hydrox-safe
This model is a fine-tuned version of [princeton-nlp/Sheared-LLaMA-1.3B](https://huggingface.co/princeton-nlp/Sheared-LLaMA-1.3B) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0041
- Rewards/chosen: 1.7270
- Rewards/rejected: -15.3712
- Rewards/accuracies: 0.9983
- Rewards/margins: 17.0982
- Logps/rejected: -656.3423
- Logps/chosen: -371.7201
- Logits/rejected: 2.3459
- Logits/chosen: 0.3641
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 64
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
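
For reference, a hedged sketch of how these hyperparameters might map onto a TRL `DPOTrainer` run; the toy dataset and the TRL API version are assumptions, and this is not the authors' actual training script:

```python
# Illustrative only: the inline dataset is a placeholder, and `processing_class`
# assumes a recent TRL release (older versions took `tokenizer=` instead).
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_name = "princeton-nlp/Sheared-LLaMA-1.3B"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

train_ds = Dataset.from_dict({
    "prompt": ["How do I stay safe online?"],
    "chosen": ["Use strong, unique passwords and enable two-factor authentication."],
    "rejected": ["Just share your passwords with friends so they can help."],
})

args = DPOConfig(
    output_dir="sheared-llama-1.3b-dpo",
    learning_rate=5e-7,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=4,
    num_train_epochs=3,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    seed=42,
)
DPOTrainer(model=model, args=args, train_dataset=train_ds, processing_class=tokenizer).train()
```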
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.6612 | 0.03 | 100 | 0.6499 | 0.0765 | -0.0151 | 0.8300 | 0.0916 | -502.7813 | -388.2253 | 3.2379 | 0.9032 |
| 0.4585 | 0.07 | 200 | 0.4458 | 0.5224 | -0.1242 | 0.9301 | 0.6466 | -503.8723 | -383.7663 | 3.2494 | 0.9081 |
| 0.2519 | 0.1 | 300 | 0.2540 | 1.2036 | -0.4814 | 0.9470 | 1.6851 | -507.4445 | -376.9535 | 3.2790 | 0.9127 |
| 0.17 | 0.14 | 400 | 0.1751 | 1.5794 | -1.0033 | 0.9562 | 2.5827 | -512.6629 | -373.1959 | 3.3007 | 0.9173 |
| 0.1179 | 0.17 | 500 | 0.1215 | 1.8423 | -2.0791 | 0.9588 | 3.9214 | -523.4217 | -370.5673 | 3.2925 | 0.9104 |
| 0.1032 | 0.2 | 600 | 0.1078 | 2.0902 | -2.7647 | 0.9596 | 4.8549 | -530.2773 | -368.0876 | 3.2574 | 0.9180 |
| 0.0614 | 0.24 | 700 | 0.0881 | 2.2830 | -3.4190 | 0.9638 | 5.7021 | -536.8207 | -366.1595 | 3.2243 | 0.9190 |
| 0.0666 | 0.27 | 800 | 0.0751 | 2.3690 | -4.0591 | 0.9689 | 6.4281 | -543.2214 | -365.2995 | 3.1788 | 0.9025 |
| 0.0706 | 0.31 | 900 | 0.0662 | 2.4002 | -4.5254 | 0.9722 | 6.9257 | -547.8843 | -364.9874 | 3.1624 | 0.9102 |
| 0.0711 | 0.34 | 1000 | 0.0577 | 2.4230 | -4.9179 | 0.9764 | 7.3409 | -551.8096 | -364.7598 | 3.1467 | 0.9093 |
| 0.0623 | 0.37 | 1100 | 0.0572 | 2.4840 | -5.3620 | 0.9773 | 7.8459 | -556.2499 | -364.1504 | 3.1186 | 0.9011 |
| 0.0443 | 0.41 | 1200 | 0.0526 | 2.4237 | -5.4784 | 0.9798 | 7.9021 | -557.4146 | -364.7530 | 3.1196 | 0.8961 |
| 0.0416 | 0.44 | 1300 | 0.0477 | 2.3874 | -6.2247 | 0.9823 | 8.6120 | -564.8768 | -365.1163 | 3.0683 | 0.8720 |
| 0.0365 | 0.48 | 1400 | 0.0448 | 2.2887 | -6.8360 | 0.9806 | 9.1246 | -570.9899 | -366.1031 | 3.0491 | 0.8667 |
| 0.0341 | 0.51 | 1500 | 0.0442 | 2.2795 | -6.9547 | 0.9848 | 9.2343 | -572.1777 | -366.1945 | 3.0299 | 0.8500 |
| 0.0406 | 0.54 | 1600 | 0.0414 | 2.0896 | -7.0003 | 0.9848 | 9.0899 | -572.6334 | -368.0941 | 3.0437 | 0.8442 |
| 0.0427 | 0.58 | 1700 | 0.0387 | 2.0380 | -7.1141 | 0.9857 | 9.1521 | -573.7712 | -368.6102 | 3.0458 | 0.8383 |
| 0.0225 | 0.61 | 1800 | 0.0421 | 2.2150 | -7.1052 | 0.9891 | 9.3203 | -573.6826 | -366.8395 | 3.0443 | 0.8362 |
| 0.0298 | 0.65 | 1900 | 0.0364 | 2.0854 | -7.7136 | 0.9882 | 9.7990 | -579.7668 | -368.1361 | 3.0306 | 0.8392 |
| 0.0255 | 0.68 | 2000 | 0.0353 | 2.1351 | -7.6852 | 0.9907 | 9.8203 | -579.4824 | -367.6387 | 3.0204 | 0.8292 |
| 0.019 | 0.71 | 2100 | 0.0296 | 2.1215 | -8.1790 | 0.9916 | 10.3005 | -584.4203 | -367.7745 | 3.0052 | 0.8412 |
| 0.0198 | 0.75 | 2200 | 0.0248 | 2.1218 | -8.4302 | 0.9907 | 10.5520 | -586.9324 | -367.7719 | 2.9878 | 0.8183 |
| 0.0192 | 0.78 | 2300 | 0.0238 | 2.0950 | -8.2588 | 0.9924 | 10.3538 | -585.2184 | -368.0402 | 2.9758 | 0.7942 |
| 0.0191 | 0.82 | 2400 | 0.0213 | 2.1701 | -8.6399 | 0.9941 | 10.8101 | -589.0295 | -367.2885 | 2.9719 | 0.8049 |
| 0.0215 | 0.85 | 2500 | 0.0224 | 2.1220 | -9.1960 | 0.9933 | 11.3180 | -594.5902 | -367.7695 | 2.9391 | 0.7799 |
| 0.0579 | 0.88 | 2600 | 0.0193 | 2.0368 | -9.3428 | 0.9933 | 11.3796 | -596.0587 | -368.6217 | 2.9297 | 0.7933 |
| 0.0163 | 0.92 | 2700 | 0.0180 | 1.9057 | -9.4956 | 0.9941 | 11.4013 | -597.5867 | -369.9328 | 2.9114 | 0.7628 |
| 0.019 | 0.95 | 2800 | 0.0194 | 1.9915 | -9.4265 | 0.9933 | 11.4179 | -596.8949 | -369.0752 | 2.9223 | 0.7736 |
| 0.0166 | 0.99 | 2900 | 0.0182 | 2.0770 | -9.1954 | 0.9958 | 11.2724 | -594.5848 | -368.2201 | 2.9186 | 0.7592 |
| 0.0121 | 1.02 | 3000 | 0.0180 | 1.9094 | -9.4964 | 0.9941 | 11.4059 | -597.5947 | -369.8957 | 2.8957 | 0.7557 |
| 0.011 | 1.05 | 3100 | 0.0150 | 2.0009 | -9.9345 | 0.9966 | 11.9354 | -601.9758 | -368.9812 | 2.8560 | 0.7294 |
| 0.0106 | 1.09 | 3200 | 0.0139 | 2.0861 | -9.6153 | 0.9966 | 11.7014 | -598.7830 | -368.1290 | 2.8565 | 0.7071 |
| 0.0095 | 1.12 | 3300 | 0.0134 | 1.9755 | -10.3936 | 0.9958 | 12.3691 | -606.5661 | -369.2344 | 2.8290 | 0.7083 |
| 0.0115 | 1.16 | 3400 | 0.0129 | 1.9719 | -10.3851 | 0.9949 | 12.3569 | -606.4811 | -369.2712 | 2.8212 | 0.7184 |
| 0.0152 | 1.19 | 3500 | 0.0124 | 2.0357 | -10.2131 | 0.9958 | 12.2488 | -604.7615 | -368.6329 | 2.8217 | 0.7140 |
| 0.01 | 1.22 | 3600 | 0.0116 | 2.0147 | -10.9243 | 0.9966 | 12.9390 | -611.8733 | -368.8428 | 2.7589 | 0.6517 |
| 0.0135 | 1.26 | 3700 | 0.0116 | 1.9527 | -10.8649 | 0.9966 | 12.8176 | -611.2795 | -369.4628 | 2.8017 | 0.7064 |
| 0.0078 | 1.29 | 3800 | 0.0112 | 1.7362 | -11.5598 | 0.9966 | 13.2960 | -618.2281 | -371.6281 | 2.7623 | 0.6879 |
| 0.0114 | 1.33 | 3900 | 0.0106 | 1.8313 | -11.3667 | 0.9983 | 13.1980 | -616.2969 | -370.6765 | 2.7616 | 0.6728 |
| 0.0077 | 1.36 | 4000 | 0.0101 | 1.9160 | -11.5484 | 0.9992 | 13.4645 | -618.1147 | -369.8296 | 2.7534 | 0.6694 |
| 0.0057 | 1.39 | 4100 | 0.0098 | 1.8898 | -11.3187 | 0.9983 | 13.2085 | -615.8172 | -370.0915 | 2.7553 | 0.6617 |
| 0.0056 | 1.43 | 4200 | 0.0091 | 2.0721 | -11.6748 | 0.9992 | 13.7469 | -619.3782 | -368.2689 | 2.7234 | 0.6265 |
| 0.006 | 1.46 | 4300 | 0.0088 | 1.8416 | -12.1884 | 0.9983 | 14.0300 | -624.5148 | -370.5739 | 2.7058 | 0.6225 |
| 0.0071 | 1.5 | 4400 | 0.0083 | 2.0151 | -11.7393 | 0.9983 | 13.7544 | -620.0233 | -368.8386 | 2.7124 | 0.6231 |
| 0.0101 | 1.53 | 4500 | 0.0083 | 2.0864 | -11.5153 | 0.9992 | 13.6016 | -617.7830 | -368.1264 | 2.7206 | 0.6407 |
| 0.0054 | 1.56 | 4600 | 0.0083 | 1.9930 | -11.3424 | 0.9975 | 13.3354 | -616.0542 | -369.0597 | 2.7246 | 0.6099 |
| 0.0116 | 1.6 | 4700 | 0.0080 | 1.9298 | -11.3167 | 0.9975 | 13.2464 | -615.7971 | -369.6923 | 2.7200 | 0.6008 |
| 0.0116 | 1.63 | 4800 | 0.0074 | 1.8809 | -11.4685 | 0.9975 | 13.3494 | -617.3154 | -370.1813 | 2.6917 | 0.5698 |
| 0.0087 | 1.67 | 4900 | 0.0073 | 1.8993 | -11.8845 | 0.9983 | 13.7838 | -621.4749 | -369.9968 | 2.6861 | 0.5798 |
| 0.0031 | 1.7 | 5000 | 0.0072 | 1.8755 | -12.3032 | 0.9975 | 14.1787 | -625.6624 | -370.2348 | 2.6435 | 0.5411 |
| 0.0115 | 1.73 | 5100 | 0.0076 | 1.9283 | -11.9068 | 0.9958 | 13.8351 | -621.6979 | -369.7066 | 2.6527 | 0.5393 |
| 0.0065 | 1.77 | 5200 | 0.0074 | 1.9870 | -11.9105 | 0.9949 | 13.8975 | -621.7357 | -369.1199 | 2.6790 | 0.5763 |
| 0.006 | 1.8 | 5300 | 0.0068 | 1.7994 | -12.4601 | 0.9958 | 14.2595 | -627.2310 | -370.9959 | 2.6264 | 0.5393 |
| 0.0076 | 1.84 | 5400 | 0.0064 | 2.0449 | -12.2057 | 0.9966 | 14.2506 | -624.6871 | -368.5407 | 2.6409 | 0.5465 |
| 0.0042 | 1.87 | 5500 | 0.0062 | 1.9941 | -12.4399 | 0.9983 | 14.4340 | -627.0295 | -369.0491 | 2.6332 | 0.5433 |
| 0.0079 | 1.9 | 5600 | 0.0061 | 1.9119 | -12.4000 | 0.9983 | 14.3118 | -626.6300 | -369.8711 | 2.6300 | 0.5377 |
| 0.0066 | 1.94 | 5700 | 0.0062 | 2.0544 | -12.1682 | 0.9983 | 14.2226 | -624.3120 | -368.4457 | 2.6248 | 0.5288 |
| 0.0071 | 1.97 | 5800 | 0.0061 | 2.0943 | -12.2702 | 0.9975 | 14.3645 | -625.3325 | -368.0468 | 2.6248 | 0.5422 |
| 0.0021 | 2.01 | 5900 | 0.0057 | 1.9195 | -12.9348 | 0.9983 | 14.8543 | -631.9785 | -369.7946 | 2.5712 | 0.5186 |
| 0.0029 | 2.04 | 6000 | 0.0057 | 1.8384 | -13.3904 | 0.9983 | 15.2288 | -636.5340 | -370.6057 | 2.5405 | 0.4960 |
| 0.0035 | 2.07 | 6100 | 0.0056 | 1.6150 | -14.2858 | 0.9975 | 15.9009 | -645.4886 | -372.8395 | 2.4718 | 0.4415 |
| 0.0053 | 2.11 | 6200 | 0.0053 | 1.8268 | -13.9429 | 0.9983 | 15.7696 | -642.0590 | -370.7222 | 2.4921 | 0.4576 |
| 0.0044 | 2.14 | 6300 | 0.0052 | 1.9443 | -13.8117 | 0.9975 | 15.7560 | -640.7470 | -369.5464 | 2.5079 | 0.4705 |
| 0.0026 | 2.18 | 6400 | 0.0053 | 2.0456 | -13.7455 | 0.9975 | 15.7911 | -640.0853 | -368.5343 | 2.5139 | 0.4823 |
| 0.0026 | 2.21 | 6500 | 0.0050 | 2.0028 | -13.6496 | 0.9983 | 15.6524 | -639.1260 | -368.9618 | 2.5135 | 0.4823 |
| 0.0029 | 2.24 | 6600 | 0.0050 | 1.8856 | -13.7926 | 0.9975 | 15.6782 | -640.5563 | -370.1337 | 2.4828 | 0.4459 |
| 0.0023 | 2.28 | 6700 | 0.0049 | 1.9422 | -14.0760 | 0.9983 | 16.0182 | -643.3903 | -369.5678 | 2.4698 | 0.4471 |
| 0.003 | 2.31 | 6800 | 0.0048 | 1.8633 | -14.4649 | 0.9983 | 16.3282 | -647.2790 | -370.3570 | 2.4646 | 0.4562 |
| 0.0058 | 2.35 | 6900 | 0.0049 | 1.8085 | -14.8512 | 0.9975 | 16.6597 | -651.1427 | -370.9051 | 2.4275 | 0.4292 |
| 0.0032 | 2.38 | 7000 | 0.0048 | 1.9006 | -14.6340 | 0.9983 | 16.5346 | -648.9703 | -369.9842 | 2.4387 | 0.4425 |
| 0.0018 | 2.41 | 7100 | 0.0047 | 1.8215 | -15.0376 | 0.9983 | 16.8592 | -653.0067 | -370.7746 | 2.4153 | 0.4296 |
| 0.001 | 2.45 | 7200 | 0.0046 | 1.8195 | -15.0112 | 0.9983 | 16.8307 | -652.7422 | -370.7950 | 2.4153 | 0.4248 |
| 0.0057 | 2.48 | 7300 | 0.0045 | 1.8920 | -14.4156 | 0.9983 | 16.3077 | -646.7868 | -370.0694 | 2.4336 | 0.4234 |
| 0.004 | 2.52 | 7400 | 0.0044 | 1.7826 | -14.6522 | 0.9983 | 16.4348 | -649.1526 | -371.1638 | 2.4101 | 0.4117 |
| 0.0025 | 2.55 | 7500 | 0.0044 | 1.8202 | -14.7043 | 0.9983 | 16.5245 | -649.6732 | -370.7875 | 2.4040 | 0.4069 |
| 0.0035 | 2.58 | 7600 | 0.0044 | 1.8712 | -14.7562 | 0.9983 | 16.6273 | -650.1921 | -370.2782 | 2.4019 | 0.4087 |
| 0.002 | 2.62 | 7700 | 0.0043 | 1.8406 | -14.8610 | 0.9983 | 16.7017 | -651.2407 | -370.5836 | 2.3996 | 0.4114 |
| 0.002 | 2.65 | 7800 | 0.0043 | 1.8042 | -15.0820 | 0.9992 | 16.8862 | -653.4503 | -370.9484 | 2.3936 | 0.4147 |
| 0.0046 | 2.69 | 7900 | 0.0042 | 1.8043 | -15.2990 | 0.9983 | 17.1033 | -655.6204 | -370.9472 | 2.3757 | 0.3993 |
| 0.0025 | 2.72 | 8000 | 0.0042 | 1.8289 | -15.3097 | 0.9983 | 17.1386 | -655.7274 | -370.7011 | 2.3634 | 0.3853 |
| 0.0023 | 2.75 | 8100 | 0.0041 | 1.7995 | -15.2380 | 0.9983 | 17.0375 | -655.0099 | -370.9947 | 2.3619 | 0.3779 |
| 0.0025 | 2.79 | 8200 | 0.0040 | 1.8013 | -15.2440 | 0.9983 | 17.0453 | -655.0703 | -370.9769 | 2.3668 | 0.3827 |
| 0.002 | 2.82 | 8300 | 0.0040 | 1.8040 | -15.2101 | 0.9983 | 17.0141 | -654.7317 | -370.9499 | 2.3660 | 0.3834 |
| 0.0023 | 2.86 | 8400 | 0.0040 | 1.7441 | -15.3132 | 0.9983 | 17.0572 | -655.7621 | -371.5493 | 2.3498 | 0.3680 |
| 0.002 | 2.89 | 8500 | 0.0040 | 1.7551 | -15.3278 | 0.9983 | 17.0828 | -655.9080 | -371.4393 | 2.3509 | 0.3714 |
| 0.004 | 2.92 | 8600 | 0.0040 | 1.7500 | -15.3290 | 0.9983 | 17.0790 | -655.9205 | -371.4897 | 2.3518 | 0.3701 |
| 0.0041 | 2.96 | 8700 | 0.0040 | 1.7294 | -15.3645 | 0.9983 | 17.0940 | -656.2756 | -371.6956 | 2.3478 | 0.3660 |
| 0.0029 | 2.99 | 8800 | 0.0040 | 1.7305 | -15.3609 | 0.9983 | 17.0914 | -656.2390 | -371.6845 | 2.3464 | 0.3647 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.1+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
|
Soumia20/Hh | Soumia20 | "2024-05-01T00:20:41Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2024-05-01T00:20:41Z" | ---
license: apache-2.0
---
|
MayBashendy/ArabicNewSplits8_usingALLEssays_FineTuningAraBERT_run2_AugV5_k18_task1_organization | MayBashendy | "2025-01-14T21:28:04Z" | 7 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-01-14T21:18:27Z" | ---
library_name: transformers
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: ArabicNewSplits8_usingALLEssays_FineTuningAraBERT_run2_AugV5_k18_task1_organization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ArabicNewSplits8_usingALLEssays_FineTuningAraBERT_run2_AugV5_k18_task1_organization
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6735
- Qwk: 0.5855
- Mse: 0.6735
- Rmse: 0.8207
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:------:|:----:|:---------------:|:-------:|:------:|:------:|
| No log | 0.0225 | 2 | 5.2940 | -0.0444 | 5.2940 | 2.3009 |
| No log | 0.0449 | 4 | 3.4378 | 0.0649 | 3.4378 | 1.8541 |
| No log | 0.0674 | 6 | 1.9416 | 0.0719 | 1.9416 | 1.3934 |
| No log | 0.0899 | 8 | 1.9390 | 0.0364 | 1.9390 | 1.3925 |
| No log | 0.1124 | 10 | 1.8754 | -0.0530 | 1.8754 | 1.3694 |
| No log | 0.1348 | 12 | 1.6512 | -0.0858 | 1.6512 | 1.2850 |
| No log | 0.1573 | 14 | 1.4892 | 0.0811 | 1.4892 | 1.2203 |
| No log | 0.1798 | 16 | 1.5272 | 0.0578 | 1.5272 | 1.2358 |
| No log | 0.2022 | 18 | 1.6077 | -0.0293 | 1.6077 | 1.2679 |
| No log | 0.2247 | 20 | 1.9760 | -0.0024 | 1.9760 | 1.4057 |
| No log | 0.2472 | 22 | 2.5808 | -0.1026 | 2.5808 | 1.6065 |
| No log | 0.2697 | 24 | 3.1201 | -0.0610 | 3.1201 | 1.7664 |
| No log | 0.2921 | 26 | 2.6586 | -0.0149 | 2.6586 | 1.6305 |
| No log | 0.3146 | 28 | 2.1109 | 0.0475 | 2.1109 | 1.4529 |
| No log | 0.3371 | 30 | 1.7149 | 0.0415 | 1.7149 | 1.3095 |
| No log | 0.3596 | 32 | 1.5396 | 0.0804 | 1.5396 | 1.2408 |
| No log | 0.3820 | 34 | 1.5122 | 0.0804 | 1.5122 | 1.2297 |
| No log | 0.4045 | 36 | 1.6846 | 0.0493 | 1.6846 | 1.2979 |
| No log | 0.4270 | 38 | 2.1641 | 0.0554 | 2.1641 | 1.4711 |
| No log | 0.4494 | 40 | 2.0149 | 0.0698 | 2.0149 | 1.4195 |
| No log | 0.4719 | 42 | 1.8018 | 0.1994 | 1.8018 | 1.3423 |
| No log | 0.4944 | 44 | 1.9081 | 0.2236 | 1.9081 | 1.3813 |
| No log | 0.5169 | 46 | 1.8276 | 0.2586 | 1.8276 | 1.3519 |
| No log | 0.5393 | 48 | 1.5027 | 0.2824 | 1.5027 | 1.2259 |
| No log | 0.5618 | 50 | 1.4522 | 0.2900 | 1.4522 | 1.2051 |
| No log | 0.5843 | 52 | 1.4101 | 0.2829 | 1.4101 | 1.1875 |
| No log | 0.6067 | 54 | 1.1993 | 0.3822 | 1.1993 | 1.0951 |
| No log | 0.6292 | 56 | 1.1255 | 0.3907 | 1.1255 | 1.0609 |
| No log | 0.6517 | 58 | 1.2358 | 0.3320 | 1.2358 | 1.1117 |
| No log | 0.6742 | 60 | 1.0965 | 0.3682 | 1.0965 | 1.0471 |
| No log | 0.6966 | 62 | 1.0196 | 0.4085 | 1.0196 | 1.0098 |
| No log | 0.7191 | 64 | 1.0160 | 0.4355 | 1.0160 | 1.0080 |
| No log | 0.7416 | 66 | 1.2278 | 0.3927 | 1.2278 | 1.1081 |
| No log | 0.7640 | 68 | 1.5117 | 0.3259 | 1.5117 | 1.2295 |
| No log | 0.7865 | 70 | 1.2042 | 0.4104 | 1.2042 | 1.0974 |
| No log | 0.8090 | 72 | 0.8229 | 0.5612 | 0.8229 | 0.9071 |
| No log | 0.8315 | 74 | 0.8907 | 0.5534 | 0.8907 | 0.9438 |
| No log | 0.8539 | 76 | 1.0110 | 0.5070 | 1.0110 | 1.0055 |
| No log | 0.8764 | 78 | 0.8535 | 0.5094 | 0.8535 | 0.9239 |
| No log | 0.8989 | 80 | 0.8069 | 0.5262 | 0.8069 | 0.8983 |
| No log | 0.9213 | 82 | 0.9014 | 0.5367 | 0.9014 | 0.9494 |
| No log | 0.9438 | 84 | 1.0083 | 0.5750 | 1.0083 | 1.0041 |
| No log | 0.9663 | 86 | 0.8427 | 0.5189 | 0.8427 | 0.9180 |
| No log | 0.9888 | 88 | 0.9005 | 0.5495 | 0.9005 | 0.9489 |
| No log | 1.0112 | 90 | 1.2873 | 0.3101 | 1.2873 | 1.1346 |
| No log | 1.0337 | 92 | 1.3257 | 0.2992 | 1.3257 | 1.1514 |
| No log | 1.0562 | 94 | 1.0537 | 0.4688 | 1.0537 | 1.0265 |
| No log | 1.0787 | 96 | 0.8587 | 0.4858 | 0.8587 | 0.9267 |
| No log | 1.1011 | 98 | 0.9277 | 0.4781 | 0.9277 | 0.9632 |
| No log | 1.1236 | 100 | 1.0720 | 0.4287 | 1.0720 | 1.0354 |
| No log | 1.1461 | 102 | 1.2735 | 0.3631 | 1.2735 | 1.1285 |
| No log | 1.1685 | 104 | 1.2640 | 0.3885 | 1.2640 | 1.1243 |
| No log | 1.1910 | 106 | 0.9729 | 0.4605 | 0.9729 | 0.9864 |
| No log | 1.2135 | 108 | 0.8400 | 0.4879 | 0.8400 | 0.9165 |
| No log | 1.2360 | 110 | 0.8702 | 0.5147 | 0.8702 | 0.9329 |
| No log | 1.2584 | 112 | 0.8454 | 0.4931 | 0.8454 | 0.9195 |
| No log | 1.2809 | 114 | 0.7882 | 0.5102 | 0.7882 | 0.8878 |
| No log | 1.3034 | 116 | 0.7682 | 0.5605 | 0.7682 | 0.8765 |
| No log | 1.3258 | 118 | 0.8055 | 0.5727 | 0.8055 | 0.8975 |
| No log | 1.3483 | 120 | 0.7679 | 0.5878 | 0.7679 | 0.8763 |
| No log | 1.3708 | 122 | 0.7246 | 0.6033 | 0.7246 | 0.8512 |
| No log | 1.3933 | 124 | 0.7268 | 0.6194 | 0.7268 | 0.8525 |
| No log | 1.4157 | 126 | 0.7344 | 0.6470 | 0.7344 | 0.8570 |
| No log | 1.4382 | 128 | 0.8475 | 0.6522 | 0.8475 | 0.9206 |
| No log | 1.4607 | 130 | 1.2024 | 0.4727 | 1.2024 | 1.0965 |
| No log | 1.4831 | 132 | 1.5980 | 0.3658 | 1.5980 | 1.2641 |
| No log | 1.5056 | 134 | 1.4766 | 0.3935 | 1.4766 | 1.2151 |
| No log | 1.5281 | 136 | 0.9858 | 0.5680 | 0.9858 | 0.9929 |
| No log | 1.5506 | 138 | 0.8462 | 0.6557 | 0.8462 | 0.9199 |
| No log | 1.5730 | 140 | 0.8690 | 0.6261 | 0.8690 | 0.9322 |
| No log | 1.5955 | 142 | 0.8141 | 0.6387 | 0.8141 | 0.9023 |
| No log | 1.6180 | 144 | 0.9053 | 0.5977 | 0.9053 | 0.9515 |
| No log | 1.6404 | 146 | 0.8533 | 0.5823 | 0.8533 | 0.9237 |
| No log | 1.6629 | 148 | 0.8256 | 0.5647 | 0.8256 | 0.9086 |
| No log | 1.6854 | 150 | 0.8409 | 0.5031 | 0.8409 | 0.9170 |
| No log | 1.7079 | 152 | 0.8533 | 0.4821 | 0.8533 | 0.9237 |
| No log | 1.7303 | 154 | 0.8431 | 0.4906 | 0.8431 | 0.9182 |
| No log | 1.7528 | 156 | 0.8858 | 0.5084 | 0.8858 | 0.9412 |
| No log | 1.7753 | 158 | 0.8073 | 0.5399 | 0.8073 | 0.8985 |
| No log | 1.7978 | 160 | 1.0489 | 0.4989 | 1.0489 | 1.0242 |
| No log | 1.8202 | 162 | 1.3901 | 0.3364 | 1.3901 | 1.1790 |
| No log | 1.8427 | 164 | 1.3721 | 0.3703 | 1.3721 | 1.1714 |
| No log | 1.8652 | 166 | 1.1414 | 0.4796 | 1.1414 | 1.0684 |
| No log | 1.8876 | 168 | 0.9152 | 0.5870 | 0.9152 | 0.9567 |
| No log | 1.9101 | 170 | 0.8924 | 0.6081 | 0.8924 | 0.9447 |
| No log | 1.9326 | 172 | 0.8385 | 0.5899 | 0.8385 | 0.9157 |
| No log | 1.9551 | 174 | 0.8752 | 0.5727 | 0.8752 | 0.9355 |
| No log | 1.9775 | 176 | 0.9782 | 0.5825 | 0.9782 | 0.9890 |
| No log | 2.0 | 178 | 0.9559 | 0.5657 | 0.9559 | 0.9777 |
| No log | 2.0225 | 180 | 0.8592 | 0.5784 | 0.8592 | 0.9269 |
| No log | 2.0449 | 182 | 0.7705 | 0.5557 | 0.7705 | 0.8778 |
| No log | 2.0674 | 184 | 0.7767 | 0.5404 | 0.7767 | 0.8813 |
| No log | 2.0899 | 186 | 0.7665 | 0.5422 | 0.7665 | 0.8755 |
| No log | 2.1124 | 188 | 0.7725 | 0.4986 | 0.7725 | 0.8789 |
| No log | 2.1348 | 190 | 0.7750 | 0.5543 | 0.7750 | 0.8804 |
| No log | 2.1573 | 192 | 0.7500 | 0.5808 | 0.7500 | 0.8660 |
| No log | 2.1798 | 194 | 0.7213 | 0.5916 | 0.7213 | 0.8493 |
| No log | 2.2022 | 196 | 0.6966 | 0.6595 | 0.6966 | 0.8346 |
| No log | 2.2247 | 198 | 0.6915 | 0.6463 | 0.6915 | 0.8316 |
| No log | 2.2472 | 200 | 0.7664 | 0.5853 | 0.7664 | 0.8754 |
| No log | 2.2697 | 202 | 1.0551 | 0.5403 | 1.0551 | 1.0272 |
| No log | 2.2921 | 204 | 1.1703 | 0.5067 | 1.1703 | 1.0818 |
| No log | 2.3146 | 206 | 1.0006 | 0.5607 | 1.0006 | 1.0003 |
| No log | 2.3371 | 208 | 0.8106 | 0.5668 | 0.8106 | 0.9003 |
| No log | 2.3596 | 210 | 0.7490 | 0.5465 | 0.7490 | 0.8654 |
| No log | 2.3820 | 212 | 0.7093 | 0.6011 | 0.7093 | 0.8422 |
| No log | 2.4045 | 214 | 0.7180 | 0.5554 | 0.7180 | 0.8473 |
| No log | 2.4270 | 216 | 0.7108 | 0.5691 | 0.7108 | 0.8431 |
| No log | 2.4494 | 218 | 0.7327 | 0.5786 | 0.7327 | 0.8560 |
| No log | 2.4719 | 220 | 0.7899 | 0.5831 | 0.7899 | 0.8887 |
| No log | 2.4944 | 222 | 0.8784 | 0.5493 | 0.8784 | 0.9372 |
| No log | 2.5169 | 224 | 0.9595 | 0.5384 | 0.9595 | 0.9795 |
| No log | 2.5393 | 226 | 0.9017 | 0.5686 | 0.9017 | 0.9496 |
| No log | 2.5618 | 228 | 0.8584 | 0.5953 | 0.8584 | 0.9265 |
| No log | 2.5843 | 230 | 0.7980 | 0.5489 | 0.7980 | 0.8933 |
| No log | 2.6067 | 232 | 0.7930 | 0.5652 | 0.7930 | 0.8905 |
| No log | 2.6292 | 234 | 0.8572 | 0.5498 | 0.8572 | 0.9259 |
| No log | 2.6517 | 236 | 0.8020 | 0.5963 | 0.8020 | 0.8955 |
| No log | 2.6742 | 238 | 0.7674 | 0.5873 | 0.7674 | 0.8760 |
| No log | 2.6966 | 240 | 0.7637 | 0.5517 | 0.7637 | 0.8739 |
| No log | 2.7191 | 242 | 0.7864 | 0.5403 | 0.7864 | 0.8868 |
| No log | 2.7416 | 244 | 0.7808 | 0.5153 | 0.7808 | 0.8837 |
| No log | 2.7640 | 246 | 0.8835 | 0.5438 | 0.8835 | 0.9400 |
| No log | 2.7865 | 248 | 0.9303 | 0.4698 | 0.9303 | 0.9645 |
| No log | 2.8090 | 250 | 0.7935 | 0.5359 | 0.7935 | 0.8908 |
| No log | 2.8315 | 252 | 0.8713 | 0.5502 | 0.8713 | 0.9334 |
| No log | 2.8539 | 254 | 1.0216 | 0.4673 | 1.0216 | 1.0107 |
| No log | 2.8764 | 256 | 0.9167 | 0.5458 | 0.9167 | 0.9575 |
| No log | 2.8989 | 258 | 0.7786 | 0.5127 | 0.7786 | 0.8824 |
| No log | 2.9213 | 260 | 0.7655 | 0.5643 | 0.7655 | 0.8749 |
| No log | 2.9438 | 262 | 0.8082 | 0.5572 | 0.8082 | 0.8990 |
| No log | 2.9663 | 264 | 0.7725 | 0.5449 | 0.7725 | 0.8789 |
| No log | 2.9888 | 266 | 0.7155 | 0.5923 | 0.7155 | 0.8458 |
| No log | 3.0112 | 268 | 0.6841 | 0.6440 | 0.6841 | 0.8271 |
| No log | 3.0337 | 270 | 0.6775 | 0.6782 | 0.6775 | 0.8231 |
| No log | 3.0562 | 272 | 0.7005 | 0.6332 | 0.7005 | 0.8370 |
| No log | 3.0787 | 274 | 0.8135 | 0.5770 | 0.8135 | 0.9019 |
| No log | 3.1011 | 276 | 0.8019 | 0.5686 | 0.8019 | 0.8955 |
| No log | 3.1236 | 278 | 0.7191 | 0.6421 | 0.7191 | 0.8480 |
| No log | 3.1461 | 280 | 0.6994 | 0.6421 | 0.6994 | 0.8363 |
| No log | 3.1685 | 282 | 0.6988 | 0.6452 | 0.6988 | 0.8359 |
| No log | 3.1910 | 284 | 0.7317 | 0.6623 | 0.7317 | 0.8554 |
| No log | 3.2135 | 286 | 0.6933 | 0.6810 | 0.6933 | 0.8326 |
| No log | 3.2360 | 288 | 0.6962 | 0.7016 | 0.6962 | 0.8344 |
| No log | 3.2584 | 290 | 0.6850 | 0.6853 | 0.6850 | 0.8276 |
| No log | 3.2809 | 292 | 0.7586 | 0.6355 | 0.7586 | 0.8710 |
| No log | 3.3034 | 294 | 0.7531 | 0.6279 | 0.7531 | 0.8678 |
| No log | 3.3258 | 296 | 0.7788 | 0.5744 | 0.7788 | 0.8825 |
| No log | 3.3483 | 298 | 0.7446 | 0.5713 | 0.7446 | 0.8629 |
| No log | 3.3708 | 300 | 0.6838 | 0.5710 | 0.6838 | 0.8269 |
| No log | 3.3933 | 302 | 0.6555 | 0.5831 | 0.6555 | 0.8096 |
| No log | 3.4157 | 304 | 0.6819 | 0.6111 | 0.6819 | 0.8258 |
| No log | 3.4382 | 306 | 0.8535 | 0.6231 | 0.8535 | 0.9238 |
| No log | 3.4607 | 308 | 0.9778 | 0.6006 | 0.9778 | 0.9888 |
| No log | 3.4831 | 310 | 0.9217 | 0.5922 | 0.9217 | 0.9600 |
| No log | 3.5056 | 312 | 0.8273 | 0.6318 | 0.8273 | 0.9095 |
| No log | 3.5281 | 314 | 0.7526 | 0.6050 | 0.7526 | 0.8675 |
| No log | 3.5506 | 316 | 0.6659 | 0.6392 | 0.6659 | 0.8160 |
| No log | 3.5730 | 318 | 0.6526 | 0.6037 | 0.6526 | 0.8078 |
| No log | 3.5955 | 320 | 0.6638 | 0.6088 | 0.6638 | 0.8147 |
| No log | 3.6180 | 322 | 0.6987 | 0.6355 | 0.6987 | 0.8359 |
| No log | 3.6404 | 324 | 0.7095 | 0.6368 | 0.7095 | 0.8423 |
| No log | 3.6629 | 326 | 0.6736 | 0.6490 | 0.6736 | 0.8207 |
| No log | 3.6854 | 328 | 0.6945 | 0.6349 | 0.6945 | 0.8333 |
| No log | 3.7079 | 330 | 0.7335 | 0.5868 | 0.7335 | 0.8564 |
| No log | 3.7303 | 332 | 0.8492 | 0.5636 | 0.8492 | 0.9215 |
| No log | 3.7528 | 334 | 0.8293 | 0.5677 | 0.8293 | 0.9107 |
| No log | 3.7753 | 336 | 0.8062 | 0.5703 | 0.8062 | 0.8979 |
| No log | 3.7978 | 338 | 0.7689 | 0.5882 | 0.7689 | 0.8768 |
| No log | 3.8202 | 340 | 0.6837 | 0.6053 | 0.6837 | 0.8268 |
| No log | 3.8427 | 342 | 0.6303 | 0.6900 | 0.6303 | 0.7939 |
| No log | 3.8652 | 344 | 0.6423 | 0.6834 | 0.6423 | 0.8014 |
| No log | 3.8876 | 346 | 0.6179 | 0.7038 | 0.6179 | 0.7860 |
| No log | 3.9101 | 348 | 0.6212 | 0.7176 | 0.6212 | 0.7882 |
| No log | 3.9326 | 350 | 0.6200 | 0.7138 | 0.6200 | 0.7874 |
| No log | 3.9551 | 352 | 0.6354 | 0.6185 | 0.6354 | 0.7971 |
| No log | 3.9775 | 354 | 0.6434 | 0.6185 | 0.6434 | 0.8021 |
| No log | 4.0 | 356 | 0.6422 | 0.6243 | 0.6422 | 0.8014 |
| No log | 4.0225 | 358 | 0.6656 | 0.5997 | 0.6656 | 0.8158 |
| No log | 4.0449 | 360 | 0.6483 | 0.6237 | 0.6483 | 0.8052 |
| No log | 4.0674 | 362 | 0.6520 | 0.6332 | 0.6520 | 0.8075 |
| No log | 4.0899 | 364 | 0.6749 | 0.6051 | 0.6749 | 0.8215 |
| No log | 4.1124 | 366 | 0.6609 | 0.6275 | 0.6609 | 0.8129 |
| No log | 4.1348 | 368 | 0.6580 | 0.5972 | 0.6580 | 0.8111 |
| No log | 4.1573 | 370 | 0.6721 | 0.5766 | 0.6721 | 0.8198 |
| No log | 4.1798 | 372 | 0.7001 | 0.5845 | 0.7001 | 0.8367 |
| No log | 4.2022 | 374 | 0.7674 | 0.5532 | 0.7674 | 0.8760 |
| No log | 4.2247 | 376 | 0.9154 | 0.5618 | 0.9154 | 0.9568 |
| No log | 4.2472 | 378 | 0.9751 | 0.5022 | 0.9751 | 0.9875 |
| No log | 4.2697 | 380 | 0.9002 | 0.5062 | 0.9002 | 0.9488 |
| No log | 4.2921 | 382 | 0.8112 | 0.5697 | 0.8112 | 0.9007 |
| No log | 4.3146 | 384 | 0.8293 | 0.5700 | 0.8293 | 0.9106 |
| No log | 4.3371 | 386 | 0.8951 | 0.5688 | 0.8951 | 0.9461 |
| No log | 4.3596 | 388 | 1.2056 | 0.4604 | 1.2056 | 1.0980 |
| No log | 4.3820 | 390 | 1.6422 | 0.3280 | 1.6422 | 1.2815 |
| No log | 4.4045 | 392 | 1.6427 | 0.3411 | 1.6427 | 1.2817 |
| No log | 4.4270 | 394 | 1.2636 | 0.4682 | 1.2636 | 1.1241 |
| No log | 4.4494 | 396 | 0.8152 | 0.5810 | 0.8152 | 0.9029 |
| No log | 4.4719 | 398 | 0.6983 | 0.6223 | 0.6984 | 0.8357 |
| No log | 4.4944 | 400 | 0.6926 | 0.5931 | 0.6926 | 0.8323 |
| No log | 4.5169 | 402 | 0.7364 | 0.6046 | 0.7364 | 0.8581 |
| No log | 4.5393 | 404 | 0.8710 | 0.5718 | 0.8710 | 0.9333 |
| No log | 4.5618 | 406 | 1.0518 | 0.5039 | 1.0518 | 1.0256 |
| No log | 4.5843 | 408 | 1.1102 | 0.5061 | 1.1102 | 1.0537 |
| No log | 4.6067 | 410 | 0.9333 | 0.5900 | 0.9333 | 0.9661 |
| No log | 4.6292 | 412 | 0.7452 | 0.6322 | 0.7452 | 0.8633 |
| No log | 4.6517 | 414 | 0.7214 | 0.6254 | 0.7214 | 0.8494 |
| No log | 4.6742 | 416 | 0.7924 | 0.5942 | 0.7924 | 0.8901 |
| No log | 4.6966 | 418 | 0.8397 | 0.5647 | 0.8397 | 0.9164 |
| No log | 4.7191 | 420 | 0.9646 | 0.5570 | 0.9646 | 0.9821 |
| No log | 4.7416 | 422 | 0.9677 | 0.5600 | 0.9677 | 0.9837 |
| No log | 4.7640 | 424 | 0.8263 | 0.5933 | 0.8263 | 0.9090 |
| No log | 4.7865 | 426 | 0.6806 | 0.6590 | 0.6806 | 0.8250 |
| No log | 4.8090 | 428 | 0.6722 | 0.6718 | 0.6722 | 0.8199 |
| No log | 4.8315 | 430 | 0.6762 | 0.6749 | 0.6762 | 0.8223 |
| No log | 4.8539 | 432 | 0.7656 | 0.6498 | 0.7656 | 0.8750 |
| No log | 4.8764 | 434 | 0.8355 | 0.6251 | 0.8355 | 0.9141 |
| No log | 4.8989 | 436 | 0.7693 | 0.6408 | 0.7693 | 0.8771 |
| No log | 4.9213 | 438 | 0.6520 | 0.6869 | 0.6520 | 0.8075 |
| No log | 4.9438 | 440 | 0.6177 | 0.6887 | 0.6177 | 0.7859 |
| No log | 4.9663 | 442 | 0.6098 | 0.6330 | 0.6098 | 0.7809 |
| No log | 4.9888 | 444 | 0.6204 | 0.6523 | 0.6204 | 0.7877 |
| No log | 5.0112 | 446 | 0.6193 | 0.6411 | 0.6193 | 0.7869 |
| No log | 5.0337 | 448 | 0.6235 | 0.6251 | 0.6235 | 0.7896 |
| No log | 5.0562 | 450 | 0.6893 | 0.6188 | 0.6893 | 0.8302 |
| No log | 5.0787 | 452 | 0.7443 | 0.6202 | 0.7443 | 0.8627 |
| No log | 5.1011 | 454 | 0.7967 | 0.6294 | 0.7967 | 0.8926 |
| No log | 5.1236 | 456 | 0.9651 | 0.5888 | 0.9651 | 0.9824 |
| No log | 5.1461 | 458 | 1.0255 | 0.5524 | 1.0255 | 1.0127 |
| No log | 5.1685 | 460 | 0.9176 | 0.5691 | 0.9176 | 0.9579 |
| No log | 5.1910 | 462 | 0.8325 | 0.6055 | 0.8325 | 0.9124 |
| No log | 5.2135 | 464 | 0.8038 | 0.6109 | 0.8039 | 0.8966 |
| No log | 5.2360 | 466 | 0.8753 | 0.6061 | 0.8753 | 0.9356 |
| No log | 5.2584 | 468 | 0.9750 | 0.5781 | 0.9750 | 0.9874 |
| No log | 5.2809 | 470 | 1.0014 | 0.5793 | 1.0014 | 1.0007 |
| No log | 5.3034 | 472 | 1.0456 | 0.5439 | 1.0456 | 1.0226 |
| No log | 5.3258 | 474 | 0.9242 | 0.5679 | 0.9242 | 0.9613 |
| No log | 5.3483 | 476 | 0.8024 | 0.6059 | 0.8024 | 0.8958 |
| No log | 5.3708 | 478 | 0.7646 | 0.6147 | 0.7646 | 0.8744 |
| No log | 5.3933 | 480 | 0.7964 | 0.6059 | 0.7964 | 0.8924 |
| No log | 5.4157 | 482 | 0.8277 | 0.6000 | 0.8277 | 0.9098 |
| No log | 5.4382 | 484 | 0.7653 | 0.6014 | 0.7653 | 0.8748 |
| No log | 5.4607 | 486 | 0.6516 | 0.6363 | 0.6516 | 0.8072 |
| No log | 5.4831 | 488 | 0.6342 | 0.6437 | 0.6342 | 0.7964 |
| No log | 5.5056 | 490 | 0.6560 | 0.6445 | 0.6560 | 0.8099 |
| No log | 5.5281 | 492 | 0.6629 | 0.6244 | 0.6629 | 0.8142 |
| No log | 5.5506 | 494 | 0.6613 | 0.6584 | 0.6613 | 0.8132 |
| No log | 5.5730 | 496 | 0.6698 | 0.6373 | 0.6698 | 0.8184 |
| No log | 5.5955 | 498 | 0.6833 | 0.6166 | 0.6833 | 0.8266 |
| 0.4485 | 5.6180 | 500 | 0.7752 | 0.6442 | 0.7752 | 0.8805 |
| 0.4485 | 5.6404 | 502 | 0.7594 | 0.6370 | 0.7594 | 0.8715 |
| 0.4485 | 5.6629 | 504 | 0.7915 | 0.6305 | 0.7915 | 0.8897 |
| 0.4485 | 5.6854 | 506 | 0.8068 | 0.6139 | 0.8068 | 0.8982 |
| 0.4485 | 5.7079 | 508 | 0.7180 | 0.6095 | 0.7180 | 0.8474 |
| 0.4485 | 5.7303 | 510 | 0.6735 | 0.5855 | 0.6735 | 0.8207 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
Mihaiii/Cluj-Napoca-0.8 | Mihaiii | "2024-02-26T10:19:56Z" | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"base_model:Mihaiii/Cluj-Napoca-0.7",
"base_model:finetune:Mihaiii/Cluj-Napoca-0.7",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-02-26T04:22:08Z" | ---
base_model: Mihaiii/Cluj-Napoca-0.7
inference: false
license: other
license_name: yi-license
license_link: https://huggingface.co/01-ai/Yi-34B/blob/main/LICENSE
metrics:
- accuracy
---
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
The Cluj-Napoca series is mostly an experiment.
This model is trained on a private dataset.
[Details:](https://twitter.com/m_chirculescu/status/1760719837528023549?t=XK67X_iu5hkt9p430nRmkA&s=19)

# Prompt Format:
```
SYSTEM: <ANY SYSTEM CONTEXT>
USER:
ASSISTANT:
``` |
DazzlingXeno/G2-Xeno-SimPO-27b | DazzlingXeno | "2024-09-25T11:03:57Z" | 6 | 0 | null | [
"safetensors",
"gemma2",
"Story Telling",
"Creative Writing",
"GutenBerg",
"Gemma",
"Prose",
"base_model:AALF/gemma-2-27b-it-SimPO-37K",
"base_model:finetune:AALF/gemma-2-27b-it-SimPO-37K",
"region:us"
] | null | "2024-09-24T11:07:40Z" | ---
base_model:
- TheDrummer/Gemmasutra-Pro-27B-v1
- nbeerbower/gemma2-gutenberg-27B
- AALF/gemma-2-27b-it-SimPO-37K
tags:
- Story Telling
- Creative Writing
- GutenBerg
- Gemma
- Prose
---
I merged Steelskull/G2-DA-Nyxora-27b-V2 with nbeerbower/gemma2-gutenberg-27B using a 50/50 slerp merge.
Then I merged that with AALF/gemma-2-27b-it-SimPO-37K.
The model seems pretty decent at storytelling in my limited testing.
---

base_model:
- AALF/gemma-2-27b-it-SimPO-37K
- DazzlingXeno/GutenBerg_Nyxora
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [AALF/gemma-2-27b-it-SimPO-37K](https://huggingface.co/AALF/gemma-2-27b-it-SimPO-37K)
* [DazzlingXeno/GutenBerg_Nyxora](https://huggingface.co/DazzlingXeno/GutenBerg_Nyxora)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: AALF/gemma-2-27b-it-SimPO-37K
dtype: bfloat16
merge_method: slerp
parameters:
t:
- filter: self_attn
value: [0.0, 0.5, 0.3, 0.7, 1.0]
- filter: mlp
value: [1.0, 0.5, 0.7, 0.3, 0.0]
- value: 0.5
slices:
- sources:
- layer_range: [0, 46]
model: DazzlingXeno/GutenBerg_Nyxora
- layer_range: [0, 46]
model: AALF/gemma-2-27b-it-SimPO-37K
```
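A configuration like this can be applied with mergekit's CLI; a sketch, with the output directory as a placeholder:

```bash
pip install mergekit
mergekit-yaml config.yaml ./merged-model --cuda
```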
If you like it, buy me a coffee: https://ko-fi.com/dazzlingxeno |
LumixFilmz/Lumix666 | LumixFilmz | "2023-07-23T21:57:36Z" | 0 | 0 | null | [
"license:bigscience-openrail-m",
"region:us"
] | null | "2023-07-23T21:57:36Z" | ---
license: bigscience-openrail-m
---
|
RichardErkhov/Shahm_-_bart-german-8bits | RichardErkhov | "2024-05-09T20:21:17Z" | 81 | 0 | transformers | [
"transformers",
"safetensors",
"bart",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"8-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2024-05-09T20:20:55Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
bart-german - bnb 8bits
- Model creator: https://huggingface.co/Shahm/
- Original model: https://huggingface.co/Shahm/bart-german/
Original model description:
---
license: apache-2.0
tags:
- generated_from_trainer
- summarization
datasets:
- mlsum
language: de
metrics:
- rouge
model-index:
- name: mode-bart-deutsch
results:
- task:
name: Summarization
type: summarization
dataset:
name: mlsum de
type: mlsum
args: de
metrics:
- name: Rouge1
type: rouge
value: 41.698
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mode-bart-deutsch
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the mlsum de dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2152
- Rouge1: 41.698
- Rouge2: 31.3548
- Rougel: 38.2817
- Rougelsum: 39.6349
- Gen Len: 63.1723
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
srikarvar/fine_tuned_model_5 | srikarvar | "2024-08-28T10:40:37Z" | 7 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:2332",
"loss:OnlineContrastiveLoss",
"arxiv:1908.10084",
"base_model:intfloat/multilingual-e5-small",
"base_model:finetune:intfloat/multilingual-e5-small",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2024-08-28T10:40:08Z" | ---
base_model: intfloat/multilingual-e5-small
datasets: []
language: []
library_name: sentence-transformers
metrics:
- cosine_accuracy
- cosine_accuracy_threshold
- cosine_f1
- cosine_f1_threshold
- cosine_precision
- cosine_recall
- cosine_ap
- dot_accuracy
- dot_accuracy_threshold
- dot_f1
- dot_f1_threshold
- dot_precision
- dot_recall
- dot_ap
- manhattan_accuracy
- manhattan_accuracy_threshold
- manhattan_f1
- manhattan_f1_threshold
- manhattan_precision
- manhattan_recall
- manhattan_ap
- euclidean_accuracy
- euclidean_accuracy_threshold
- euclidean_f1
- euclidean_f1_threshold
- euclidean_precision
- euclidean_recall
- euclidean_ap
- max_accuracy
- max_accuracy_threshold
- max_f1
- max_f1_threshold
- max_precision
- max_recall
- max_ap
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:2332
- loss:OnlineContrastiveLoss
widget:
- source_sentence: Who discovered the structure of DNA?
sentences:
- Who discovered the structure of RNA?
- Steps to apply for a scholarship
- First human to set foot on the moon
- source_sentence: Who directed 'Schindler's List'?
sentences:
- Who directed 'The Pianist'?
- What are some high paying jobs for a fresher with an M.Tech in biotechnology?
- Where can I find gluten-free restaurants?
- source_sentence: Which is the best shares to purchase and sale daily trading?
sentences:
- In Sydney, which company would be the best to get advice for Business Sales &
Purchases?
- Steps to adjust phone settings
- Is a 3.8 GPA sufficient to get into a top school?
- source_sentence: Nd she is always sad?
sentences:
- Where to purchase organic produce
- Aerodynamically what happens when propellor rotates?
- How is unsupervised learning used for data insights?
- source_sentence: How to bake a pie?
sentences:
- What is the population of Chicago?
- Steps to bake a pie
- 'What is the distribution of traffic between Google organic search results? e.g.
#1 vs. #2 in rankings, first page vs. second page'
model-index:
- name: SentenceTransformer based on intfloat/multilingual-e5-small
results:
- task:
type: binary-classification
name: Binary Classification
dataset:
name: pair class dev
type: pair-class-dev
metrics:
- type: cosine_accuracy
value: 0.8653846153846154
name: Cosine Accuracy
- type: cosine_accuracy_threshold
value: 0.872760534286499
name: Cosine Accuracy Threshold
- type: cosine_f1
value: 0.8656716417910447
name: Cosine F1
- type: cosine_f1_threshold
value: 0.8200240135192871
name: Cosine F1 Threshold
- type: cosine_precision
value: 0.8285714285714286
name: Cosine Precision
- type: cosine_recall
value: 0.90625
name: Cosine Recall
- type: cosine_ap
value: 0.9322624848213654
name: Cosine Ap
- type: dot_accuracy
value: 0.8653846153846154
name: Dot Accuracy
- type: dot_accuracy_threshold
value: 0.872760534286499
name: Dot Accuracy Threshold
- type: dot_f1
value: 0.8656716417910447
name: Dot F1
- type: dot_f1_threshold
value: 0.8200240135192871
name: Dot F1 Threshold
- type: dot_precision
value: 0.8285714285714286
name: Dot Precision
- type: dot_recall
value: 0.90625
name: Dot Recall
- type: dot_ap
value: 0.9322624848213654
name: Dot Ap
- type: manhattan_accuracy
value: 0.8692307692307693
name: Manhattan Accuracy
- type: manhattan_accuracy_threshold
value: 9.252302169799805
name: Manhattan Accuracy Threshold
- type: manhattan_f1
value: 0.8721804511278196
name: Manhattan F1
- type: manhattan_f1_threshold
value: 9.252302169799805
name: Manhattan F1 Threshold
- type: manhattan_precision
value: 0.8405797101449275
name: Manhattan Precision
- type: manhattan_recall
value: 0.90625
name: Manhattan Recall
- type: manhattan_ap
value: 0.9322911488571455
name: Manhattan Ap
- type: euclidean_accuracy
value: 0.8653846153846154
name: Euclidean Accuracy
- type: euclidean_accuracy_threshold
value: 0.5044240355491638
name: Euclidean Accuracy Threshold
- type: euclidean_f1
value: 0.8656716417910447
name: Euclidean F1
- type: euclidean_f1_threshold
value: 0.5999571084976196
name: Euclidean F1 Threshold
- type: euclidean_precision
value: 0.8285714285714286
name: Euclidean Precision
- type: euclidean_recall
value: 0.90625
name: Euclidean Recall
- type: euclidean_ap
value: 0.9322624848213654
name: Euclidean Ap
- type: max_accuracy
value: 0.8692307692307693
name: Max Accuracy
- type: max_accuracy_threshold
value: 9.252302169799805
name: Max Accuracy Threshold
- type: max_f1
value: 0.8721804511278196
name: Max F1
- type: max_f1_threshold
value: 9.252302169799805
name: Max F1 Threshold
- type: max_precision
value: 0.8405797101449275
name: Max Precision
- type: max_recall
value: 0.90625
name: Max Recall
- type: max_ap
value: 0.9322911488571455
name: Max Ap
- task:
type: binary-classification
name: Binary Classification
dataset:
name: pair class test
type: pair-class-test
metrics:
- type: cosine_accuracy
value: 0.916
name: Cosine Accuracy
- type: cosine_accuracy_threshold
value: 0.844039261341095
name: Cosine Accuracy Threshold
- type: cosine_f1
value: 0.907488986784141
name: Cosine F1
- type: cosine_f1_threshold
value: 0.8230063319206238
name: Cosine F1 Threshold
- type: cosine_precision
value: 0.8728813559322034
name: Cosine Precision
- type: cosine_recall
value: 0.944954128440367
name: Cosine Recall
- type: cosine_ap
value: 0.96095333014952
name: Cosine Ap
- type: dot_accuracy
value: 0.916
name: Dot Accuracy
- type: dot_accuracy_threshold
value: 0.8440393209457397
name: Dot Accuracy Threshold
- type: dot_f1
value: 0.907488986784141
name: Dot F1
- type: dot_f1_threshold
value: 0.8230063319206238
name: Dot F1 Threshold
- type: dot_precision
value: 0.8728813559322034
name: Dot Precision
- type: dot_recall
value: 0.944954128440367
name: Dot Recall
- type: dot_ap
value: 0.96095333014952
name: Dot Ap
- type: manhattan_accuracy
value: 0.916
name: Manhattan Accuracy
- type: manhattan_accuracy_threshold
value: 8.581160545349121
name: Manhattan Accuracy Threshold
- type: manhattan_f1
value: 0.907488986784141
name: Manhattan F1
- type: manhattan_f1_threshold
value: 9.327116012573242
name: Manhattan F1 Threshold
- type: manhattan_precision
value: 0.8728813559322034
name: Manhattan Precision
- type: manhattan_recall
value: 0.944954128440367
name: Manhattan Recall
- type: manhattan_ap
value: 0.9612698712458685
name: Manhattan Ap
- type: euclidean_accuracy
value: 0.916
name: Euclidean Accuracy
- type: euclidean_accuracy_threshold
value: 0.5584936141967773
name: Euclidean Accuracy Threshold
- type: euclidean_f1
value: 0.907488986784141
name: Euclidean F1
- type: euclidean_f1_threshold
value: 0.594968318939209
name: Euclidean F1 Threshold
- type: euclidean_precision
value: 0.8728813559322034
name: Euclidean Precision
- type: euclidean_recall
value: 0.944954128440367
name: Euclidean Recall
- type: euclidean_ap
value: 0.96095333014952
name: Euclidean Ap
- type: max_accuracy
value: 0.916
name: Max Accuracy
- type: max_accuracy_threshold
value: 8.581160545349121
name: Max Accuracy Threshold
- type: max_f1
value: 0.907488986784141
name: Max F1
- type: max_f1_threshold
value: 9.327116012573242
name: Max F1 Threshold
- type: max_precision
value: 0.8728813559322034
name: Max Precision
- type: max_recall
value: 0.944954128440367
name: Max Recall
- type: max_ap
value: 0.9612698712458685
name: Max Ap
---
# SentenceTransformer based on intfloat/multilingual-e5-small
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [intfloat/multilingual-e5-small](https://huggingface.co/intfloat/multilingual-e5-small). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [intfloat/multilingual-e5-small](https://huggingface.co/intfloat/multilingual-e5-small) <!-- at revision fd1525a9fd15316a2d503bf26ab031a61d056e98 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 384 tokens
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("srikarvar/fine_tuned_model_5")
# Run inference
sentences = [
'How to bake a pie?',
'Steps to bake a pie',
'What is the population of Chicago?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
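Because the model was fine-tuned for duplicate/paraphrase detection, the similarity scores can be thresholded into binary labels. A minimal sketch, using the cosine F1 threshold from the `pair-class-test` evaluation below (~0.823) as an illustrative cutoff — it should be recalibrated on your own validation data:
```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("srikarvar/fine_tuned_model_5")

# Illustrative threshold, taken from pair-class-test cosine_f1_threshold (~0.823)
THRESHOLD = 0.823
emb = model.encode(["How to bake a pie?", "Steps to bake a pie"])
score = float(model.similarity(emb[0:1], emb[1:2])[0][0])
print(f"cosine={score:.3f} -> {'duplicate' if score >= THRESHOLD else 'not duplicate'}")
```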
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Binary Classification
* Dataset: `pair-class-dev`
* Evaluated with [<code>BinaryClassificationEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.BinaryClassificationEvaluator)
| Metric | Value |
|:-----------------------------|:-----------|
| cosine_accuracy | 0.8654 |
| cosine_accuracy_threshold | 0.8728 |
| cosine_f1 | 0.8657 |
| cosine_f1_threshold | 0.82 |
| cosine_precision | 0.8286 |
| cosine_recall | 0.9062 |
| cosine_ap | 0.9323 |
| dot_accuracy | 0.8654 |
| dot_accuracy_threshold | 0.8728 |
| dot_f1 | 0.8657 |
| dot_f1_threshold | 0.82 |
| dot_precision | 0.8286 |
| dot_recall | 0.9062 |
| dot_ap | 0.9323 |
| manhattan_accuracy | 0.8692 |
| manhattan_accuracy_threshold | 9.2523 |
| manhattan_f1 | 0.8722 |
| manhattan_f1_threshold | 9.2523 |
| manhattan_precision | 0.8406 |
| manhattan_recall | 0.9062 |
| manhattan_ap | 0.9323 |
| euclidean_accuracy | 0.8654 |
| euclidean_accuracy_threshold | 0.5044 |
| euclidean_f1 | 0.8657 |
| euclidean_f1_threshold | 0.6 |
| euclidean_precision | 0.8286 |
| euclidean_recall | 0.9062 |
| euclidean_ap | 0.9323 |
| max_accuracy | 0.8692 |
| max_accuracy_threshold | 9.2523 |
| max_f1 | 0.8722 |
| max_f1_threshold | 9.2523 |
| max_precision | 0.8406 |
| max_recall | 0.9062 |
| **max_ap** | **0.9323** |
#### Binary Classification
* Dataset: `pair-class-test`
* Evaluated with [<code>BinaryClassificationEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.BinaryClassificationEvaluator)
| Metric | Value |
|:-----------------------------|:-----------|
| cosine_accuracy | 0.916 |
| cosine_accuracy_threshold | 0.844 |
| cosine_f1 | 0.9075 |
| cosine_f1_threshold | 0.823 |
| cosine_precision | 0.8729 |
| cosine_recall | 0.945 |
| cosine_ap | 0.961 |
| dot_accuracy | 0.916 |
| dot_accuracy_threshold | 0.844 |
| dot_f1 | 0.9075 |
| dot_f1_threshold | 0.823 |
| dot_precision | 0.8729 |
| dot_recall | 0.945 |
| dot_ap | 0.961 |
| manhattan_accuracy | 0.916 |
| manhattan_accuracy_threshold | 8.5812 |
| manhattan_f1 | 0.9075 |
| manhattan_f1_threshold | 9.3271 |
| manhattan_precision | 0.8729 |
| manhattan_recall | 0.945 |
| manhattan_ap | 0.9613 |
| euclidean_accuracy | 0.916 |
| euclidean_accuracy_threshold | 0.5585 |
| euclidean_f1 | 0.9075 |
| euclidean_f1_threshold | 0.595 |
| euclidean_precision | 0.8729 |
| euclidean_recall | 0.945 |
| euclidean_ap | 0.961 |
| max_accuracy | 0.916 |
| max_accuracy_threshold | 8.5812 |
| max_f1 | 0.9075 |
| max_f1_threshold | 9.3271 |
| max_precision | 0.8729 |
| max_recall | 0.945 |
| **max_ap** | **0.9613** |
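As a hedged sketch, metrics like these can be recomputed with the evaluator named above on any labeled pair set; the pairs below are illustrative stand-ins for the held-out data:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import BinaryClassificationEvaluator

model = SentenceTransformer("srikarvar/fine_tuned_model_5")

# Illustrative evaluation pairs; label 1 = duplicate, 0 = not duplicate
sentences1 = ["How to cook spaghetti?", "How to create a mobile app?"]
sentences2 = ["Steps to cook spaghetti", "How to create a desktop application?"]
labels = [1, 0]

evaluator = BinaryClassificationEvaluator(sentences1, sentences2, labels, name="pair-class-test")
print(evaluator(model))  # dict of accuracy / F1 / AP per similarity function
```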
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 2,332 training samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 | label |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:------------------------------------------------|
| type | string | string | int |
| details | <ul><li>min: 6 tokens</li><li>mean: 12.96 tokens</li><li>max: 66 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 12.67 tokens</li><li>max: 55 tokens</li></ul> | <ul><li>0: ~52.80%</li><li>1: ~47.20%</li></ul> |
* Samples:
| sentence1 | sentence2 | label |
|:-----------------------------------------------------------------------|:---------------------------------------------------------|:---------------|
| <code>How to bake a chocolate cake?</code> | <code>Recipe for baking a chocolate cake</code> | <code>1</code> |
| <code>Why do girls want to be friends with the guy they reject?</code> | <code>How do guys feel after rejecting a girl?</code> | <code>0</code> |
| <code>How can I stop being afraid of working?</code> | <code>How do you stop being afraid of everything?</code> | <code>0</code> |
* Loss: [<code>OnlineContrastiveLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#onlinecontrastiveloss)
### Evaluation Dataset
#### Unnamed Dataset
* Size: 260 evaluation samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 | label |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:------------------------------------------------|
| type | string | string | int |
| details | <ul><li>min: 6 tokens</li><li>mean: 13.44 tokens</li><li>max: 39 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 12.99 tokens</li><li>max: 50 tokens</li></ul> | <ul><li>0: ~50.77%</li><li>1: ~49.23%</li></ul> |
* Samples:
| sentence1 | sentence2 | label |
|:-----------------------------------------|:--------------------------------------------------|:---------------|
| <code>How to cook spaghetti?</code> | <code>Steps to cook spaghetti</code> | <code>1</code> |
| <code>How to create a mobile app?</code> | <code>How to create a desktop application?</code> | <code>0</code> |
| <code>How can I update my resume?</code> | <code>Steps to revise and update a resume</code> | <code>1</code> |
* Loss: [<code>OnlineContrastiveLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#onlinecontrastiveloss)
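Putting the pieces together, a hedged training sketch with the loss named above and a few of the non-default hyperparameters listed below; the dataset rows are illustrative stand-ins for the unnamed training data:
```python
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import OnlineContrastiveLoss

model = SentenceTransformer("intfloat/multilingual-e5-small")

# Illustrative rows in the sentence1/sentence2/label format described above
train_dataset = Dataset.from_dict({
    "sentence1": ["How to bake a chocolate cake?", "How can I stop being afraid of working?"],
    "sentence2": ["Recipe for baking a chocolate cake", "How do you stop being afraid of everything?"],
    "label": [1, 0],
})

args = SentenceTransformerTrainingArguments(
    output_dir="out",
    per_device_train_batch_size=32,
    gradient_accumulation_steps=2,
    num_train_epochs=4,
    warmup_ratio=0.1,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=OnlineContrastiveLoss(model),  # contrastive loss over hard positives/negatives
)
trainer.train()
```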
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 32
- `gradient_accumulation_steps`: 2
- `num_train_epochs`: 4
- `warmup_ratio`: 0.1
- `load_best_model_at_end`: True
- `optim`: adamw_torch_fused
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 32
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 2
- `eval_accumulation_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 4
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | loss | pair-class-dev_max_ap | pair-class-test_max_ap |
|:-------:|:------:|:-------------:|:----------:|:---------------------:|:----------------------:|
| 0 | 0 | - | - | 0.6979 | - |
| 0.2740 | 10 | 1.9007 | - | - | - |
| 0.5479 | 20 | 1.1616 | - | - | - |
| 0.8219 | 30 | 0.9094 | - | - | - |
| 0.9863 | 36 | - | 0.7692 | 0.9117 | - |
| 1.0959 | 40 | 0.9105 | - | - | - |
| 1.3699 | 50 | 0.6629 | - | - | - |
| 1.6438 | 60 | 0.4243 | - | - | - |
| 1.9178 | 70 | 0.4729 | - | - | - |
| **2.0** | **73** | **-** | **0.7294** | **0.9306** | **-** |
| 2.1918 | 80 | 0.4897 | - | - | - |
| 2.4658 | 90 | 0.3103 | - | - | - |
| 2.7397 | 100 | 0.2316 | - | - | - |
| 2.9863 | 109 | - | 0.7807 | 0.9311 | - |
| 3.0137 | 110 | 0.3179 | - | - | - |
| 3.2877 | 120 | 0.1975 | - | - | - |
| 3.5616 | 130 | 0.1477 | - | - | - |
| 3.8356 | 140 | 0.1034 | - | - | - |
| 3.9452 | 144 | - | 0.8132 | 0.9323 | 0.9613 |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.0.1
- Transformers: 4.41.2
- PyTorch: 2.1.2+cu121
- Accelerate: 0.32.1
- Datasets: 2.19.1
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
huggingtweets/rotandgrow | huggingtweets | "2022-04-22T16:36:41Z" | 4 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2022-03-02T23:29:05Z" | ---
language: en
thumbnail: http://www.huggingtweets.com/rotandgrow/1650645334267/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1516085642773479428/dhoVWFDS_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">𐍂𐌵𐍃♱𐍅</div>
<div style="text-align: center; font-size: 14px;">@rotandgrow</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from 𐍂𐌵𐍃♱𐍅.
| Data | 𐍂𐌵𐍃♱𐍅 |
| --- | --- |
| Tweets downloaded | 3208 |
| Retweets | 327 |
| Short tweets | 714 |
| Tweets kept | 2167 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2x9a75ni/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @rotandgrow's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/17eyos6z) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/17eyos6z/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/rotandgrow')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
digiplay/fantasticmix_v30_test | digiplay | "2023-07-22T14:14:00Z" | 327 | 2 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2023-05-26T18:00:10Z" | ---
license: other
tags:
- text-to-image
- diffusers
inference: true
---
fantasticmix (Version 3)
Original model page: https://civitai.com/models/22402?modelVersionId=39880

Original Author's image link: https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/60f70a04-59e8-496d-e139-495a3495b900/width=1024/20230408_165731_713069.jpeg |
jimmylam6666/IA_14B-Q4_K_M-GGUF | jimmylam6666 | "2024-11-05T03:30:04Z" | 5 | 1 | null | [
"gguf",
"love",
"philosophy",
"literature",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"zh",
"base_model:Minami-su/IA_14B",
"base_model:quantized:Minami-su/IA_14B",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | "2024-11-05T03:29:24Z" | ---
license: other
license_name: tongyi-qianwen
license_link: https://huggingface.co/Qwen/Qwen1.5-14B-Chat/blob/main/LICENSE
language:
- en
- zh
pipeline_tag: text-generation
tags:
- love
- philosophy
- literature
- llama-cpp
- gguf-my-repo
base_model: Minami-su/IA_14B
---
# jimmylam6666/IA_14B-Q4_K_M-GGUF
This model was converted to GGUF format from [`Minami-su/IA_14B`](https://huggingface.co/Minami-su/IA_14B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Minami-su/IA_14B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo jimmylam6666/IA_14B-Q4_K_M-GGUF --hf-file ia_14b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo jimmylam6666/IA_14B-Q4_K_M-GGUF --hf-file ia_14b-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo jimmylam6666/IA_14B-Q4_K_M-GGUF --hf-file ia_14b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo jimmylam6666/IA_14B-Q4_K_M-GGUF --hf-file ia_14b-q4_k_m.gguf -c 2048
```
|
cyr19/gpt2-small-de-quatrain | cyr19 | "2024-05-25T19:47:05Z" | 155 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-05-25T19:46:47Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/Etudiant-GGUF | mradermacher | "2024-05-06T05:44:15Z" | 53 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:kemtho/Etudiant",
"base_model:quantized:kemtho/Etudiant",
"endpoints_compatible",
"region:us"
] | null | "2024-03-29T00:59:37Z" | ---
base_model: kemtho/Etudiant
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
static quants of https://huggingface.co/kemtho/Etudiant
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Etudiant-GGUF/resolve/main/Etudiant.Q2_K.gguf) | Q2_K | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Etudiant-GGUF/resolve/main/Etudiant.IQ3_XS.gguf) | IQ3_XS | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Etudiant-GGUF/resolve/main/Etudiant.Q3_K_S.gguf) | Q3_K_S | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Etudiant-GGUF/resolve/main/Etudiant.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Etudiant-GGUF/resolve/main/Etudiant.IQ3_M.gguf) | IQ3_M | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/Etudiant-GGUF/resolve/main/Etudiant.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Etudiant-GGUF/resolve/main/Etudiant.Q3_K_L.gguf) | Q3_K_L | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/Etudiant-GGUF/resolve/main/Etudiant.IQ4_XS.gguf) | IQ4_XS | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Etudiant-GGUF/resolve/main/Etudiant.Q4_0.gguf) | Q4_0 | 4.4 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Etudiant-GGUF/resolve/main/Etudiant.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Etudiant-GGUF/resolve/main/Etudiant.IQ4_NL.gguf) | IQ4_NL | 4.4 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Etudiant-GGUF/resolve/main/Etudiant.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Etudiant-GGUF/resolve/main/Etudiant.Q5_K_S.gguf) | Q5_K_S | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/Etudiant-GGUF/resolve/main/Etudiant.Q5_K_M.gguf) | Q5_K_M | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Etudiant-GGUF/resolve/main/Etudiant.Q6_K.gguf) | Q6_K | 6.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Etudiant-GGUF/resolve/main/Etudiant.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
John6666/muonxlmixx-v103spztvppdx-sdxl | John6666 | "2024-12-23T06:59:51Z" | 410 | 1 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"realistic",
"photorealistic",
"asian",
"Japanese",
"versatile",
"cosplay",
"skin detail",
"pdx",
"mfx",
"illustrious",
"en",
"base_model:Laxhar/noobai-XL-1.1",
"base_model:finetune:Laxhar/noobai-XL-1.1",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | "2024-12-19T04:30:20Z" | ---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- realistic
- photorealistic
- asian
- Japanese
- versatile
- cosplay
- skin detail
- pdx
- mfx
- illustrious
base_model: Laxhar/noobai-XL-1.1
---
Original model is [here](https://civitai.com/models/837316?modelVersionId=1178745).
This model was created by [muooon](https://civitai.com/user/muooon).
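The tags above mark this as a `StableDiffusionXLPipeline` checkpoint in diffusers format, so it can presumably be loaded like any other SDXL model. A minimal sketch — the prompt and settings are illustrative only:
```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "John6666/muonxlmixx-v103spztvppdx-sdxl", torch_dtype=torch.float16
).to("cuda")

# Illustrative prompt; step count is a common SDXL default, not from the card
image = pipe("photo of a woman in a city at night, detailed skin", num_inference_steps=28).images[0]
image.save("sample.png")
```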
|
MaiiiAdel/arabertv02_numerical | MaiiiAdel | "2025-01-08T14:19:15Z" | 715 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-12-26T17:20:48Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
RichardErkhov/qgallouedec_-_tiny-LlamaForCausalLM-3.1-gguf | RichardErkhov | "2025-02-09T21:54:50Z" | 11 | 0 | null | [
"gguf",
"region:us"
] | null | "2025-02-09T21:54:10Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
tiny-LlamaForCausalLM-3.1 - GGUF
- Model creator: https://huggingface.co/qgallouedec/
- Original model: https://huggingface.co/qgallouedec/tiny-LlamaForCausalLM-3.1/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [tiny-LlamaForCausalLM-3.1.Q2_K.gguf](https://huggingface.co/RichardErkhov/qgallouedec_-_tiny-LlamaForCausalLM-3.1-gguf/blob/main/tiny-LlamaForCausalLM-3.1.Q2_K.gguf) | Q2_K | 0.01GB |
| [tiny-LlamaForCausalLM-3.1.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/qgallouedec_-_tiny-LlamaForCausalLM-3.1-gguf/blob/main/tiny-LlamaForCausalLM-3.1.IQ3_XS.gguf) | IQ3_XS | 0.01GB |
| [tiny-LlamaForCausalLM-3.1.IQ3_S.gguf](https://huggingface.co/RichardErkhov/qgallouedec_-_tiny-LlamaForCausalLM-3.1-gguf/blob/main/tiny-LlamaForCausalLM-3.1.IQ3_S.gguf) | IQ3_S | 0.01GB |
| [tiny-LlamaForCausalLM-3.1.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/qgallouedec_-_tiny-LlamaForCausalLM-3.1-gguf/blob/main/tiny-LlamaForCausalLM-3.1.Q3_K_S.gguf) | Q3_K_S | 0.01GB |
| [tiny-LlamaForCausalLM-3.1.IQ3_M.gguf](https://huggingface.co/RichardErkhov/qgallouedec_-_tiny-LlamaForCausalLM-3.1-gguf/blob/main/tiny-LlamaForCausalLM-3.1.IQ3_M.gguf) | IQ3_M | 0.01GB |
| [tiny-LlamaForCausalLM-3.1.Q3_K.gguf](https://huggingface.co/RichardErkhov/qgallouedec_-_tiny-LlamaForCausalLM-3.1-gguf/blob/main/tiny-LlamaForCausalLM-3.1.Q3_K.gguf) | Q3_K | 0.01GB |
| [tiny-LlamaForCausalLM-3.1.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/qgallouedec_-_tiny-LlamaForCausalLM-3.1-gguf/blob/main/tiny-LlamaForCausalLM-3.1.Q3_K_M.gguf) | Q3_K_M | 0.01GB |
| [tiny-LlamaForCausalLM-3.1.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/qgallouedec_-_tiny-LlamaForCausalLM-3.1-gguf/blob/main/tiny-LlamaForCausalLM-3.1.Q3_K_L.gguf) | Q3_K_L | 0.01GB |
| [tiny-LlamaForCausalLM-3.1.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/qgallouedec_-_tiny-LlamaForCausalLM-3.1-gguf/blob/main/tiny-LlamaForCausalLM-3.1.IQ4_XS.gguf) | IQ4_XS | 0.01GB |
| [tiny-LlamaForCausalLM-3.1.Q4_0.gguf](https://huggingface.co/RichardErkhov/qgallouedec_-_tiny-LlamaForCausalLM-3.1-gguf/blob/main/tiny-LlamaForCausalLM-3.1.Q4_0.gguf) | Q4_0 | 0.01GB |
| [tiny-LlamaForCausalLM-3.1.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/qgallouedec_-_tiny-LlamaForCausalLM-3.1-gguf/blob/main/tiny-LlamaForCausalLM-3.1.IQ4_NL.gguf) | IQ4_NL | 0.01GB |
| [tiny-LlamaForCausalLM-3.1.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/qgallouedec_-_tiny-LlamaForCausalLM-3.1-gguf/blob/main/tiny-LlamaForCausalLM-3.1.Q4_K_S.gguf) | Q4_K_S | 0.01GB |
| [tiny-LlamaForCausalLM-3.1.Q4_K.gguf](https://huggingface.co/RichardErkhov/qgallouedec_-_tiny-LlamaForCausalLM-3.1-gguf/blob/main/tiny-LlamaForCausalLM-3.1.Q4_K.gguf) | Q4_K | 0.01GB |
| [tiny-LlamaForCausalLM-3.1.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/qgallouedec_-_tiny-LlamaForCausalLM-3.1-gguf/blob/main/tiny-LlamaForCausalLM-3.1.Q4_K_M.gguf) | Q4_K_M | 0.01GB |
| [tiny-LlamaForCausalLM-3.1.Q4_1.gguf](https://huggingface.co/RichardErkhov/qgallouedec_-_tiny-LlamaForCausalLM-3.1-gguf/blob/main/tiny-LlamaForCausalLM-3.1.Q4_1.gguf) | Q4_1 | 0.01GB |
| [tiny-LlamaForCausalLM-3.1.Q5_0.gguf](https://huggingface.co/RichardErkhov/qgallouedec_-_tiny-LlamaForCausalLM-3.1-gguf/blob/main/tiny-LlamaForCausalLM-3.1.Q5_0.gguf) | Q5_0 | 0.01GB |
| [tiny-LlamaForCausalLM-3.1.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/qgallouedec_-_tiny-LlamaForCausalLM-3.1-gguf/blob/main/tiny-LlamaForCausalLM-3.1.Q5_K_S.gguf) | Q5_K_S | 0.01GB |
| [tiny-LlamaForCausalLM-3.1.Q5_K.gguf](https://huggingface.co/RichardErkhov/qgallouedec_-_tiny-LlamaForCausalLM-3.1-gguf/blob/main/tiny-LlamaForCausalLM-3.1.Q5_K.gguf) | Q5_K | 0.01GB |
| [tiny-LlamaForCausalLM-3.1.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/qgallouedec_-_tiny-LlamaForCausalLM-3.1-gguf/blob/main/tiny-LlamaForCausalLM-3.1.Q5_K_M.gguf) | Q5_K_M | 0.01GB |
| [tiny-LlamaForCausalLM-3.1.Q5_1.gguf](https://huggingface.co/RichardErkhov/qgallouedec_-_tiny-LlamaForCausalLM-3.1-gguf/blob/main/tiny-LlamaForCausalLM-3.1.Q5_1.gguf) | Q5_1 | 0.01GB |
| [tiny-LlamaForCausalLM-3.1.Q6_K.gguf](https://huggingface.co/RichardErkhov/qgallouedec_-_tiny-LlamaForCausalLM-3.1-gguf/blob/main/tiny-LlamaForCausalLM-3.1.Q6_K.gguf) | Q6_K | 0.01GB |
| [tiny-LlamaForCausalLM-3.1.Q8_0.gguf](https://huggingface.co/RichardErkhov/qgallouedec_-_tiny-LlamaForCausalLM-3.1-gguf/blob/main/tiny-LlamaForCausalLM-3.1.Q8_0.gguf) | Q8_0 | 0.01GB |
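This card does not include usage instructions; as a hedged sketch, the GGUF files can typically be run with llama.cpp by pointing it at the repo and file name (the quant choice is illustrative):
```bash
# Assumes a recent llama.cpp build with --hf-repo support
llama-cli --hf-repo RichardErkhov/qgallouedec_-_tiny-LlamaForCausalLM-3.1-gguf \
  --hf-file tiny-LlamaForCausalLM-3.1.Q4_K_M.gguf \
  -p "Hello"
```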
Original model description:
---
library_name: transformers
tags:
- trl
---
# Tiny LlamaForCausalLM
This is a minimal model built for unit tests in the [TRL](https://github.com/huggingface/trl) library.
|
FounderOfHuggingface/gpt2_lora_r16_dbpedia_14_t75_e35_member_shadow55 | FounderOfHuggingface | "2023-12-07T12:11:11Z" | 0 | 0 | peft | [
"peft",
"arxiv:1910.09700",
"base_model:openai-community/gpt2",
"base_model:adapter:openai-community/gpt2",
"region:us"
] | null | "2023-12-07T10:13:30Z" | ---
library_name: peft
base_model: gpt2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.2
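Since the metadata above marks this as a PEFT LoRA adapter on the `gpt2` base model, it would typically be loaded by stacking the adapter onto the base model. A minimal sketch, assuming the adapter weights live in this repo's root; the prompt is illustrative:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach the LoRA adapter on top of it
base = AutoModelForCausalLM.from_pretrained("gpt2")
model = PeftModel.from_pretrained(
    base, "FounderOfHuggingface/gpt2_lora_r16_dbpedia_14_t75_e35_member_shadow55"
)
tokenizer = AutoTokenizer.from_pretrained("gpt2")

inputs = tokenizer("The company is", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```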
|
LeBenchmark/wav2vec-FR-1K-Male-large | LeBenchmark | "2022-12-12T13:58:39Z" | 2 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"pretraining",
"fr",
"arxiv:2204.01397",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2022-04-04T08:37:06Z" | ---
language: "fr"
thumbnail:
tags:
- wav2vec2
license: "apache-2.0"
---
# LeBenchmark: wav2vec2 large model trained on 1K hours of French *male-only* speech
LeBenchmark provides an ensemble of wav2vec2 models pretrained on different French datasets containing spontaneous, read, and broadcast speech.
For more information about our gender study of SSL models, please refer to our paper: [A Study of Gender Impact in Self-supervised Models for Speech-to-Text Systems](https://arxiv.org/abs/2204.01397)
## Model and data descriptions
We release four gender-specific models trained on 1K hours of speech.
- [wav2vec2-FR-1K-Male-large](https://huggingface.co/LeBenchmark/wav2vec-FR-1K-Male-large/)
- [wav2vec2-FR-1K-Male-base](https://huggingface.co/LeBenchmark/wav2vec-FR-1K-Male-base/)
- [wav2vec2-FR-1K-Female-large](https://huggingface.co/LeBenchmark/wav2vec-FR-1K-Female-large/)
- [wav2vec2-FR-1K-Female-base](https://huggingface.co/LeBenchmark/wav2vec-FR-1K-Female-base/)
## Intended uses & limitations
Pretrained wav2vec2 models are distributed under the Apache-2.0 license. Hence, they can be reused extensively without strict limitations. However, benchmarks and data may be linked to corpora that are not completely open-sourced.
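As a hedged example of extracting self-supervised features with 🤗 Transformers — assuming the checkpoint ships a standard wav2vec2 config and preprocessor; the audio below is a silent stand-in for real French speech:
```python
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

name = "LeBenchmark/wav2vec-FR-1K-Male-large"
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(name)  # assumes a preprocessor config is present
model = Wav2Vec2Model.from_pretrained(name)

audio = torch.zeros(16000).numpy()  # 1 s of silence at 16 kHz
inputs = feature_extractor(audio, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state
print(hidden.shape)  # (1, num_frames, hidden_size)
```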
## Referencing our gender-specific models
```
@inproceedings{boito22_interspeech,
author={Marcely Zanon Boito and Laurent Besacier and Natalia Tomashenko and Yannick Estève},
title={{A Study of Gender Impact in Self-supervised Models for Speech-to-Text Systems}},
year=2022,
booktitle={Proc. Interspeech 2022},
pages={1278--1282},
doi={10.21437/Interspeech.2022-353}
}
```
## Referencing LeBenchmark
```
@inproceedings{evain2021task,
title={Task agnostic and task specific self-supervised learning from speech with \textit{LeBenchmark}},
author={Evain, Sol{\`e}ne and Nguyen, Ha and Le, Hang and Boito, Marcely Zanon and Mdhaffar, Salima and Alisamir, Sina and Tong, Ziyi and Tomashenko, Natalia and Dinarelli, Marco and Parcollet, Titouan and others},
booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)},
year={2021}
}
``` |
jamesdborin/llava-v1.6-mistral-7b | jamesdborin | "2024-03-28T09:09:06Z" | 6 | 0 | transformers | [
"transformers",
"safetensors",
"llava",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | text-generation | "2024-03-28T09:08:43Z" | ---
inference: false
license: apache-2.0
---
<br>
<br>
# LLaVA Model Card
## Model details
**Model type:**
LLaVA is an open-source chatbot trained by fine-tuning an LLM on multimodal instruction-following data.
It is an auto-regressive language model, based on the transformer architecture.
Base LLM: [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
**Model date:**
LLaVA-v1.6-Mistral-7B was trained in December 2023.
**Paper or resources for more information:**
https://llava-vl.github.io/
## License
[mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) license.
**Where to send questions or comments about the model:**
https://github.com/haotian-liu/LLaVA/issues
## Intended use
**Primary intended uses:**
The primary use of LLaVA is research on large multimodal models and chatbots.
**Primary intended users:**
The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.
## Training dataset
- 558K filtered image-text pairs from LAION/CC/SBU, captioned by BLIP.
- 158K GPT-generated multimodal instruction-following data.
- 500K academic-task-oriented VQA data mixture.
- 50K GPT-4V data mixture.
- 40K ShareGPT data.
## Evaluation dataset
A collection of 12 benchmarks, including 5 academic VQA benchmarks and 7 recent benchmarks specifically proposed for instruction-following LMMs. |
zhaorui-nb/Meta-Llama-3.1-8B-Instruct._.lora_ft._.chinese | zhaorui-nb | "2024-09-10T18:55:57Z" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-09-10T18:32:25Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
dkurzyk/phi2_DPO | dkurzyk | "2024-02-07T15:21:45Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-02-07T15:21:27Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
hivetechVN/llama3.2-3B-Instruct | hivetechVN | "2025-02-21T20:04:38Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:meta-llama/Llama-3.2-3B-Instruct",
"base_model:finetune:meta-llama/Llama-3.2-3B-Instruct",
"endpoints_compatible",
"region:us"
] | null | "2025-02-21T08:15:24Z" | ---
base_model: meta-llama/Llama-3.2-3B-Instruct
library_name: transformers
model_name: llama3.2-3B-Instruct
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for llama3.2-3B-Instruct
This model is a fine-tuned version of [meta-llama/Llama-3.2-3B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="hivetechVN/llama3.2-3B-Instruct", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.15.1
- Transformers: 4.49.0
- Pytorch: 2.5.1+cu121
- Datasets: 3.3.1
- Tokenizers: 0.21.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
MrDdz/mytuned_test_trainer-base-cased1 | MrDdz | "2023-07-22T08:44:11Z" | 93 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlnet",
"text-classification",
"generated_from_trainer",
"base_model:xlnet/xlnet-base-cased",
"base_model:finetune:xlnet/xlnet-base-cased",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-07-22T08:37:01Z" | ---
license: mit
base_model: xlnet-base-cased
tags:
- generated_from_trainer
model-index:
- name: mytuned_test_trainer-base-cased1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mytuned_test_trainer-base-cased1
This model is a fine-tuned version of [xlnet-base-cased](https://huggingface.co/xlnet-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.7836
- eval_rmse: 0.7145
- eval_runtime: 5.7466
- eval_samples_per_second: 348.03
- eval_steps_per_second: 43.504
- epoch: 3.27
- step: 1633
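The `eval_rmse` above suggests the classifier head is scored as a regressor. A minimal sketch of such a metric, assuming a single-logit regression head (an assumption; the card does not specify the head), could look like:
```python
import numpy as np

def compute_metrics(eval_pred):
    """Hypothetical compute_metrics; assumes a single-logit regression head."""
    predictions, labels = eval_pred
    predictions = predictions.squeeze(-1)  # (batch, 1) -> (batch,)
    rmse = np.sqrt(np.mean((predictions - labels) ** 2))
    return {"rmse": rmse}
```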
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
martinfabbri/llama3_text_to_cypher | martinfabbri | "2024-06-27T08:22:38Z" | 19 | 1 | transformers | [
"transformers",
"safetensors",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:quantized:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-27T08:04:43Z" | ---
base_model: unsloth/llama-3-8b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** martinfabbri
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
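A minimal inference sketch with Unsloth's `FastLanguageModel`; the sequence length and the example prompt are assumptions, not part of the release:
```python
from unsloth import FastLanguageModel

# Load the fine-tuned weights in 4-bit; max_seq_length here is an assumption.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="martinfabbri/llama3_text_to_cypher",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's faster inference path

prompt = "Convert to Cypher: list all movies released after 2010."  # illustrative prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```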
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
varun-v-rao/roberta-large-bn-adapter-3.17M-squad-model3 | varun-v-rao | "2024-02-09T07:53:20Z" | 0 | 0 | null | [
"tensorboard",
"generated_from_trainer",
"dataset:varun-v-rao/squad",
"base_model:FacebookAI/roberta-large",
"base_model:finetune:FacebookAI/roberta-large",
"license:mit",
"region:us"
] | null | "2024-02-09T05:20:31Z" | ---
license: mit
base_model: roberta-large
tags:
- generated_from_trainer
datasets:
- varun-v-rao/squad
model-index:
- name: roberta-large-bn-adapter-3.17M-squad-model3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-large-bn-adapter-3.17M-squad-model3
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 4
- seed: 46
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
T-Systems-onsite/cross-de-it-roberta-sentence-transformer | T-Systems-onsite | "2023-09-01T11:32:33Z" | 20 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"xlm-roberta",
"feature-extraction",
"sentence_embedding",
"de",
"it",
"license:mit",
"endpoints_compatible",
"region:us"
] | feature-extraction | "2022-03-02T23:29:05Z" | ---
language:
- de
- it
license: mit
tags:
- sentence_embedding
--- |
orepin/Reinforce-Pixelcopter-15000iter | orepin | "2023-05-26T19:03:26Z" | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | "2023-05-26T19:03:20Z" | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-15000iter
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 39.10 +/- 34.44
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn how to use this model and train your own, check out Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
jssky/a7226fd6-b015-4ca3-ae73-16ec25a19842 | jssky | "2025-02-05T06:26:50Z" | 7 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2-0.5B-Instruct",
"base_model:adapter:unsloth/Qwen2-0.5B-Instruct",
"license:apache-2.0",
"region:us"
] | null | "2025-02-05T06:15:41Z" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2-0.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: a7226fd6-b015-4ca3-ae73-16ec25a19842
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.6.0`
```yaml
adapter: lora
base_model: unsloth/Qwen2-0.5B-Instruct
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- aa33efdcea3f7395_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/aa33efdcea3f7395_train_data.json
type:
field_instruction: question
field_output: answer
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: jssky/a7226fd6-b015-4ca3-ae73-16ec25a19842
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 4
mlflow_experiment_name: /tmp/aa33efdcea3f7395_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 694298ba-c6cc-4345-b0b4-84c5983d0048
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 694298ba-c6cc-4345-b0b4-84c5983d0048
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# a7226fd6-b015-4ca3-ae73-16ec25a19842
This model is a fine-tuned version of [unsloth/Qwen2-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2-0.5B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0174
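Since this repository stores a LoRA adapter (see `adapter: lora` in the config above), a minimal loading sketch with PEFT might look like the following; the `merge_and_unload()` step is optional:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("unsloth/Qwen2-0.5B-Instruct")
tokenizer = AutoTokenizer.from_pretrained("unsloth/Qwen2-0.5B-Instruct")

# Attach the LoRA adapter stored in this repository.
model = PeftModel.from_pretrained(base, "jssky/a7226fd6-b015-4ca3-ae73-16ec25a19842")
model = model.merge_and_unload()  # optional: fold adapter weights into the base model
```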
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Use adamw_bnb_8bit with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.1386 | 0.0047 | 50 | 2.0889 |
| 2.0325 | 0.0095 | 100 | 2.0676 |
| 2.0298 | 0.0142 | 150 | 2.0259 |
| 2.1509 | 0.0190 | 200 | 2.0174 |
### Framework versions
- PEFT 0.14.0
- Transformers 4.46.3
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3 |
cr0afm/autotrain-po8kz-28ik9 | cr0afm | "2024-04-24T22:05:42Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"autotrain",
"text-generation-inference",
"text-generation",
"peft",
"conversational",
"license:other",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-04-24T21:36:48Z" | ---
tags:
- autotrain
- text-generation-inference
- text-generation
- peft
library_name: transformers
widget:
- messages:
- role: user
content: What is your favorite condiment?
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
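# For this repository: model_path = "cr0afm/autotrain-po8kz-28ik9"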
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
``` |
bustamiyusoef/_Arabic_nougat_AHR | bustamiyusoef | "2025-01-15T12:41:46Z" | 14 | 0 | transformers | [
"transformers",
"safetensors",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"base_model:MohamedRashad/arabic-base-nougat",
"base_model:finetune:MohamedRashad/arabic-base-nougat",
"license:gpl-3.0",
"endpoints_compatible",
"region:us"
] | image-text-to-text | "2025-01-15T12:40:21Z" | ---
library_name: transformers
license: gpl-3.0
base_model: MohamedRashad/arabic-base-nougat
tags:
- generated_from_trainer
model-index:
- name: _Arabic_nougat_AHR
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# _Arabic_nougat_AHR
This model is a fine-tuned version of [MohamedRashad/arabic-base-nougat](https://huggingface.co/MohamedRashad/arabic-base-nougat) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3533
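For inference, a minimal sketch assuming the repository bundles a standard Nougat-style processor (an assumption; the card does not confirm it):
```python
from transformers import NougatProcessor, VisionEncoderDecoderModel
from PIL import Image

repo = "bustamiyusoef/_Arabic_nougat_AHR"
processor = NougatProcessor.from_pretrained(repo)  # assumes a Nougat-style processor is bundled
model = VisionEncoderDecoderModel.from_pretrained(repo)

image = Image.open("page.png").convert("RGB")  # hypothetical input scan
pixel_values = processor(images=image, return_tensors="pt").pixel_values
outputs = model.generate(pixel_values, max_new_tokens=512)
print(processor.batch_decode(outputs, skip_special_tokens=True)[0])
```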
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 6
- total_train_batch_size: 48
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-------:|:----:|:---------------:|
| 2.3594 | 1.0 | 77 | 0.3758 |
| 1.6401 | 2.0 | 154 | 0.3143 |
| 1.4332 | 3.0 | 231 | 0.2986 |
| 0.9917 | 4.0 | 308 | 0.2993 |
| 0.8202 | 5.0 | 385 | 0.3082 |
| 0.669 | 6.0 | 462 | 0.3103 |
| 0.5788 | 7.0 | 539 | 0.3233 |
| 0.4873 | 8.0 | 616 | 0.3336 |
| 0.4865 | 9.0 | 693 | 0.3366 |
| 0.3386 | 10.0 | 770 | 0.3503 |
| 0.3643 | 11.0 | 847 | 0.3476 |
| 0.3229 | 12.0 | 924 | 0.3546 |
| 0.3406 | 13.0 | 1001 | 0.3536 |
| 0.331 | 14.0 | 1078 | 0.3534 |
| 0.3016 | 14.8140 | 1140 | 0.3533 |
### Framework versions
- Transformers 4.47.1
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
|
ShakhzoDavronov/whisper-stt-lora-uz | ShakhzoDavronov | "2025-03-03T10:01:09Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2025-03-02T14:10:37Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
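In the absence of official instructions, a hedged sketch: assuming this repository holds a LoRA adapter for a Whisper checkpoint (the base size below is a guess, as is the input file), loading could look like:
```python
import torch
import soundfile as sf
from transformers import WhisperForConditionalGeneration, WhisperProcessor
from peft import PeftModel

# The base checkpoint is an assumption -- the card does not state which Whisper size was used.
base_id = "openai/whisper-small"
processor = WhisperProcessor.from_pretrained(base_id)
base = WhisperForConditionalGeneration.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, "ShakhzoDavronov/whisper-stt-lora-uz")

audio, sr = sf.read("sample_uz.wav")  # hypothetical 16 kHz mono recording
inputs = processor(audio, sampling_rate=sr, return_tensors="pt")
with torch.no_grad():
    ids = model.generate(input_features=inputs.input_features)
print(processor.batch_decode(ids, skip_special_tokens=True)[0])
```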
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Teapack1/model_KWS | Teapack1 | "2023-07-26T19:55:00Z" | 161 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"dataset:audiofolder",
"base_model:facebook/wav2vec2-base",
"base_model:finetune:facebook/wav2vec2-base",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | audio-classification | "2023-07-26T19:24:17Z" | ---
license: apache-2.0
base_model: facebook/wav2vec2-base
tags:
- generated_from_trainer
datasets:
- audiofolder
metrics:
- accuracy
model-index:
- name: model_KWS
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: audiofolder
type: audiofolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9825
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model_KWS
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the audiofolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3346
- Accuracy: 0.9825
## Model description
Fine-tuned on the custom voice commands "ambient", "light", "off", "on", "scene1", "scene2", "scene3", and "void".
## Intended uses & limitations
Intended for keyword spotting applications.
## Training and evaluation data
3200 training samples and 800 test samples in total.
Originally, 20 samples were recorded for each class.
Each sample was then augmented with a randomly chosen method: pitch-shifting, time-stretching, volume change, or Gaussian noise (a sketch of this step is shown below).
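A minimal sketch of what one such augmentation pass could look like; the exact parameter ranges are assumptions, since the card does not state them:
```python
import random
import numpy as np
import librosa

def augment(y, sr=16_000):
    """Apply one randomly chosen augmentation; parameter ranges are assumptions."""
    choice = random.choice(["pitch", "stretch", "volume", "noise"])
    if choice == "pitch":
        return librosa.effects.pitch_shift(y, sr=sr, n_steps=random.uniform(-2, 2))
    if choice == "stretch":
        return librosa.effects.time_stretch(y, rate=random.uniform(0.8, 1.2))
    if choice == "volume":
        return y * random.uniform(0.5, 1.5)
    return y + np.random.normal(0, 0.005, size=y.shape)  # additive Gaussian noise
```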
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.0119 | 1.0 | 25 | 1.9832 | 0.375 |
| 1.4505 | 2.0 | 50 | 1.3361 | 0.8337 |
| 1.0767 | 3.0 | 75 | 0.8700 | 0.955 |
| 0.7448 | 4.0 | 100 | 0.6919 | 0.9513 |
| 0.6143 | 5.0 | 125 | 0.5333 | 0.9625 |
| 0.4924 | 6.0 | 150 | 0.4387 | 0.98 |
| 0.4544 | 7.0 | 175 | 0.3844 | 0.985 |
| 0.3888 | 8.0 | 200 | 0.3668 | 0.9812 |
| 0.3734 | 9.0 | 225 | 0.3436 | 0.9825 |
| 0.3522 | 10.0 | 250 | 0.3346 | 0.9825 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1
- Datasets 2.14.0
- Tokenizers 0.13.3
|
divij30/legal-llama3.2-3B-completion2 | divij30 | "2025-02-22T08:36:50Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-02-22T07:46:54Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
arliman/Sinfuldeeds.VIRAL.Leaked.VIDEO.TELEGRAM.LINKS.XXX | arliman | "2025-02-13T03:00:56Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-02-13T02:59:32Z" | |
kaushikkp/Florence-2-FT-DocVQA | kaushikkp | "2024-07-11T03:38:20Z" | 4 | 0 | transformers | [
"transformers",
"safetensors",
"florence2",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"region:us"
] | text-generation | "2024-07-10T04:00:46Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
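In the absence of official instructions, a hedged sketch following the usual Florence-2 pattern; the `<DocVQA>` task token and the example image are assumptions based on common Florence-2 DocVQA fine-tunes:
```python
from transformers import AutoProcessor, AutoModelForCausalLM
from PIL import Image

repo = "kaushikkp/Florence-2-FT-DocVQA"
processor = AutoProcessor.from_pretrained(repo, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(repo, trust_remote_code=True)

image = Image.open("invoice.png").convert("RGB")  # hypothetical document image
prompt = "<DocVQA>What is the invoice total?"     # task token + question (assumed convention)

inputs = processor(text=prompt, images=image, return_tensors="pt")
ids = model.generate(
    input_ids=inputs["input_ids"],
    pixel_values=inputs["pixel_values"],
    max_new_tokens=64,
)
print(processor.batch_decode(ids, skip_special_tokens=True)[0])
```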
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
nihal-mp/phi_qlora_2b_alz_updated | nihal-mp | "2024-07-14T11:44:52Z" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2024-07-14T11:42:19Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
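In the absence of official instructions, a hedged sketch for loading the bundled 4-bit (bitsandbytes) checkpoint; the compute dtype and the example prompt are assumptions:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

repo = "nihal-mp/phi_qlora_2b_alz_updated"
bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,  # compute dtype is an assumption
)
tokenizer = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    repo, quantization_config=bnb, device_map="auto", trust_remote_code=True
)

messages = [{"role": "user", "content": "Summarize early signs of Alzheimer's."}]  # illustrative
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
print(tokenizer.decode(model.generate(inputs, max_new_tokens=128)[0], skip_special_tokens=True))
```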
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/BBA-ECE-TRIOMPHANT-72B-i1-GGUF | mradermacher | "2025-03-01T04:02:54Z" | 502 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:chakchouk/BBA-ECE-TRIOMPHANT-Qwen2.5-72B",
"base_model:quantized:chakchouk/BBA-ECE-TRIOMPHANT-Qwen2.5-72B",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | "2025-02-26T23:37:38Z" | ---
base_model: chakchouk/BBA-ECE-TRIOMPHANT-Qwen2.5-72B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/chakchouk/BBA-ECE-TRIOMPHANT-Qwen2.5-72B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/BBA-ECE-TRIOMPHANT-72B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files (a minimal sketch of that step follows).
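Concatenation is purely byte-level, so the following sketch (equivalent to `cat part1 part2 > file` on Unix) works for the split Q5/Q6 files listed below:
```python
# Reassemble a split quant before loading it, e.g. the two Q6_K parts.
parts = [
    "BBA-ECE-TRIOMPHANT-72B.i1-Q6_K.gguf.part1of2",
    "BBA-ECE-TRIOMPHANT-72B.i1-Q6_K.gguf.part2of2",
]
with open("BBA-ECE-TRIOMPHANT-72B.i1-Q6_K.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as f:
            while chunk := f.read(1 << 20):  # copy in 1 MiB chunks
                out.write(chunk)
```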
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/BBA-ECE-TRIOMPHANT-72B-i1-GGUF/resolve/main/BBA-ECE-TRIOMPHANT-72B.i1-IQ1_S.gguf) | i1-IQ1_S | 22.8 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/BBA-ECE-TRIOMPHANT-72B-i1-GGUF/resolve/main/BBA-ECE-TRIOMPHANT-72B.i1-IQ1_M.gguf) | i1-IQ1_M | 23.8 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/BBA-ECE-TRIOMPHANT-72B-i1-GGUF/resolve/main/BBA-ECE-TRIOMPHANT-72B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 25.6 | |
| [GGUF](https://huggingface.co/mradermacher/BBA-ECE-TRIOMPHANT-72B-i1-GGUF/resolve/main/BBA-ECE-TRIOMPHANT-72B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 27.2 | |
| [GGUF](https://huggingface.co/mradermacher/BBA-ECE-TRIOMPHANT-72B-i1-GGUF/resolve/main/BBA-ECE-TRIOMPHANT-72B.i1-IQ2_S.gguf) | i1-IQ2_S | 28.0 | |
| [GGUF](https://huggingface.co/mradermacher/BBA-ECE-TRIOMPHANT-72B-i1-GGUF/resolve/main/BBA-ECE-TRIOMPHANT-72B.i1-IQ2_M.gguf) | i1-IQ2_M | 29.4 | |
| [GGUF](https://huggingface.co/mradermacher/BBA-ECE-TRIOMPHANT-72B-i1-GGUF/resolve/main/BBA-ECE-TRIOMPHANT-72B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 29.7 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/BBA-ECE-TRIOMPHANT-72B-i1-GGUF/resolve/main/BBA-ECE-TRIOMPHANT-72B.i1-Q2_K.gguf) | i1-Q2_K | 29.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/BBA-ECE-TRIOMPHANT-72B-i1-GGUF/resolve/main/BBA-ECE-TRIOMPHANT-72B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 31.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/BBA-ECE-TRIOMPHANT-72B-i1-GGUF/resolve/main/BBA-ECE-TRIOMPHANT-72B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 32.9 | |
| [GGUF](https://huggingface.co/mradermacher/BBA-ECE-TRIOMPHANT-72B-i1-GGUF/resolve/main/BBA-ECE-TRIOMPHANT-72B.i1-IQ3_S.gguf) | i1-IQ3_S | 34.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/BBA-ECE-TRIOMPHANT-72B-i1-GGUF/resolve/main/BBA-ECE-TRIOMPHANT-72B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 34.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/BBA-ECE-TRIOMPHANT-72B-i1-GGUF/resolve/main/BBA-ECE-TRIOMPHANT-72B.i1-IQ3_M.gguf) | i1-IQ3_M | 35.6 | |
| [GGUF](https://huggingface.co/mradermacher/BBA-ECE-TRIOMPHANT-72B-i1-GGUF/resolve/main/BBA-ECE-TRIOMPHANT-72B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 37.8 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/BBA-ECE-TRIOMPHANT-72B-i1-GGUF/resolve/main/BBA-ECE-TRIOMPHANT-72B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 39.6 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/BBA-ECE-TRIOMPHANT-72B-i1-GGUF/resolve/main/BBA-ECE-TRIOMPHANT-72B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 39.8 | |
| [GGUF](https://huggingface.co/mradermacher/BBA-ECE-TRIOMPHANT-72B-i1-GGUF/resolve/main/BBA-ECE-TRIOMPHANT-72B.i1-Q4_0.gguf) | i1-Q4_0 | 41.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/BBA-ECE-TRIOMPHANT-72B-i1-GGUF/resolve/main/BBA-ECE-TRIOMPHANT-72B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 44.0 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/BBA-ECE-TRIOMPHANT-72B-i1-GGUF/resolve/main/BBA-ECE-TRIOMPHANT-72B.i1-Q4_1.gguf) | i1-Q4_1 | 45.8 | |
| [GGUF](https://huggingface.co/mradermacher/BBA-ECE-TRIOMPHANT-72B-i1-GGUF/resolve/main/BBA-ECE-TRIOMPHANT-72B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 47.5 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/BBA-ECE-TRIOMPHANT-72B-i1-GGUF/resolve/main/BBA-ECE-TRIOMPHANT-72B.i1-Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/BBA-ECE-TRIOMPHANT-72B-i1-GGUF/resolve/main/BBA-ECE-TRIOMPHANT-72B.i1-Q5_K_S.gguf.part2of2) | i1-Q5_K_S | 51.5 | |
| [PART 1](https://huggingface.co/mradermacher/BBA-ECE-TRIOMPHANT-72B-i1-GGUF/resolve/main/BBA-ECE-TRIOMPHANT-72B.i1-Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/BBA-ECE-TRIOMPHANT-72B-i1-GGUF/resolve/main/BBA-ECE-TRIOMPHANT-72B.i1-Q5_K_M.gguf.part2of2) | i1-Q5_K_M | 54.5 | |
| [PART 1](https://huggingface.co/mradermacher/BBA-ECE-TRIOMPHANT-72B-i1-GGUF/resolve/main/BBA-ECE-TRIOMPHANT-72B.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/BBA-ECE-TRIOMPHANT-72B-i1-GGUF/resolve/main/BBA-ECE-TRIOMPHANT-72B.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 64.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
brew35/97af5816-1ae5-44de-aded-21f1217d0255 | brew35 | "2025-02-04T23:08:32Z" | 7 | 0 | peft | [
"peft",
"safetensors",
"gpt_neox",
"axolotl",
"generated_from_trainer",
"base_model:EleutherAI/pythia-14m",
"base_model:adapter:EleutherAI/pythia-14m",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-02-04T23:06:17Z" | ---
library_name: peft
base_model: EleutherAI/pythia-14m
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 97af5816-1ae5-44de-aded-21f1217d0255
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: EleutherAI/pythia-14m
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- cef775b9f5c93601_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/cef775b9f5c93601_train_data.json
type:
field_input: context
field_instruction: question
field_output: answer
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: null
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: brew35/97af5816-1ae5-44de-aded-21f1217d0255
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 0.0001
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 4
mlflow_experiment_name: /tmp/cef775b9f5c93601_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: null
saves_per_epoch: null
sequence_len: 1024
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 632cc85b-e76a-4cdd-8ff2-f1d7b16fcc2c
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 632cc85b-e76a-4cdd-8ff2-f1d7b16fcc2c
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 97af5816-1ae5-44de-aded-21f1217d0255
This model is a fine-tuned version of [EleutherAI/pythia-14m](https://huggingface.co/EleutherAI/pythia-14m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 5.3967
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 20.6159 | 0.0429 | 200 | 5.3967 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
lesso11/b05277eb-bc4c-4a28-9742-b80a8544c75f | lesso11 | "2025-01-09T03:17:56Z" | 9 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:huggyllama/llama-7b",
"base_model:adapter:huggyllama/llama-7b",
"license:other",
"region:us"
] | null | "2025-01-09T03:09:01Z" | ---
library_name: peft
license: other
base_model: huggyllama/llama-7b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: b05277eb-bc4c-4a28-9742-b80a8544c75f
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: huggyllama/llama-7b
bf16: true
chat_template: llama3
datasets:
- data_files:
- 7a0dcf2cd449adfe_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/7a0dcf2cd449adfe_train_data.json
type:
field_input: context
field_instruction: question
field_output: answer
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: true
group_by_length: false
hub_model_id: lesso11/b05277eb-bc4c-4a28-9742-b80a8544c75f
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 77GiB
max_steps: 50
micro_batch_size: 8
mlflow_experiment_name: /tmp/7a0dcf2cd449adfe_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 25
save_strategy: steps
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: b70af139-fd5f-4c53-91c7-aafc7cf0c6d0
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: b70af139-fd5f-4c53-91c7-aafc7cf0c6d0
warmup_steps: 10
weight_decay: 0.01
xformers_attention: false
```
</details><br>
# b05277eb-bc4c-4a28-9742-b80a8544c75f
This model is a fine-tuned version of [huggyllama/llama-7b](https://huggingface.co/huggyllama/llama-7b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0061 | 1 | nan |
| 0.0 | 0.0306 | 5 | nan |
| 0.0 | 0.0612 | 10 | nan |
| 0.0 | 0.0917 | 15 | nan |
| 0.0 | 0.1223 | 20 | nan |
| 0.0 | 0.1529 | 25 | nan |
| 0.0 | 0.1835 | 30 | nan |
| 0.0 | 0.2141 | 35 | nan |
| 0.0 | 0.2446 | 40 | nan |
| 0.0 | 0.2752 | 45 | nan |
| 0.0 | 0.3058 | 50 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Sayan01/Llama-1B-KD | Sayan01 | "2024-12-08T20:56:02Z" | 129 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-12-08T20:54:31Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
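In the absence of author-provided code, here is a minimal sketch assuming a standard `transformers` causal-LM checkpoint (which the repo's `llama`/`text-generation` tags suggest); the prompt is illustrative:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Sayan01/Llama-1B-KD"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)

inputs = tokenizer("The capital of France is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```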
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Waynehillsdev/Waynehills_summary_tensorflow | Waynehillsdev | "2022-01-07T04:39:34Z" | 5 | 0 | transformers | [
"transformers",
"tf",
"t5",
"text2text-generation",
"generated_from_keras_callback",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2022-03-02T23:29:04Z" | ---
tags:
- generated_from_keras_callback
model-index:
- name: Waynehills_summary_tensorflow
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Waynehills_summary_tensorflow
This model is a fine-tuned version of [KETI-AIR/ke-t5-base-ko](https://huggingface.co/KETI-AIR/ke-t5-base-ko) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.15.0
- TensorFlow 2.7.0
- Datasets 1.17.0
- Tokenizers 0.10.3
|
niks-salodkar/a2c-PandaReachDense-v2 | niks-salodkar | "2023-03-20T06:03:47Z" | 3 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2023-03-19T10:29:05Z" | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -0.61 +/- 0.13
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of a **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repo's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Download the checkpoint from the Hub and load it.
checkpoint = load_from_hub(
    repo_id="niks-salodkar/a2c-PandaReachDense-v2",
    filename="a2c-PandaReachDense-v2.zip",  # assumed filename
)
model = A2C.load(checkpoint)
```
|
vishnu-vs/llama-7bhf | vishnu-vs | "2023-08-14T11:03:29Z" | 6 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"facebook",
"meta",
"llama-2",
"en",
"arxiv:2307.09288",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-08-14T09:28:03Z" | ---
extra_gated_heading: Access Llama 2 on Hugging Face
extra_gated_description: >-
This is a form to enable access to Llama 2 on Hugging Face after you have been
granted access from Meta. Please visit the [Meta website](https://ai.meta.com/resources/models-and-libraries/llama-downloads) and accept our
license terms and acceptable use policy before submitting this form. Requests
will be processed in 1-2 days.
extra_gated_prompt: "**Your Hugging Face account email address MUST match the email you provide on the Meta website, or your request will not be approved.**"
extra_gated_button_content: Submit
extra_gated_fields:
I agree to share my name, email address and username with Meta and confirm that I have already been granted download access on the Meta website: checkbox
language:
- en
pipeline_tag: text-generation
inference: false
arxiv: 2307.09288
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
---
# **Llama 2**
Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 7B fine-tuned model, optimized for dialogue use cases and converted for the Hugging Face Transformers format. Links to other models can be found in the index at the bottom.
## Model Details
*Note: Use of this model is governed by the Meta license. In order to download the model weights and tokenizer, please visit the [website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and accept our License before requesting access here.*
Meta developed and publicly released the Llama 2 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama-2-Chat, are optimized for dialogue use cases. Llama-2-Chat models outperform open-source chat models on most benchmarks we tested, and in our human evaluations for helpfulness and safety, are on par with some popular closed-source models like ChatGPT and PaLM.
**Model Developers** Meta
**Variations** Llama 2 comes in a range of parameter sizes — 7B, 13B, and 70B — as well as pretrained and fine-tuned variations.
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture** Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align to human preferences for helpfulness and safety.
||Training Data|Params|Content Length|GQA|Tokens|LR|
|---|---|---|---|---|---|---|
|Llama 2|*A new mix of publicly available online data*|7B|4k|✗|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|13B|4k|✗|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|70B|4k|✔|2.0T|1.5 x 10<sup>-4</sup>|
*Llama 2 family of models.* Token counts refer to pretraining data only. All models are trained with a global batch-size of 4M tokens. Bigger models (70B) use Grouped-Query Attention (GQA) for improved inference scalability.
**Model Dates** Llama 2 was trained between January 2023 and July 2023.
**Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
**License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)
**Research Paper** ["Llama-2: Open Foundation and Fine-tuned Chat Models"](https://arxiv.org/abs/2307.09288)
## Intended Use
**Intended Use Cases** Llama 2 is intended for commercial and research use in English. Tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.
To get the expected features and performance for the chat versions, a specific formatting needs to be followed, including the `INST` and `<<SYS>>` tags, `BOS` and `EOS` tokens, and the whitespaces and breaklines in between (we recommend calling `strip()` on inputs to avoid double-spaces). See our reference code in github for details: [`chat_completion`](https://github.com/facebookresearch/llama/blob/main/llama/generation.py#L212).
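As a sketch of the single-turn layout described above (the system and user strings are placeholders; Meta's `chat_completion` reference handles the full multi-turn case):
```python
# Sketch of the single-turn Llama-2-chat prompt layout (strings are placeholders).
system_prompt = "You are a helpful assistant."
user_message = "What is the capital of France?"

# BOS (<s>) is usually added by the tokenizer; it is shown here for clarity.
prompt = f"<s>[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\n{user_message.strip()} [/INST]"
```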
**Out-of-scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Llama 2.
## Hardware and Software
**Training Factors** We used custom training libraries, Meta's Research Super Cluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.
**Carbon Footprint** Pretraining utilized a cumulative 3.3M GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 539 tCO2eq, 100% of which were offset by Meta’s sustainability program.
||Time (GPU hours)|Power Consumption (W)|Carbon Emitted(tCO<sub>2</sub>eq)|
|---|---|---|---|
|Llama 2 7B|184320|400|31.22|
|Llama 2 13B|368640|400|62.44|
|Llama 2 70B|1720320|400|291.42|
|Total|3311616||539.00|
**CO<sub>2</sub> emissions during pretraining.** Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.
## Training Data
**Overview** Llama 2 was pretrained on 2 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over one million new human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.
**Data Freshness** The pretraining data has a cutoff of September 2022, but some tuning data is more recent, up to July 2023.
## Evaluation Results
In this section, we report the results for the Llama 1 and Llama 2 models on standard academic benchmarks. For all the evaluations, we use our internal evaluations library.
|Model|Size|Code|Commonsense Reasoning|World Knowledge|Reading Comprehension|Math|MMLU|BBH|AGI Eval|
|---|---|---|---|---|---|---|---|---|---|
|Llama 1|7B|14.1|60.8|46.2|58.5|6.95|35.1|30.3|23.9|
|Llama 1|13B|18.9|66.1|52.6|62.3|10.9|46.9|37.0|33.9|
|Llama 1|33B|26.0|70.0|58.4|67.6|21.4|57.8|39.8|41.7|
|Llama 1|65B|30.7|70.7|60.5|68.6|30.8|63.4|43.5|47.6|
|Llama 2|7B|16.8|63.9|48.9|61.3|14.6|45.3|32.6|29.3|
|Llama 2|13B|24.5|66.9|55.4|65.8|28.7|54.8|39.4|39.1|
|Llama 2|70B|**37.5**|**71.9**|**63.6**|**69.4**|**35.2**|**68.9**|**51.2**|**54.2**|
**Overall performance on grouped academic benchmarks.** *Code:* We report the average pass@1 scores of our models on HumanEval and MBPP. *Commonsense Reasoning:* We report the average of PIQA, SIQA, HellaSwag, WinoGrande, ARC easy and challenge, OpenBookQA, and CommonsenseQA. We report 7-shot results for CommonSenseQA and 0-shot results for all other benchmarks. *World Knowledge:* We evaluate the 5-shot performance on NaturalQuestions and TriviaQA and report the average. *Reading Comprehension:* For reading comprehension, we report the 0-shot average on SQuAD, QuAC, and BoolQ. *MATH:* We report the average of the GSM8K (8 shot) and MATH (4 shot) benchmarks at top 1.
|||TruthfulQA|Toxigen|
|---|---|---|---|
|Llama 1|7B|27.42|23.00|
|Llama 1|13B|41.74|23.08|
|Llama 1|33B|44.19|22.57|
|Llama 1|65B|48.71|21.77|
|Llama 2|7B|33.29|**21.25**|
|Llama 2|13B|41.86|26.10|
|Llama 2|70B|**50.18**|24.60|
**Evaluation of pretrained LLMs on automatic safety benchmarks.** For TruthfulQA, we present the percentage of generations that are both truthful and informative (the higher the better). For ToxiGen, we present the percentage of toxic generations (the smaller the better).
|||TruthfulQA|Toxigen|
|---|---|---|---|
|Llama-2-Chat|7B|57.04|**0.00**|
|Llama-2-Chat|13B|62.18|**0.00**|
|Llama-2-Chat|70B|**64.14**|0.01|
**Evaluation of fine-tuned LLMs on different safety datasets.** Same metric definitions as above.
## Ethical Considerations and Limitations
Llama 2 is a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Llama 2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-use-guide/](https://ai.meta.com/llama/responsible-use-guide)
## Reporting Issues
Please report any software “bug,” or other problems with the models through one of the following means:
- Reporting issues with the model: [github.com/facebookresearch/llama](http://github.com/facebookresearch/llama)
- Reporting problematic content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)
- Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)
## Llama Model Index
|Model|Llama2|Llama2-hf|Llama2-chat|Llama2-chat-hf|
|---|---|---|---|---|
|7B| [Link](https://huggingface.co/llamaste/Llama-2-7b) | [Link](https://huggingface.co/llamaste/Llama-2-7b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat-hf)|
|13B| [Link](https://huggingface.co/llamaste/Llama-2-13b) | [Link](https://huggingface.co/llamaste/Llama-2-13b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-13b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-13b-hf)|
|70B| [Link](https://huggingface.co/llamaste/Llama-2-70b) | [Link](https://huggingface.co/llamaste/Llama-2-70b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-70b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-70b-hf)|
|
unsloth/SmolLM-135M-Instruct | unsloth | "2024-09-21T02:13:57Z" | 6,707 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"alignment-handbook",
"trl",
"unsloth",
"conversational",
"en",
"dataset:Magpie-Align/Magpie-Pro-300K-Filtered",
"dataset:bigcode/self-oss-instruct-sc2-exec-filter-50k",
"dataset:teknium/OpenHermes-2.5",
"dataset:HuggingFaceTB/everyday-conversations-llama3.1-2k",
"base_model:HuggingFaceTB/SmolLM-135M-Instruct",
"base_model:finetune:HuggingFaceTB/SmolLM-135M-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-09-16T04:44:04Z" | ---
license: apache-2.0
base_model: HuggingFaceTB/SmolLM-135M-Instruct
tags:
- alignment-handbook
- trl
- unsloth
datasets:
- Magpie-Align/Magpie-Pro-300K-Filtered
- bigcode/self-oss-instruct-sc2-exec-filter-50k
- teknium/OpenHermes-2.5
- HuggingFaceTB/everyday-conversations-llama3.1-2k
library_name: transformers
language:
- en
---
# Finetune Llama 3.1, Gemma 2, Mistral 2-5x faster with 70% less memory via Unsloth!
We have a free Google Colab Tesla T4 notebook for Llama 3.1 (8B) that also works for SmolLM: https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/Discord%20button.png" width="200"/>](https://discord.gg/unsloth)
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
## ✨ Finetune for Free
All notebooks are **beginner friendly**! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model which can be exported to GGUF, vLLM or uploaded to Hugging Face.
| Unsloth supports | Free Notebooks | Performance | Memory use |
|-----------------|--------------------------------------------------------------------------------------------------------------------------|-------------|----------|
| **Llama-3.1 8b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less |
| **Phi-3.5 (mini)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1lN6hPQveB_mHSnTOYifygFcrO8C1bxq4?usp=sharing) | 2x faster | 50% less |
| **Gemma-2 9b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1vIrqH5uYDQwsJ4-OO3DErvuv4pBgVwk4?usp=sharing) | 2.4x faster | 58% less |
| **Mistral 7b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Dyauq4kTZoLewQ1cApceUQVNcnnNTzg_?usp=sharing) | 2.2x faster | 62% less |
| **TinyLlama** | [▶️ Start on Colab](https://colab.research.google.com/drive/1AZghoNBQaMDgWJpi4RbffGM1h6raLUj9?usp=sharing) | 3.9x faster | 74% less |
| **DPO - Zephyr** | [▶️ Start on Colab](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) | 1.9x faster | 19% less |
- This [conversational notebook](https://colab.research.google.com/drive/1Aau3lgPzeZKQ-98h69CCu1UJcvIBLmy2?usp=sharing) is useful for ShareGPT ChatML / Vicuna templates.
- This [text completion notebook](https://colab.research.google.com/drive/1ef-tab5bhkvWmBOObepl1WgJvfvSzn5Q?usp=sharing) is for raw text. This [DPO notebook](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) replicates Zephyr.
- \* Kaggle has 2x T4s, but we use 1. Due to overhead, 1x T4 is 5x faster.
# SmolLM-1.7B-Instruct
<center>
<img src="https://huggingface.co/datasets/HuggingFaceTB/images/resolve/main/banner_smol.png" alt="SmolLM" width="1100" height="600">
</center>
## Model Summary
SmolLM is a series of small language models available in three sizes: 135M, 360M, and 1.7B parameters.
These models are pre-trained on [SmolLM-Corpus](https://huggingface.co/datasets/HuggingFaceTB/smollm-corpus), a curated collection of high-quality educational and synthetic data designed for training LLMs. For further details, we refer to our [blogpost](https://huggingface.co/blog/smollm).
To build SmolLM-Instruct, we finetuned the base models on publicly available datasets.
## Changelog
|Release|Description|
|-|-|
|v0.1| Initial release of SmolLM-Instruct. We finetune on the permissive subset of the [WebInstructSub](https://huggingface.co/datasets/TIGER-Lab/WebInstructSub) dataset, combined with [StarCoder2-Self-OSS-Instruct](https://huggingface.co/datasets/bigcode/self-oss-instruct-sc2-exec-filter-50k). Then, we perform DPO (Direct Preference Optimization) for one epoch on [HelpSteer](https://huggingface.co/datasets/nvidia/HelpSteer) for the 135M and 1.7B models, and [argilla/dpo-mix-7k](https://huggingface.co/datasets/argilla/dpo-mix-7k) for the 360M model.|
|v0.2| We changed the finetuning mix to datasets more suitable for smol models. We train on a new dataset of 2k simple everyday conversations we generated by llama3.1-70B [everyday-conversations-llama3.1-2k](https://huggingface.co/datasets/HuggingFaceTB/everyday-conversations-llama3.1-2k/), [Magpie-Pro-300K-Filtered](https://huggingface.co/datasets/Magpie-Align/Magpie-Pro-300K-Filtered), [StarCoder2-Self-OSS-Instruct](https://huggingface.co/datasets/bigcode/self-oss-instruct-sc2-exec-filter-50k), and a small subset of [OpenHermes-2.5](https://huggingface.co/datasets/teknium/OpenHermes-2.5)|
v0.2 models are better at staying on topic and responding appropriately to standard prompts, such as greetings and questions about their role as AI assistants. SmolLM-360M-Instruct (v0.2) has a 63.3% win rate over SmolLM-360M-Instruct (v0.1) on AlpacaEval. You can find the details [here](https://huggingface.co/datasets/HuggingFaceTB/alpaca_eval_details/).
You can load v0.1 checkpoint by specifying `revision="v0.1"` in the transformers code:
```python
model = AutoModelForCausalLM.from_pretrained("HuggingFaceTB/SmolLM-1.7B-Instruct", revision="v0.1")
```
## Usage
### Local Applications
⚡ For local applications, you can find optimized implementations of the model in MLC, GGUF and Transformers.js formats, in addition to fast in-browser demos in this collection: https://huggingface.co/collections/HuggingFaceTB/local-smollms-66c0f3b2a15b4eed7fb198d0
We noticed that 4bit quantization degrades the quality of the 135M and 360M, so we use `q0f16` for MLC and ONNX/Transformers.js checkpoints for the WebGPU demos. We also suggest using temperature 0.2 and top-p 0.9.
### Transformers
```bash
pip install transformers
```
```python
# pip install transformers
from transformers import AutoModelForCausalLM, AutoTokenizer
checkpoint = "HuggingFaceTB/SmolLM-1.7B-Instruct"
device = "cuda" # for GPU usage or "cpu" for CPU usage
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
# for multiple GPUs install accelerate and do `model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto")`
model = AutoModelForCausalLM.from_pretrained(checkpoint).to(device)
messages = [{"role": "user", "content": "What is the capital of France."}]
input_text = tokenizer.apply_chat_template(messages, tokenize=False)
print(input_text)
inputs = tokenizer.encode(input_text, return_tensors="pt").to(device)
outputs = model.generate(inputs, max_new_tokens=50, temperature=0.2, top_p=0.9, do_sample=True)
print(tokenizer.decode(outputs[0]))
```
### Chat in TRL
You can also use the TRL CLI to chat with the model from the terminal:
```bash
pip install trl
trl chat --model_name_or_path HuggingFaceTB/SmolLM-1.7B-Instruct --device cpu
```
## Limitations
SmolLM models can handle general knowledge questions, creative writing, and basic Python programming, but they are English-only and may have difficulty with arithmetic, editing tasks, and complex reasoning. Additionally, the generated content may not always be factually accurate, logically consistent, or free from biases present in the training data, so we invite users to leverage these models as assistive tools rather than definitive sources of information. For more details about the models' capabilities, please refer to our [blog post](https://huggingface.co/blog/smollm).
## Training parameters
We train the models using the [alignment-handbook](https://github.com/huggingface/alignment-handbook) with the datasets mentioned in the changelog, using the following parameters for v0.2 (most of them come from the Zephyr Gemma recipe):
- 1 epoch
- lr 1e-3
- cosine schedule
- warmup ratio 0.1
- global batch size 262k tokens
You can find the training recipe here: https://github.com/huggingface/alignment-handbook/tree/smollm/recipes/smollm
# Citation
```bibtex
@misc{allal2024SmolLM,
title={SmolLM - blazingly fast and remarkably powerful},
author={Loubna Ben Allal and Anton Lozhkov and Elie Bakouch and Leandro von Werra and Thomas Wolf},
year={2024},
}
``` |
NovaBlend/omega_2_17 | NovaBlend | "2025-01-11T18:44:01Z" | 12 | 0 | null | [
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | "2025-01-11T18:35:42Z" | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
OwOOwO/eacc_contTrain_m2_25 | OwOOwO | "2024-03-08T00:16:21Z" | 89 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-03-08T00:13:48Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
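In the absence of author-provided code, here is a minimal sketch for a Gemma-architecture chat checkpoint (which the repo's `gemma`/`conversational` tags suggest); everything beyond the repo id is an assumption:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "OwOOwO/eacc_contTrain_m2_25"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

messages = [{"role": "user", "content": "Hello!"}]
inputs = tokenizer.apply_chat_template(messages, return_tensors="pt", add_generation_prompt=True)
outputs = model.generate(inputs.to(model.device), max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```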
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
asun17904/multiberts-seed_1_winobias_classifieronly | asun17904 | "2023-03-24T16:00:02Z" | 6 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-03-24T03:11:15Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: multiberts-seed_1_winobias_classifieronly
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# multiberts-seed_1_winobias_classifieronly
This model is a fine-tuned version of [google/multiberts-seed_1](https://huggingface.co/google/multiberts-seed_1) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6936
- Accuracy: 0.5114
- Tp: 0.2734
- Tn: 0.2380
- Fp: 0.2620
- Fn: 0.2266

(Tp/Tn/Fp/Fn denote the true-positive, true-negative, false-positive, and false-negative fractions of the evaluation set; they sum to 1, and accuracy = Tp + Tn.)
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Tp | Tn | Fp | Fn |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:------:|:------:|:------:|
| 0.7029 | 0.8 | 20 | 0.6948 | 0.5019 | 0.1951 | 0.3068 | 0.1932 | 0.3049 |
| 0.6937 | 1.6 | 40 | 0.6952 | 0.4931 | 0.3390 | 0.1540 | 0.3460 | 0.1610 |
| 0.6974 | 2.4 | 60 | 0.6954 | 0.4937 | 0.3567 | 0.1370 | 0.3630 | 0.1433 |
| 0.7041 | 3.2 | 80 | 0.6946 | 0.5051 | 0.2191 | 0.2860 | 0.2140 | 0.2809 |
| 0.6975 | 4.0 | 100 | 0.6947 | 0.5013 | 0.1799 | 0.3213 | 0.1787 | 0.3201 |
| 0.6996 | 4.8 | 120 | 0.6948 | 0.5025 | 0.1521 | 0.3504 | 0.1496 | 0.3479 |
| 0.7008 | 5.6 | 140 | 0.6944 | 0.4975 | 0.2841 | 0.2134 | 0.2866 | 0.2159 |
| 0.7004 | 6.4 | 160 | 0.6943 | 0.4968 | 0.1850 | 0.3119 | 0.1881 | 0.3150 |
| 0.6913 | 7.2 | 180 | 0.6944 | 0.4924 | 0.1553 | 0.3371 | 0.1629 | 0.3447 |
| 0.703 | 8.0 | 200 | 0.6941 | 0.5025 | 0.2784 | 0.2241 | 0.2759 | 0.2216 |
| 0.6975 | 8.8 | 220 | 0.6941 | 0.4987 | 0.2197 | 0.2790 | 0.2210 | 0.2803 |
| 0.6964 | 9.6 | 240 | 0.6942 | 0.4949 | 0.2058 | 0.2891 | 0.2109 | 0.2942 |
| 0.692 | 10.4 | 260 | 0.6943 | 0.4949 | 0.3037 | 0.1913 | 0.3087 | 0.1963 |
| 0.6939 | 11.2 | 280 | 0.6943 | 0.4987 | 0.1900 | 0.3087 | 0.1913 | 0.3100 |
| 0.7043 | 12.0 | 300 | 0.6942 | 0.5044 | 0.2551 | 0.2494 | 0.2506 | 0.2449 |
| 0.7036 | 12.8 | 320 | 0.6942 | 0.4912 | 0.2102 | 0.2809 | 0.2191 | 0.2898 |
| 0.697 | 13.6 | 340 | 0.6943 | 0.4975 | 0.1604 | 0.3371 | 0.1629 | 0.3396 |
| 0.7028 | 14.4 | 360 | 0.6950 | 0.5032 | 0.3939 | 0.1092 | 0.3908 | 0.1061 |
| 0.7012 | 15.2 | 380 | 0.6940 | 0.4962 | 0.2045 | 0.2917 | 0.2083 | 0.2955 |
| 0.6976 | 16.0 | 400 | 0.6940 | 0.4968 | 0.2102 | 0.2866 | 0.2134 | 0.2898 |
| 0.695 | 16.8 | 420 | 0.6944 | 0.5095 | 0.1452 | 0.3643 | 0.1357 | 0.3548 |
| 0.6985 | 17.6 | 440 | 0.6939 | 0.5013 | 0.2210 | 0.2803 | 0.2197 | 0.2790 |
| 0.6946 | 18.4 | 460 | 0.6939 | 0.5032 | 0.2765 | 0.2266 | 0.2734 | 0.2235 |
| 0.6975 | 19.2 | 480 | 0.6940 | 0.4962 | 0.1749 | 0.3213 | 0.1787 | 0.3251 |
| 0.6958 | 20.0 | 500 | 0.6939 | 0.4905 | 0.2058 | 0.2847 | 0.2153 | 0.2942 |
| 0.6947 | 20.8 | 520 | 0.6938 | 0.5057 | 0.2771 | 0.2285 | 0.2715 | 0.2229 |
| 0.7044 | 21.6 | 540 | 0.6940 | 0.5019 | 0.2986 | 0.2033 | 0.2967 | 0.2014 |
| 0.698 | 22.4 | 560 | 0.6941 | 0.4918 | 0.3201 | 0.1717 | 0.3283 | 0.1799 |
| 0.7016 | 23.2 | 580 | 0.6939 | 0.5076 | 0.2771 | 0.2304 | 0.2696 | 0.2229 |
| 0.7029 | 24.0 | 600 | 0.6939 | 0.5063 | 0.2765 | 0.2298 | 0.2702 | 0.2235 |
| 0.6975 | 24.8 | 620 | 0.6938 | 0.5025 | 0.2904 | 0.2121 | 0.2879 | 0.2096 |
| 0.6966 | 25.6 | 640 | 0.6940 | 0.5032 | 0.1660 | 0.3371 | 0.1629 | 0.3340 |
| 0.6974 | 26.4 | 660 | 0.6938 | 0.4994 | 0.1926 | 0.3068 | 0.1932 | 0.3074 |
| 0.6998 | 27.2 | 680 | 0.6938 | 0.5013 | 0.2229 | 0.2784 | 0.2216 | 0.2771 |
| 0.6899 | 28.0 | 700 | 0.6937 | 0.5082 | 0.25 | 0.2582 | 0.2418 | 0.25 |
| 0.6954 | 28.8 | 720 | 0.6937 | 0.4968 | 0.2109 | 0.2860 | 0.2140 | 0.2891 |
| 0.6926 | 29.6 | 740 | 0.6941 | 0.4899 | 0.3479 | 0.1420 | 0.3580 | 0.1521 |
| 0.6936 | 30.4 | 760 | 0.6938 | 0.5006 | 0.2822 | 0.2184 | 0.2816 | 0.2178 |
| 0.6911 | 31.2 | 780 | 0.6937 | 0.5057 | 0.2519 | 0.2538 | 0.2462 | 0.2481 |
| 0.69 | 32.0 | 800 | 0.6938 | 0.5038 | 0.2904 | 0.2134 | 0.2866 | 0.2096 |
| 0.6953 | 32.8 | 820 | 0.6937 | 0.5051 | 0.2765 | 0.2285 | 0.2715 | 0.2235 |
| 0.6971 | 33.6 | 840 | 0.6937 | 0.4956 | 0.2020 | 0.2936 | 0.2064 | 0.2980 |
| 0.6983 | 34.4 | 860 | 0.6937 | 0.5025 | 0.2727 | 0.2298 | 0.2702 | 0.2273 |
| 0.698 | 35.2 | 880 | 0.6938 | 0.4987 | 0.3024 | 0.1963 | 0.3037 | 0.1976 |
| 0.6949 | 36.0 | 900 | 0.6938 | 0.5032 | 0.3081 | 0.1951 | 0.3049 | 0.1919 |
| 0.6969 | 36.8 | 920 | 0.6937 | 0.5082 | 0.2885 | 0.2197 | 0.2803 | 0.2115 |
| 0.6978 | 37.6 | 940 | 0.6937 | 0.5088 | 0.3087 | 0.2001 | 0.2999 | 0.1913 |
| 0.6965 | 38.4 | 960 | 0.6936 | 0.5088 | 0.2588 | 0.25 | 0.25 | 0.2412 |
| 0.6929 | 39.2 | 980 | 0.6936 | 0.5101 | 0.2620 | 0.2481 | 0.2519 | 0.2380 |
| 0.6967 | 40.0 | 1000 | 0.6936 | 0.5101 | 0.2702 | 0.2399 | 0.2601 | 0.2298 |
| 0.6971 | 40.8 | 1020 | 0.6936 | 0.5069 | 0.2431 | 0.2639 | 0.2361 | 0.2569 |
| 0.6976 | 41.6 | 1040 | 0.6936 | 0.5063 | 0.2418 | 0.2645 | 0.2355 | 0.2582 |
| 0.6989 | 42.4 | 1060 | 0.6936 | 0.5038 | 0.2304 | 0.2734 | 0.2266 | 0.2696 |
| 0.6995 | 43.2 | 1080 | 0.6936 | 0.5019 | 0.2254 | 0.2765 | 0.2235 | 0.2746 |
| 0.6981 | 44.0 | 1100 | 0.6936 | 0.5069 | 0.2386 | 0.2683 | 0.2317 | 0.2614 |
| 0.6914 | 44.8 | 1120 | 0.6936 | 0.5095 | 0.25 | 0.2595 | 0.2405 | 0.25 |
| 0.6936 | 45.6 | 1140 | 0.6936 | 0.5095 | 0.25 | 0.2595 | 0.2405 | 0.25 |
| 0.6951 | 46.4 | 1160 | 0.6936 | 0.5107 | 0.2734 | 0.2374 | 0.2626 | 0.2266 |
| 0.6964 | 47.2 | 1180 | 0.6936 | 0.5114 | 0.2854 | 0.2260 | 0.2740 | 0.2146 |
| 0.7004 | 48.0 | 1200 | 0.6936 | 0.5114 | 0.2822 | 0.2292 | 0.2708 | 0.2178 |
| 0.696 | 48.8 | 1220 | 0.6936 | 0.5088 | 0.2759 | 0.2330 | 0.2670 | 0.2241 |
| 0.6966 | 49.6 | 1240 | 0.6936 | 0.5114 | 0.2734 | 0.2380 | 0.2620 | 0.2266 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1
- Datasets 2.10.1
- Tokenizers 0.13.2
|
mradermacher/Fluxi_AI_Small_Vision-GGUF | mradermacher | "2025-02-04T15:56:43Z" | 514 | 1 | transformers | [
"transformers",
"gguf",
"pt",
"dataset:JJhooww/system_chat_portuguese",
"dataset:JJhooww/rag_agente",
"dataset:JJhooww/chamada_de_funcao",
"dataset:JJhooww/open_perfect_ptbr_sharegpt_multiturn",
"base_model:JJhooww/Fluxi_AI_Small_Vision",
"base_model:quantized:JJhooww/Fluxi_AI_Small_Vision",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-02-03T15:54:35Z" | ---
base_model: JJhooww/Fluxi_AI_Small_Vision
datasets:
- JJhooww/system_chat_portuguese
- JJhooww/rag_agente
- JJhooww/chamada_de_funcao
- JJhooww/open_perfect_ptbr_sharegpt_multiturn
language:
- pt
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/JJhooww/Fluxi_AI_Small_Vision
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Fluxi_AI_Small_Vision-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
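As a minimal sketch, a quant from the table below can also be loaded directly in Python via `llama-cpp-python` (the file name comes from the table; the local path, context size, and prompt are assumptions):
```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Load a downloaded quant from the table below.
llm = Llama(model_path="Fluxi_AI_Small_Vision.Q4_K_M.gguf", n_ctx=4096)
out = llm("Olá! Como você está?", max_tokens=64)
print(out["choices"][0]["text"])
```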
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Fluxi_AI_Small_Vision-GGUF/resolve/main/Fluxi_AI_Small_Vision.mmproj-fp16.gguf) | mmproj-fp16 | 1.5 | vision supplement |
| [GGUF](https://huggingface.co/mradermacher/Fluxi_AI_Small_Vision-GGUF/resolve/main/Fluxi_AI_Small_Vision.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Fluxi_AI_Small_Vision-GGUF/resolve/main/Fluxi_AI_Small_Vision.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Fluxi_AI_Small_Vision-GGUF/resolve/main/Fluxi_AI_Small_Vision.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Fluxi_AI_Small_Vision-GGUF/resolve/main/Fluxi_AI_Small_Vision.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Fluxi_AI_Small_Vision-GGUF/resolve/main/Fluxi_AI_Small_Vision.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Fluxi_AI_Small_Vision-GGUF/resolve/main/Fluxi_AI_Small_Vision.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Fluxi_AI_Small_Vision-GGUF/resolve/main/Fluxi_AI_Small_Vision.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Fluxi_AI_Small_Vision-GGUF/resolve/main/Fluxi_AI_Small_Vision.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Fluxi_AI_Small_Vision-GGUF/resolve/main/Fluxi_AI_Small_Vision.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Fluxi_AI_Small_Vision-GGUF/resolve/main/Fluxi_AI_Small_Vision.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Fluxi_AI_Small_Vision-GGUF/resolve/main/Fluxi_AI_Small_Vision.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Fluxi_AI_Small_Vision-GGUF/resolve/main/Fluxi_AI_Small_Vision.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
steveyn400/ppo-LunarLander-v2 | steveyn400 | "2023-03-09T16:50:42Z" | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2023-03-09T16:50:17Z" | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 252.11 +/- 22.88
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repo's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load it.
checkpoint = load_from_hub(
    repo_id="steveyn400/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",  # assumed filename
)
model = PPO.load(checkpoint)
```
|
JEJUMA/JEJUMA-002-GGUF | JEJUMA | "2024-08-07T03:21:25Z" | 36 | 3 | null | [
"gguf",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-08-06T22:41:13Z" | ---
license: mit
---
<img src="https://cdn-uploads.huggingface.co/production/uploads/66ac53d3f202443fb03d5c70/MiejxjzWeKjKJgw13yor5.png" width="50%">
# JEJUMA-002-GGUF
* Original Repo: [JEJUMA-002](https://huggingface.co/JEJUMA/JEJUMA-002)
* Official quantized version of JEJUMA-002
# Prompt (LM Studio)
```bash
<|start_header_id|>system<|end_header_id|>
{System}
<|eot_id|><|start_header_id|>user<|end_header_id|>
{User}
<|eot_id|><|start_header_id|>assistant<|end_header_id|>
{Assistant}
```
* When using Ollama, please refer to the [Modelfile](https://huggingface.co/JEJUMA/JEJUMA-002-GGUF/edit/main/Modelfile).
# How to use
### 1. Translation (dialect -> standard Korean)
* Region: jeju (제주), chungcheong (충청), gangwon (강원), gyeongsang (경상), or jeonla (전라)
```
# Format
Convert the following sentence or word which is {region}'s dialect to standard Korean: {dialect text}
```
```
# Example
Convert the following sentence or word which is jeju's dialect to standard Korean: 하영주크다
# 매우 큽니다.
```
### 2. Translation (standard Korean -> dialect)
* Region: jeju (제주), chungcheong (충청), gangwon (강원), gyeongsang (경상), or jeonla (전라)
```
# Format
Convert the following sentence or word which is standard Korean to {region}'s dialect: {text to translate}
```
```
# Example
Convert the following sentence or word which is standard Korean to jeju's dialect: 귤나무 찾아와라
# 미깡낭 촞아오라
```
### 3. Detection
```
# Format
Detect the following sentence or word is standard, jeju, chungcheong, gangwon, gyeongsang, or jeonla's dialect: {text to analyze}
```
```
# Example
Detect the following sentence or word is standard, jeju, chungcheong, gangwon, gyeongsang, or jeonla's dialect: 미깡낭 촞아오라
# jeju
```
### 4. Detection followed by translation
```
# Format
Detect the following sentence or word is which dialect and convert the following sentence or word to standard Korean: {dialect text}
```
```
# Example
Detect the following sentence or word is which dialect and convert the following sentence or word to standard Korean: 미깡낭 촞아오라
# (jeju->standard) 귤나무 찾아와라
``` |
RichardErkhov/steffygreypaul_-_Experiment45-gguf | RichardErkhov | "2025-02-18T06:24:42Z" | 0 | 0 | null | [
"gguf",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-02-18T05:59:54Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Experiment45 - GGUF
- Model creator: https://huggingface.co/steffygreypaul/
- Original model: https://huggingface.co/steffygreypaul/Experiment45/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Experiment45.Q2_K.gguf](https://huggingface.co/RichardErkhov/steffygreypaul_-_Experiment45-gguf/blob/main/Experiment45.Q2_K.gguf) | Q2_K | 0.54GB |
| [Experiment45.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/steffygreypaul_-_Experiment45-gguf/blob/main/Experiment45.IQ3_XS.gguf) | IQ3_XS | 0.58GB |
| [Experiment45.IQ3_S.gguf](https://huggingface.co/RichardErkhov/steffygreypaul_-_Experiment45-gguf/blob/main/Experiment45.IQ3_S.gguf) | IQ3_S | 0.6GB |
| [Experiment45.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/steffygreypaul_-_Experiment45-gguf/blob/main/Experiment45.Q3_K_S.gguf) | Q3_K_S | 0.6GB |
| [Experiment45.IQ3_M.gguf](https://huggingface.co/RichardErkhov/steffygreypaul_-_Experiment45-gguf/blob/main/Experiment45.IQ3_M.gguf) | IQ3_M | 0.61GB |
| [Experiment45.Q3_K.gguf](https://huggingface.co/RichardErkhov/steffygreypaul_-_Experiment45-gguf/blob/main/Experiment45.Q3_K.gguf) | Q3_K | 0.64GB |
| [Experiment45.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/steffygreypaul_-_Experiment45-gguf/blob/main/Experiment45.Q3_K_M.gguf) | Q3_K_M | 0.64GB |
| [Experiment45.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/steffygreypaul_-_Experiment45-gguf/blob/main/Experiment45.Q3_K_L.gguf) | Q3_K_L | 0.68GB |
| [Experiment45.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/steffygreypaul_-_Experiment45-gguf/blob/main/Experiment45.IQ4_XS.gguf) | IQ4_XS | 0.7GB |
| [Experiment45.Q4_0.gguf](https://huggingface.co/RichardErkhov/steffygreypaul_-_Experiment45-gguf/blob/main/Experiment45.Q4_0.gguf) | Q4_0 | 0.72GB |
| [Experiment45.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/steffygreypaul_-_Experiment45-gguf/blob/main/Experiment45.IQ4_NL.gguf) | IQ4_NL | 0.72GB |
| [Experiment45.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/steffygreypaul_-_Experiment45-gguf/blob/main/Experiment45.Q4_K_S.gguf) | Q4_K_S | 0.72GB |
| [Experiment45.Q4_K.gguf](https://huggingface.co/RichardErkhov/steffygreypaul_-_Experiment45-gguf/blob/main/Experiment45.Q4_K.gguf) | Q4_K | 0.75GB |
| [Experiment45.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/steffygreypaul_-_Experiment45-gguf/blob/main/Experiment45.Q4_K_M.gguf) | Q4_K_M | 0.75GB |
| [Experiment45.Q4_1.gguf](https://huggingface.co/RichardErkhov/steffygreypaul_-_Experiment45-gguf/blob/main/Experiment45.Q4_1.gguf) | Q4_1 | 0.77GB |
| [Experiment45.Q5_0.gguf](https://huggingface.co/RichardErkhov/steffygreypaul_-_Experiment45-gguf/blob/main/Experiment45.Q5_0.gguf) | Q5_0 | 0.83GB |
| [Experiment45.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/steffygreypaul_-_Experiment45-gguf/blob/main/Experiment45.Q5_K_S.gguf) | Q5_K_S | 0.83GB |
| [Experiment45.Q5_K.gguf](https://huggingface.co/RichardErkhov/steffygreypaul_-_Experiment45-gguf/blob/main/Experiment45.Q5_K.gguf) | Q5_K | 0.85GB |
| [Experiment45.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/steffygreypaul_-_Experiment45-gguf/blob/main/Experiment45.Q5_K_M.gguf) | Q5_K_M | 0.85GB |
| [Experiment45.Q5_1.gguf](https://huggingface.co/RichardErkhov/steffygreypaul_-_Experiment45-gguf/blob/main/Experiment45.Q5_1.gguf) | Q5_1 | 0.89GB |
| [Experiment45.Q6_K.gguf](https://huggingface.co/RichardErkhov/steffygreypaul_-_Experiment45-gguf/blob/main/Experiment45.Q6_K.gguf) | Q6_K | 0.95GB |
| [Experiment45.Q8_0.gguf](https://huggingface.co/RichardErkhov/steffygreypaul_-_Experiment45-gguf/blob/main/Experiment45.Q8_0.gguf) | Q8_0 | 1.23GB |
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
HZDR-FWGEL/UCD-MNCD256-FCNPP | HZDR-FWGEL | "2024-11-11T22:59:46Z" | 6 | 0 | null | [
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | "2024-11-11T12:07:33Z" | ---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Library: [More Information Needed]
- Docs: [More Information Needed] |
Tapendra/phi-3.5-accountant-2 | Tapendra | "2025-02-27T12:04:15Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2025-02-27T12:02:10Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
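Until an official snippet is provided, here is a minimal causal-LM sketch using the standard 🤗 transformers API (the prompt, generation settings, and `device_map` choice are illustrative assumptions, not part of the release):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Tapendra/phi-3.5-accountant-2"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", trust_remote_code=True)

# Illustrative prompt; the accounting use case is assumed from the repo name
messages = [{"role": "user", "content": "Explain what a trial balance is."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```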
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Gaspardno/flan-t5-small-Q4_K_M-GGUF | Gaspardno | "2024-10-16T12:42:26Z" | 6 | 0 | null | [
"gguf",
"text2text-generation",
"llama-cpp",
"gguf-my-repo",
"en",
"fr",
"ro",
"de",
"multilingual",
"dataset:svakulenk0/qrecc",
"dataset:taskmaster2",
"dataset:djaym7/wiki_dialog",
"dataset:deepmind/code_contests",
"dataset:lambada",
"dataset:gsm8k",
"dataset:aqua_rat",
"dataset:esnli",
"dataset:quasc",
"dataset:qed",
"base_model:google/flan-t5-small",
"base_model:quantized:google/flan-t5-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2024-10-16T12:42:23Z" | ---
base_model: google/flan-t5-small
datasets:
- svakulenk0/qrecc
- taskmaster2
- djaym7/wiki_dialog
- deepmind/code_contests
- lambada
- gsm8k
- aqua_rat
- esnli
- quasc
- qed
language:
- en
- fr
- ro
- de
- multilingual
license: apache-2.0
tags:
- text2text-generation
- llama-cpp
- gguf-my-repo
widget:
- text: 'Translate to German: My name is Arthur'
example_title: Translation
- text: Please answer to the following question. Who is going to be the next Ballon
d'or?
example_title: Question Answering
- text: 'Q: Can Geoffrey Hinton have a conversation with George Washington? Give the
rationale before answering.'
example_title: Logical reasoning
- text: Please answer the following question. What is the boiling point of Nitrogen?
example_title: Scientific knowledge
- text: Answer the following yes/no question. Can you write a whole Haiku in a single
tweet?
example_title: Yes/no question
- text: Answer the following yes/no question by reasoning step-by-step. Can you write
a whole Haiku in a single tweet?
example_title: Reasoning task
- text: 'Q: ( False or not False or False ) is? A: Let''s think step by step'
example_title: Boolean Expressions
- text: The square root of x is the cube root of y. What is y to the power of 2, if
x = 4?
example_title: Math reasoning
- text: 'Premise: At my age you will probably have learnt one lesson. Hypothesis: It''s
not certain how many lessons you''ll learn by your thirties. Does the premise
entail the hypothesis?'
example_title: Premise and hypothesis
---
# Gaspardno/flan-t5-small-Q4_K_M-GGUF
This model was converted to GGUF format from [`google/flan-t5-small`](https://huggingface.co/google/flan-t5-small) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/google/flan-t5-small) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Gaspardno/flan-t5-small-Q4_K_M-GGUF --hf-file flan-t5-small-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Gaspardno/flan-t5-small-Q4_K_M-GGUF --hf-file flan-t5-small-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Gaspardno/flan-t5-small-Q4_K_M-GGUF --hf-file flan-t5-small-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Gaspardno/flan-t5-small-Q4_K_M-GGUF --hf-file flan-t5-small-q4_k_m.gguf -c 2048
```
|
mrHunghddddd/c747dc59-424d-43ee-9052-0a4237e2b972 | mrHunghddddd | "2025-01-18T11:12:09Z" | 6 | 0 | peft | [
"peft",
"safetensors",
"phi3",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:microsoft/Phi-3.5-mini-instruct",
"base_model:adapter:microsoft/Phi-3.5-mini-instruct",
"license:mit",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-01-18T10:54:59Z" | ---
library_name: peft
license: mit
base_model: microsoft/Phi-3.5-mini-instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: c747dc59-424d-43ee-9052-0a4237e2b972
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: microsoft/Phi-3.5-mini-instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 681d83b9c0998e38_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/681d83b9c0998e38_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: mrHunghddddd/c747dc59-424d-43ee-9052-0a4237e2b972
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/681d83b9c0998e38_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: dfb3d259-7626-4a18-8142-7339f386a3d3
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: dfb3d259-7626-4a18-8142-7339f386a3d3
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# c747dc59-424d-43ee-9052-0a4237e2b972
This model is a fine-tuned version of [microsoft/Phi-3.5-mini-instruct](https://huggingface.co/microsoft/Phi-3.5-mini-instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4961
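Since this checkpoint is a LoRA adapter rather than full weights, here is a minimal loading sketch with the standard PEFT API (generation settings are illustrative):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then apply this repo's LoRA adapter on top
base = AutoModelForCausalLM.from_pretrained(
    "microsoft/Phi-3.5-mini-instruct", trust_remote_code=True, device_map="auto"
)
model = PeftModel.from_pretrained(base, "mrHunghddddd/c747dc59-424d-43ee-9052-0a4237e2b972")
tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3.5-mini-instruct")

inputs = tokenizer("Hello", return_tensors="pt").to(base.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```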
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 5.7541 | 0.1559 | 200 | 1.4961 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
TareksLab/Progenitor-V2.3-LLaMa-70B | TareksLab | "2025-02-06T07:50:08Z" | 73 | 2 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2406.11617",
"base_model:EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1",
"base_model:merge:EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1",
"base_model:Sao10K/70B-L3.3-Cirrus-x1",
"base_model:merge:Sao10K/70B-L3.3-Cirrus-x1",
"base_model:Sao10K/L3.1-70B-Hanami-x1",
"base_model:merge:Sao10K/L3.1-70B-Hanami-x1",
"base_model:SicariusSicariiStuff/Negative_LLAMA_70B",
"base_model:merge:SicariusSicariiStuff/Negative_LLAMA_70B",
"base_model:TheDrummer/Anubis-70B-v1",
"base_model:merge:TheDrummer/Anubis-70B-v1",
"base_model:nbeerbower/Llama-3.1-Nemotron-lorablated-70B",
"base_model:merge:nbeerbower/Llama-3.1-Nemotron-lorablated-70B",
"license:llama3.3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-02-06T03:11:30Z" | ---
base_model:
- nbeerbower/Llama-3.1-Nemotron-lorablated-70B
- TheDrummer/Anubis-70B-v1
- EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1
- Sao10K/L3.1-70B-Hanami-x1
- Sao10K/70B-L3.3-Cirrus-x1
- SicariusSicariiStuff/Negative_LLAMA_70B
library_name: transformers
tags:
- mergekit
- merge
license: llama3.3
---
I am experimenting with some of the DELLA merge method parameters.
# Progenitor-V2.3-70B
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DELLA](https://arxiv.org/abs/2406.11617) merge method using nbeerbower/Llama-3.1-Nemotron-lorablated-70B as a base.
### Models Merged
The following models were included in the merge:
* [EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1](https://huggingface.co/EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1)
* [Sao10K/L3.1-70B-Hanami-x1](https://huggingface.co/Sao10K/L3.1-70B-Hanami-x1)
* [Sao10K/70B-L3.3-Cirrus-x1](https://huggingface.co/Sao10K/70B-L3.3-Cirrus-x1)
* [TheDrummer/Anubis-70B-v1](https://huggingface.co/TheDrummer/Anubis-70B-v1)
* [SicariusSicariiStuff/Negative_LLAMA_70B](https://huggingface.co/SicariusSicariiStuff/Negative_LLAMA_70B)
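A configuration like the one shown under Configuration below can typically be replayed with mergekit's `mergekit-yaml` CLI (a sketch; the config filename and output directory are placeholders):

```bash
pip install mergekit
mergekit-yaml config.yaml ./Progenitor-V2.3-70B
```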
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: Sao10K/L3.1-70B-Hanami-x1
parameters:
weight: 0.20
- model: Sao10K/70B-L3.3-Cirrus-x1
parameters:
weight: 0.20
- model: SicariusSicariiStuff/Negative_LLAMA_70B
parameters:
weight: 0.20
- model: TheDrummer/Anubis-70B-v1
parameters:
weight: 0.20
- model: EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1
parameters:
weight: 0.20
merge_method: della
base_model: nbeerbower/Llama-3.1-Nemotron-lorablated-70B
parameters:
density: 0.7
epsilon: 0.15
lambda: 1.1
rescale: 1
window_size: 0.14
dtype: float32
out_dtype: bfloat16
tokenizer:
source: union
``` |
KochamZycie/generator_2024_inz | KochamZycie | "2024-01-22T15:15:24Z" | 9 | 0 | transformers | [
"transformers",
"tf",
"t5",
"text2text-generation",
"generated_from_keras_callback",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2024-01-22T13:09:53Z" | ---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_keras_callback
model-index:
- name: KochamZycie/generator_2024_inz
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# KochamZycie/generator_2024_inz
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 4.0143
- Validation Loss: 3.8945
- Epoch: 53
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 6.5310 | 4.7025 | 0 |
| 5.0367 | 4.5290 | 1 |
| 4.8174 | 4.4444 | 2 |
| 4.7273 | 4.3900 | 3 |
| 4.6739 | 4.3477 | 4 |
| 4.6257 | 4.3142 | 5 |
| 4.5834 | 4.2845 | 6 |
| 4.5535 | 4.2584 | 7 |
| 4.5203 | 4.2359 | 8 |
| 4.4924 | 4.2147 | 9 |
| 4.4686 | 4.1968 | 10 |
| 4.4476 | 4.1803 | 11 |
| 4.4288 | 4.1646 | 12 |
| 4.4037 | 4.1501 | 13 |
| 4.3988 | 4.1374 | 14 |
| 4.3707 | 4.1243 | 15 |
| 4.3598 | 4.1130 | 16 |
| 4.3364 | 4.1016 | 17 |
| 4.3230 | 4.0910 | 18 |
| 4.3075 | 4.0813 | 19 |
| 4.3053 | 4.0720 | 20 |
| 4.2978 | 4.0629 | 21 |
| 4.2796 | 4.0545 | 22 |
| 4.2658 | 4.0456 | 23 |
| 4.2527 | 4.0383 | 24 |
| 4.2449 | 4.0302 | 25 |
| 4.2230 | 4.0233 | 26 |
| 4.2296 | 4.0167 | 27 |
| 4.2190 | 4.0091 | 28 |
| 4.1989 | 4.0024 | 29 |
| 4.1815 | 3.9965 | 30 |
| 4.1814 | 3.9904 | 31 |
| 4.1705 | 3.9842 | 32 |
| 4.1629 | 3.9791 | 33 |
| 4.1578 | 3.9734 | 34 |
| 4.1396 | 3.9681 | 35 |
| 4.1486 | 3.9626 | 36 |
| 4.1268 | 3.9572 | 37 |
| 4.1251 | 3.9524 | 38 |
| 4.1214 | 3.9478 | 39 |
| 4.1074 | 3.9429 | 40 |
| 4.1027 | 3.9386 | 41 |
| 4.1036 | 3.9342 | 42 |
| 4.0792 | 3.9303 | 43 |
| 4.0790 | 3.9258 | 44 |
| 4.0626 | 3.9227 | 45 |
| 4.0590 | 3.9183 | 46 |
| 4.0527 | 3.9153 | 47 |
| 4.0445 | 3.9107 | 48 |
| 4.0400 | 3.9074 | 49 |
| 4.0414 | 3.9037 | 50 |
| 4.0318 | 3.9004 | 51 |
| 4.0266 | 3.8964 | 52 |
| 4.0143 | 3.8945 | 53 |
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.15.0
- Datasets 2.16.1
- Tokenizers 0.15.0
|
EvgenyBondarenko/digital_competition_Text_Building_According_to_Topic | EvgenyBondarenko | "2025-01-08T15:06:41Z" | 7 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-01-08T15:05:45Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
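As a starting point pending official instructions, a minimal text-classification sketch with the 🤗 pipeline API (the example input is an arbitrary placeholder; label meanings come from the repo's config):

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="EvgenyBondarenko/digital_competition_Text_Building_According_to_Topic",
)
print(classifier("Example text to classify."))
```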
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
jvelja/vllm-gemma2b_3 | jvelja | "2024-09-09T03:01:38Z" | 59 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"trl",
"ppo",
"reinforcement-learning",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | reinforcement-learning | "2024-09-09T03:01:36Z" | ---
license: apache-2.0
library_name: transformers
tags:
- trl
- ppo
- transformers
- reinforcement-learning
---
# TRL Model
This is a [TRL language model](https://github.com/huggingface/trl) that has been fine-tuned with reinforcement learning to
guide the model outputs according to a value function or human feedback. The model can be used for text generation.
## Usage
To use this model for inference, first install the TRL library:
```bash
python -m pip install trl
```
You can then generate text as follows:
```python
from transformers import pipeline
generator = pipeline("text-generation", model="jvelja/vllm-gemma2b_3")
outputs = generator("Hello, my llama is cute")
```
If you want to use the model for training or to obtain the outputs from the value head, load the model as follows:
```python
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead
tokenizer = AutoTokenizer.from_pretrained("jvelja/vllm-gemma2b_3")
model = AutoModelForCausalLMWithValueHead.from_pretrained("jvelja/vllm-gemma2b_3")
inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
```
|
nhung03/e3fd2ec0-b412-4913-b040-2bc5ad263738 | nhung03 | "2025-01-15T22:24:40Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:Maykeye/TinyLLama-v0",
"base_model:adapter:Maykeye/TinyLLama-v0",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-01-15T22:22:26Z" | ---
library_name: peft
license: apache-2.0
base_model: Maykeye/TinyLLama-v0
tags:
- axolotl
- generated_from_trainer
model-index:
- name: e3fd2ec0-b412-4913-b040-2bc5ad263738
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Maykeye/TinyLLama-v0
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 3ed4adda1932d901_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/3ed4adda1932d901_train_data.json
type:
field_input: prompt
field_instruction: instruction
field_output: canonical_solution
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nhung03/e3fd2ec0-b412-4913-b040-2bc5ad263738
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/3ed4adda1932d901_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: d6ddeb75-85b9-48a5-957d-35e94b4ce67d
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: d6ddeb75-85b9-48a5-957d-35e94b4ce67d
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# e3fd2ec0-b412-4913-b040-2bc5ad263738
This model is a fine-tuned version of [Maykeye/TinyLLama-v0](https://huggingface.co/Maykeye/TinyLLama-v0) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 7.2802
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 6.7288 | 0.9324 | 200 | 7.2802 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
espnet/kan-bayashi_ljspeech_conformer_fastspeech2 | espnet | "2021-07-03T14:48:21Z" | 9 | 1 | espnet | [
"espnet",
"audio",
"text-to-speech",
"en",
"dataset:ljspeech",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | text-to-speech | "2022-03-02T23:29:05Z" | ---
tags:
- espnet
- audio
- text-to-speech
language: en
datasets:
- ljspeech
license: cc-by-4.0
---
## Example ESPnet2 TTS model
### `kan-bayashi/ljspeech_conformer_fastspeech2`
♻️ Imported from https://zenodo.org/record/4036268/
This model was trained by kan-bayashi using the ljspeech/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
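In the meantime, here is a minimal sketch of the usual ESPnet2 TTS inference flow (it assumes the standard `Text2Speech` API can resolve this repo ID; without an external vocoder, FastSpeech2 output falls back to Griffin-Lim):

```python
import soundfile as sf
from espnet2.bin.tts_inference import Text2Speech

# Assumption: the repo ID is resolvable via espnet_model_zoo
tts = Text2Speech.from_pretrained("espnet/kan-bayashi_ljspeech_conformer_fastspeech2")
out = tts("Hello world, this is a test.")
sf.write("out.wav", out["wav"].numpy(), tts.fs)
```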
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
ivangrapher/50c107be-ef75-4d21-9652-84197f0cc1c6 | ivangrapher | "2025-01-25T00:42:42Z" | 9 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-3B-Instruct",
"base_model:adapter:unsloth/Qwen2.5-3B-Instruct",
"license:other",
"region:us"
] | null | "2025-01-25T00:41:23Z" | ---
library_name: peft
license: other
base_model: unsloth/Qwen2.5-3B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 50c107be-ef75-4d21-9652-84197f0cc1c6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2.5-3B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 9d99df7c2518d8d0_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/9d99df7c2518d8d0_train_data.json
type:
field_instruction: text
field_output: processed_text
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: null
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: false
hub_model_id: ivangrapher/50c107be-ef75-4d21-9652-84197f0cc1c6
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 75GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/9d99df7c2518d8d0_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 15
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 6419e840-39bf-4f3c-90f6-8a3e25b14d38
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 6419e840-39bf-4f3c-90f6-8a3e25b14d38
warmup_steps: 15
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 50c107be-ef75-4d21-9652-84197f0cc1c6
This model is a fine-tuned version of [unsloth/Qwen2.5-3B-Instruct](https://huggingface.co/unsloth/Qwen2.5-3B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 15
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0083 | 1 | nan |
| 0.0 | 0.0416 | 5 | nan |
| 0.0 | 0.0832 | 10 | nan |
| 0.0 | 0.1247 | 15 | nan |
| 0.0 | 0.1663 | 20 | nan |
| 0.0 | 0.2079 | 25 | nan |
| 0.0 | 0.2495 | 30 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
kk-aivio/75337c30-51d9-485b-95ad-208b3451e95a | kk-aivio | "2025-03-02T05:28:44Z" | 0 | 0 | peft | [
"peft",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-3B",
"base_model:adapter:Qwen/Qwen2.5-3B",
"region:us"
] | null | "2025-03-02T05:28:32Z" | ---
library_name: peft
tags:
- generated_from_trainer
base_model: Qwen/Qwen2.5-3B
model-index:
- name: kk-aivio/75337c30-51d9-485b-95ad-208b3451e95a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kk-aivio/75337c30-51d9-485b-95ad-208b3451e95a
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1196
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
CojoFitz/model | CojoFitz | "2024-12-09T02:38:29Z" | 5 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/llama-3-8b",
"base_model:quantized:unsloth/llama-3-8b",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-12-09T02:29:37Z" | ---
base_model: unsloth/llama-3-8b
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** CojoFitz
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
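The GGUF export can be loaded directly, for example with `llama-cpp-python` (a sketch — the exact `.gguf` filename in this repo is not listed here, so a glob pattern is used as an assumption):

```python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="CojoFitz/model",
    filename="*.gguf",  # assumption: match whichever quant file the repo contains
)
print(llm("Hello, my llama is", max_tokens=32)["choices"][0]["text"])
```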
|
liquidrichard/lr9 | liquidrichard | "2025-02-03T17:40:01Z" | 70 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-02-03T17:36:09Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
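Until the authors add one, a minimal text-generation sketch with the 🤗 pipeline API (the chat-style input and generation settings are illustrative assumptions):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="liquidrichard/lr9", device_map="auto")
out = generator([{"role": "user", "content": "Hello!"}], max_new_tokens=64, return_full_text=False)
print(out[0]["generated_text"])
```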
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
flatala-research/videomae-large-finetuned-right-hand-conflab-v1 | flatala-research | "2024-05-31T08:32:00Z" | 8 | 1 | transformers | [
"transformers",
"safetensors",
"videomae",
"video-classification",
"generated_from_trainer",
"base_model:MCG-NJU/videomae-large",
"base_model:finetune:MCG-NJU/videomae-large",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | video-classification | "2024-05-27T18:44:11Z" | ---
license: cc-by-nc-4.0
base_model: MCG-NJU/videomae-large
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: videomae-large-finetuned-right-hand-conflab-v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# videomae-large-finetuned-right-hand-conflab-v1
This model is a fine-tuned version of [MCG-NJU/videomae-large](https://huggingface.co/MCG-NJU/videomae-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5240
- Accuracy: 0.6146
## Model description
More information needed
## Intended uses & limitations
More information needed
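For reference, a minimal inference sketch mirroring the standard VideoMAE usage from the transformers docs (the dummy 16-frame clip and frame size are assumptions based on VideoMAE defaults; replace with real sampled frames):

```python
import numpy as np
import torch
from transformers import AutoImageProcessor, VideoMAEForVideoClassification

repo = "flatala-research/videomae-large-finetuned-right-hand-conflab-v1"
processor = AutoImageProcessor.from_pretrained(repo)
model = VideoMAEForVideoClassification.from_pretrained(repo)

video = list(np.random.randn(16, 3, 224, 224))  # dummy clip: 16 frames, CHW
inputs = processor(video, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```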
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 1404
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| 2.0232 | 0.0420 | 59 | 1.9422 | 0.1942 |
| 1.8426 | 1.0420 | 118 | 1.7418 | 0.3398 |
| 1.7424 | 2.0420 | 177 | 1.6896 | 0.4175 |
| 1.2206 | 3.0420 | 236 | 1.6280 | 0.4466 |
| 1.0738 | 4.0420 | 295 | 1.2310 | 0.5825 |
| 1.0054 | 5.0420 | 354 | 1.3243 | 0.5583 |
| 0.782 | 6.0420 | 413 | 1.1891 | 0.6359 |
| 0.599 | 7.0420 | 472 | 1.1930 | 0.6505 |
| 0.6782 | 8.0420 | 531 | 1.2866 | 0.6359 |
| 0.3033 | 9.0420 | 590 | 1.4236 | 0.5777 |
| 0.2236 | 10.0420 | 649 | 1.3206 | 0.6553 |
| 0.1756 | 11.0420 | 708 | 1.5113 | 0.6602 |
| 0.1341 | 12.0420 | 767 | 1.6544 | 0.6408 |
| 0.0823 | 13.0420 | 826 | 1.6124 | 0.6553 |
| 0.0691 | 14.0420 | 885 | 1.8230 | 0.6456 |
### Framework versions
- Transformers 4.41.0
- Pytorch 1.12.0+cu116
- Datasets 2.19.1
- Tokenizers 0.19.1
|
abhishek/dgyf-ojml-b1z8-0 | abhishek | "2023-12-18T14:55:01Z" | 0 | 0 | null | [
"safetensors",
"autotrain",
"text-generation",
"conversational",
"license:other",
"region:us"
] | text-generation | "2023-12-18T14:54:45Z" | ---
tags:
- autotrain
- text-generation
widget:
- text: "I love AutoTrain because "
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
``` |
facebook/esm2_t33_650M_UR50D | facebook | "2023-03-21T15:05:12Z" | 860,231 | 40 | transformers | [
"transformers",
"pytorch",
"tf",
"safetensors",
"esm",
"fill-mask",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2022-09-27T14:36:16Z" | ---
license: mit
widget:
- text: "MQIFVKTLTGKTITLEVEPS<mask>TIENVKAKIQDKEGIPPDQQRLIFAGKQLEDGRTLSDYNIQKESTLHLVLRLRGG"
---
## ESM-2
ESM-2 is a state-of-the-art protein model trained on a masked language modelling objective. It is suitable for fine-tuning on a wide range of tasks that take protein sequences as input. For detailed information on the model architecture and training data, please refer to the [accompanying paper](https://www.biorxiv.org/content/10.1101/2022.07.20.500902v2). You may also be interested in some demo notebooks ([PyTorch](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/protein_language_modeling.ipynb), [TensorFlow](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/protein_language_modeling-tf.ipynb)) which demonstrate how to fine-tune ESM-2 models on your tasks of interest.
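As a concrete starting point, here is a minimal masked-token prediction sketch with the standard 🤗 transformers API (the sequence is the widget example from this card; treat the snippet as illustrative rather than official):

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("facebook/esm2_t33_650M_UR50D")
model = AutoModelForMaskedLM.from_pretrained("facebook/esm2_t33_650M_UR50D")

seq = "MQIFVKTLTGKTITLEVEPS<mask>TIENVKAKIQDKEGIPPDQQRLIFAGKQLEDGRTLSDYNIQKESTLHLVLRLRGG"
inputs = tokenizer(seq, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Most likely amino acid at the <mask> position
mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
print(tokenizer.decode(logits[0, mask_pos].argmax(-1)))
```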
Several ESM-2 checkpoints are available in the Hub with varying sizes. Larger sizes generally have somewhat better accuracy, but require much more memory and time to train:
| Checkpoint name | Num layers | Num parameters |
|------------------------------|----|----------|
| [esm2_t48_15B_UR50D](https://huggingface.co/facebook/esm2_t48_15B_UR50D) | 48 | 15B |
| [esm2_t36_3B_UR50D](https://huggingface.co/facebook/esm2_t36_3B_UR50D) | 36 | 3B |
| [esm2_t33_650M_UR50D](https://huggingface.co/facebook/esm2_t33_650M_UR50D) | 33 | 650M |
| [esm2_t30_150M_UR50D](https://huggingface.co/facebook/esm2_t30_150M_UR50D) | 30 | 150M |
| [esm2_t12_35M_UR50D](https://huggingface.co/facebook/esm2_t12_35M_UR50D) | 12 | 35M |
| [esm2_t6_8M_UR50D](https://huggingface.co/facebook/esm2_t6_8M_UR50D) | 6 | 8M | |
kk-aivio/bd6d7522-dc61-4b1c-993b-95a0446bf015 | kk-aivio | "2025-02-02T02:45:34Z" | 6 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:NousResearch/Genstruct-7B",
"base_model:adapter:NousResearch/Genstruct-7B",
"license:apache-2.0",
"region:us"
] | null | "2025-02-02T00:44:00Z" | ---
library_name: peft
license: apache-2.0
base_model: NousResearch/Genstruct-7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: bd6d7522-dc61-4b1c-993b-95a0446bf015
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Genstruct-7B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 2a0dfc4fa79209ea_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/2a0dfc4fa79209ea_train_data.json
type:
field_instruction: utt
field_output: annot_utt
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: kk-aivio/bd6d7522-dc61-4b1c-993b-95a0446bf015
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/2a0dfc4fa79209ea_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 15696d3d-5708-4244-8ac6-5778d0a4fd68
wandb_project: Birthday-SN56-17-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 15696d3d-5708-4244-8ac6-5778d0a4fd68
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# bd6d7522-dc61-4b1c-993b-95a0446bf015
This model is a fine-tuned version of [NousResearch/Genstruct-7B](https://huggingface.co/NousResearch/Genstruct-7B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0991
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0000 | 1 | 1.8007 |
| 0.6308 | 0.0005 | 50 | 0.1754 |
| 0.5034 | 0.0009 | 100 | 0.1319 |
| 0.3223 | 0.0014 | 150 | 0.1042 |
| 0.249 | 0.0019 | 200 | 0.0991 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
aleenalai/Games | aleenalai | "2024-03-08T16:30:51Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-03-08T16:28:05Z" | A simple command-line number-guessing game in Python:

```python
import random


def guess_number():
    # Generate a random number between 1 and 100
    secret_number = random.randint(1, 100)
    # Initialize the number of attempts
    attempts = 0

    print("Welcome to the Guessing Game!")
    print("I'm thinking of a number between 1 and 100. Can you guess what it is?")

    # Start the game loop
    while True:
        # Get the player's guess
        guess = input("Enter your guess (or 'q' to quit): ")

        # Check if the player wants to quit
        if guess.lower() == 'q':
            print("Thanks for playing! The secret number was:", secret_number)
            break

        # Convert the guess to an integer
        try:
            guess = int(guess)
        except ValueError:
            print("Please enter a valid number or 'q' to quit.")
            continue

        # Increment the number of attempts
        attempts += 1

        # Check if the guess is correct
        if guess == secret_number:
            print("Congratulations! You guessed the secret number in", attempts, "attempts!")
            break
        elif guess < secret_number:
            print("Too low! Try again.")
        else:
            print("Too high! Try again.")


# Start the game
guess_number()
```
|
SergeyPugachevv/SmolLM2-FT-MyDataset | SergeyPugachevv | "2025-03-04T12:58:11Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"smol-course",
"module_1",
"trl",
"sft",
"conversational",
"base_model:HuggingFaceTB/SmolLM2-135M",
"base_model:finetune:HuggingFaceTB/SmolLM2-135M",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-03-04T12:57:22Z" | ---
base_model: HuggingFaceTB/SmolLM2-135M
library_name: transformers
model_name: SmolLM2-FT-MyDataset
tags:
- generated_from_trainer
- smol-course
- module_1
- trl
- sft
licence: license
---
# Model Card for SmolLM2-FT-MyDataset
This model is a fine-tuned version of [HuggingFaceTB/SmolLM2-135M](https://huggingface.co/HuggingFaceTB/SmolLM2-135M).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="SergeyPugachevv/SmolLM2-FT-MyDataset", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/sergeypugachev96-wb/huggingface/runs/ymga9922)
This model was trained with SFT.
### Framework versions
- TRL: 0.15.2
- Transformers: 4.48.3
- Pytorch: 2.5.1+cu124
- Datasets: 3.3.2
- Tokenizers: 0.21.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
vnktrmnb/xlm-roberta-base-FT-TyDiQA_AUQ | vnktrmnb | "2023-08-21T07:39:15Z" | 75 | 0 | transformers | [
"transformers",
"tf",
"tensorboard",
"xlm-roberta",
"question-answering",
"generated_from_keras_callback",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"endpoints_compatible",
"region:us"
] | question-answering | "2023-08-19T08:39:52Z" | ---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_keras_callback
model-index:
- name: vnktrmnb/xlm-roberta-base-FT-TyDiQA_AUQ
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# vnktrmnb/xlm-roberta-base-FT-TyDiQA_AUQ
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.4157
- Train End Logits Accuracy: 0.8542
- Train Start Logits Accuracy: 0.9111
- Validation Loss: 0.4927
- Validation End Logits Accuracy: 0.8545
- Validation Start Logits Accuracy: 0.9007
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
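That said, the checkpoint can be exercised through the standard question-answering pipeline; a hedged sketch (the question and context are made up, and `framework="tf"` reflects that the repo ships TensorFlow weights):

```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="vnktrmnb/xlm-roberta-base-FT-TyDiQA_AUQ",
    framework="tf",  # the repo contains TF weights
)
print(qa(question="Where is the Eiffel Tower?", context="The Eiffel Tower is in Paris."))
```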
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 4206, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 1.4726 | 0.6104 | 0.6723 | 0.5444 | 0.8266 | 0.8839 | 0 |
| 0.6639 | 0.7915 | 0.8538 | 0.4898 | 0.8462 | 0.8979 | 1 |
| 0.4157 | 0.8542 | 0.9111 | 0.4927 | 0.8545 | 0.9007 | 2 |
### Framework versions
- Transformers 4.31.0
- TensorFlow 2.12.0
- Datasets 2.14.4
- Tokenizers 0.13.3
|
win10/MagpieLM-8B | win10 | "2024-10-06T06:57:37Z" | 5 | 0 | null | [
"safetensors",
"llama",
"merge",
"mergekit",
"lazymergekit",
"Magpie-Align/MagpieLM-4B-Chat-v0.1",
"base_model:Magpie-Align/MagpieLM-4B-Chat-v0.1",
"base_model:finetune:Magpie-Align/MagpieLM-4B-Chat-v0.1",
"region:us"
] | null | "2024-10-06T06:53:28Z" | ---
base_model:
- Magpie-Align/MagpieLM-4B-Chat-v0.1
- Magpie-Align/MagpieLM-4B-Chat-v0.1
- Magpie-Align/MagpieLM-4B-Chat-v0.1
- Magpie-Align/MagpieLM-4B-Chat-v0.1
- Magpie-Align/MagpieLM-4B-Chat-v0.1
- Magpie-Align/MagpieLM-4B-Chat-v0.1
- Magpie-Align/MagpieLM-4B-Chat-v0.1
- Magpie-Align/MagpieLM-4B-Chat-v0.1
- Magpie-Align/MagpieLM-4B-Chat-v0.1
- Magpie-Align/MagpieLM-4B-Chat-v0.1
- Magpie-Align/MagpieLM-4B-Chat-v0.1
- Magpie-Align/MagpieLM-4B-Chat-v0.1
- Magpie-Align/MagpieLM-4B-Chat-v0.1
- Magpie-Align/MagpieLM-4B-Chat-v0.1
- Magpie-Align/MagpieLM-4B-Chat-v0.1
- Magpie-Align/MagpieLM-4B-Chat-v0.1
- Magpie-Align/MagpieLM-4B-Chat-v0.1
- Magpie-Align/MagpieLM-4B-Chat-v0.1
- Magpie-Align/MagpieLM-4B-Chat-v0.1
- Magpie-Align/MagpieLM-4B-Chat-v0.1
- Magpie-Align/MagpieLM-4B-Chat-v0.1
- Magpie-Align/MagpieLM-4B-Chat-v0.1
- Magpie-Align/MagpieLM-4B-Chat-v0.1
- Magpie-Align/MagpieLM-4B-Chat-v0.1
- Magpie-Align/MagpieLM-4B-Chat-v0.1
- Magpie-Align/MagpieLM-4B-Chat-v0.1
- Magpie-Align/MagpieLM-4B-Chat-v0.1
- Magpie-Align/MagpieLM-4B-Chat-v0.1
- Magpie-Align/MagpieLM-4B-Chat-v0.1
- Magpie-Align/MagpieLM-4B-Chat-v0.1
tags:
- merge
- mergekit
- lazymergekit
- Magpie-Align/MagpieLM-4B-Chat-v0.1
---
# MagpieLM-8B
MagpieLM-8B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [Magpie-Align/MagpieLM-4B-Chat-v0.1](https://huggingface.co/Magpie-Align/MagpieLM-4B-Chat-v0.1)
* [Magpie-Align/MagpieLM-4B-Chat-v0.1](https://huggingface.co/Magpie-Align/MagpieLM-4B-Chat-v0.1)
* [Magpie-Align/MagpieLM-4B-Chat-v0.1](https://huggingface.co/Magpie-Align/MagpieLM-4B-Chat-v0.1)
* [Magpie-Align/MagpieLM-4B-Chat-v0.1](https://huggingface.co/Magpie-Align/MagpieLM-4B-Chat-v0.1)
* [Magpie-Align/MagpieLM-4B-Chat-v0.1](https://huggingface.co/Magpie-Align/MagpieLM-4B-Chat-v0.1)
* [Magpie-Align/MagpieLM-4B-Chat-v0.1](https://huggingface.co/Magpie-Align/MagpieLM-4B-Chat-v0.1)
* [Magpie-Align/MagpieLM-4B-Chat-v0.1](https://huggingface.co/Magpie-Align/MagpieLM-4B-Chat-v0.1)
* [Magpie-Align/MagpieLM-4B-Chat-v0.1](https://huggingface.co/Magpie-Align/MagpieLM-4B-Chat-v0.1)
* [Magpie-Align/MagpieLM-4B-Chat-v0.1](https://huggingface.co/Magpie-Align/MagpieLM-4B-Chat-v0.1)
* [Magpie-Align/MagpieLM-4B-Chat-v0.1](https://huggingface.co/Magpie-Align/MagpieLM-4B-Chat-v0.1)
* [Magpie-Align/MagpieLM-4B-Chat-v0.1](https://huggingface.co/Magpie-Align/MagpieLM-4B-Chat-v0.1)
* [Magpie-Align/MagpieLM-4B-Chat-v0.1](https://huggingface.co/Magpie-Align/MagpieLM-4B-Chat-v0.1)
* [Magpie-Align/MagpieLM-4B-Chat-v0.1](https://huggingface.co/Magpie-Align/MagpieLM-4B-Chat-v0.1)
* [Magpie-Align/MagpieLM-4B-Chat-v0.1](https://huggingface.co/Magpie-Align/MagpieLM-4B-Chat-v0.1)
* [Magpie-Align/MagpieLM-4B-Chat-v0.1](https://huggingface.co/Magpie-Align/MagpieLM-4B-Chat-v0.1)
* [Magpie-Align/MagpieLM-4B-Chat-v0.1](https://huggingface.co/Magpie-Align/MagpieLM-4B-Chat-v0.1)
* [Magpie-Align/MagpieLM-4B-Chat-v0.1](https://huggingface.co/Magpie-Align/MagpieLM-4B-Chat-v0.1)
* [Magpie-Align/MagpieLM-4B-Chat-v0.1](https://huggingface.co/Magpie-Align/MagpieLM-4B-Chat-v0.1)
* [Magpie-Align/MagpieLM-4B-Chat-v0.1](https://huggingface.co/Magpie-Align/MagpieLM-4B-Chat-v0.1)
* [Magpie-Align/MagpieLM-4B-Chat-v0.1](https://huggingface.co/Magpie-Align/MagpieLM-4B-Chat-v0.1)
* [Magpie-Align/MagpieLM-4B-Chat-v0.1](https://huggingface.co/Magpie-Align/MagpieLM-4B-Chat-v0.1)
* [Magpie-Align/MagpieLM-4B-Chat-v0.1](https://huggingface.co/Magpie-Align/MagpieLM-4B-Chat-v0.1)
* [Magpie-Align/MagpieLM-4B-Chat-v0.1](https://huggingface.co/Magpie-Align/MagpieLM-4B-Chat-v0.1)
* [Magpie-Align/MagpieLM-4B-Chat-v0.1](https://huggingface.co/Magpie-Align/MagpieLM-4B-Chat-v0.1)
* [Magpie-Align/MagpieLM-4B-Chat-v0.1](https://huggingface.co/Magpie-Align/MagpieLM-4B-Chat-v0.1)
* [Magpie-Align/MagpieLM-4B-Chat-v0.1](https://huggingface.co/Magpie-Align/MagpieLM-4B-Chat-v0.1)
* [Magpie-Align/MagpieLM-4B-Chat-v0.1](https://huggingface.co/Magpie-Align/MagpieLM-4B-Chat-v0.1)
* [Magpie-Align/MagpieLM-4B-Chat-v0.1](https://huggingface.co/Magpie-Align/MagpieLM-4B-Chat-v0.1)
* [Magpie-Align/MagpieLM-4B-Chat-v0.1](https://huggingface.co/Magpie-Align/MagpieLM-4B-Chat-v0.1)
* [Magpie-Align/MagpieLM-4B-Chat-v0.1](https://huggingface.co/Magpie-Align/MagpieLM-4B-Chat-v0.1)
## 🧩 Configuration
```yaml
dtype: bfloat16
merge_method: passthrough
slices:
- sources:
- layer_range: [0, 2]
model: Magpie-Align/MagpieLM-4B-Chat-v0.1
- sources:
- layer_range: [1, 3]
model: Magpie-Align/MagpieLM-4B-Chat-v0.1
- sources:
- layer_range: [2, 4]
model: Magpie-Align/MagpieLM-4B-Chat-v0.1
- sources:
- layer_range: [3, 5]
model: Magpie-Align/MagpieLM-4B-Chat-v0.1
- sources:
- layer_range: [4, 6]
model: Magpie-Align/MagpieLM-4B-Chat-v0.1
- sources:
- layer_range: [5, 7]
model: Magpie-Align/MagpieLM-4B-Chat-v0.1
- sources:
- layer_range: [6, 8]
model: Magpie-Align/MagpieLM-4B-Chat-v0.1
- sources:
- layer_range: [7, 9]
model: Magpie-Align/MagpieLM-4B-Chat-v0.1
- sources:
- layer_range: [8, 10]
model: Magpie-Align/MagpieLM-4B-Chat-v0.1
- sources:
- layer_range: [9, 11]
model: Magpie-Align/MagpieLM-4B-Chat-v0.1
- sources:
- layer_range: [10, 12]
model: Magpie-Align/MagpieLM-4B-Chat-v0.1
- sources:
- layer_range: [11, 13]
model: Magpie-Align/MagpieLM-4B-Chat-v0.1
- sources:
- layer_range: [12, 14]
model: Magpie-Align/MagpieLM-4B-Chat-v0.1
- sources:
- layer_range: [13, 15]
model: Magpie-Align/MagpieLM-4B-Chat-v0.1
- sources:
- layer_range: [14, 16]
model: Magpie-Align/MagpieLM-4B-Chat-v0.1
- sources:
- layer_range: [16, 18]
model: Magpie-Align/MagpieLM-4B-Chat-v0.1
- sources:
- layer_range: [17, 19]
model: Magpie-Align/MagpieLM-4B-Chat-v0.1
- sources:
- layer_range: [18, 20]
model: Magpie-Align/MagpieLM-4B-Chat-v0.1
- sources:
- layer_range: [19, 21]
model: Magpie-Align/MagpieLM-4B-Chat-v0.1
- sources:
- layer_range: [20, 22]
model: Magpie-Align/MagpieLM-4B-Chat-v0.1
- sources:
- layer_range: [21, 23]
model: Magpie-Align/MagpieLM-4B-Chat-v0.1
- sources:
- layer_range: [22, 24]
model: Magpie-Align/MagpieLM-4B-Chat-v0.1
- sources:
- layer_range: [23, 25]
model: Magpie-Align/MagpieLM-4B-Chat-v0.1
- sources:
- layer_range: [24, 26]
model: Magpie-Align/MagpieLM-4B-Chat-v0.1
- sources:
- layer_range: [25, 27]
model: Magpie-Align/MagpieLM-4B-Chat-v0.1
- sources:
- layer_range: [26, 28]
model: Magpie-Align/MagpieLM-4B-Chat-v0.1
- sources:
- layer_range: [27, 29]
model: Magpie-Align/MagpieLM-4B-Chat-v0.1
- sources:
- layer_range: [28, 30]
model: Magpie-Align/MagpieLM-4B-Chat-v0.1
- sources:
- layer_range: [29, 31]
model: Magpie-Align/MagpieLM-4B-Chat-v0.1
- sources:
- layer_range: [29, 32]
model: Magpie-Align/MagpieLM-4B-Chat-v0.1
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "win10/MagpieLM-8B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
tensorblock/CodeLlama-34b-Python-hf-GGUF | tensorblock | "2024-11-17T02:19:33Z" | 35 | 0 | null | [
"gguf",
"llama-2",
"TensorBlock",
"GGUF",
"text-generation",
"code",
"base_model:codellama/CodeLlama-34b-Python-hf",
"base_model:quantized:codellama/CodeLlama-34b-Python-hf",
"license:llama2",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-11-16T23:55:56Z" | ---
language:
- code
pipeline_tag: text-generation
tags:
- llama-2
- TensorBlock
- GGUF
license: llama2
base_model: codellama/CodeLlama-34b-Python-hf
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## codellama/CodeLlama-34b-Python-hf - GGUF
This repo contains GGUF format model files for [codellama/CodeLlama-34b-Python-hf](https://huggingface.co/codellama/CodeLlama-34b-Python-hf).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4011](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
<div style="text-align: left; margin: 20px 0;">
<a href="https://tensorblock.co/waitlist/client" style="display: inline-block; padding: 10px 20px; background-color: #007bff; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;">
Run them on the TensorBlock client using your local machine ↗
</a>
</div>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [CodeLlama-34b-Python-hf-Q2_K.gguf](https://huggingface.co/tensorblock/CodeLlama-34b-Python-hf-GGUF/blob/main/CodeLlama-34b-Python-hf-Q2_K.gguf) | Q2_K | 11.647 GB | smallest, significant quality loss - not recommended for most purposes |
| [CodeLlama-34b-Python-hf-Q3_K_S.gguf](https://huggingface.co/tensorblock/CodeLlama-34b-Python-hf-GGUF/blob/main/CodeLlama-34b-Python-hf-Q3_K_S.gguf) | Q3_K_S | 13.602 GB | very small, high quality loss |
| [CodeLlama-34b-Python-hf-Q3_K_M.gguf](https://huggingface.co/tensorblock/CodeLlama-34b-Python-hf-GGUF/blob/main/CodeLlama-34b-Python-hf-Q3_K_M.gguf) | Q3_K_M | 15.186 GB | very small, high quality loss |
| [CodeLlama-34b-Python-hf-Q3_K_L.gguf](https://huggingface.co/tensorblock/CodeLlama-34b-Python-hf-GGUF/blob/main/CodeLlama-34b-Python-hf-Q3_K_L.gguf) | Q3_K_L | 16.551 GB | small, substantial quality loss |
| [CodeLlama-34b-Python-hf-Q4_0.gguf](https://huggingface.co/tensorblock/CodeLlama-34b-Python-hf-GGUF/blob/main/CodeLlama-34b-Python-hf-Q4_0.gguf) | Q4_0 | 17.744 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [CodeLlama-34b-Python-hf-Q4_K_S.gguf](https://huggingface.co/tensorblock/CodeLlama-34b-Python-hf-GGUF/blob/main/CodeLlama-34b-Python-hf-Q4_K_S.gguf) | Q4_K_S | 17.873 GB | small, greater quality loss |
| [CodeLlama-34b-Python-hf-Q4_K_M.gguf](https://huggingface.co/tensorblock/CodeLlama-34b-Python-hf-GGUF/blob/main/CodeLlama-34b-Python-hf-Q4_K_M.gguf) | Q4_K_M | 18.831 GB | medium, balanced quality - recommended |
| [CodeLlama-34b-Python-hf-Q5_0.gguf](https://huggingface.co/tensorblock/CodeLlama-34b-Python-hf-GGUF/blob/main/CodeLlama-34b-Python-hf-Q5_0.gguf) | Q5_0 | 21.641 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [CodeLlama-34b-Python-hf-Q5_K_S.gguf](https://huggingface.co/tensorblock/CodeLlama-34b-Python-hf-GGUF/blob/main/CodeLlama-34b-Python-hf-Q5_K_S.gguf) | Q5_K_S | 21.641 GB | large, low quality loss - recommended |
| [CodeLlama-34b-Python-hf-Q5_K_M.gguf](https://huggingface.co/tensorblock/CodeLlama-34b-Python-hf-GGUF/blob/main/CodeLlama-34b-Python-hf-Q5_K_M.gguf) | Q5_K_M | 22.202 GB | large, very low quality loss - recommended |
| [CodeLlama-34b-Python-hf-Q6_K.gguf](https://huggingface.co/tensorblock/CodeLlama-34b-Python-hf-GGUF/blob/main/CodeLlama-34b-Python-hf-Q6_K.gguf) | Q6_K | 25.783 GB | very large, extremely low quality loss |
| [CodeLlama-34b-Python-hf-Q8_0.gguf](https://huggingface.co/tensorblock/CodeLlama-34b-Python-hf-GGUF/blob/main/CodeLlama-34b-Python-hf-Q8_0.gguf) | Q8_0 | 33.394 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face Hub client
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory
```shell
huggingface-cli download tensorblock/CodeLlama-34b-Python-hf-GGUF --include "CodeLlama-34b-Python-hf-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/CodeLlama-34b-Python-hf-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
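Once downloaded, a GGUF file can be run locally with llama.cpp's CLI; a minimal sketch, where the binary location, prompt, and token count are illustrative:

```shell
./llama-cli -m MY_LOCAL_DIR/CodeLlama-34b-Python-hf-Q4_K_M.gguf -p "def fibonacci(n):" -n 128
```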
|
AOLCDROM/XTTSv2-Hi_ft | AOLCDROM | "2024-07-05T14:38:59Z" | 32 | 4 | transformers | [
"transformers",
"text-to-speech",
"hi",
"en",
"endpoints_compatible",
"region:us"
] | text-to-speech | "2024-07-03T18:54:08Z" | ---
language:
- hi
- en
library_name: transformers
pipeline_tag: text-to-speech
---
XTTSv2 checkpoints fine-tuned with the forked Coqui TTS (https://github.com/idiap/coqui-ai-TTS) for Hindi speech.
Trained on the Indic TTS Database (https://www.iitm.ac.in/donlab/tts/) and the Mozilla Common Voice 18.0 Hindi dataset (https://commonvoice.mozilla.org/en/datasets).
Rename a checkpoint to model.pth and use it to replace the original XTTSv2 model, or load it according to your own XTTSv2 implementation.
The checkpoints with the highest step count may _not be the best_; I think the best-quality output here is best_model_43036.pth. Reference speaker audio files are in ./speakers-hi of this repo.
Use language code 'hi' at inference for Hindi speech; use 'hi' with English text to generate English with the learned Hindi pronunciations.
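A minimal inference sketch, assuming the fork keeps the standard `TTS.api` interface; the checkpoint/config paths are placeholders, and the speaker file name stands in for any clip from ./speakers-hi:

```python
from TTS.api import TTS

tts = TTS(model_path="path/to/finetuned_xtts_dir",  # dir containing model.pth + config.json
          config_path="path/to/config.json").to("cuda")
tts.tts_to_file(
    text="नमस्ते, आप कैसे हैं?",
    speaker_wav="speakers-hi/reference.wav",  # reference clip from ./speakers-hi
    language="hi",                            # 'hi' per the note above
    file_path="output.wav",
)
``` |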
Nexspear/9d6c3588-50f3-4ad7-b7ec-9003f44b327b | Nexspear | "2025-02-07T11:35:51Z" | 9 | 0 | peft | [
"peft",
"safetensors",
"qwen2_moe",
"axolotl",
"generated_from_trainer",
"base_model:katuni4ka/tiny-random-qwen1.5-moe",
"base_model:adapter:katuni4ka/tiny-random-qwen1.5-moe",
"region:us"
] | null | "2025-02-07T11:30:24Z" | ---
library_name: peft
base_model: katuni4ka/tiny-random-qwen1.5-moe
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 9d6c3588-50f3-4ad7-b7ec-9003f44b327b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: katuni4ka/tiny-random-qwen1.5-moe
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- f8e02fa823dd12e8_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/f8e02fa823dd12e8_train_data.json
type:
field_instruction: prompt
field_output: response
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: Nexspear/9d6c3588-50f3-4ad7-b7ec-9003f44b327b
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 400
micro_batch_size: 8
mlflow_experiment_name: /tmp/f8e02fa823dd12e8_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: techspear-hub
wandb_mode: online
wandb_name: 114b5f6a-8705-4766-a06f-03b7aa23b4ef
wandb_project: Gradients-On-Four
wandb_run: your_name
wandb_runid: 114b5f6a-8705-4766-a06f-03b7aa23b4ef
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 9d6c3588-50f3-4ad7-b7ec-9003f44b327b
This model is a fine-tuned version of [katuni4ka/tiny-random-qwen1.5-moe](https://huggingface.co/katuni4ka/tiny-random-qwen1.5-moe) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 11.8428
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 400
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 11.938 | 0.0055 | 1 | 11.9396 |
| 11.8831 | 0.2743 | 50 | 11.8795 |
| 11.8735 | 0.5487 | 100 | 11.8733 |
| 11.8634 | 0.8230 | 150 | 11.8630 |
| 11.8332 | 1.0974 | 200 | 11.8528 |
| 11.8331 | 1.3717 | 250 | 11.8470 |
| 11.8294 | 1.6461 | 300 | 11.8441 |
| 11.8596 | 1.9204 | 350 | 11.8429 |
| 11.8156 | 2.1948 | 400 | 11.8428 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
sebastiansarasti/CvTSkinCancer | sebastiansarasti | "2025-01-29T17:19:12Z" | 24 | 0 | null | [
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"base_model:microsoft/cvt-13",
"base_model:finetune:microsoft/cvt-13",
"license:mit",
"region:us"
] | null | "2024-12-27T21:25:04Z" | ---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
license: mit
base_model:
- microsoft/cvt-13
---
# CvTSkinCancer for Skin Cancer Classification
## Model Details
- **Model Architecture:** CvT (Convolutional Vision Transformer)
- **Framework:** PyTorch
- **Input Shape:** 224x224 RGB images
- **Number of Parameters:** (Based on CvT-13 Model)
- **Output:** Multi-class classification (9 classes)
## Model Description
This model leverages the **CvT (Convolutional Vision Transformer)** from the `microsoft/cvt-13` pretrained model for **skin cancer classification**. The model is fine-tuned by adding a custom fully connected layer that includes **SiLU activations**, **Dropout**, and **Batch Normalization** to improve generalization and reduce overfitting. Only the classifier layer is trainable, while the convolutional layers of the CvT model are frozen to retain pretrained features.
The final model performs multi-class classification with 9 output classes corresponding to different skin cancer types.
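A hedged sketch of the architecture described above (not the exact training code); the head's layer widths and dropout rate are assumptions, while 384 is the CvT-13 final-stage embedding size:

```python
import torch
import torch.nn as nn
from transformers import CvtModel

backbone = CvtModel.from_pretrained("microsoft/cvt-13")
for p in backbone.parameters():
    p.requires_grad = False  # freeze the pretrained convolutional stages

classifier = nn.Sequential(
    nn.Linear(384, 256),   # 384 = CvT-13 cls-token dim; 256 is an assumed width
    nn.BatchNorm1d(256),
    nn.SiLU(),
    nn.Dropout(0.3),       # assumed rate
    nn.Linear(256, 9),     # 9 skin-cancer classes
)

def forward(pixel_values: torch.Tensor) -> torch.Tensor:
    cls_token = backbone(pixel_values).cls_token_value.squeeze(1)  # (batch, 384)
    return classifier(cls_token)
```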
## Training Details
- **Optimizer:** Adam
- **Batch Size:** 64
- **Loss Function:** Cross-Entropy Loss
- **Number of Epochs:** 10
- **Dataset:** [Skin Cancer 9-Class Dataset](https://www.kaggle.com/datasets/nodoubttome/skin-cancer9-classesisic)
### Metrics (Validation Set)
| Class | Precision | Recall | F1-Score |
|-------|-----------|--------|----------|
| 0 | 0.78 | 0.44 | 0.56 |
| 1 | 0.50 | 0.56 | 0.53 |
| 2 | 0.67 | 0.38 | 0.48 |
| 3 | 0.50 | 0.25 | 0.33 |
| 4 | 0.44 | 0.88 | 0.58 |
| 5 | 0.81 | 0.81 | 0.81 |
| 6 | 0.43 | 1.00 | 0.60 |
| 7 | 0.40 | 0.38 | 0.39 |
| 8 | 0.50 | 0.67 | 0.57 |
- **Overall Accuracy:** 0.54
- **Macro Average Precision:** 0.56
- **Macro Average Recall:** 0.59
- **Macro Average F1-Score:** 0.54
- **Weighted Average Precision:** 0.58
- **Weighted Average Recall:** 0.54
- **Weighted Average F1-Score:** 0.53
## License
This model is released under the **MIT License**.
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Library: [More Information Needed]
- Docs: [More Information Needed] |
ZeroWw/internlm2_5-7b-chat-1m-GGUF | ZeroWw | "2024-07-03T15:31:39Z" | 11 | 1 | null | [
"gguf",
"en",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-07-03T15:19:29Z" |
---
license: mit
language:
- en
---
My own (ZeroWw) quantizations.
Output and embed tensors are quantized to f16; all other tensors are quantized to q5_k or q6_k.
Result: both f16.q6 and f16.q5 are smaller than the standard q8_0 quantization, and they perform as well as the pure f16.
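This recipe can be reproduced with llama.cpp's `llama-quantize`, which exposes per-tensor-type overrides; the exact invocation below is an assumption, not the author's command:

```shell
./llama-quantize --output-tensor-type f16 --token-embedding-type f16 \
    model.f16.gguf model.f16.q6.gguf q6_k
```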
Updated on: Wed Jul 03, 15:19:29
|
ineoApp/model-2024-06-04 | ineoApp | "2024-06-04T14:33:53Z" | 76 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"layoutlmv3",
"token-classification",
"generated_from_trainer",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2024-06-04T11:15:44Z" | ---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: model-2024-06-04
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model-2024-06-04
This model is a fine-tuned version of [microsoft/layoutlmv3-large](https://huggingface.co/microsoft/layoutlmv3-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4164
- Precision: 0.6190
- Recall: 0.8387
- F1: 0.7123
- Accuracy: 0.7907
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 1000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 25.0 | 100 | 1.1670 | 0.5152 | 0.5484 | 0.5312 | 0.7442 |
| No log | 50.0 | 200 | 1.0020 | 0.5676 | 0.6774 | 0.6176 | 0.7791 |
| No log | 75.0 | 300 | 1.2200 | 0.6111 | 0.7097 | 0.6567 | 0.7791 |
| No log | 100.0 | 400 | 1.2976 | 0.6 | 0.7742 | 0.6761 | 0.7791 |
| 0.4049 | 125.0 | 500 | 1.3549 | 0.6098 | 0.8065 | 0.6944 | 0.7791 |
| 0.4049 | 150.0 | 600 | 1.2864 | 0.625 | 0.8065 | 0.7042 | 0.7907 |
| 0.4049 | 175.0 | 700 | 1.2832 | 0.6486 | 0.7742 | 0.7059 | 0.8023 |
| 0.4049 | 200.0 | 800 | 1.3822 | 0.625 | 0.8065 | 0.7042 | 0.7907 |
| 0.4049 | 225.0 | 900 | 1.3862 | 0.6098 | 0.8065 | 0.6944 | 0.7907 |
| 0.021 | 250.0 | 1000 | 1.4164 | 0.6190 | 0.8387 | 0.7123 | 0.7907 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.2
- Tokenizers 0.13.3
|
qnho/wav2vec2-large-xlsr-53-demo-colab | qnho | "2025-03-04T16:15:40Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice_13_0",
"base_model:facebook/wav2vec2-large-xlsr-53",
"base_model:finetune:facebook/wav2vec2-large-xlsr-53",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2025-03-04T13:59:33Z" | ---
library_name: transformers
license: apache-2.0
base_model: facebook/wav2vec2-large-xlsr-53
tags:
- generated_from_trainer
datasets:
- common_voice_13_0
model-index:
- name: wav2vec2-large-xlsr-53-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-53-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice_13_0 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
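A hedged usage sketch (not part of the original card): transcribing an audio file with the ASR pipeline, where the file path is a placeholder.

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="qnho/wav2vec2-large-xlsr-53-demo-colab")
print(asr("sample.wav"))  # path to a 16 kHz mono audio file
```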
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.48.3
- Pytorch 2.5.1+cu124
- Datasets 3.3.2
- Tokenizers 0.21.0
|
ssdaimari44/bodo-mt | ssdaimari44 | "2025-03-04T18:03:47Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"marian",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2025-03-04T17:15:49Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [Sudem S Daimari]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [Bodo Language]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
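In the absence of documented usage, a hedged sketch of loading this Marian checkpoint through the translation pipeline (the translation direction is not documented, so the input is a placeholder):

```python
from transformers import pipeline

translator = pipeline("translation", model="ssdaimari44/bodo-mt")
print(translator("Hello, how are you?"))
```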
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ahmedgongi/my_model | ahmedgongi | "2024-02-28T13:58:56Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:facebook/opt-125m",
"base_model:adapter:facebook/opt-125m",
"region:us"
] | null | "2024-02-28T12:50:58Z" | ---
library_name: peft
base_model: facebook/opt-125m
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
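A hedged sketch (not from the original card): attaching this PEFT adapter to its `facebook/opt-125m` base model, as declared in the metadata:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")
model = PeftModel.from_pretrained(base, "ahmedgongi/my_model")
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-125m")

inputs = tokenizer("Hello, my name is", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```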
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.8.2 |
fsicoli/whisper-large-v3-pt-cv19-fleurs | fsicoli | "2024-10-02T20:21:24Z" | 123 | 2 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:fsicoli/cv19-fleurs",
"base_model:openai/whisper-large-v3",
"base_model:finetune:openai/whisper-large-v3",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-09-24T07:54:23Z" | ---
library_name: transformers
license: apache-2.0
base_model: openai/whisper-large-v3
tags:
- generated_from_trainer
datasets:
- fsicoli/cv19-fleurs
metrics:
- wer
model-index:
- name: whisper-large-v3-pt-cv19-fleurs
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: fsicoli/cv19-fleurs default
type: fsicoli/cv19-fleurs
args: default
metrics:
- name: Wer
type: wer
value: 0.0756
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-large-v3-pt-cv19-fleurs
This model is a fine-tuned version of [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) on the fsicoli/cv19-fleurs default dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1823
- Wer: 0.0756
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6.25e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10000
- training_steps: 50000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-------:|:-----:|:---------------:|:------:|
| 0.0559 | 2.2883 | 5000 | 0.1096 | 0.0730 |
| 0.0581 | 4.5767 | 10000 | 0.1326 | 0.0829 |
| 0.0225 | 6.8650 | 15000 | 0.1570 | 0.0849 |
| 0.0088 | 9.1533 | 20000 | 0.1704 | 0.0840 |
| 0.0065 | 11.4416 | 25000 | 0.1823 | 0.0849 |
| 0.006 | 13.7300 | 30000 | 0.1808 | 0.0809 |
| 0.0055 | 16.0183 | 35000 | 0.1811 | 0.0790 |
| 0.0031 | 18.3066 | 40000 | 0.1907 | 0.0784 |
| 0.0011 | 20.5950 | 45000 | 0.1852 | 0.0771 |
| 0.0003 | 22.8833 | 50000 | 0.1848 | 0.0756 |
### Framework versions
- Transformers 4.45.0.dev0
- Pytorch 2.4.1
- Datasets 2.21.0
- Tokenizers 0.19.1
|
Haimi/ArtGAN | Haimi | "2023-09-05T05:03:22Z" | 0 | 0 | null | [
"license:cc-by-nc-nd-4.0",
"region:us"
] | null | "2023-09-04T11:35:15Z" | ---
license: cc-by-nc-nd-4.0
---
## License
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Attribution — You must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.
NonCommercial — You may not use the material for commercial purposes.
NoDerivatives — If you remix, transform, or build upon the material, you may not distribute the modified material.
## Acknowledgements
Personal creation
|
kirah/fcv_s2t_2 | kirah | "2022-12-14T16:39:36Z" | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"Noe tags",
"generated_from_trainer",
"es",
"dataset:custom__short_dataset",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2022-12-14T15:49:13Z" | ---
language:
- es
license: apache-2.0
tags:
- Noe tags
- generated_from_trainer
datasets:
- custom__short_dataset
model-index:
- name: Whisper Small spanish - Sanchit Gandhi notebook example
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small spanish - Sanchit Gandhi notebook example
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the small random dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 7
### Training results
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
|
mradermacher/Hush-Qwen2.5-7B-v1.4-i1-GGUF | mradermacher | "2025-03-09T00:32:14Z" | 0 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | "2025-03-08T23:12:16Z" | <!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/marcuscedricridia/Hush-Qwen2.5-7B-v1.4
|
PrunaAI/FreedomIntelligence-Apollo-7B-bnb-4bit-smashed | PrunaAI | "2024-07-21T21:20:29Z" | 9 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"pruna-ai",
"base_model:FreedomIntelligence/Apollo-7B",
"base_model:quantized:FreedomIntelligence/Apollo-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2024-07-21T21:17:54Z" | ---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
base_model: FreedomIntelligence/Apollo-7B
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/rskEr4BZJx)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with llm-int8.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on HARDWARE_NAME with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the models directly in your use-case conditions to see whether the smashed model benefits you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check that the requirements from the original repo FreedomIntelligence/Apollo-7B are installed. In particular, check the Python, CUDA, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install transformers accelerate bitsandbytes>0.37.0
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("PrunaAI/FreedomIntelligence-Apollo-7B-bnb-4bit-smashed", trust_remote_code=True, device_map='auto')
tokenizer = AutoTokenizer.from_pretrained("FreedomIntelligence/Apollo-7B")
input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```
## Configurations
The configuration info are in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model, FreedomIntelligence/Apollo-7B, which provided the base model, before using this smashed model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). |
mradermacher/L3.1-70B-LumineaDare-i1-GGUF | mradermacher | "2024-12-13T13:12:55Z" | 30 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:Arkhiveus/L3.1-70B-LumineaDare",
"base_model:quantized:Arkhiveus/L3.1-70B-LumineaDare",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | "2024-12-13T05:22:29Z" | ---
base_model: Arkhiveus/L3.1-70B-LumineaDare
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Arkhiveus/L3.1-70B-LumineaDare
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/L3.1-70B-LumineaDare-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
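For the multi-part Q6_K quant in the table below, concatenation is a plain byte-wise join; for example:

```shell
cat L3.1-70B-LumineaDare.i1-Q6_K.gguf.part1of2 \
    L3.1-70B-LumineaDare.i1-Q6_K.gguf.part2of2 > L3.1-70B-LumineaDare.i1-Q6_K.gguf
```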
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/L3.1-70B-LumineaDare-i1-GGUF/resolve/main/L3.1-70B-LumineaDare.i1-IQ1_S.gguf) | i1-IQ1_S | 15.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/L3.1-70B-LumineaDare-i1-GGUF/resolve/main/L3.1-70B-LumineaDare.i1-IQ1_M.gguf) | i1-IQ1_M | 16.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/L3.1-70B-LumineaDare-i1-GGUF/resolve/main/L3.1-70B-LumineaDare.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 19.2 | |
| [GGUF](https://huggingface.co/mradermacher/L3.1-70B-LumineaDare-i1-GGUF/resolve/main/L3.1-70B-LumineaDare.i1-IQ2_XS.gguf) | i1-IQ2_XS | 21.2 | |
| [GGUF](https://huggingface.co/mradermacher/L3.1-70B-LumineaDare-i1-GGUF/resolve/main/L3.1-70B-LumineaDare.i1-IQ2_S.gguf) | i1-IQ2_S | 22.3 | |
| [GGUF](https://huggingface.co/mradermacher/L3.1-70B-LumineaDare-i1-GGUF/resolve/main/L3.1-70B-LumineaDare.i1-IQ2_M.gguf) | i1-IQ2_M | 24.2 | |
| [GGUF](https://huggingface.co/mradermacher/L3.1-70B-LumineaDare-i1-GGUF/resolve/main/L3.1-70B-LumineaDare.i1-Q2_K_S.gguf) | i1-Q2_K_S | 24.6 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/L3.1-70B-LumineaDare-i1-GGUF/resolve/main/L3.1-70B-LumineaDare.i1-Q2_K.gguf) | i1-Q2_K | 26.5 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/L3.1-70B-LumineaDare-i1-GGUF/resolve/main/L3.1-70B-LumineaDare.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 27.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/L3.1-70B-LumineaDare-i1-GGUF/resolve/main/L3.1-70B-LumineaDare.i1-IQ3_XS.gguf) | i1-IQ3_XS | 29.4 | |
| [GGUF](https://huggingface.co/mradermacher/L3.1-70B-LumineaDare-i1-GGUF/resolve/main/L3.1-70B-LumineaDare.i1-IQ3_S.gguf) | i1-IQ3_S | 31.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/L3.1-70B-LumineaDare-i1-GGUF/resolve/main/L3.1-70B-LumineaDare.i1-Q3_K_S.gguf) | i1-Q3_K_S | 31.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/L3.1-70B-LumineaDare-i1-GGUF/resolve/main/L3.1-70B-LumineaDare.i1-IQ3_M.gguf) | i1-IQ3_M | 32.0 | |
| [GGUF](https://huggingface.co/mradermacher/L3.1-70B-LumineaDare-i1-GGUF/resolve/main/L3.1-70B-LumineaDare.i1-Q3_K_M.gguf) | i1-Q3_K_M | 34.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/L3.1-70B-LumineaDare-i1-GGUF/resolve/main/L3.1-70B-LumineaDare.i1-Q3_K_L.gguf) | i1-Q3_K_L | 37.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/L3.1-70B-LumineaDare-i1-GGUF/resolve/main/L3.1-70B-LumineaDare.i1-IQ4_XS.gguf) | i1-IQ4_XS | 38.0 | |
| [GGUF](https://huggingface.co/mradermacher/L3.1-70B-LumineaDare-i1-GGUF/resolve/main/L3.1-70B-LumineaDare.i1-Q4_0.gguf) | i1-Q4_0 | 40.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/L3.1-70B-LumineaDare-i1-GGUF/resolve/main/L3.1-70B-LumineaDare.i1-Q4_K_S.gguf) | i1-Q4_K_S | 40.4 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/L3.1-70B-LumineaDare-i1-GGUF/resolve/main/L3.1-70B-LumineaDare.i1-Q4_K_M.gguf) | i1-Q4_K_M | 42.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/L3.1-70B-LumineaDare-i1-GGUF/resolve/main/L3.1-70B-LumineaDare.i1-Q5_K_S.gguf) | i1-Q5_K_S | 48.8 | |
| [GGUF](https://huggingface.co/mradermacher/L3.1-70B-LumineaDare-i1-GGUF/resolve/main/L3.1-70B-LumineaDare.i1-Q5_K_M.gguf) | i1-Q5_K_M | 50.0 | |
| [PART 1](https://huggingface.co/mradermacher/L3.1-70B-LumineaDare-i1-GGUF/resolve/main/L3.1-70B-LumineaDare.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/L3.1-70B-LumineaDare-i1-GGUF/resolve/main/L3.1-70B-LumineaDare.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 58.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
ippersky/whisper-small-accented-zh | ippersky | "2024-11-22T13:19:02Z" | 15 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:audiofolder",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-11-21T07:10:22Z" | ---
library_name: transformers
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- audiofolder
metrics:
- wer
model-index:
- name: whisper-small-accented-zh
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: audiofolder
type: audiofolder
config: default
split: None
args: default
metrics:
- name: Wer
type: wer
value: 6.2487983080176885
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-accented-zh
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the audiofolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1562
- Wer: 6.2488
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 0.0727 | 1.6393 | 1000 | 0.1562 | 6.2488 |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.4.0
- Datasets 3.1.0
- Tokenizers 0.20.0
|
DeltatreInnovationLab/BLOOMZ-7b1 | DeltatreInnovationLab | "2023-06-30T08:48:04Z" | 0 | 1 | null | [
"license:bigscience-openrail-m",
"region:us"
] | null | "2023-06-30T08:40:35Z" | ---
license: bigscience-openrail-m
---
|
minhhien0811/deita_reason_arena_3552 | minhhien0811 | "2024-08-29T07:52:16Z" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-08-29T07:49:41Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
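
Since the card leaves this section blank, here is a minimal sketch inferred from the repository tags (`transformers`, `qwen2`, `text-generation`, `conversational`); the chat template and generation settings are assumptions, not documented behavior:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "minhhien0811/deita_reason_arena_3552"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Assumes the repository ships a chat template (plausible for a Qwen2-based model).
messages = [{"role": "user", "content": "Hello!"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```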
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |