| modelId (string, 5-122 chars) | author (string, 2-42 chars) | last_modified (unknown) | downloads (int64, 0-75.3M) | likes (int64, 0-10.6k) | library_name (189 classes) | tags (sequence, 1-1.84k items) | pipeline_tag (48 classes) | createdAt (unknown) | card (string, 1-901k chars) |
|---|---|---|---|---|---|---|---|---|---|
sosoai/unsloth-hansoldeco-mistral-7b-openhermes-mixed-lora-v0.1 | sosoai | "2024-04-14T22:13:47Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | text-generation | "2024-04-14T21:56:00Z" | Entry not found |
Proteinamino/doggy_style_for_asian | Proteinamino | "2024-04-14T21:59:07Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-04-14T21:56:46Z" | Entry not found |
ashishp-wiai/ClipArt_LoRA_50-2024-04-14 | ashishp-wiai | "2024-04-14T22:52:49Z" | 0 | 0 | null | [
"safetensors",
"region:us"
] | null | "2024-04-14T21:56:59Z" | Entry not found |
Ffohturk/mistral-7b_ablated_model_layer_1_gate_only_healed | Ffohturk | "2024-04-15T02:40:23Z" | 0 | 0 | transformers | [
"transformers",
"mistral",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-04-14T21:58:12Z" | Entry not found |
frankmurray/girl | frankmurray | "2024-04-14T21:59:04Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2024-04-14T21:58:14Z" | ---
license: openrail
---
|
femindharamshi/receiptItemizer | femindharamshi | "2024-04-14T22:00:16Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-04-14T22:00:16Z" | Entry not found |
Sikontil/Char.gi | Sikontil | "2024-04-14T22:19:36Z" | 0 | 0 | null | [
"arxiv:1910.09700",
"region:us"
] | null | "2024-04-14T22:02:02Z" | ---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mazzaqq/mistral_sigmoid_lr2e-05_b0.1 | mazzaqq | "2024-04-14T22:02:38Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-04-14T22:02:38Z" | Invalid username or password. |
WesPro/MisHum | WesPro | "2024-04-14T22:15:12Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-04-14T22:05:10Z" | Entry not found |
BrandonM001/bert-finetuned-ner-accelerate | BrandonM001 | "2024-04-15T02:45:04Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"token-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2024-04-14T22:07:02Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
adamjweintraut/bart-finetuned-lyrlen-64-lines-light | adamjweintraut | "2024-04-14T22:11:46Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-04-14T22:11:46Z" | Entry not found |
SeoulStreamingStation/KLMv7s | SeoulStreamingStation | "2024-04-15T00:45:36Z" | 0 | 0 | null | [
"license:other",
"region:us"
] | null | "2024-04-14T22:13:30Z" | ---
license: other
license_name: seoulstreamingstation
license_link: LICENSE
---
|
adamjweintraut/bart-finetuned-kwsylgen-64-simple_input_BARTlarge | adamjweintraut | "2024-04-15T02:00:49Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bart",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2024-04-14T22:13:43Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
base_model: facebook/bart-large
model-index:
- name: bart-finetuned-kwsylgen-64-simple_input_BARTlarge
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-finetuned-kwsylgen-64-simple_input_BARTlarge
This model is a fine-tuned version of [facebook/bart-large](https://huggingface.co/facebook/bart-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1785
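As a quick-start sketch (not part of the original card; the input string is a placeholder, since the expected prompt format is undocumented), the checkpoint can be loaded with a standard `text2text-generation` pipeline:
```python
from transformers import pipeline

# Illustrative only: load the fine-tuned checkpoint from the Hub.
generator = pipeline(
    "text2text-generation",
    model="adamjweintraut/bart-finetuned-kwsylgen-64-simple_input_BARTlarge",
)
# Placeholder input; the card does not document the expected prompt format.
print(generator("example input", max_new_tokens=64)[0]["generated_text"])
```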
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.0641 | 0.18 | 500 | 0.2451 |
| 0.2194 | 0.36 | 1000 | 0.2228 |
| 0.1989 | 0.54 | 1500 | 0.2086 |
| 0.1888 | 0.72 | 2000 | 0.2027 |
| 0.177 | 0.9 | 2500 | 0.1976 |
| 0.1703 | 1.08 | 3000 | 0.1933 |
| 0.1647 | 1.26 | 3500 | 0.1928 |
| 0.159 | 1.44 | 4000 | 0.1890 |
| 0.1538 | 1.61 | 4500 | 0.1864 |
| 0.151 | 1.79 | 5000 | 0.1857 |
| 0.1471 | 1.97 | 5500 | 0.1828 |
| 0.1436 | 2.15 | 6000 | 0.1814 |
| 0.1435 | 2.33 | 6500 | 0.1806 |
| 0.141 | 2.51 | 7000 | 0.1799 |
| 0.1393 | 2.69 | 7500 | 0.1790 |
| 0.1388 | 2.87 | 8000 | 0.1785 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
Kukedlc/NeuralStockFusion-7b | Kukedlc | "2024-04-14T23:58:45Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"arxiv:2403.19522",
"base_model:Kukedlc/NeuralSirKrishna-7b",
"base_model:Kukedlc/NeuralArjuna-7B-DT",
"base_model:Kukedlc/NeuralMaths-Experiment-7b",
"base_model:Kukedlc/NeuralSynthesis-7B-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-04-14T22:18:13Z" | ---
base_model:
- Kukedlc/NeuralSirKrishna-7b
- Kukedlc/NeuralArjuna-7B-DT
- Kukedlc/NeuralMaths-Experiment-7b
- Kukedlc/NeuralSynthesis-7B-v0.1
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
---
# NeuralStockFusion-7b
![image/webp](https://cdn-uploads.huggingface.co/production/uploads/64d71ab4089bc502ceb44d29/5Ex2YG8H1oLXaS25gvZQs.webp)
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [Kukedlc/NeuralSirKrishna-7b](https://huggingface.co/Kukedlc/NeuralSirKrishna-7b) as a base.
### Models Merged
The following models were included in the merge:
* [Kukedlc/NeuralArjuna-7B-DT](https://huggingface.co/Kukedlc/NeuralArjuna-7B-DT)
* [Kukedlc/NeuralMaths-Experiment-7b](https://huggingface.co/Kukedlc/NeuralMaths-Experiment-7b)
* [Kukedlc/NeuralSynthesis-7B-v0.1](https://huggingface.co/Kukedlc/NeuralSynthesis-7B-v0.1)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: Kukedlc/NeuralMaths-Experiment-7b
- model: Kukedlc/NeuralArjuna-7B-DT
- model: Kukedlc/NeuralSirKrishna-7b
- model: Kukedlc/NeuralSynthesis-7B-v0.1
merge_method: model_stock
base_model: Kukedlc/NeuralSirKrishna-7b
dtype: bfloat16
```
# Model Inference:
```python
!pip install -qU transformers accelerate bitsandbytes
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer, BitsAndBytesConfig
import torch
bnb_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_use_double_quant=True,
bnb_4bit_quant_type="nf4",
bnb_4bit_compute_dtype=torch.bfloat16
)
MODEL_NAME = 'Kukedlc/NeuralStockFusion-7b'
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, device_map='cuda:0', quantization_config=bnb_config)
inputs = tokenizer(["[INST] What is a large language model, in spanish \n[/INST]\n"], return_tensors="pt").to('cuda')
streamer = TextStreamer(tokenizer)
# Despite returning the usual output, the streamer will also print the generated text to stdout.
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=256, do_sample=True, temperature=0.7, repetition_penalty=1.4, top_p=0.9)
``` |
Dracones/mixtral-8x22b-instruct-oh_exl2_7.0bpw | Dracones | "2024-04-14T22:38:51Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"exl2",
"conversational",
"en",
"dataset:teknium/OpenHermes-2.5",
"base_model:mistral-community/Mixtral-8x22B-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"7-bit",
"region:us"
] | text-generation | "2024-04-14T22:19:33Z" | ---
language:
- en
license: apache-2.0
datasets:
- teknium/OpenHermes-2.5
base_model: mistral-community/Mixtral-8x22B-v0.1
tags:
- exl2
---
# mixtral-8x22b-instruct-oh - EXL2 7.0bpw
This is a 7.0bpw EXL2 quant of [fireworks-ai/mixtral-8x22b-instruct-oh](https://huggingface.co/fireworks-ai/mixtral-8x22b-instruct-oh).
Details about the model can be found at the above model page.
## EXL2 Version
These quants were made with exllamav2 version 0.0.18. Quants made on this version of EXL2 may not work on older versions of the exllamav2 library.
If you have problems loading these models, please update Text Generation WebUI to the latest version.
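For loading the quant directly from Python, a minimal sketch against the 0.0.18-era exllamav2 API might look like the following (an assumption, not from the original card; the local path is a placeholder, and a Mixtral-8x22B quant at this bitrate still needs a multi-GPU split, as in the scripts below):
```python
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "models/mixtral-8x22b-instruct-oh_exl2_7.0bpw"  # placeholder path
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)  # lazy cache lets autosplit size it per GPU
model.load_autosplit(cache)

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.7

print(generator.generate_simple("Hello, my name is", settings, num_tokens=64))
```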
## Perplexity Scoring
Below are the perplexity scores for the EXL2 models. A lower score is better.
_TODO_
### Perplexity Script
This was the script used for perplexity testing.
```bash
#!/bin/bash
# Activate the conda environment
source ~/miniconda3/etc/profile.d/conda.sh
conda activate exllamav2
# Set the model name and bit size
MODEL_NAME="mixtral-8x22b-instruct-oh"
BIT_PRECISIONS=(7.0 6.0 5.5 5.0 4.5 4.0 3.5 3.0 2.75 2.5 2.25)
# Print the markdown table header
echo "| Quant Level | Perplexity Score |"
echo "|-------------|------------------|"
for BIT_PRECISION in "${BIT_PRECISIONS[@]}"
do
MODEL_DIR="models/${MODEL_NAME}_exl2_${BIT_PRECISION}bpw"
if [ -d "$MODEL_DIR" ]; then
output=$(python test_inference.py -m "$MODEL_DIR" -gs 44,48 -ed data/wikitext/wikitext-2-v1.parquet)
score=$(echo "$output" | grep -oP 'Evaluation perplexity: \K[\d.]+')
echo "| $BIT_PRECISION | $score |"
fi
done
```
## Quant Details
This is the script used for quantization.
```bash
#!/bin/bash
# Activate the conda environment
source ~/miniconda3/etc/profile.d/conda.sh
conda activate exllamav2
# Set the model name and bit size
MODEL_NAME="mixtral-8x22b-instruct-oh"
# Define variables
MODEL_DIR="models/$MODEL_NAME"
OUTPUT_DIR="exl2_$MODEL_NAME"
MEASUREMENT_FILE="measurements/$MODEL_NAME.json"
# Create the measurement file if needed
if [ ! -f "$MEASUREMENT_FILE" ]; then
echo "Creating $MEASUREMENT_FILE"
# Create directories
if [ -d "$OUTPUT_DIR" ]; then
rm -r "$OUTPUT_DIR"
fi
mkdir "$OUTPUT_DIR"
python convert.py -i $MODEL_DIR -o $OUTPUT_DIR -nr -om $MEASUREMENT_FILE
fi
# Choose one of the below. Either create a single quant for testing or a batch of them.
# BIT_PRECISIONS=(2.25)
BIT_PRECISIONS=(7.0 6.0 5.5 5.0 4.5 4.0 3.5 3.0 2.75 2.5 2.25)
for BIT_PRECISION in "${BIT_PRECISIONS[@]}"
do
CONVERTED_FOLDER="models/${MODEL_NAME}_exl2_${BIT_PRECISION}bpw"
# If it doesn't already exist, make the quant
if [ ! -d "$CONVERTED_FOLDER" ]; then
echo "Creating $CONVERTED_FOLDER"
# Create directories
if [ -d "$OUTPUT_DIR" ]; then
rm -r "$OUTPUT_DIR"
fi
mkdir "$OUTPUT_DIR"
mkdir "$CONVERTED_FOLDER"
# Run conversion commands
python convert.py -i $MODEL_DIR -o $OUTPUT_DIR -nr -m $MEASUREMENT_FILE -b $BIT_PRECISION -cf $CONVERTED_FOLDER
fi
done
```
|
uday91/llama-2-7b-customer-support-llm | uday91 | "2024-04-14T22:54:08Z" | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-04-14T22:20:05Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
K00B404/Hitotsume_xB_v0.1 | K00B404 | "2024-04-14T22:20:44Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-04-14T22:20:43Z" | Invalid username or password. |
Shalazary/ruBert-base-sberquad-0.001-filtered | Shalazary | "2024-04-14T22:21:22Z" | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:ai-forever/ruBert-base",
"license:apache-2.0",
"region:us"
] | null | "2024-04-14T22:20:59Z" | ---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: ai-forever/ruBert-base
model-index:
- name: ruBert-base-sberquad-0.001-filtered
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ruBert-base-sberquad-0.001-filtered
This model is a fine-tuned version of [ai-forever/ruBert-base](https://huggingface.co/ai-forever/ruBert-base) on an unknown dataset.
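A possible loading sketch (an assumption on our part, since the card does not document the task head; `AutoModel` is a guess and may need to be swapped for a task-specific class):
```python
from peft import PeftModel
from transformers import AutoModel, AutoTokenizer

# Attach the trained adapter to the ruBert base encoder.
base = AutoModel.from_pretrained("ai-forever/ruBert-base")
model = PeftModel.from_pretrained(base, "Shalazary/ruBert-base-sberquad-0.001-filtered")
tokenizer = AutoTokenizer.from_pretrained("ai-forever/ruBert-base")
```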
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 5000
### Training results
### Framework versions
- PEFT 0.10.1.dev0
- Transformers 4.40.0.dev0
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 |
Lalazn/Mirada | Lalazn | "2024-04-14T22:24:34Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2024-04-14T22:22:39Z" | ---
license: apache-2.0
---
|
ShenaoZ/0.001_ablation_iter_2 | ShenaoZ | "2024-04-14T23:37:13Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"dataset:updated",
"dataset:original",
"base_model:ShenaoZ/0.001_ablation_iter_1",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-04-14T22:23:00Z" | Invalid username or password. |
WesPro/MisHumHyp | WesPro | "2024-04-15T00:25:14Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-04-14T22:23:09Z" | Entry not found |
jetx/hznvrae | jetx | "2024-04-14T22:25:26Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-04-14T22:23:38Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Arczisan/tattoo-world | Arczisan | "2024-04-14T22:25:35Z" | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:runwayml/stable-diffusion-v1-5",
"region:us"
] | text-to-image | "2024-04-14T22:25:27Z" | ---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: "UNICODE\0\0<\0l\0o\0r\0a\0:\0T\0a\0t\0t\0o\0o\0W\0o\0r\0l\0d\0:\01\0>\0 \0(\0T\0a\0t\0t\0o\0o\0W\0o\0r\0l\0d\0:\00\0.\09\0)\0 \0 \0 \0m\0a\0s\0t\0e\0r\0p\0i\0e\0c\0e\0,\0 \0b\0e\0s\0t\0 \0q\0u\0a\0l\0i\0t\0y\0,\0s\0h\0c\0h\0,\0(\01\0w\0o\0m\0a\0n\0:\01\0.\04\0)\0 \0(\0b\0l\0u\0e\0 \0b\0i\0k\0i\0n\0i\0 \0:\01\0.\03\0)\0 \0,\0s\0o\0l\0o\0,\0b\0e\0a\0u\0t\0i\0f\0u\0l\0,\0a\0t\0t\0r\0a\0c\0t\0i\0v\0e\0,\0c\0u\0t\0e\0 \0<\0l\0o\0r\0a\0:\0m\0o\0r\0e\0_\0d\0e\0t\0a\0i\0l\0s\0:\0.\06\0>\0 \0(\0b\0e\0a\0c\0h\0 \0b\0a\0c\0k\0g\0r\0o\0u\0n\0d\0:\01\0.\05\0)\0 \0"
output:
url: images/00051-2622726513.jpeg
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: null
---
# Tattoo World
<Gallery />
## Download model
Weights for this model are available in Safetensors format.
[Download](/Arczisan/tattoo-world/tree/main) them in the Files & versions tab.
|
mergekit-community/mergekit-slerp-fodinzo | mergekit-community | "2024-04-14T22:29:41Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:HuggingFaceH4/zephyr-7b-beta",
"base_model:Equall/Saul-Base",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-04-14T22:26:07Z" | ---
base_model:
- HuggingFaceH4/zephyr-7b-beta
- Equall/Saul-Base
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [HuggingFaceH4/zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta)
* [Equall/Saul-Base](https://huggingface.co/Equall/Saul-Base)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: Equall/Saul-Base
layer_range: [0, 32]
- model: HuggingFaceH4/zephyr-7b-beta
layer_range: [0, 32]
merge_method: slerp
base_model: HuggingFaceH4/zephyr-7b-beta
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
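As an illustrative smoke test (not from the original card), the merged weights can be loaded in the merge dtype, `bfloat16`:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("mergekit-community/mergekit-slerp-fodinzo")
model = AutoModelForCausalLM.from_pretrained(
    "mergekit-community/mergekit-slerp-fodinzo",
    torch_dtype=torch.bfloat16,  # matches the dtype in the merge config above
    device_map="auto",
)
inputs = tok("What does SLERP merging do?", return_tensors="pt").to(model.device)
print(tok.decode(model.generate(**inputs, max_new_tokens=40)[0], skip_special_tokens=True))
```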
|
ShenaoZ/0.0_ablation_iter_2 | ShenaoZ | "2024-04-14T23:39:57Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"dataset:updated",
"dataset:original",
"base_model:ShenaoZ/0.0_ablation_iter_1",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-04-14T22:26:49Z" | Entry not found |
EdBerg/quotes_Llama-2-7b-chat-hf | EdBerg | "2024-04-14T22:45:47Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-04-14T22:30:08Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ShenaoZ/0.0001_ablation_iter_2 | ShenaoZ | "2024-04-14T23:44:29Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"dataset:updated",
"dataset:original",
"base_model:ShenaoZ/0.0001_ablation_iter_1",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-04-14T22:30:48Z" | ---
license: mit
base_model: ShenaoZ/0.0001_ablation_iter_1
tags:
- alignment-handbook
- generated_from_trainer
- trl
- dpo
- generated_from_trainer
datasets:
- updated
- original
model-index:
- name: 0.0001_ablation_iter_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 0.0001_ablation_iter_2
This model is a fine-tuned version of [ShenaoZ/0.0001_ablation_iter_1](https://huggingface.co/ShenaoZ/0.0001_ablation_iter_1) on the updated and the original datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2
|
sealad886/c4ai-command-r-plus-2bit | sealad886 | "2024-04-14T22:31:48Z" | 0 | 0 | null | [
"license:cc-by-nc-4.0",
"region:us"
] | null | "2024-04-14T22:31:46Z" | ---
license: cc-by-nc-4.0
---
|
Abhinay123/wav2vec2_vedas2_epoch_0_step_1399 | Abhinay123 | "2024-04-14T22:33:20Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-04-14T22:32:08Z" | Invalid username or password. |
iamalexcaspian/Greg-OverTheGardenWall | iamalexcaspian | "2024-04-14T22:33:36Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-04-14T22:33:16Z" | Entry not found |
Homiebear/Dreadunit | Homiebear | "2024-04-14T22:36:36Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2024-04-14T22:35:53Z" | ---
license: openrail
---
|
Used111abc/Girl | Used111abc | "2024-04-14T22:36:20Z" | 0 | 0 | null | [
"license:cc-by-sa-3.0",
"region:us"
] | null | "2024-04-14T22:36:20Z" | ---
license: cc-by-sa-3.0
---
|
yeshwanthkesani/codecraft | yeshwanthkesani | "2024-04-14T22:37:05Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-04-14T22:37:05Z" | Entry not found |
jhamel/REBEL-KB-Fine-Tune | jhamel | "2024-04-14T22:42:49Z" | 0 | 0 | null | [
"safetensors",
"license:apache-2.0",
"region:us"
] | null | "2024-04-14T22:37:15Z" | ---
license: apache-2.0
---
|
Ruudan/Datasetmeu | Ruudan | "2024-04-14T22:38:55Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-04-14T22:37:27Z" | Entry not found |
mlx-community/c4ai-command-r-plus-2bit | mlx-community | "2024-04-14T23:47:31Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"cohere",
"text-generation",
"mlx",
"conversational",
"en",
"fr",
"de",
"es",
"it",
"pt",
"ja",
"ko",
"zh",
"ar",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-04-14T22:37:43Z" | ---
language:
- en
- fr
- de
- es
- it
- pt
- ja
- ko
- zh
- ar
license: cc-by-nc-4.0
library_name: transformers
tags:
- mlx
---
# mlx-community/c4ai-command-r-plus-2bit
This model was converted to MLX format from [`CohereForAI/c4ai-command-r-plus`](https://huggingface.co/CohereForAI/c4ai-command-r-plus) using mlx-lm version **0.9.0**.
Refer to the [original model card](https://huggingface.co/CohereForAI/c4ai-command-r-plus) for more details on the model.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/c4ai-command-r-plus-2bit")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
|
Dracones/mixtral-8x22b-instruct-oh_exl2_6.0bpw | Dracones | "2024-04-14T22:55:29Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"exl2",
"conversational",
"en",
"dataset:teknium/OpenHermes-2.5",
"base_model:mistral-community/Mixtral-8x22B-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"6-bit",
"region:us"
] | text-generation | "2024-04-14T22:38:52Z" | ---
language:
- en
license: apache-2.0
datasets:
- teknium/OpenHermes-2.5
base_model: mistral-community/Mixtral-8x22B-v0.1
tags:
- exl2
---
# mixtral-8x22b-instruct-oh - EXL2 6.0bpw
This is a 6.0bpw EXL2 quant of [fireworks-ai/mixtral-8x22b-instruct-oh](https://huggingface.co/fireworks-ai/mixtral-8x22b-instruct-oh).
Details about the model can be found at the above model page.
## EXL2 Version
These quants were made with exllamav2 version 0.0.18. Quants made on this version of EXL2 may not work on older versions of the exllamav2 library.
If you have problems loading these models, please update Text Generation WebUI to the latest version.
## Perplexity Scoring
Below are the perplexity scores for the EXL2 models. A lower score is better.
_TODO_
### Perplexity Script
This was the script used for perplexity testing.
```bash
#!/bin/bash
# Activate the conda environment
source ~/miniconda3/etc/profile.d/conda.sh
conda activate exllamav2
# Set the model name and bit size
MODEL_NAME="mixtral-8x22b-instruct-oh"
BIT_PRECISIONS=(7.0 6.0 5.5 5.0 4.5 4.0 3.5 3.0 2.75 2.5 2.25)
# Print the markdown table header
echo "| Quant Level | Perplexity Score |"
echo "|-------------|------------------|"
for BIT_PRECISION in "${BIT_PRECISIONS[@]}"
do
MODEL_DIR="models/${MODEL_NAME}_exl2_${BIT_PRECISION}bpw"
if [ -d "$MODEL_DIR" ]; then
output=$(python test_inference.py -m "$MODEL_DIR" -gs 44,48 -ed data/wikitext/wikitext-2-v1.parquet)
score=$(echo "$output" | grep -oP 'Evaluation perplexity: \K[\d.]+')
echo "| $BIT_PRECISION | $score |"
fi
done
```
## Quant Details
This is the script used for quantization.
```bash
#!/bin/bash
# Activate the conda environment
source ~/miniconda3/etc/profile.d/conda.sh
conda activate exllamav2
# Set the model name and bit size
MODEL_NAME="mixtral-8x22b-instruct-oh"
# Define variables
MODEL_DIR="models/$MODEL_NAME"
OUTPUT_DIR="exl2_$MODEL_NAME"
MEASUREMENT_FILE="measurements/$MODEL_NAME.json"
# Create the measurement file if needed
if [ ! -f "$MEASUREMENT_FILE" ]; then
echo "Creating $MEASUREMENT_FILE"
# Create directories
if [ -d "$OUTPUT_DIR" ]; then
rm -r "$OUTPUT_DIR"
fi
mkdir "$OUTPUT_DIR"
python convert.py -i $MODEL_DIR -o $OUTPUT_DIR -nr -om $MEASUREMENT_FILE
fi
# Choose one of the below. Either create a single quant for testing or a batch of them.
# BIT_PRECISIONS=(2.25)
BIT_PRECISIONS=(7.0 6.0 5.5 5.0 4.5 4.0 3.5 3.0 2.75 2.5 2.25)
for BIT_PRECISION in "${BIT_PRECISIONS[@]}"
do
CONVERTED_FOLDER="models/${MODEL_NAME}_exl2_${BIT_PRECISION}bpw"
# If it doesn't already exist, make the quant
if [ ! -d "$CONVERTED_FOLDER" ]; then
echo "Creating $CONVERTED_FOLDER"
# Create directories
if [ -d "$OUTPUT_DIR" ]; then
rm -r "$OUTPUT_DIR"
fi
mkdir "$OUTPUT_DIR"
mkdir "$CONVERTED_FOLDER"
# Run conversion commands
python convert.py -i $MODEL_DIR -o $OUTPUT_DIR -nr -m $MEASUREMENT_FILE -b $BIT_PRECISION -cf $CONVERTED_FOLDER
fi
done
```
|
Tverous/llama-13b-ppo-final | Tverous | "2024-04-14T22:39:46Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-04-14T22:39:44Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
lexArYuld/net | lexArYuld | "2024-04-14T22:44:02Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2024-04-14T22:40:42Z" | ---
license: openrail
---
|
woweenie/v66-sd21merge-45k-8krampdown-half | woweenie | "2024-04-14T22:45:59Z" | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | null | "2024-04-14T22:41:01Z" | Entry not found |
Lichang-Chen/zephyr-7b-sft-837k | Lichang-Chen | "2024-04-15T02:33:46Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"trl",
"sft",
"generated_from_trainer",
"conversational",
"dataset:Lichang-Chen/800k_ift",
"base_model:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-04-14T22:43:20Z" | ---
license: apache-2.0
base_model: mistralai/Mistral-7B-v0.1
tags:
- alignment-handbook
- trl
- sft
- generated_from_trainer
datasets:
- Lichang-Chen/800k_ift
model-index:
- name: zephyr-7b-sft-837k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# zephyr-7b-sft-837k
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the Lichang-Chen/800k_ift dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1087
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 128
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
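For readers who want a comparable setup, a rough sketch of the equivalent 🤗 `TrainingArguments` follows. This is an illustrative reconstruction of the listed values, not the original alignment-handbook recipe:

```python
from transformers import TrainingArguments

# Reconstruction of the hyperparameters listed above; the distributed setup
# (8 devices, multi-GPU) is handled by the launcher and not shown here.
args = TrainingArguments(
    output_dir="zephyr-7b-sft-837k",
    learning_rate=2e-5,
    per_device_train_batch_size=16,  # x8 devices -> total train batch size 128
    per_device_eval_batch_size=8,    # x8 devices -> total eval batch size 64
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=1,
    seed=42,
)
```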
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.1323 | 1.0 | 1179 | 3.1087 |
### Framework versions
- Transformers 4.39.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2
|
YangPa/Gosegu | YangPa | "2024-04-14T22:44:45Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2024-04-14T22:43:26Z" | ---
license: openrail
---
|
jhamel/rebel_fine_tune | jhamel | "2024-04-15T00:08:32Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bart",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"region:us"
] | text-generation | "2024-04-14T22:43:32Z" | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
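In the absence of a provided snippet, a minimal sketch — assuming a seq2seq (BART-based) checkpoint, as the repo's `bart` tag suggests; the sample sentence is illustrative:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "jhamel/rebel_fine_tune"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

text = "Punta Cana is a resort town in the Dominican Republic."
inputs = tokenizer(text, return_tensors="pt")
generated = model.generate(**inputs, max_length=128)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```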
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
liamvbetts/t5-small-finetuned-2024-03-12 | liamvbetts | "2024-04-14T22:44:45Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:ericjiliangli/t5-small-news-summarization",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | "2024-04-14T22:44:27Z" | ---
base_model: ericjiliangli/t5-small-news-summarization
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: t5-small-finetuned-2024-03-12
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-2024-03-12
This model is a fine-tuned version of [ericjiliangli/t5-small-news-summarization](https://huggingface.co/ericjiliangli/t5-small-news-summarization) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7445
- Rouge1: 30.158
- Rouge2: 15.0234
- Rougel: 25.9885
- Rougelsum: 26.1101
- Gen Len: 18.759
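No usage snippet is included; a minimal sketch, assuming the standard summarization pipeline applies to this T5 news-summarization fine-tune (the article text is illustrative):

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="liamvbetts/t5-small-finetuned-2024-03-12")
article = "The city council voted on Tuesday to approve the new transit plan, citing rising ridership."
print(summarizer(article, max_length=48, min_length=8)[0]["summary_text"])
```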
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:-------:|:---------:|:-------:|
| 1.9214 | 1.0 | 328 | 1.7445 | 30.158 | 15.0234 | 25.9885 | 26.1101 | 18.759 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
stulcrad/CNEC_2_0_ext_Czert-B-base-cased | stulcrad | "2024-04-14T23:31:08Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:cnec",
"base_model:UWB-AIR/Czert-B-base-cased",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2024-04-14T22:46:03Z" | ---
base_model: UWB-AIR/Czert-B-base-cased
tags:
- generated_from_trainer
datasets:
- cnec
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: CNEC_2_0_ext_Czert-B-base-cased
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: cnec
type: cnec
config: default
split: validation
args: default
metrics:
- name: Precision
type: precision
value: 0.7743521000893655
- name: Recall
type: recall
value: 0.8600496277915632
- name: F1
type: f1
value: 0.8149541500117563
- name: Accuracy
type: accuracy
value: 0.9550983576971666
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CNEC_2_0_ext_Czert-B-base-cased
This model is a fine-tuned version of [UWB-AIR/Czert-B-base-cased](https://huggingface.co/UWB-AIR/Czert-B-base-cased) on the cnec dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1985
- Precision: 0.7744
- Recall: 0.8600
- F1: 0.8150
- Accuracy: 0.9551
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.5376 | 2.23 | 500 | 0.2846 | 0.5751 | 0.6864 | 0.6258 | 0.9214 |
| 0.2806 | 4.46 | 1000 | 0.2324 | 0.6761 | 0.7906 | 0.7289 | 0.9391 |
| 0.2234 | 6.7 | 1500 | 0.2105 | 0.7070 | 0.8169 | 0.7580 | 0.9454 |
| 0.1902 | 8.93 | 2000 | 0.2102 | 0.7298 | 0.8323 | 0.7776 | 0.9480 |
| 0.1697 | 11.16 | 2500 | 0.2028 | 0.7345 | 0.8347 | 0.7814 | 0.9500 |
| 0.1573 | 13.39 | 3000 | 0.2006 | 0.7533 | 0.8442 | 0.7962 | 0.9515 |
| 0.1423 | 15.62 | 3500 | 0.2001 | 0.7539 | 0.8496 | 0.7989 | 0.9526 |
| 0.1341 | 17.86 | 4000 | 0.2088 | 0.7596 | 0.8561 | 0.8049 | 0.9527 |
| 0.127 | 20.09 | 4500 | 0.2021 | 0.7642 | 0.8556 | 0.8073 | 0.9543 |
| 0.1233 | 22.32 | 5000 | 0.1987 | 0.7732 | 0.8596 | 0.8141 | 0.9556 |
| 0.1206 | 24.55 | 5500 | 0.1985 | 0.7744 | 0.8600 | 0.8150 | 0.9551 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
MaiiaCompsolutions/industry_classifier_full_descr_3rd_level | MaiiaCompsolutions | "2024-04-14T22:50:07Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-04-14T22:49:39Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
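As the code block is missing, a minimal sketch — assuming a standard sequence-classification head, per the repo's `text-classification` tag; the sample input is illustrative:

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="MaiiaCompsolutions/industry_classifier_full_descr_3rd_level",
)
print(classifier("We manufacture industrial pumps and compressors for the energy sector."))
```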
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
nik548/bioGPT_finetuned_ncbi | nik548 | "2024-04-15T02:37:26Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"token-classification",
"generated_from_trainer",
"base_model:microsoft/biogpt",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | token-classification | "2024-04-14T22:51:17Z" | ---
license: mit
base_model: microsoft/biogpt
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bioGPT_finetuned_ncbi
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bioGPT_finetuned_ncbi
This model is a fine-tuned version of [microsoft/biogpt](https://huggingface.co/microsoft/biogpt) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1880
- Precision: 0.4637
- Recall: 0.5448
- F1: 0.5010
- Accuracy: 0.9476
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1735 | 1.0 | 1358 | 0.2050 | 0.3990 | 0.4094 | 0.4041 | 0.9415 |
| 0.1086 | 2.0 | 2716 | 0.1800 | 0.4230 | 0.5354 | 0.4726 | 0.9463 |
| 0.07 | 3.0 | 4074 | 0.1880 | 0.4637 | 0.5448 | 0.5010 | 0.9476 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
liamvbetts/t5-small-finetuned-2024-03-13 | liamvbetts | "2024-04-14T22:52:38Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:liamvbetts/t5-small-finetuned-2024-03-12",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | "2024-04-14T22:52:17Z" | ---
base_model: liamvbetts/t5-small-finetuned-2024-03-12
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: t5-small-finetuned-2024-03-13
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-2024-03-13
This model is a fine-tuned version of [liamvbetts/t5-small-finetuned-2024-03-12](https://huggingface.co/liamvbetts/t5-small-finetuned-2024-03-12) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8462
- Rouge1: 30.3334
- Rouge2: 17.8246
- Rougel: 26.5826
- Rougelsum: 27.0835
- Gen Len: 18.619
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.6474 | 1.0 | 332 | 1.8462 | 30.3334 | 17.8246 | 26.5826 | 27.0835 | 18.619 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
EddyGiusepe/Gemma-1.1-2b-it-instruct-aira-dataset-v2 | EddyGiusepe | "2024-04-14T22:52:52Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-04-14T22:52:52Z" | Entry not found |
cilantro9246/whfwbmi | cilantro9246 | "2024-04-14T22:55:23Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-04-14T22:53:04Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
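A minimal sketch in lieu of the missing snippet — assuming a causal LM checkpoint, per the repo's `stablelm`/`text-generation` tags:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "cilantro9246/whfwbmi"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Once upon a time", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```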
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
rxh220009/flan-t5-base-imdb-text-classification | rxh220009 | "2024-04-14T22:53:12Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-04-14T22:53:09Z" | Entry not found |
ashishp-wiai/ClipArt_LoRA_60-2024-04-14 | ashishp-wiai | "2024-04-14T23:42:48Z" | 0 | 0 | null | [
"safetensors",
"region:us"
] | null | "2024-04-14T22:53:17Z" | Entry not found |
Cloudxego/J-Hope | Cloudxego | "2024-04-14T22:54:29Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2024-04-14T22:53:54Z" | ---
license: openrail
---
|
Gustav0-Freind/my_coedit_onnx | Gustav0-Freind | "2024-04-15T00:44:52Z" | 0 | 0 | transformers | [
"transformers",
"onnx",
"t5",
"text2text-generation",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | "2024-04-14T22:55:02Z" | ---
license: mit
---
|
liamvbetts/t5-small-finetuned-2024-03-14 | liamvbetts | "2024-04-14T22:55:42Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:liamvbetts/t5-small-finetuned-2024-03-13",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | "2024-04-14T22:55:26Z" | ---
base_model: liamvbetts/t5-small-finetuned-2024-03-13
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: t5-small-finetuned-2024-03-14
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-2024-03-14
This model is a fine-tuned version of [liamvbetts/t5-small-finetuned-2024-03-13](https://huggingface.co/liamvbetts/t5-small-finetuned-2024-03-13) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6488
- Rouge1: 36.7711
- Rouge2: 23.7969
- Rougel: 33.074
- Rougelsum: 33.6007
- Gen Len: 18.814
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:------:|:---------:|:-------:|
| 1.6749 | 1.0 | 341 | 1.6488 | 36.7711 | 23.7969 | 33.074 | 33.6007 | 18.814 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
Dracones/mixtral-8x22b-instruct-oh_exl2_5.5bpw | Dracones | "2024-04-14T23:10:43Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"exl2",
"conversational",
"en",
"dataset:teknium/OpenHermes-2.5",
"base_model:mistral-community/Mixtral-8x22B-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-04-14T22:55:29Z" | ---
language:
- en
license: apache-2.0
datasets:
- teknium/OpenHermes-2.5
base_model: mistral-community/Mixtral-8x22B-v0.1
tags:
- exl2
---
# mixtral-8x22b-instruct-oh - EXL2 5.5bpw
This is a 5.5bpw EXL2 quant of [fireworks-ai/mixtral-8x22b-instruct-oh](https://huggingface.co/fireworks-ai/mixtral-8x22b-instruct-oh)
Details about the model can be found at the above model page.
## EXL2 Version
These quants were made with exllamav2 version 0.0.18. Quants made on this version of EXL2 may not work on older versions of the exllamav2 library.
If you have problems loading these models, please update Text Generation WebUI to the latest version.
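No loading example ships with the card; below is a minimal sketch using the exllamav2 Python API. Class and method names follow the exllamav2 ~0.0.18 examples and should be treated as an assumption — check them against your installed version:

```python
from exllamav2 import ExLlamaV2, ExLlamaV2Cache, ExLlamaV2Config, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

# Local path to the quant; the directory name mirrors the quant script below.
config = ExLlamaV2Config()
config.model_dir = "models/mixtral-8x22b-instruct-oh_exl2_5.5bpw"
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)  # split layers across available GPUs

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8

print(generator.generate_simple("Write a haiku about quantization.", settings, 120))
```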
## Perplexity Scoring
Below are the perplexity scores for the EXL2 models. A lower score is better.
_TODO_
### Perplexity Script
This was the script used for perplexity testing.
```bash
#!/bin/bash
# Activate the conda environment
source ~/miniconda3/etc/profile.d/conda.sh
conda activate exllamav2
# Set the model name and bit size
MODEL_NAME="mixtral-8x22b-instruct-oh"
BIT_PRECISIONS=(7.0 6.0 5.5 5.0 4.5 4.0 3.5 3.0 2.75 2.5 2.25)
# Print the markdown table header
echo "| Quant Level | Perplexity Score |"
echo "|-------------|------------------|"
for BIT_PRECISION in "${BIT_PRECISIONS[@]}"
do
MODEL_DIR="models/${MODEL_NAME}_exl2_${BIT_PRECISION}bpw"
if [ -d "$MODEL_DIR" ]; then
output=$(python test_inference.py -m "$MODEL_DIR" -gs 44,48 -ed data/wikitext/wikitext-2-v1.parquet)
score=$(echo "$output" | grep -oP 'Evaluation perplexity: \K[\d.]+')
echo "| $BIT_PRECISION | $score |"
fi
done
```
## Quant Details
This is the script used for quantization.
```bash
#!/bin/bash
# Activate the conda environment
source ~/miniconda3/etc/profile.d/conda.sh
conda activate exllamav2
# Set the model name and bit size
MODEL_NAME="mixtral-8x22b-instruct-oh"
# Define variables
MODEL_DIR="models/$MODEL_NAME"
OUTPUT_DIR="exl2_$MODEL_NAME"
MEASUREMENT_FILE="measurements/$MODEL_NAME.json"
# Create the measurement file if needed
if [ ! -f "$MEASUREMENT_FILE" ]; then
echo "Creating $MEASUREMENT_FILE"
# Create directories
if [ -d "$OUTPUT_DIR" ]; then
rm -r "$OUTPUT_DIR"
fi
mkdir "$OUTPUT_DIR"
python convert.py -i $MODEL_DIR -o $OUTPUT_DIR -nr -om $MEASUREMENT_FILE
fi
# Choose one of the below. Either create a single quant for testing or a batch of them.
# BIT_PRECISIONS=(2.25)
BIT_PRECISIONS=(7.0 6.0 5.5 5.0 4.5 4.0 3.5 3.0 2.75 2.5 2.25)
for BIT_PRECISION in "${BIT_PRECISIONS[@]}"
do
CONVERTED_FOLDER="models/${MODEL_NAME}_exl2_${BIT_PRECISION}bpw"
# If it doesn't already exist, make the quant
if [ ! -d "$CONVERTED_FOLDER" ]; then
echo "Creating $CONVERTED_FOLDER"
# Create directories
if [ -d "$OUTPUT_DIR" ]; then
rm -r "$OUTPUT_DIR"
fi
mkdir "$OUTPUT_DIR"
mkdir "$CONVERTED_FOLDER"
# Run conversion commands
python convert.py -i $MODEL_DIR -o $OUTPUT_DIR -nr -m $MEASUREMENT_FILE -b $BIT_PRECISION -cf $CONVERTED_FOLDER
fi
done
```
|
Coolwowsocoolwow/Meg_FNAFHS_Season_2 | Coolwowsocoolwow | "2024-04-14T22:59:23Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2024-04-14T22:57:44Z" | ---
license: openrail
---
|
liamvbetts/t5-small-finetuned-2024-03-15 | liamvbetts | "2024-04-14T22:58:11Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:liamvbetts/t5-small-finetuned-2024-03-14",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | "2024-04-14T22:57:58Z" | ---
base_model: liamvbetts/t5-small-finetuned-2024-03-14
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: t5-small-finetuned-2024-03-15
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-2024-03-15
This model is a fine-tuned version of [liamvbetts/t5-small-finetuned-2024-03-14](https://huggingface.co/liamvbetts/t5-small-finetuned-2024-03-14) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5180
- Rouge1: 42.6379
- Rouge2: 30.7892
- Rougel: 39.2984
- Rougelsum: 39.671
- Gen Len: 18.9765
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.787 | 1.0 | 337 | 1.5180 | 42.6379 | 30.7892 | 39.2984 | 39.671 | 18.9765 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
anuragk16/gretelai-synthetic_text_to_sql-llama2-model | anuragk16 | "2024-04-14T23:10:50Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-04-14T23:00:07Z" | Entry not found |
liamvbetts/t5-small-finetuned-2024-03-16 | liamvbetts | "2024-04-14T23:00:41Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:liamvbetts/t5-small-finetuned-2024-03-15",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | "2024-04-14T23:00:27Z" | ---
base_model: liamvbetts/t5-small-finetuned-2024-03-15
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: t5-small-finetuned-2024-03-16
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-2024-03-16
This model is a fine-tuned version of [liamvbetts/t5-small-finetuned-2024-03-15](https://huggingface.co/liamvbetts/t5-small-finetuned-2024-03-15) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9081
- Rouge1: 32.2806
- Rouge2: 18.3465
- Rougel: 27.9985
- Rougelsum: 28.4829
- Gen Len: 18.6506
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.9126 | 1.0 | 330 | 1.9081 | 32.2806 | 18.3465 | 27.9985 | 28.4829 | 18.6506 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
bdambrosio/command-r-plus-6.0bpw-h8-exl2 | bdambrosio | "2024-04-15T00:26:06Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"cohere",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"6-bit",
"region:us"
] | text-generation | "2024-04-14T23:01:20Z" | ---
license: apache-2.0
---
Like the name says:
command-r-plus 6.0bpw h8 exl2
I have some trouble getting it to break free from its rigid pre-trained JSON/function-calling format.
YMMV |
Erfan-Shayegani/llama2-lora_Unlearned_GA_Accelerate | Erfan-Shayegani | "2024-04-14T23:01:29Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-04-14T23:01:26Z" | Invalid username or password. |
lovelyai999/temp | lovelyai999 | "2024-04-14T23:19:31Z" | 0 | 0 | null | [
"gguf",
"region:us"
] | null | "2024-04-14T23:02:19Z" | Entry not found |
liamvbetts/t5-small-finetuned-2024-03-17 | liamvbetts | "2024-04-14T23:03:06Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:liamvbetts/t5-small-finetuned-2024-03-16",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | "2024-04-14T23:02:49Z" | ---
base_model: liamvbetts/t5-small-finetuned-2024-03-16
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: t5-small-finetuned-2024-03-17
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-2024-03-17
This model is a fine-tuned version of [liamvbetts/t5-small-finetuned-2024-03-16](https://huggingface.co/liamvbetts/t5-small-finetuned-2024-03-16) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6870
- Rouge1: 36.9896
- Rouge2: 24.6597
- Rougel: 32.6752
- Rougelsum: 32.6582
- Gen Len: 18.8143
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.9296 | 1.0 | 276 | 1.6870 | 36.9896 | 24.6597 | 32.6752 | 32.6582 | 18.8143 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
mradermacher/stairolz-70b-GGUF | mradermacher | "2024-04-14T23:44:13Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-04-14T23:03:23Z" | Entry not found |
nlp-group/gradio_bert | nlp-group | "2024-04-14T23:48:57Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-04-14T23:03:30Z" | Entry not found |
woweenie/v66-sd21merge-45k-7krampdown-half | woweenie | "2024-04-14T23:09:37Z" | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | null | "2024-04-14T23:04:22Z" | Entry not found |
liamvbetts/t5-small-finetuned-2024-03-18 | liamvbetts | "2024-04-14T23:06:57Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:liamvbetts/t5-small-finetuned-2024-03-17",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | "2024-04-14T23:06:29Z" | ---
base_model: liamvbetts/t5-small-finetuned-2024-03-17
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: t5-small-finetuned-2024-03-18
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-2024-03-18
This model is a fine-tuned version of [liamvbetts/t5-small-finetuned-2024-03-17](https://huggingface.co/liamvbetts/t5-small-finetuned-2024-03-17) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3412
- Rouge1: 42.2216
- Rouge2: 30.5944
- Rougel: 39.273
- Rougelsum: 39.1457
- Gen Len: 18.7386
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:------:|:---------:|:-------:|
| 1.8771 | 1.0 | 349 | 1.3412 | 42.2216 | 30.5944 | 39.273 | 39.1457 | 18.7386 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
foundryservices/bittensor-sn28-base-lstm | foundryservices | "2024-04-14T23:11:08Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-04-14T23:06:42Z" | Entry not found |
mahiatlinux/lora_test_33 | mahiatlinux | "2024-04-14T23:08:13Z" | 0 | 0 | transformers | [
"transformers",
"mistral",
"text-generation-inference",
"unsloth",
"ggml",
"en",
"base_model:unsloth/mistral-7b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-04-14T23:08:06Z" | Invalid username or password. |
mradermacher/scarlett-33b-GGUF | mradermacher | "2024-04-15T00:32:17Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"license:cc-by-nc-nd-4.0",
"endpoints_compatible",
"region:us"
] | null | "2024-04-14T23:09:06Z" | ---
exported_from: ajibawa-2023/scarlett-33b
language:
- en
library_name: transformers
license: cc-by-nc-nd-4.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/ajibawa-2023/scarlett-33b
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/scarlett-33b-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
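As a concrete starting point, a minimal sketch using llama-cpp-python (my assumption — any GGUF-capable runtime such as llama.cpp, ollama or koboldcpp works equally well):

```python
from llama_cpp import Llama

# Path to a quant downloaded from the table below; pick a size that fits your hardware.
llm = Llama(model_path="scarlett-33b.Q4_K_M.gguf", n_ctx=2048)
out = llm("Tell me a short story about a lighthouse keeper.", max_tokens=200)
print(out["choices"][0]["text"])
```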
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/scarlett-33b-GGUF/resolve/main/scarlett-33b.Q2_K.gguf) | Q2_K | 12.1 | |
| [GGUF](https://huggingface.co/mradermacher/scarlett-33b-GGUF/resolve/main/scarlett-33b.IQ3_XS.gguf) | IQ3_XS | 13.4 | |
| [GGUF](https://huggingface.co/mradermacher/scarlett-33b-GGUF/resolve/main/scarlett-33b.IQ3_S.gguf) | IQ3_S | 14.2 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/scarlett-33b-GGUF/resolve/main/scarlett-33b.Q3_K_S.gguf) | Q3_K_S | 14.2 | |
| [GGUF](https://huggingface.co/mradermacher/scarlett-33b-GGUF/resolve/main/scarlett-33b.IQ3_M.gguf) | IQ3_M | 15.0 | |
| [GGUF](https://huggingface.co/mradermacher/scarlett-33b-GGUF/resolve/main/scarlett-33b.Q3_K_M.gguf) | Q3_K_M | 15.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/scarlett-33b-GGUF/resolve/main/scarlett-33b.Q3_K_L.gguf) | Q3_K_L | 17.4 | |
| [GGUF](https://huggingface.co/mradermacher/scarlett-33b-GGUF/resolve/main/scarlett-33b.IQ4_XS.gguf) | IQ4_XS | 17.6 | |
| [GGUF](https://huggingface.co/mradermacher/scarlett-33b-GGUF/resolve/main/scarlett-33b.Q4_K_S.gguf) | Q4_K_S | 18.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/scarlett-33b-GGUF/resolve/main/scarlett-33b.Q4_K_M.gguf) | Q4_K_M | 19.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/scarlett-33b-GGUF/resolve/main/scarlett-33b.Q5_K_S.gguf) | Q5_K_S | 22.5 | |
| [GGUF](https://huggingface.co/mradermacher/scarlett-33b-GGUF/resolve/main/scarlett-33b.Q5_K_M.gguf) | Q5_K_M | 23.1 | |
| [GGUF](https://huggingface.co/mradermacher/scarlett-33b-GGUF/resolve/main/scarlett-33b.Q6_K.gguf) | Q6_K | 26.8 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/scarlett-33b-GGUF/resolve/main/scarlett-33b.Q8_0.gguf) | Q8_0 | 34.7 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)
And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
liamvbetts/t5-small-finetuned-2024-03-19 | liamvbetts | "2024-04-14T23:09:52Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:liamvbetts/t5-small-finetuned-2024-03-18",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | "2024-04-14T23:09:21Z" | ---
base_model: liamvbetts/t5-small-finetuned-2024-03-18
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: t5-small-finetuned-2024-03-19
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-2024-03-19
This model is a fine-tuned version of [liamvbetts/t5-small-finetuned-2024-03-18](https://huggingface.co/liamvbetts/t5-small-finetuned-2024-03-18) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7824
- Rouge1: 34.996
- Rouge2: 23.0601
- Rougel: 32.7854
- Rougelsum: 33.1113
- Gen Len: 18.6667
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:-------:|:---------:|:-------:|
| 1.7163 | 1.0 | 347 | 1.7824 | 34.996 | 23.0601 | 32.7854 | 33.1113 | 18.6667 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
divinetaco/aranea-tenebris-120b-v1.0 | divinetaco | "2024-04-14T23:21:16Z" | 0 | 1 | transformers | [
"transformers",
"not-for-all-audiences",
"nsfw",
"mergekit",
"merge",
"base_model:Netrve/Miqu-PlayMaid-70B-v0.1",
"base_model:ShinojiResearch/Senku-70B",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | "2024-04-14T23:09:25Z" | ---
license: cc-by-nc-4.0
base_model:
- Netrve/Miqu-PlayMaid-70B-v0.1
- ShinojiResearch/Senku-70B
library_name: transformers
tags:
- not-for-all-audiences
- nsfw
- mergekit
- merge
---
# aranea-tenebris-116b-v1.0
**aka Netrve/Miqu-PlayMaid-70B-v0.1 + ShinojiResearch/Senku-70B**
Model merge for uncensored creative writing and rp
![image/png](https://huggingface.co/divinetaco/aranea-tenebris-120b-v1.0/resolve/main/aranea-tenebris.png)
A [mergekit](https://github.com/arcee-ai/mergekit) frankenmerge based on [Netrve/Miqu-PlayMaid-70B-v0.1](https://huggingface.co/Netrve/Miqu-PlayMaid-70B-v0.1) with interleaved layers of [ShinojiResearch/Senku-70B](https://huggingface.co/ShinojiResearch/Senku-70B).
This was the top-performing model from a second series of merge experiments aimed at creating a highly coherent creative writing and rp model.
Tests consisted of a series of private DnD scenario benchmarks, with manual comparison of the most promising merges.
A number of different base models, interleave models and layer offsets were compared.
This model outperformed a number of other popular 70B+ models and merges in both creativity and coherency tests. It was (briefly) compared to Mixtral 8x22B running 2/3/4 experts.
- Usable context: ~32768
- Recommended prompt format: Alpaca (template sketched below)
- Layers: 137
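For reference, the standard Alpaca template looks like this (a sketch; the preamble line is commonly included but optional):

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{instruction}

### Response:
```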
### Testing
Two different writing styles were considered for each testing scenario:
- Completions for 3rd person narration. No character role was assumed.
- Completions for 1st and 2nd person turn-based (out-of-order) rp. A character role was assumed by the model, but narration of minor characters and events was encouraged.
Tests assumed a mature audience, but a range of scenarios were constructed.
Thematic inconsistency or bias in character behaviour was penalized heavily.
Models showing the following were penalized during manual comparison:
- Consistently short responses
- Laziness, or readily giving up on solving a character problem.
- Overly malleable, where characters could not hold opinions or beliefs.
- Passiveness or an inability to drive the narrative.
- Persistent repeats. Bad merges tend to latch onto and reuse specific keywords.
- Ignoring or missing obvious scenario solutions.
- Impersonating other major characters out of turn during rp tests.
- Failure to follow a character's description. This criterion is pretty broad, and could include things like character skills, refusals etc.
- Major inconsistencies in scenes or recall. Note - invention of thematically consistent detail was encouraged.
### Interesting observations from benchmarking
- A 10-layer interleave stride with a 20-layer interleave width consistently outperformed alternative combinations for coherency.
- An 8-layer interleave stride with a 16-layer interleave width consistently outperformed alternative combinations for creativity whilst remaining reasonably coherent.
- Regular stride intervals are not optimal. In particular, adjusting the offsets of the first or last set of base model layers often improved metrics.
- Goliath-120B is still a good standard for coherency below 4096 context. A few miqu-1 merges are comparable, but testing found a small amount of coherency could be sacrificed for notable creativity improvements.
### Quantizations
Exllamav2 quants will be available when bandwidth permits.
|
Dracones/mixtral-8x22b-instruct-oh_exl2_5.0bpw | Dracones | "2024-04-14T23:30:54Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"exl2",
"conversational",
"en",
"dataset:teknium/OpenHermes-2.5",
"base_model:mistral-community/Mixtral-8x22B-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"5-bit",
"region:us"
] | text-generation | "2024-04-14T23:10:45Z" | ---
language:
- en
license: apache-2.0
datasets:
- teknium/OpenHermes-2.5
base_model: mistral-community/Mixtral-8x22B-v0.1
tags:
- exl2
---
# mixtral-8x22b-instruct-oh - EXL2 5.0bpw
This is a 5.0bpw EXL2 quant of [fireworks-ai/mixtral-8x22b-instruct-oh](https://huggingface.co/fireworks-ai/mixtral-8x22b-instruct-oh)
Details about the model can be found at the above model page.
## EXL2 Version
These quants were made with exllamav2 version 0.0.18. Quants made on this version of EXL2 may not work on older versions of the exllamav2 library.
If you have problems loading these models, please update Text Generation WebUI to the latest version.
## Perplexity Scoring
Below are the perplexity scores for the EXL2 models. A lower score is better.
_TODO_
### Perplexity Script
This was the script used for perplexity testing.
```bash
#!/bin/bash
# Activate the conda environment
source ~/miniconda3/etc/profile.d/conda.sh
conda activate exllamav2
# Set the model name and bit size
MODEL_NAME="mixtral-8x22b-instruct-oh"
BIT_PRECISIONS=(7.0 6.0 5.5 5.0 4.5 4.0 3.5 3.0 2.75 2.5 2.25)
# Print the markdown table header
echo "| Quant Level | Perplexity Score |"
echo "|-------------|------------------|"
for BIT_PRECISION in "${BIT_PRECISIONS[@]}"
do
MODEL_DIR="models/${MODEL_NAME}_exl2_${BIT_PRECISION}bpw"
if [ -d "$MODEL_DIR" ]; then
output=$(python test_inference.py -m "$MODEL_DIR" -gs 44,48 -ed data/wikitext/wikitext-2-v1.parquet)
score=$(echo "$output" | grep -oP 'Evaluation perplexity: \K[\d.]+')
echo "| $BIT_PRECISION | $score |"
fi
done
```
## Quant Details
This is the script used for quantization.
```bash
#!/bin/bash
# Activate the conda environment
source ~/miniconda3/etc/profile.d/conda.sh
conda activate exllamav2
# Set the model name and bit size
MODEL_NAME="mixtral-8x22b-instruct-oh"
# Define variables
MODEL_DIR="models/$MODEL_NAME"
OUTPUT_DIR="exl2_$MODEL_NAME"
MEASUREMENT_FILE="measurements/$MODEL_NAME.json"
# Create the measurement file if needed
if [ ! -f "$MEASUREMENT_FILE" ]; then
echo "Creating $MEASUREMENT_FILE"
# Create directories
if [ -d "$OUTPUT_DIR" ]; then
rm -r "$OUTPUT_DIR"
fi
mkdir "$OUTPUT_DIR"
python convert.py -i $MODEL_DIR -o $OUTPUT_DIR -nr -om $MEASUREMENT_FILE
fi
# Choose one of the below. Either create a single quant for testing or a batch of them.
# BIT_PRECISIONS=(2.25)
BIT_PRECISIONS=(7.0 6.0 5.5 5.0 4.5 4.0 3.5 3.0 2.75 2.5 2.25)
for BIT_PRECISION in "${BIT_PRECISIONS[@]}"
do
CONVERTED_FOLDER="models/${MODEL_NAME}_exl2_${BIT_PRECISION}bpw"
# If it doesn't already exist, make the quant
if [ ! -d "$CONVERTED_FOLDER" ]; then
echo "Creating $CONVERTED_FOLDER"
# Create directories
if [ -d "$OUTPUT_DIR" ]; then
rm -r "$OUTPUT_DIR"
fi
mkdir "$OUTPUT_DIR"
mkdir "$CONVERTED_FOLDER"
# Run conversion commands
python convert.py -i $MODEL_DIR -o $OUTPUT_DIR -nr -m $MEASUREMENT_FILE -b $BIT_PRECISION -cf $CONVERTED_FOLDER
fi
done
```
|
itay-nakash/model_9c66fb4639 | itay-nakash | "2024-04-15T02:16:40Z" | 0 | 0 | null | [
"safetensors",
"region:us"
] | null | "2024-04-14T23:12:10Z" | Entry not found |
Aveo/autotrain-4hkjv-ypex0 | Aveo | "2024-04-14T23:14:13Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"autotrain",
"dataset:autotrain-4hkjv-ypex0/autotrain-data",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-04-14T23:13:15Z" |
---
tags:
- autotrain
- text-classification
widget:
- text: "I love AutoTrain"
datasets:
- autotrain-4hkjv-ypex0/autotrain-data
---
# Model Trained Using AutoTrain
- Problem type: Text Classification
## Validation Metrics
loss: 0.9300565719604492
f1_macro: 0.8222222222222223
f1_micro: 0.8333333333333334
f1_weighted: 0.8222222222222223
precision_macro: 0.8888888888888888
precision_micro: 0.8333333333333334
precision_weighted: 0.8888888888888888
recall_macro: 0.8333333333333334
recall_micro: 0.8333333333333334
recall_weighted: 0.8333333333333334
accuracy: 0.8333333333333334
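A minimal inference sketch (assumption: the label set is whatever AutoTrain derived from the private training data, so treat the returned labels accordingly):
```python
# Minimal sketch: run the AutoTrain text classifier with transformers.
from transformers import pipeline

clf = pipeline("text-classification", model="Aveo/autotrain-4hkjv-ypex0")
print(clf("I love AutoTrain"))  # e.g. [{'label': '...', 'score': 0.98}]
```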
|
hui168/rl_course_vizdoom_health_gathering_supreme | hui168 | "2024-04-14T23:13:39Z" | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2024-04-14T23:13:29Z" | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 11.34 +/- 5.39
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r hui168/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note that you may have to adjust `--train_for_env_steps` to a suitably high number, as the experiment will resume from the step count at which it previously concluded.
|
mradermacher/NeuralStockFusion-7b-GGUF | mradermacher | "2024-04-15T01:56:08Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:Kukedlc/NeuralSirKrishna-7b",
"base_model:Kukedlc/NeuralArjuna-7B-DT",
"base_model:Kukedlc/NeuralMaths-Experiment-7b",
"base_model:Kukedlc/NeuralSynthesis-7B-v0.1",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-04-14T23:15:19Z" | ---
base_model:
- Kukedlc/NeuralSirKrishna-7b
- Kukedlc/NeuralArjuna-7B-DT
- Kukedlc/NeuralMaths-Experiment-7b
- Kukedlc/NeuralSynthesis-7B-v0.1
exported_from: Kukedlc/NeuralStockFusion-7b
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/Kukedlc/NeuralStockFusion-7b
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
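As a concrete starting point, here is a minimal sketch with llama-cpp-python, one of several GGUF-capable runtimes (the file name comes from the quant table below; the context size and prompt are illustrative):
```python
# Minimal sketch: run a GGUF quant with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(model_path="NeuralStockFusion-7b.Q4_K_M.gguf", n_ctx=4096)
out = llm("Explain quantization in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```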
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/NeuralStockFusion-7b-GGUF/resolve/main/NeuralStockFusion-7b.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralStockFusion-7b-GGUF/resolve/main/NeuralStockFusion-7b.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralStockFusion-7b-GGUF/resolve/main/NeuralStockFusion-7b.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralStockFusion-7b-GGUF/resolve/main/NeuralStockFusion-7b.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/NeuralStockFusion-7b-GGUF/resolve/main/NeuralStockFusion-7b.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralStockFusion-7b-GGUF/resolve/main/NeuralStockFusion-7b.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/NeuralStockFusion-7b-GGUF/resolve/main/NeuralStockFusion-7b.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralStockFusion-7b-GGUF/resolve/main/NeuralStockFusion-7b.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralStockFusion-7b-GGUF/resolve/main/NeuralStockFusion-7b.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/NeuralStockFusion-7b-GGUF/resolve/main/NeuralStockFusion-7b.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/NeuralStockFusion-7b-GGUF/resolve/main/NeuralStockFusion-7b.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralStockFusion-7b-GGUF/resolve/main/NeuralStockFusion-7b.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralStockFusion-7b-GGUF/resolve/main/NeuralStockFusion-7b.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/NeuralStockFusion-7b-GGUF/resolve/main/NeuralStockFusion-7b.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)
And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
shreyas1104/shreyas_flan-t5-base_peft_lora | shreyas1104 | "2024-04-15T02:47:15Z" | 0 | 0 | null | [
"tensorboard",
"safetensors",
"region:us"
] | null | "2024-04-14T23:16:29Z" | Entry not found |
SilvioLima/Llama-2-7b-chat-finetune | SilvioLima | "2024-04-14T23:28:24Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-04-14T23:20:07Z" | Entry not found |
hvein/melotts3253 | hvein | "2024-04-15T00:56:39Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-04-14T23:20:32Z" | Entry not found |
John-Yakuza/SteveLora | John-Yakuza | "2024-04-15T01:17:47Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-04-14T23:21:06Z" | Entry not found |
hvein/melotts464 | hvein | "2024-04-15T00:56:13Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-04-14T23:21:18Z" | Entry not found |
REDOUAN/Adulteducation | REDOUAN | "2024-04-14T23:21:29Z" | 0 | 1 | null | [
"license:openrail",
"region:us"
] | null | "2024-04-14T23:21:29Z" | ---
license: openrail
---
|
cackerman/rewrites_gem7unsloth_4bit_ft_full_merged | cackerman | "2024-04-14T23:23:47Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:unsloth/gemma-7b-it-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"8-bit",
"region:us"
] | text-generation | "2024-04-14T23:21:30Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- gemma
- trl
base_model: unsloth/gemma-7b-it-bnb-4bit
---
# Uploaded model
- **Developed by:** cackerman
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-7b-it-bnb-4bit
This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
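A minimal loading sketch (assumptions: the merged weights load directly with transformers, and the repo's 8-bit tag refers to a bitsandbytes-style quantized load):
```python
# Minimal sketch: load the merged model; 8-bit loading is an assumption
# based on the repo tags and requires the bitsandbytes package.
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "cackerman/rewrites_gem7unsloth_4bit_ft_full_merged"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)
```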
|
hvein/melotts1191 | hvein | "2024-04-15T00:56:38Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-04-14T23:22:29Z" | Entry not found |
K00B404/Hitotsubashi_xB_v0-1 | K00B404 | "2024-04-14T23:25:28Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-04-14T23:25:28Z" | Entry not found |
mradermacher/Euryale-Inverted-L2-70B-GGUF | mradermacher | "2024-04-14T23:48:50Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-04-14T23:25:43Z" | Entry not found |
EdBerg/quotes_Llama-2-13b-chat-hf | EdBerg | "2024-04-14T23:33:34Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | text-generation | "2024-04-14T23:26:13Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mradermacher/stairolzlv-70b-GGUF | mradermacher | "2024-04-15T00:03:54Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-04-14T23:27:54Z" | Entry not found |
anezatra/gpt2-alpaca-lora | anezatra | "2024-04-14T23:29:55Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"text-generation",
"en",
"dataset:tatsu-lab/alpaca",
"arxiv:1910.09700",
"base_model:gpt2",
"license:mit",
"region:us"
] | text-generation | "2024-04-14T23:27:59Z" | ---
base_model: gpt2
license: mit
datasets:
- tatsu-lab/alpaca
language:
- en
pipeline_tag: text-generation
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
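In the absence of author-provided code, a minimal sketch based on the card metadata (base model gpt2, PEFT LoRA adapter; the Alpaca-style prompt format is an assumption drawn from the training dataset):
```python
# Minimal sketch: load the LoRA adapter on top of gpt2 with PEFT.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = PeftModel.from_pretrained(base, "anezatra/gpt2-alpaca-lora")

prompt = "### Instruction:\nWrite a haiku about spring.\n\n### Response:\n"  # Alpaca format (assumption)
inputs = tokenizer(prompt, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```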
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.9.0 |
andrewbai/tinyllama-sft-wizardlm_evol_instruct_v2-full | andrewbai | "2024-04-15T01:42:22Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"alignment-handbook",
"trl",
"sft",
"generated_from_trainer",
"conversational",
"dataset:ucla-cmllab/WizardLM_evol_instruct_V2_100k-chat-format",
"base_model:TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-04-14T23:28:00Z" | ---
license: apache-2.0
base_model: TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T
tags:
- alignment-handbook
- trl
- sft
- generated_from_trainer
datasets:
- ucla-cmllab/WizardLM_evol_instruct_V2_100k-chat-format
model-index:
- name: tinyllama-sft-wizardlm_evol_instruct_v2-full
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tinyllama-sft-wizardlm_evol_instruct_v2-full
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T) on the ucla-cmllab/WizardLM_evol_instruct_V2_100k-chat-format dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7234
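A minimal generation sketch (use of the tokenizer's chat template is an assumption based on the chat-format SFT dataset):
```python
# Minimal sketch: chat-style generation with the fine-tuned TinyLlama.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "andrewbai/tinyllama-sft-wizardlm_evol_instruct_v2-full"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

messages = [{"role": "user", "content": "Give three uses for a paperclip."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
out = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```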
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.735 | 1.0 | 781 | 0.7234 |
### Framework versions
- Transformers 4.39.0.dev0
- Pytorch 2.2.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2
|
IamYash/VA-LLM-meiaa10a | IamYash | "2024-04-15T00:52:11Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"endpoints_compatible",
"region:us"
] | null | "2024-04-14T23:28:37Z" | Entry not found |
metinovadilet/mistral-kyrgyz | metinovadilet | "2024-04-14T23:40:48Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us",
"has_space"
] | text-generation | "2024-04-14T23:28:54Z" | ---
license: apache-2.0
---
|
liamvbetts/t5-small-finetuned-2024-03-20 | liamvbetts | "2024-04-14T23:30:18Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:liamvbetts/t5-small-finetuned-2024-03-19",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | "2024-04-14T23:29:46Z" | ---
base_model: liamvbetts/t5-small-finetuned-2024-03-19
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: t5-small-finetuned-2024-03-20
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-2024-03-20
This model is a fine-tuned version of [liamvbetts/t5-small-finetuned-2024-03-19](https://huggingface.co/liamvbetts/t5-small-finetuned-2024-03-19) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9234
- Rouge1: 29.6968
- Rouge2: 15.5967
- Rougel: 25.7424
- Rougelsum: 25.9564
- Gen Len: 18.4459
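A minimal usage sketch (summarization is inferred from the ROUGE evaluation above; the input text is illustrative):
```python
# Minimal sketch: summarize with the fine-tuned t5-small checkpoint.
from transformers import pipeline

summarizer = pipeline("summarization", model="liamvbetts/t5-small-finetuned-2024-03-20")
article = "Long news article text goes here ..."
print(summarizer(article, max_length=60, min_length=10)[0]["summary_text"])
```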
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 2.0445 | 1.0 | 293 | 1.9234 | 29.6968 | 15.5967 | 25.7424 | 25.9564 | 18.4459 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
Ostixe360/MoMv2-bf16 | Ostixe360 | "2024-04-14T23:31:10Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-04-14T23:30:03Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Dracones/mixtral-8x22b-instruct-oh_exl2_4.5bpw | Dracones | "2024-04-14T23:43:22Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"exl2",
"conversational",
"en",
"dataset:teknium/OpenHermes-2.5",
"base_model:mistral-community/Mixtral-8x22B-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-04-14T23:30:54Z" | ---
language:
- en
license: apache-2.0
datasets:
- teknium/OpenHermes-2.5
base_model: mistral-community/Mixtral-8x22B-v0.1
tags:
- exl2
---
# mixtral-8x22b-instruct-oh - EXL2 4.5bpw
This is a 4.5bpw EXL2 quant of [fireworks-ai/mixtral-8x22b-instruct-oh](https://huggingface.co/fireworks-ai/mixtral-8x22b-instruct-oh)
Details about the model can be found at the above model page.
## EXL2 Version
These quants were made with exllamav2 version 0.0.18. Quants made on this version of EXL2 may not work on older versions of the exllamav2 library.
If you have problems loading these models, please update Text Generation WebUI to the latest version.
## Perplexity Scoring
Below are the perplexity scores for the EXL2 models. A lower score is better.
_TODO_
### Perplexity Script
This was the script used for perplexity testing.
```bash
#!/bin/bash

# Activate the conda environment
source ~/miniconda3/etc/profile.d/conda.sh
conda activate exllamav2

# Set the model name and bit size
MODEL_NAME="mixtral-8x22b-instruct-oh"
BIT_PRECISIONS=(7.0 6.0 5.5 5.0 4.5 4.0 3.5 3.0 2.75 2.5 2.25)

# Print the markdown table header
echo "| Quant Level | Perplexity Score |"
echo "|-------------|------------------|"

for BIT_PRECISION in "${BIT_PRECISIONS[@]}"
do
  MODEL_DIR="models/${MODEL_NAME}_exl2_${BIT_PRECISION}bpw"
  if [ -d "$MODEL_DIR" ]; then
    output=$(python test_inference.py -m "$MODEL_DIR" -gs 44,48 -ed data/wikitext/wikitext-2-v1.parquet)
    score=$(echo "$output" | grep -oP 'Evaluation perplexity: \K[\d.]+')
    echo "| $BIT_PRECISION | $score |"
  fi
done
```
## Quant Details
This is the script used for quantization.
```bash
#!/bin/bash

# Activate the conda environment
source ~/miniconda3/etc/profile.d/conda.sh
conda activate exllamav2

# Set the model name
MODEL_NAME="mixtral-8x22b-instruct-oh"

# Define variables
MODEL_DIR="models/$MODEL_NAME"
OUTPUT_DIR="exl2_$MODEL_NAME"
MEASUREMENT_FILE="measurements/$MODEL_NAME.json"

# Create the measurement file if needed
if [ ! -f "$MEASUREMENT_FILE" ]; then
  echo "Creating $MEASUREMENT_FILE"
  # Start from a clean scratch directory
  if [ -d "$OUTPUT_DIR" ]; then
    rm -r "$OUTPUT_DIR"
  fi
  mkdir "$OUTPUT_DIR"
  python convert.py -i "$MODEL_DIR" -o "$OUTPUT_DIR" -nr -om "$MEASUREMENT_FILE"
fi

# Choose one of the below. Either create a single quant for testing or a batch of them.
# BIT_PRECISIONS=(2.25)
BIT_PRECISIONS=(7.0 6.0 5.5 5.0 4.5 4.0 3.5 3.0 2.75 2.5 2.25)

for BIT_PRECISION in "${BIT_PRECISIONS[@]}"
do
  CONVERTED_FOLDER="models/${MODEL_NAME}_exl2_${BIT_PRECISION}bpw"
  # If it doesn't already exist, make the quant
  if [ ! -d "$CONVERTED_FOLDER" ]; then
    echo "Creating $CONVERTED_FOLDER"
    # Start from a clean scratch directory
    if [ -d "$OUTPUT_DIR" ]; then
      rm -r "$OUTPUT_DIR"
    fi
    mkdir "$OUTPUT_DIR"
    mkdir "$CONVERTED_FOLDER"
    # Run the conversion
    python convert.py -i "$MODEL_DIR" -o "$OUTPUT_DIR" -nr -m "$MEASUREMENT_FILE" -b "$BIT_PRECISION" -cf "$CONVERTED_FOLDER"
  fi
done
```
|
Cbrown9/Articulate-Visions-T1 | Cbrown9 | "2024-04-14T23:31:13Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-04-14T23:31:13Z" | Entry not found |
relu-ntnu/bart-large-cnn_v1_trained_on_100 | relu-ntnu | "2024-04-14T23:32:25Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-04-14T23:32:22Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
neural-commons/neural-mem-cell-256-128-0.1-v0.0.1 | neural-commons | "2024-04-14T23:55:15Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"endpoints_compatible",
"region:us"
] | null | "2024-04-14T23:35:02Z" | Entry not found |